Data plays an ever-increasing role in today’s society and provides a wealth of possibilities for you and the organization you work for. As a BI professional or DBA you are probably used to working with data every day, but what if business requirements can no longer be met through traditional solutions? In those cases, more and more organizations look to data science to provide a solution.
But how and where do you start with things like exploratory data analysis or machine learning? Thankfully as a BI professional or DBA you have a head start when it comes to data, and through this full day workshop you will learn the skills you need to take your first steps in the area of data science!
What you will learn during the day:
- Understand the basic concepts of data science processes and why they differ from traditional approaches
- Become familiar with statistical terms and techniques and learn why they are important
- Make a start with programming in the R language
- Use R and Azure Machine Learning to build your first machine learning models
- Operationalize your models through various methods, like in-database R using SQL Server 2016
For troubleshooting slow queries, looking at the execution plan is a good starting point. But sometimes, just looking at the plan does not help. Sometimes you need to dig deeper.
In this full-day workshop, you will learn everything you need to take your understanding of execution plans to the next level. For almost every operator you can encounter in an execution plan, we will look at the inner workings and at the properties that can affect its performance. We will also look at how operators in a plan interact with and affect each other.
Aside from explaining all of the common operators, we will also touch on several operators that were introduced or modified in the newer versions of SQL Server, and preview some changes that have been announced for future versions.
In short: After this workshop, you will be better prepared to look at execution plans, find the spot where it hurts, and then rewrite your query to get a faster execution plan.
If you have seen some execution plans but feel you need to bring your understanding to the next level, then this workshop is for you.

Hugo Kornelis (Precon)
The team of RADACAD (Reza and Leila) will present a full day of deep-dive Power BI training. This is not a typical Power BI training day: you will learn master tips of Power BI, all through demos and hands-on examples. We’ll take all the 400- and 500-level (deep dive) content from our training and blog posts, with live demos, for this pre-con training day. Ideally, bring your laptop with the latest version of Power BI Desktop installed to get the most out of this course. The agenda includes, but is not limited to:
- Power Query advanced data transformations
- M scripting tips and tricks
- Custom functions in Power Query
- Running Machine Learning R algorithms with Power BI
- Best practices of Modelling in Power BI
- DAX functions (Filters, Aggregations, Time Intelligence, Path, relationships) Deep Dive
- Advanced Data Visualization Tips
- Creating Custom Visuals with R and Power BI
- R and Power BI Deep Dive
- Gateway and types of Connections in Power BI; mastery tips
- Sharing and Row Level Security patterns (Static, Dynamic, Manager level access, users and profile, organizational hierarchy…)
In which newspaper can you read that a woman has thrown her husband out of the apartment?
In "Better Homes and Gardens". What this joke has to do with Microsoft SQL Server you will find out in this one-day workshop on the individual security components of Microsoft SQL Server.
Starting with the correct security settings for the service account of Microsoft SQL Server, the journey continues up the staircase into the databases. What rights should you grant a login without endangering the security of the system or of the data?
Once arrived in the "honourable tenement", one would of course like a nice rented apartment. But stop! What am I allowed to do in my apartment? After attending the workshop, a lawyer is no longer required. In this workshop, the following topics will be taught with examples and exercises:
- Proper configuration of the service accounts of Microsoft SQL Server and SQL Server Agent
- TDE and encrypted backups
- The special features of sysadmin
- Danger and beauty of "xp_cmdshell"
- The principle of minimal rights when creating logins
- Database permissions and their consequences
- Use of certificates for the execution of stored procedures
- Schemas and objects
- Owners of objects and access via stored procedures / views / functions
All of these security aspects of Microsoft SQL Server are covered in detail in this workshop and put to the test on your own laptop through extensive demos and exercises.

Uwe Ricken (Precon)
The number of data & analytics components available in Azure has exploded over the past couple of years - understanding which components should be in your toolbelt and what part each plays can be a daunting task, especially given the speed technology is advancing at. However, if you want to meet the challenges of the growing data landscape, you have to adopt distributed cloud architectures!
We have helped many of Microsoft's clients move their BI architectures to the cloud and we can guide you on that journey too, taking you through our recommended architecture and explaining the common pitfalls and mistakes to avoid. There are many lessons from traditional "Big Data" that we can apply when designing our platform; we'll take you through them so you're ready for any data problem - whether it's volume, variety or velocity!
Morning - The Core Modern Analytics Platform
In this first workshop, we want you to leave fully equipped to build a scalable, robust data platform that can support any analytical requirement. We’ll be providing an overview and practical demonstrations of:
• Cloud Architectures & Approaches
• Building and maintaining Azure Data Lakes
• Facilitating & Orchestrating data flow using Azure Data Factory V2
• Designing for performance with Azure SQL DW
These three key technologies will allow you to establish a platform on which many more analytical components can be added, and you will leave this session confident in taking this first step.
Afternoon - The Complete Modern Analytics Platform
The afternoon session will look at additional processing and integration components, enriching and augmenting the data you're collecting and curating:
• Applying the Lambda Architecture with Streaming Analytics
• Serverless Orchestration with Logic Apps & Azure Functions
• Tackling Big Data using Data Lake Analytics & U-SQL
• Reporting & Visualisation in the Modern Analytics Platform
With these sessions combined, you'll have a firm understanding of all the technologies needed to build an end-to-end analytics platform in a modern, scalable manner. This knowledge, and the patterns we will cover, will equip you to design a solid foundation which will support a vast variety of data analytics solutions.
Receiving signals from over 4000 endpoints every second results in a large amount of data to ingest, process, analyse, visualize and create predictive models on. We used Azure Databricks to process this data and during our talk we will discuss how Azure Databricks facilitated our data science process.
We’ll share insights on how we went from raw signal data to visualisations in Power BI, combining different data science languages like Spark and R and benefiting from Azure Databricks features like auto scaling and job scheduling.
At Build 2018, Microsoft & DJI announced a collaboration between the two by releasing an SDK (in private release) that makes it possible for developers to control a DJI drone.
In this session we want to combine the power of the SDK with Azure Cognitive Services, making use of a custom-built object detection model running on a local machine (edge computing) in combination with WinML.
A full-blown AI session that shows the power of the cloud and the edge.
Interested in how to design your own serverless Azure data warehouse solution?
In this session you will learn how to:
* connect your on-prem data sources with the Microsoft Azure stack
* load data in an efficient & performant way
* enrich data
* prepare & populate your presentation layer
After this session you will have a clear understanding of which Azure services you have to combine to create a traditional data warehouse in the cloud.
In this session we go one step further than the basics and we will take a look at more advanced concepts of Biml:
* using annotations to store metadata inside your code
* how to re-use your BimlScript code
* how to troubleshoot your BimlScripts
After this session, you'll be able to take your Biml skills to the next level and create more reusable scripts for your toolbox. A working knowledge of Integration Services package development is necessary, as well as basic knowledge of the Biml language.
On the current Microsoft Data Platform we have several options to move our data to the cloud. However, finding the right fit for your environment is not that straightforward.
During this session, we will be showing you all the options you have for choosing your own cloud data strategy.
We will cover Azure SQL databases, Azure Managed Instances, Azure VMs and hybrid solutions.
We will show you what options you have and how you can move your on-prem systems to the cloud.
After this session you will be ready to start moving your environment to the cloud in an efficient manner.

Pieter Vanhove & Stijn Wynants
Many existing Data Factory solutions include a large number of workarounds due to limitations with the service. Now that Data Factory V2 is available, we can restructure our Data Factories to be lean, efficient data pipelines, and this session will show you how.
The initial Data Factory release was targeted at managing Hadoop clusters, with a couple of additional integrations thrown in - it was mistakenly believed to be "the new SSIS", and consequently there were a lot of very disappointed people. The new release remedies many of these complaints, adding workflow management, expressions, ad-hoc triggers and many more features that open up a world of possibilities.
This session will run through the new features in ADFV2 and discuss how they can be used to streamline your factories, putting them in the context of real-world solutions. We will also look at the additional compute options provided by the new SSIS integration, how it works within the context of Data Factory and the flexibility it provides.
A working knowledge of ADF V1 is assumed.
It's no secret that deadlocks are bad. The definition of a deadlock is simple and quite clear: it is an exceptional situation in which two concurrent queries request the same resources, but in a different order.
A classic deadlock can occur when two concurrent transactions modify data in two tables in a different order. Unfortunately, in real life deadlocks are often more complex and less obvious. One rule I always keep in mind is: "You cannot design a database in which the occurrence of deadlocks is impossible". So we have to deal with them. The algorithm is simple:
1. Catch the deadlock
2. Find the root cause
3. Design and implement a solution.
In practice, these steps can be tricky and may require different types of analysis.
In this session, we will look at ten different deadlock scenarios and find the root cause of each. Then, you will see how easy it is to find a solution for a deadlock issue once the second step has been done properly.
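The fix behind many of these scenarios - always acquiring resources in the same global order - can be illustrated outside SQL Server too. Here is a minimal Python sketch (a hypothetical illustration, not part of the session material): because both "transactions" take the locks in the same order, no circular wait can form.

```python
import threading

# Two shared resources, standing in for two tables.
lock_a = threading.Lock()
lock_b = threading.Lock()
results = []

def transaction(name):
    # Both workers acquire the locks in the same global order
    # (a before b), so no circular wait - and thus no deadlock - can form.
    with lock_a:
        with lock_b:
            results.append(name)

threads = [threading.Thread(target=transaction, args=(n,)) for n in ("txn1", "txn2")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # ['txn1', 'txn2'] - both complete, no deadlock
```

Swapping the lock order in just one of the workers would reintroduce the classic two-resource deadlock described above.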
Time series analysis and forecasting have been used in many fields to predict trends and patterns in data that changes over time. In this session I am going to show different ways of doing forecasting in Power BI: how to use the Power BI custom visual for forecasting, how to explain a forecasting diagram, how to write R code for forecasting, and how the forecasting algorithms work.

Leila Etaati (300)
Have you ever wondered if more goals are scored when it rains, or if Stoke wins more games on a cold weekday evening at the bet365 stadium? This session shows how you can use open data sources and web pages to gather data, how you can then manipulate and extend it, and finally report on it.
We will look into how you can use open weather data and combine it with data from web pages to create the dataset you need. When we have our dataset we will manipulate and extend it using M and DAX so that we can get meaningful insights from it. We will then dive into the data to see if there is anything to report.
In this end-to-end Power BI Desktop demo we will use fun data that many can relate to, as the English Premier League is one of the most popular football leagues in the world. The audience will take away many nuggets of information as they see what a real-world example can look like. I will share all the obstacles I hit and the lessons I learned when creating this report, so they will see both the limitations of Power BI Desktop and open data as well as their strengths.
In the end the Power BI Desktop file will be available to download for the audience.
The system database TempDB has often been called a dumping ground, even the public toilet of SQL Server. (There has to be a joke about spills in there somewhere). In this session, you will learn to find those criminal activities that are going on deep in the depths of SQL Server that are causing performance issues. Not just for one session, but those that affect everybody on that instance.
After this session, you will have learned how to architect TempDB for better performance, how to create code more efficiently for TempDB, understand how space is utilized within TempDB, and have learned about queries and counters that will help diagnose where your bottlenecks are coming from.
Nearly every aspect of the modern music industry relies on Big Data, machine learning, and analytics. Karen Damen made a new record using traditional methods;
I personally think I might do better by using the power of analytics, combining machine learning and prediction on openly available datasets, analyzing tracks, fans and artists.
Don't worry I won't sing ;-)
Every new release of SQL Server brings a whole load of new features that an administrator can add to their arsenal. SQL Server 2016 / 2017 has introduced many new features, and in this 75-minute session we will be learning quite a few of them. Here is a glimpse of the features we will cover in this session.
• Adaptive Query Plans
• Batch Mode Adaptive Join
• New cardinality estimate for optimal performance
• Adaptive Query Processing
• Indexing Improvements
• Introduction to Automatic Tuning
These 75 minutes will be the most productive time for any DBA or developer who wants to quickly jump-start with SQL Server 2016 / 2017 and its new features.
Microsoft Power Query is the transformation engine in Power BI. This is the engine in which you do all data preparation before loading data into the model; it is the keystone of your Power BI solution. In this session, you will learn what Power Query can do. You will learn how M, the language behind the scenes of Power Query, can be more beneficial than the Power Query graphical interface. You will see demos of transformations that you can do with M code. You will learn about functions, parameters, generators and many other amazing features of this tool. Prepare to be amazed by what Power Query can do in this demo-filled session!

Reza Rad (300)
DML operations mean lots of work for the database engine of Microsoft SQL Server. Understanding the details of a transaction may give you great benefits when planning workloads for INSERT, UPDATE and DELETE operations.
This session will demonstrate the huge difference in data allocation between a heap and a clustered index. If you don't know the benefits of the correct record size, or you are wondering how you could release allocated data pages in a professional way, this session may give you all the answers:
- data page allocation and how it differs between heaps and clustered indexes
- the amount of transaction log generated when you update data in a heap and/or in a clustered index
- deleting data from a heap, and the huge difference compared to a clustered index
- speeding up ETL processes by using the right strategy for INSERTs and DELETEs
- page splits vs. forwarded records
- do you really need a clustered index in a table? Pros and cons of clustered indexes for DML operations

Uwe Ricken (400)
Have you ever tried to migrate your on-premises data warehouse (Data Vault) solution to an Azure SQL Data Warehouse? Azure SQL DW is an MPP system and requires a lot of changes to your current solution.
What are the best practices for migrating your solution with as little impact on users as possible? Why should you migrate to an Azure SQL DW? I did, and would like to share my experience.
I will take you through all the steps you have to take care of. The following points will be discussed: distribution keys, replicated tables, identity columns, CTAS, scaling, pausing, monitoring and much more.
Traditionally advanced analytical solutions, like machine learning, require you to bring your relational data to the machine learning model. Your model would then perform a prediction and return the results. While the process described above is reliable, it involves moving your data between the database where it is stored and the location where your model resides. This also means an increase in the complexity of your analytical solutions. For instance, how do you trigger the scoring of new data as soon as it enters the database? Or, how can you design this process for real-time scoring?
With the release of SQL Server 2016, Microsoft integrated a solution to the questions above: in-database analytics, allowing you to bring the analytics to your data instead of the other way around. Through in-database analytics we can design, train and score models directly inside SQL Server without moving data out of and back into the database. This creates a huge advantage, especially when working with real-time predictions - but how do you implement in-database analytics in your environment?
In this session we are going to explore the various methods available inside SQL Server 2016 & 2017 to perform in-database analytics. From building and storing our models directly inside SQL Server, to performing real-time scoring on data as soon as the data is stored inside a table.
After this session you will be able to understand the advantages and disadvantages of the various in-database analytics methods and you will be ready to start building your first in-database models!
Azure Analysis Services and SQL Server Analysis Services enable you to build comprehensive analytic solutions that support your business in delivering actionable insights.
This session will go through all the new features for large, enterprise models in the areas of performance, scalability, advanced calculations, model management, and monitoring.
Learn how to use these new features to deliver tabular models that perform well and deliver the expected user experience, with easy data loading and simplified consumption. This will enable the best reporting experiences on your corporate data.
Best of all, I will show you how to boost your development process using Tabular Editor with these new features.
The SQL Server Query Optimizer makes its plan choices based on estimated rowcounts. If those estimates are wrong, the optimizer will very likely produce a poor plan. And there's nothing you can do about it. Or is there?
In this session, you will learn exactly where these estimates come from. You will gain intimate knowledge of how statistics are built and maintained and how they are used to estimate row counts. But you will also learn how filters and joins influence those estimates.
Though the focus of this session is on understanding the cause of bad estimates, you will also learn ways to fix the problems and get better estimates - and hence, better performing queries.
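To make the idea of statistics-based estimation concrete, here is a hypothetical sketch (not SQL Server's actual algorithm) of how an equi-height-style histogram can turn a filter predicate into a row-count estimate. The histogram values and the linear-interpolation rule inside a step are illustrative assumptions:

```python
# Hypothetical histogram: (range_high_key, rows_in_step) pairs,
# loosely modelled on the steps you would see in a statistics histogram.
histogram = [(100, 500), (200, 300), (300, 200)]

def estimate_rows(predicate_high):
    """Estimate rows matching 'col <= predicate_high' by summing fully
    covered steps and linearly interpolating inside the partial step
    (a uniformity assumption within each step)."""
    total, prev_key = 0.0, 0
    for high_key, step_rows in histogram:
        if predicate_high >= high_key:
            total += step_rows  # step fully covered by the predicate
        else:
            fraction = (predicate_high - prev_key) / (high_key - prev_key)
            total += step_rows * max(fraction, 0.0)  # partial step
            break
        prev_key = high_key
    return total

print(estimate_rows(250))  # 500 + 300 + 200 * 0.5 = 900.0
```

When the real data inside a step is skewed, that uniformity assumption is exactly where estimates - and hence plan choices - go wrong, which is the kind of cause this session digs into.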
The session goes in depth on how you can use Power BI to visualize real-time data coming from IoT devices.
It will also show you that even advanced analytics with Azure Machine Learning can be applied to real-time data. All of this will be done using Power BI and Azure (Event Hubs, Machine Learning, Stream Analytics).
Maintaining industrial machines is expensive: sensor data needs to be monitored and analyzed, technicians need to be sent out and defects need to be repaired. Predictive maintenance reduces these costs by transitioning from a break-fix model to a prevent-optimize one. To this end, we propose an end-to-end Microsoft Azure solution. First, representative sensor data for industrial machines is ingested into Azure with Event Hubs. Using Stream Analytics and Azure ML Studio, we'll predict in real time whether the machine is approaching overheating. If overheating is predicted, the machine will be automatically shut down until cooled again, to prevent hardware damage. The maintenance department will be notified by phone or mail and can further analyze the prediction, or override the automatic shutdown, in a Power BI dashboard.

Breght Van Baelen & Saar Gillis (400)
Security? It's simple. We have a Security Team... The security of our environment, applications and development is their responsibility. We follow best practices, we implement their suggestions (or not...).
But maybe today, in June 2018, now that GDPR is a fact, we should look a little more closely at the security aspects: well-known and less-known risks, vulnerability assessments, secure coding, secure testing.
Let's discuss SEC/DEV/OPS/SDLC/OSSTMM/OWASP/ITIL and a few other acronyms. Use freely available knowledge and a specially prepared environment to check and test our security before we touch Visual Studio, PowerShell, CLI, Visual Studio Code, or even JSON. Be #SecureByDesign.
Extended Events are much more powerful than any other monitoring technology available in SQL Server. Despite this potential, many DBAs have yet to abandon Traces and Profiler. Partially because of habit, but mostly because the tooling around Extended Events was less intuitive until recently.
Now, it's easier than ever to set up, control and inspect Extended Events sessions with dbatools! Not only does it simplify your basic interaction with XEvents, but it also helps solve your day-to-day problems, such as capturing and notifying deadlocks or blocking sessions.
Join PowerShell MVP Chrissy LeMaire and SQL Server MVP Gianluca Sartori to see how PowerShell can simplify and empower your Extended Events experience. Say goodbye to #TeamProfiler and join #TeamXEvents with the power of dbatools.
Everything in our world is located “somewhere” and is related to other things. Spatial analysis consists of studying these relationships to find out meaningful patterns and behaviors.
Imagine you’re looking for the best position to open a new store. It’s not only a matter of “where”; there are more implications: is the area easily accessible for customers? Is there any parking? Is it easy to reach for suppliers? Are there any competitor stores around? What is the volume of shopping for the same business in the area?
This is where spatial analysis can help us collect, compare and match data to build up a framework of possibilities.
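As a toy illustration of one basic building block of such an analysis - distance between locations - here is a small Python sketch computing the great-circle distance from a candidate site to its nearest competitor. The coordinates are made up for the example:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 \
        + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

# Hypothetical candidate site and competitor stores (lat, lon).
candidate = (51.05, 3.72)
competitors = [(51.06, 3.73), (51.20, 4.40)]

# How close is the nearest competitor?
nearest = min(haversine_km(*candidate, *c) for c in competitors)
print(round(nearest, 1))  # roughly 1.3 km away
```

Real spatial analysis layers many such measures - accessibility, density, autocorrelation - on top of this kind of primitive, which is exactly what the R packages and SQL Server spatial features discussed below provide.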
Since the 2008 release, SQL Server has supported a spatial data type. Now amazing new features are offered with the addition of R. R ships with a huge number of packages for performing spatial analysis, mapping, geocoding, etc. There is virtually nothing you can’t do with R: finding relationships, measuring spatial autocorrelation, interpolating point data, mapping point data, …
And, last but not least, we have Power BI, which offers a full range of mapping capabilities: not only bubble or choropleth maps, but also visuals for performing spatial analysis, like ArcGIS, and for creating custom shape maps. And R scripts, naturally.
In the session, we will show how the joint use of these three tools empowers us to analyze and query the spatial properties of data.
We’ll showcase a real-world example for a better understanding of the endless possibilities that are now offered to us.
Come, have fun and discover a world of information inside your data with Spatial Analytics!
SQL Server comes with a lot of security settings out of the box. Very few of them are enabled by default. We will learn how to set up and configure SQL Server in a secure way.
Some of the topics that will be covered:
• gMSA accounts
• Kerberos vs NTLM (Windows vs SQL auth)
• Granular rights (not everybody is a sysadmin)
• Hidden instance
More importantly, which configuration options give you a false sense of security:
• Putting SQL on a non-standard port
• Database encryption (in some cases)
If timing permits, a live demo or screen recording based on my following blog post:
Running SQL Server in Docker containers brings benefits that data professionals shouldn't be ignoring. Over the last year the images provided by Microsoft have matured to a point where running SQL Server containers in production is a viable option.
However, running standalone Docker containers presents challenges, so other technologies are needed to support them. This session will provide an introduction to the various options for running SQL Server containers in Azure.
I'll cover the following topics:
The Azure Container Registry
Azure Container Instances
Azure Container Services
This session is aimed at SQL Server DBAs and Developers who have some experience with Docker and want to know the different options that are available in Azure.
Each topic will be backed up with demos showing how simple it is to get up and running with these technologies.
Have you ever had performance tank despite the code working fine in another environment? Maybe heard that some SQL is bad, but not why? If so, this is the session for you!
This session will start with a walkthrough of some of the basic settings in SQL Server and how they affect you as a developer. It follows with key tips on what settings to change, why some code will wreak havoc on your performance and how isolation levels matter (and why NOLOCK can be an exceptionally bad idea!) The session is led by a 20-year DBA veteran who decided to try to help developers understand performance issues by seeing things from his perspective.
If you want to explore how default settings kill your performance, investigate why harmless SQL might not be quite so harmless and gain insight into how isolation levels affect function and performance, then this session will provide you with the tools to think outside the box and incorporate database engine knowledge into your developer prowess!
Basic T-SQL knowledge required. We will be talking about:
- Basic starting point (set-based thinking)
- Query tuning tools
- Query tuning basics
- Index tuning basics
- Extra things to keep in mind
There are different tools and approaches for developing ML models. But how can you use a model that was trained with a certain tool on a certain platform without having to rebuild it completely on another platform?
Many large companies have worked on establishing standards for model interchange. ONNX, PMML and NNEF are the most used. What are they? How do they work? When should you use them?
In this session we will try to give answers to these questions.
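The core idea behind all of these interchange formats - export the trained model as a tool-neutral artifact, then rehydrate and score it anywhere - can be sketched in a few lines. This is a deliberately simplified toy format, not actual ONNX, PMML or NNEF:

```python
import json

# Toy stand-in for a model-interchange format: the training tool
# exports the model as a portable, tool-neutral artifact...
trained = {"type": "linear", "weights": [0.5, -1.0], "bias": 2.0}
payload = json.dumps(trained)

# ...and any other runtime can rehydrate and score it, knowing only
# the format specification, not the original training tool.
def score(payload, features):
    model = json.loads(payload)
    return sum(w * x for w, x in zip(model["weights"], features)) + model["bias"]

print(score(payload, [4.0, 1.0]))  # 0.5*4 - 1.0*1 + 2.0 = 3.0
```

The real standards solve the hard version of this problem: describing arbitrary computation graphs (not just linear coefficients) precisely enough that producer and consumer agree on the result.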
How do you test your SSIS packages? Do you prepare them, set the parameters and variables, maybe get some sample or production data, and run them a few times by hand in SSDT? That’s not a bad practice when you start your ETL journey, but after some time you probably think about automation. If not – you should. Perhaps you have some SQL scripts that verify data in the tables after package execution, but if you are like me, somewhere inside you feel that it’s not enough. It’s time to start automated SSIS unit and integration testing. In this session, I will show you why you should, and how to start, with automated testing of packages and whole projects using ssisUnit – the free SSIS testing library. I will start with some basics and move on to real-life examples of unit and integration testing of SSIS packages / projects / solutions. You will see the benefits of automated testing, and that testing is hard only at the beginning.

Bartosz Ratajczyk (300)
Data scientists are conquering more and more territory in today’s organizations. The insights they provide have made a big contribution to many companies’ success. The experimental, research phase is now shifting towards automated processes, where business processes can benefit from this new knowledge. With Microsoft ML Services we can build and integrate the data scientists’ work into our existing architecture, directly on top of your database, using the in-database integration facility.
In this session, I explain one possible solution for creating a supervised learning model on an existing SQL Server 2017 installation; the automation of model regeneration, retraining, rescoring and versioning; and finally the usage of the model in your SQL code. Some architectural questions regarding the choice between Azure ML and an in-database implementation will also be addressed.
Have you got some data structures that don’t play well with relational concepts? Data as it appears in the real world is naturally connected. Traditional data modeling focuses on entities and does not perfectly render this highly connected world. For many applications, there’s a need to model both entities and relationships naturally. Azure Cosmos DB, with its Gremlin Graph API, supports this property graph model. During this session, you’ll learn the best use-cases and the great benefits of graph models, but also how Azure Cosmos DB supports them. You’ll receive a good explanation of what TinkerPop is and why it’s so important in the world of graph databases. The Gremlin language will be explained from the basics up to some more advanced concerns, in the form of recipes for common use-cases of graph databases!

Cédric Charlier (200)
Reading online news articles, blogs, etc... you would believe the Quantum Revolution is upon us. Apparently, it will solve all of our computing problems (allegedly...)! But what is a Quantum Computer anyway? How does it work? Most importantly, how will it affect me, and my daily job?
Walking out of this session, you will understand the core concepts of Quantum Computing and possible impact on both classic BI and Data Science. Going hands-on we'll see how current versions are being optimized, and how you can start wielding the power of Quantum Computing today (or, maybe tomorrow, soon anyway)!
In this session we will look at all the important topics needed to get your Power BI modelling skills to the next level. We will cover the in-memory engine, relationships, DAX filter context, DAX vs M, and DirectQuery. This will set you on the path to mastering any modelling challenge with Power BI or Analysis Services.

Kasper de Jonge (300)
After a very quick overview of Azure Data Lake Analytics, let’s get stuck into the technology. This highly scalable cloud service has been generally available for a few years now and its adoption for production workloads is growing rapidly. But is U-SQL just another transformation tool, or can we unlock its potential as a powerful, scale-out framework to manipulate our data in new ways? In this session we’ll explore that question and see how U-SQL can be applied to production data analytics solutions. Plus, our options for code generation and deployment in Azure with metadata-driven workloads.

Paul Andrew (300)
You’ve probably already seen that R icon in Power BI Desktop. It shows up when creating sources, transformations and reports. But the ugly textbox you got when you clicked those icons didn’t encourage you to proceed? In this session you will learn just a few basic things about R that will greatly extend your data loading, transformation and reporting skills in Power BI Desktop and PowerBI.com.

Nico Jacobs (200)