A technical blog about my projects, challenges, and discoveries in the world of data warehousing using SQL Server, Power BI Desktop, DevExpress, and more.

Sunday, November 6, 2011

SQL Saturday #96 Wrap-Up and Top 10

SQL Saturday #96 happened in Washington, D.C. yesterday, and it was a great event. It is a special event to me because it was my first opportunity to speak at a PASS event and give back to a community that has given me much and transformed my queries for the better time and time again. If you are visiting my blog because you came to one of my sessions, please leave me some feedback on the presentation -- constructive criticism is as welcome as praise.

If you would like my PowerPoint presentations along with the .sql files I used for the demos in SQL Server Management Studio, you can download them from the SQL Saturday #96 Schedule page.

And in the same vein as my comments on the PASS Summit, I'm going to post the top ten things I learned yesterday. These are in roughly chronological order.
1. If you ever go to a SQL Saturday event, definitely use the Speed Pass and print it off before you go to the venue on the day of the event. If you don't, you'll likely be stuck in a line waiting to check in before you can join the fun.

2. Apparently Twitter or some Twitter-related organization ranks people's "influence" based on their tweets and the number of people who respond to them. This ranking is quite important to many people, and it was a point of animated discussion in the speakers' room at this event.

3. I discovered that people coming to T-SQL and SQL Server from Oracle and other platforms can benefit from an introductory course such as my introduction to Common Table Expressions, even when their years of experience with SQL exceed mine. I suppose this is self-evident, but it was a little eye-opening to me.
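For anyone new to the feature, a common table expression simply names a derived result set so the main query can treat it like a table. Here is a minimal sketch; the table and column names are hypothetical examples, not taken from my presentation:

```sql
-- Name a derived result set with WITH, then query it like a table.
-- Table and column names below are illustrative placeholders.
WITH MonthlySales AS
(
    SELECT SalesPersonID,
           YEAR(OrderDate)  AS OrderYear,
           MONTH(OrderDate) AS OrderMonth,
           SUM(TotalDue)    AS MonthTotal
    FROM   Sales.SalesOrderHeader
    GROUP  BY SalesPersonID, YEAR(OrderDate), MONTH(OrderDate)
)
SELECT SalesPersonID,
       OrderYear,
       AVG(MonthTotal) AS AvgMonthlySales
FROM   MonthlySales
GROUP  BY SalesPersonID, OrderYear;
```

The CTE exists only for the duration of the statement that follows it, which makes it a tidy alternative to nested derived tables.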

4. Even though features like the window functions have been around since SQL Server 2005, some people who have been using T-SQL for a long time may have overlooked one that I now take for granted. I was a little pleased when I heard a couple of attendees express surprise that it is possible to generate an aggregate value without a GROUP BY clause, because it meant I had something to share that I knew could help them tremendously.
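That particular trick uses the OVER clause: an aggregate with an empty OVER() is computed across the whole result set while every detail row is preserved. A quick sketch (assuming AdventureWorks-style table and column names for illustration):

```sql
-- SUM with OVER() returns the grand total on every row,
-- with no GROUP BY and no loss of the detail rows.
SELECT SalesOrderID,
       TotalDue,
       SUM(TotalDue) OVER ()            AS GrandTotal,
       TotalDue / SUM(TotalDue) OVER () AS ShareOfTotal
FROM   Sales.SalesOrderHeader;
```

Add a PARTITION BY inside the OVER() and the aggregate is computed per group instead of over the whole set, still without collapsing the rows.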

5. Receiving applause for a presentation is truly gratifying, even when it is a customary act done for every presenter. It seemed authentic in both my sessions, and therefore it was meaningful to me.

6. It seems that for most Business Intelligence professionals, the primary justification for having a data warehouse is the ability to create an OLAP cube for analysis. I'm not sure that will be my justification. I think its primary benefit as a companion to our OLTP database would be capturing historical data for slowly changing dimensions, with a close second (or even a tie for first) being a clearer semantic model for our end users in report design.

7. The date dimension generally adds the most overhead to any OLAP cube because it contributes roughly 365 rows for every year tracked. If there is any way to summarize to a weekly or coarser granularity, that will be a huge win for the efficiency of the cube.
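To make the scale concrete: ten years at daily grain is about 3,650 dimension members, while the same span at weekly grain is about 520. One common T-SQL idiom for rolling facts up to week boundaries uses DATEADD/DATEDIFF truncation; the table and column names here are placeholders:

```sql
-- Truncate each OrderDate to the Monday that starts its week
-- (weeks counted from the 1900-01-01 base date), then aggregate.
SELECT DATEADD(WEEK, DATEDIFF(WEEK, 0, OrderDate), 0) AS WeekStart,
       SUM(TotalDue)                                  AS WeekTotal
FROM   Sales.SalesOrderHeader
GROUP  BY DATEADD(WEEK, DATEDIFF(WEEK, 0, OrderDate), 0);
```

Loading the fact table at this grain means the date dimension only needs one row per week rather than one per day.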

8. It is ever clearer that any data warehouse built as a companion for the Shelby v.5 OLTP database will include a single "EntityNames" role-playing dimension containing all the entity attributes across all the available modules, supplemented with views that take the core table and break out the module-specific attributes.

9. Even a large software company like Microsoft can overlook a feature that the end user sees as patently obvious. The case in point for today is the ability to update automatically all the Reporting Services reports that use a report part that has been changed. It's a manual process for now.

10. The built-in change tracking mechanism in SQL Server may be a great option for customers who want to know when things have changed at a more detailed granularity than we capture in the OLTP tables. But it retains only two days' worth of changes by default, so a secondary mechanism for moving that data into a flat file or supplemental table is needed to look further back than that.
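For reference, here is the general shape of enabling change tracking and polling it before the retention window purges anything. The database and table names are placeholders, not the actual Shelby v.5 objects:

```sql
-- Turn on change tracking at the database level.
-- CHANGE_RETENTION defaults to 2 days; shown explicitly here.
ALTER DATABASE SourceDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

-- Opt each table of interest into tracking.
ALTER TABLE dbo.SomeTable
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);

-- On each archive run, fetch everything changed since the last
-- sync version (persisted from the previous run) and copy it to
-- the supplemental table before retention cleanup discards it.
DECLARE @last_sync bigint = 0;
SELECT CT.SYS_CHANGE_VERSION,
       CT.SYS_CHANGE_OPERATION,  -- I = insert, U = update, D = delete
       CT.*                      -- includes the tracked table's PK columns
FROM   CHANGETABLE(CHANGES dbo.SomeTable, @last_sync) AS CT;
```

After each run, store the value returned by CHANGE_TRACKING_CURRENT_VERSION() as the next run's @last_sync, and the supplemental table becomes the long-term history that the built-in mechanism does not keep.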

There you have it, my top 10 things learned yesterday at SQL Saturday #96. I'll return to blogging more procedural stuff after today. Let me finish this entry by saying that I really enjoyed the privilege of sharing my knowledge with the PASS community, and I hope to have similar opportunities in the future. If I do, you'll definitely be reading about it here. :-)