By SQL Server MVP Ami Levin
T-SQL Tuesday is now entering its fourth year! Sebastian Meine, who's hosting this month's T-SQL Tuesday, chose joins as the subject, to coincide with his "A Join a Day" blog series.
"If you walk into a room full of DBAs or DB developers and you feel like having the same kind of fun as setting fire to a dry hayfield, just ask this question: 'What's a better design, using natural keys or artificial keys?' Satisfaction guaranteed.
When I started to study database design, this was one of the first hot controversies I encountered. If you Google the phrase “natural vs. artificial keys,” you’ll come up with more than 150 million results, including endless debates, numerous articles, blog posts with passionate replies, long theoretical and practical arguments, and even the occasional profanity."
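To make the two camps concrete, here is a minimal sketch of the same table modeled both ways (the table and column names are purely illustrative, not from the article):

```sql
-- Natural key: the business attribute itself identifies the row.
CREATE TABLE Products_Natural
(
    SKU         VARCHAR(20)   NOT NULL PRIMARY KEY,  -- real-world identifier
    ProductName NVARCHAR(100) NOT NULL
);

-- Artificial (surrogate) key: a meaningless generated value identifies the
-- row; the business attribute is enforced separately with a UNIQUE constraint.
CREATE TABLE Products_Artificial
(
    ProductID   INT IDENTITY(1,1) NOT NULL PRIMARY KEY,
    SKU         VARCHAR(20)   NOT NULL UNIQUE,
    ProductName NVARCHAR(100) NOT NULL
);
```

The trade-off at the heart of the debate: the natural-key design carries the meaningful SKU into every referencing table (self-describing, but wider and subject to business change), while the surrogate design joins on a compact, immutable INT at the cost of an extra lookup to reach the business value.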
Just two months ago, we launched the beta version of Qure Analyzer V1.5. We would like to thank the hundreds of users who took part in the beta program, used the beta version, reported bugs, and provided valuable feedback. Today, I'm proud to announce the GA release of Qure Analyzer V1.5. You can download your copy here.
For a full list of features, see the previous blog post or have fun reading the very detailed user guide.
Today I want to share with you a true story that happened to me last week. Although it's a bit off topic, you might find it 'amusing', useful, and a real time saver in case you ever need to do the same thing.
When I bought my desktop PC last year, I purchased an SSD to be used as the system drive (C:\). At the time, the prices of SSD devices were pretty steep, so although I had some doubts, I convinced myself that a 40GB drive should be more than enough to accommodate everything I need. Big mistake #1. As it turned out, the iTunes backups of my iPhone alone, which (for some reason) can be placed only on C:\, took 16GB... Add to that my SQL Server installation, Office, and other essentials. Most of these applications, even when installed to a different drive, still place a lot of stuff on the system drive, so soon I found myself struggling for space and using every trick in the book to free up as much space as possible on C:\ just to be able to work.
I've recently encountered some confusion regarding one of the new components of SQL Server 2012: SQL Server Data Tools, AKA SSDT. From the questions in the MSDN SQL Server forum, which I moderate, I realized that the distinction between SQL Server Management Studio (SSMS) and SSDT is not clear to many. I have to agree that Microsoft did leave some room for confusion, with a lot of duplicated functionality between the two tools, so let's try to clear up a bit of this confusion.
Before SQL Server 2012, the only management and development tool that shipped as part of the SQL Server installation was SSMS. SSMS actually serves two main purposes: management and development. You can manage all your SQL Server instances using graphical tools, wizards, and T-SQL. SSMS also includes the visual schema designers (a separate blog topic...), query designers, scripting features, and many others that are used by both DBAs and developers.
About a year ago, we launched Qure Analyzer V1.0 (previously called Qure Workload Analyzer). Since then, we have had thousands of downloads and received a lot of great feedback from the community. We did not rest... Your feedback and bug reports kept us busy improving, fixing, and enhancing Qure Analyzer to keep its position as the best workload analysis tool available for SQL Server.
Yesterday, at our 118th Israeli SQL Server User Group meeting, Adi Cohn delivered a session on common performance pitfalls. One of the issues he mentioned is something every DBA and database designer should be aware of. Many applications use overly large size limits for their variable data types. The reasoning in many cases is simply "Why not?" If you have a variable-size column and you made the choice to use one of the VAR data types anyway, why limit the size? Even if you expect to have up to 300 characters in the 'comments' column, for example, what is wrong with defining it as 3,000? It's just metadata, right? Wrong...
Although physical storage will be exactly the same whether you define your VARCHAR as 300 or 3,000, the choice can still have a huge impact on performance. SQL Server needs to estimate the memory workspace required for each query and allocate it before the query begins to execute. For variable-size columns, the estimate is that the data size will be half of the maximum size defined for the column. So for a column defined as 300 bytes, the allocation estimate will be 150 bytes; for 3,000 bytes, the estimate will be 1,500. On systems that experience memory pressure, this can make a huge difference in performance, as your queries will wait for memory allocations. In addition, needless memory will be allocated that could serve SQL Server for much more useful purposes.
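As a hypothetical illustration of the point (table and column names are mine, not from the session), consider two tables that are identical except for the declared maximum:

```sql
-- Two otherwise identical tables; only the declared maximum differs.
CREATE TABLE dbo.CommentsTight (ID INT, Comments VARCHAR(300));
CREATE TABLE dbo.CommentsWide  (ID INT, Comments VARCHAR(3000));

-- For memory-grant estimation, SQL Server assumes each variable-size value
-- is half its declared maximum: ~150 bytes per row for the first table,
-- ~1,500 bytes per row for the second - a 10x difference for a sort,
-- even if both columns hold exactly the same data:
SELECT ID, Comments FROM dbo.CommentsTight ORDER BY Comments;
SELECT ID, Comments FROM dbo.CommentsWide  ORDER BY Comments;

-- Compare granted_memory_kb for the two queries while they execute:
-- SELECT * FROM sys.dm_exec_query_memory_grants;
```

On a loaded server, the inflated grant of the second query either wastes memory other queries could use or makes the query wait for a grant it doesn't actually need.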
An interesting discussion came up recently on the MVP private forums regarding index rebuilds, recompilation, and statistics updates. Although the debate spanned too many aspects for me to cover in this short post, I think one important point should be emphasized, as it might be making you do redundant work or, even worse, unknowingly degrade the performance of your workload.
When you rebuild an index, a full scan of all index keys is performed. SQL Server uses this opportunity not only to rebuild the index, but also to update the related statistics. What is unique about this statistics update is that it is equivalent to an update with the full-scan option. The default statistics sampling uses only a small fraction of the keys, in order not to load your production server. Statistics based on a full scan of the data have the potential to be more accurate and therefore provide the query optimizer with better information, allowing it in turn to produce better-performing execution plans for your workload queries. Also remember that a statistics update flushes all plans that use it from the procedure cache, forcing a recompilation of the related queries.
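In T-SQL terms, the first statement below already gives you full-scan statistics on the index, so following it with a default-sampled update, as some maintenance routines do, actually replaces better statistics with worse ones (the table and index names are illustrative):

```sql
-- Rebuilding the index also updates its statistics, equivalent to FULLSCAN:
ALTER INDEX IX_Orders_CustomerID ON dbo.Orders REBUILD;

-- Redundant, and potentially harmful: this re-samples the same statistics
-- with the default (small) sample, overwriting the full-scan version.
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID;

-- If you want an explicit full-scan update without rebuilding the index:
UPDATE STATISTICS dbo.Orders IX_Orders_CustomerID WITH FULLSCAN;
```

The practical takeaway: don't schedule a blanket statistics update right after your index rebuilds; restrict it to statistics the rebuilds didn't already refresh.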
SQL Server 2012 brings with it so many new features and enhancements that it is futile to try to sum them up. Many articles and blog posts have been and will be written on the shiny new HA/DR, T-SQL, columnstore, and batch processing features, and numerous others. In this short post, I want to draw your attention to some less 'shiny' work that will affect your first interaction with SQL Server 2012: the installation process.
We would like to thank everyone who participated in the Qure Analyzer New iPad raffle. We have received a lot of great entries, but unfortunately there can only be two winners.
And the winners are:
The long-expected announcement has finally come...
Partners and customers can already download the evaluation version; GA is expected to begin around April 1st.
A big thank-you to the thousands of SQL Server professionals who have downloaded Qure Analyzer, our free tool for analyzing and comparing trace files. Every day we hear of really cool Qure Analyzer success stories. But we'd love to hear more, and we're giving away a brand new iPad 3 (yes, THREE) in the process!
So tell us how Qure Analyzer has helped you, and you’ll be entered into the March 31st raffle.
This week, Microsoft is hosting the annual MVP Summit in Redmond - http://www.2012mvpsummit.com/. It's an amazing event where MVPs from all over the world and from all award categories join Microsoft staff for in-depth product and technology sessions, meet the program managers, do a lot of networking and, even more so, have FUN!
I've been fortunate to attend two summits in the past, but this year I won't be going. I've been doing too much travelling recently, and I'm coming to the US at the end of March to speak at SQL Connections! I will deliver a session on "Physical Join Operators", and on the way back to Israel, I'll stop in London to deliver a session on "Clusters vs. Heaps" at SQLBits X. Both events are official SQL Server 2012 launch events in the US and UK, respectively.
Are you experiencing performance bottlenecks? Is this query taking forever to complete? Most DBAs would start their resolution attempts with one of SQL Server's most powerful features: indexes. Indexes are such an effective means that a small change can make a world of difference in performance. Most DBAs are also aware that adding indexes will cause some degradation of data modification operations, as more indexes need to be updated to reflect the changes. However, few are aware that sometimes adding indexes can be a double-edged sword, with far-reaching implications for performance, and not always for the best... This article will help you understand a hidden danger of adding more indexes that can result in surprising effects on your workload. It tells the story of a production server that was brought to its knees by the addition of a single, innocent-looking index.
The SQL Server query optimizer is considered magical by many, even the top professionals. It is an extremely complicated piece of software whose intricate paths and logic are hard to perceive. In this article, the first in a series, we will try to tackle some of the query optimizer's finer tricks to remove the magical aura and reveal the science underneath.
Writing a query that joins several tables is easy, right? There are just a few major join types, and their syntax is pretty straightforward and clear. So how can such a simple join be the source of so many headaches? Why does one query with a simple JOIN take forever while another, similar one takes a fraction of a second? If you've ever wondered what goes on under the covers and how SQL Server implements your joins, this article will help you get started.
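As a taste of what "under the covers" means: the same logical join can be executed by any of SQL Server's three physical join operators, and a join hint lets you force each one and compare the resulting plans (the schema below is hypothetical, not from the article):

```sql
-- The same logical INNER JOIN, forced through each physical operator
-- (table names are illustrative):
SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER LOOP JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER MERGE JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;

SELECT o.OrderID, c.CustomerName
FROM dbo.Orders AS o
INNER HASH JOIN dbo.Customers AS c ON c.CustomerID = o.CustomerID;
```

Which operator the optimizer picks on its own depends on row counts, available indexes, and sort order, which is exactly why two "similar" joins can perform so differently. (Join hints like these are a diagnostic tool; forcing them in production code is rarely a good idea.)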
If you’ve ever used SQL Trace, STATISTICS IO or any other SQL Server monitoring, you should know what logical reads mean. Or do you? Logical reads server as one of the primary counters for evaluating a Query’s performance. The rule of thumb says “The lower the number of reads, the better the query performs”. I’m sure you’ve heard it before. But… did you notice that in some cases the exact opposite is true? Did you ever see a query improve dramatically in terms of duration and CPU but have it’s logical reads sky-rocket? If you want to understand the TRUE essence of logical reads and be able to tell when it is really a good indicator of performance, I highly recommend you to read this article.
RC stands for Release Candidate; it's a much more stable and 'close to final' version than the previous CTP releases.
Go get your copy from http://www.microsoft.com/download/en/details.aspx?id=28145
This is soooo coooool...
In a previous post, I presented the enhancements to failover cluster instances. In the meantime, Microsoft announced some radical changes to its licensing and edition scheme. In SQL Server 2012, the Standard and BI editions will support basic failover clustering of up to two nodes.
AlwaysOn Availability Groups