From time to time these days I have to put my DBA hat back on and review database schemas designed by those whose focus is on application development. What I often find is that attempts to improve performance through indexing have been in vain: the query optimizer either doesn’t use the indexes enough to merit their creation, or they have been created on the wrong columns.
Frequently, optimisation is referred to as a dark art, but here I am going to attempt to give some basic pointers to help enlighten the novice database designer…
Clustered Indexes:
Clustered indexes are structured around the data pages of the table they are built on (or, more accurately, the data pages are organised according to the sort order of the index). Moreover, the table’s data pages are actually the leaf-level nodes of the index itself, with the root and intermediate nodes containing pointers either to the index node below or to the data pages of the underlying table. For this reason, when you build a clustered index, the data pages are reordered into the order of the clustered index column(s). With this in mind, here are several points about their use:
- If a table is going to receive frequent inserts, it is worth considering as candidates columns that increment in the same order as the inserts take place. This means that new rows can be appended to the last data page, rather than inserted in the middle, thereby reducing the page splits and fragmentation that can otherwise occur. Often this will be an identity column, but not always: there may be a business key that fits this requirement and is also part of frequent range-scanning queries. An example of this might be the date key in a calendar dimension.
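As a minimal sketch of this point, assuming a hypothetical calendar dimension `dbo.DimDate` whose integer `DateKey` ascends with each load:

```sql
-- Hypothetical calendar dimension; table and column names are illustrative.
CREATE TABLE dbo.DimDate
(
    DateKey      INT     NOT NULL,  -- e.g. 20240131, ascends as rows are loaded
    CalendarDate DATE    NOT NULL,
    MonthNumber  TINYINT NOT NULL
);

-- Cluster on the ascending business key: inserts append to the last page,
-- and range queries such as "WHERE DateKey BETWEEN @from AND @to" read
-- contiguous data pages.
CREATE CLUSTERED INDEX CIX_DimDate_DateKey
    ON dbo.DimDate (DateKey);
```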
- Don’t create them on columns whose size is potentially large. This includes GUID (UNIQUEIDENTIFIER), (N)VARCHAR and (N)CHAR datatypes. Clearly, the larger the key, the bigger the index and its maintenance overhead; and because the clustered key is carried in every non-clustered index as the row locator, a wide key inflates those indexes too.
- Although by default a clustered index is created on the primary key, sometimes you are better off dropping this index and creating it on other columns instead. An example of this would be where an application often queries data in a certain sort order or from a certain range; in this instance, choosing the sort or range key may be beneficial.
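To illustrate, a sketch using a hypothetical order table, where the surrogate key stays the primary key but is declared non-clustered so the clustered index can go on the range column instead:

```sql
-- Illustrative table: keep the primary key, but make it NONCLUSTERED.
CREATE TABLE dbo.Orders
(
    OrderID   INT IDENTITY(1, 1) NOT NULL,
    OrderDate DATETIME           NOT NULL,
    Total     MONEY              NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY NONCLUSTERED (OrderID)
);

-- Queries like "WHERE OrderDate BETWEEN @from AND @to" can now do an
-- ordered range scan over contiguous data pages.
CREATE CLUSTERED INDEX CIX_Orders_OrderDate
    ON dbo.Orders (OrderDate);
```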
- There is an overhead in maintaining any index. If the column you choose is updated regularly, then there will be an overhead while the index data pages (and therefore the table) are reordered and the dependent non-clustered indexes’ row locators are updated to reflect the new position of the data.
- Consider creating a clustered index on a view where aggregations and joins are part of the definition. This can be an excellent tool for query-intensive OLAP operations, as the result set is materialised in the index. Remember, though, that maintaining this type of index is expensive, so it wouldn’t be suitable where the underlying data changes frequently.
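A sketch of an indexed view, assuming hypothetical `dbo.Sales` and `dbo.Product` tables. Note that SQL Server requires SCHEMABINDING, two-part object names and a COUNT_BIG(*) column before a clustered index can be built on an aggregating view:

```sql
CREATE VIEW dbo.vSalesByProduct
WITH SCHEMABINDING
AS
SELECT  p.ProductID,
        SUM(s.Quantity) AS TotalQuantity,
        COUNT_BIG(*)    AS RowCnt        -- required for indexed views with GROUP BY
FROM    dbo.Sales   AS s
JOIN    dbo.Product AS p ON p.ProductID = s.ProductID
GROUP BY p.ProductID;
GO

-- Materialise the aggregated result set; the first index on a view
-- must be unique and clustered.
CREATE UNIQUE CLUSTERED INDEX CIX_vSalesByProduct
    ON dbo.vSalesByProduct (ProductID);
```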
Non Clustered Indexes:
Non-clustered indexes are structured as B-trees in the same way as a clustered index. However, the data rows are not sorted according to the index, and the leaf level holds index pages rather than the data pages of a clustered index. Finally, the leaf level holds row locators, which will be either the clustered index key or, in the case of a heap, a pointer (RID) to the row.
- Foreign keys should all have an index to improve join performance to the table they depend on. (This is especially true in a star schema, where queries will often aggregate measures in the fact table by attributes in a dimension.)
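For example, on a hypothetical star-schema fact table, indexing each foreign key lets joins to the dimensions seek rather than scan:

```sql
-- Illustrative fact table; one non-clustered index per foreign key.
CREATE NONCLUSTERED INDEX IX_FactSales_DateKey
    ON dbo.FactSales (DateKey);

CREATE NONCLUSTERED INDEX IX_FactSales_ProductKey
    ON dbo.FactSales (ProductKey);
```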
- Indexes can be considered where a column has a good selectivity ratio (say around 20% or higher). That’s to say, the number of distinct values is high compared to the total number of rows in the table. Using a 1m-row table as an example, two columns that demonstrate opposite ends of the selectivity spectrum would be:
- Identity columns – highly selective: there are as many distinct values as there are rows in the table (1m distinct values / 1m rows in table * 100 = 100%).
- BIT columns – unselective: there are only two (maybe three, including NULL) possible values (3 distinct values / 1m rows * 100 = 0.0003%).
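The selectivity ratio described above can be measured directly before you commit to an index; the table and column names here are placeholders:

```sql
-- Percentage of distinct values: near 100% is highly selective,
-- near 0% is a poor candidate for a non-clustered index.
SELECT  COUNT(DISTINCT SomeColumn) * 100.0 / COUNT(*) AS SelectivityPct
FROM    dbo.SomeTable;
```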
- Database maintenance mustn’t be overlooked. Rebuild your indexes and update your statistics on a regular basis to ensure that index fragmentation is kept under control and that the optimiser is choosing the best possible query plan.
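The maintenance tasks above boil down to two commands; the object name is illustrative:

```sql
-- Rebuild all indexes on the table to remove fragmentation.
ALTER INDEX ALL ON dbo.FactSales REBUILD;

-- Refresh the statistics the optimiser uses to estimate row counts.
UPDATE STATISTICS dbo.FactSales WITH FULLSCAN;
```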
- Remember that for indexes to be effective, they need to be used! If they aren’t being used, then either your statistics aren’t up to date, or the optimiser simply doesn’t see a benefit; remove the index or refactor the query so that it does use it. Index usage can be monitored on your production environment with a trace, and in SQL 2005 the dynamic management view sys.dm_db_index_usage_stats also contains the relevant statistics.
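As a sketch of that monitoring on SQL 2005 or later, this query lists usage counts per index in the current database; indexes showing no seeks, scans or lookups since the last service restart are candidates for removal:

```sql
SELECT  OBJECT_NAME(s.object_id) AS TableName,
        i.name                   AS IndexName,
        s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM    sys.dm_db_index_usage_stats AS s
JOIN    sys.indexes AS i
        ON  i.object_id = s.object_id
        AND i.index_id  = s.index_id
WHERE   s.database_id = DB_ID();   -- restrict to the current database
```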
- Even following the above rules doesn’t preclude testing your work. Run SET STATISTICS IO ON and SET STATISTICS TIME ON before running your queries in development, so that you can see the work performed by different permutations of your index/query. Also, display the query plans to find out what operations the optimiser has performed to retrieve your results.
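A typical test session in development might look like this, using an illustrative table; compare the logical reads and CPU time reported in the Messages tab before and after an index change:

```sql
SET STATISTICS IO ON;
SET STATISTICS TIME ON;

SELECT OrderDate, Total
FROM   dbo.Orders                   -- illustrative table
WHERE  OrderDate >= '20240101';
-- Fewer logical reads after adding or changing an index is a good sign
-- that the optimiser is actually using it.
```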
These pointers will give you a good start when designing the indexing strategy for your database. However, there is plenty of in-depth information on MSDN, in Books Online, and across a whole host of database forums and blogs.
Hardware upgrades aside, I believe that a good indexing strategy is the source of the greatest gain in database performance, so don’t overlook it 😉