Resolving database performance issues

Part 1 of Database Performance Improvements looked at how to identify the symptoms of poor database performance and what to look for when troubleshooting these problems. This part looks at each item in detail and offers some suggestions for resolving these issues.

Suggestions for Improving Performance

1. A database server should never be overloaded. I have seen database servers that are used as email servers and file servers in addition to hosting the database. These processor- and memory-intensive applications take resources away from the database and could be playing a large part in your performance issues. I would suggest that all non-database applications be moved off the database server, in a planned manner of course.

2. Every system has its peaks and troughs in terms of usage. At its peak, there will be many users in the system doing a variety of functions, and this is when the performance of the database is crucial. I have seen ERP implementations where production planners run the Master Production Scheduling (MPS) and Material Requirements Planning (MRP) modules several times a day. For those not familiar, MPS and MRP can be very resource-intensive modules, as they work with large datasets and churn through CPU like there is no tomorrow. For most organisations, these modules can be run once a day, either at day-start or day-end. Multiple runs through the day are not only overkill, but also affect all other users of the system.

My suggestion is to look for similar resource-intensive jobs and schedule them at appropriate times during the day, such as overnight or during lunch breaks.
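To make this concrete, here is a minimal sketch of scheduling such a job with SQL Server Agent (DB2 has its own administrative task scheduler for the same purpose). The job, schedule, database and procedure names, including dbo.usp_RunMRP, are hypothetical placeholders rather than part of any particular ERP product.

```sql
-- Hypothetical nightly schedule for a resource-intensive MRP run.
-- All names below (job, schedule, database, stored procedure) are placeholders.
USE msdb;

EXEC dbo.sp_add_job         @job_name = N'Nightly MRP run';

EXEC dbo.sp_add_jobstep     @job_name      = N'Nightly MRP run',
                            @step_name     = N'Run MRP',
                            @subsystem     = N'TSQL',
                            @command       = N'EXEC dbo.usp_RunMRP;',
                            @database_name = N'ERP';

EXEC dbo.sp_add_schedule    @schedule_name     = N'Daily at 2am',
                            @freq_type         = 4,      -- daily
                            @freq_interval     = 1,      -- every day
                            @active_start_time = 20000;  -- 02:00:00

EXEC dbo.sp_attach_schedule @job_name      = N'Nightly MRP run',
                            @schedule_name = N'Daily at 2am';

EXEC dbo.sp_add_jobserver   @job_name = N'Nightly MRP run';
```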

3. Incorrect data types are an issue because the database server and/or the application has to convert these values to the appropriate data type each time they are used. The problem is not finding them, but rectifying them. Making changes to the database may seem easy in this case, but doing so will most likely break the software application that uses it. This is one problem that needs to be resolved by working closely with the software developers.
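As a small, hypothetical illustration (the dbo.Orders table and OrderDate column are made up for this example), a date stored in a character column forces a conversion on every comparison, and the proper fix is to change the column's type once the developers confirm the application can handle it:

```sql
-- Hypothetical example: OrderDate was created as VARCHAR, so every query
-- has to convert it before comparing, which also prevents efficient index use.
SELECT OrderID
FROM   dbo.Orders
WHERE  CONVERT(date, OrderDate) >= '2010-01-01';

-- Once the developers confirm the application will cope, the column itself
-- can be converted to the correct type.
ALTER TABLE dbo.Orders ALTER COLUMN OrderDate date NOT NULL;
```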

4. Transaction tables such as inventory transactions can easily grow into millions of records over the years. While useful to have, most transactions would never be used for anything except in exceptional circumstances. Having these records sitting in the database tables only makes the database work harder to give you relevant information.

I would suggest that data, especially transaction data older than a certain date, be archived into a separate non-production database (a sketch of what this can look like follows at the end of this point). If for any reason the archived data is required, it's a simple matter of making it available to the right people in the relevant format.

What I also suggest is the creation of a Data Archival Strategy, which should highlight the things that need to be taken into consideration, such as current data retention requirements and legislation, and describe what needs to be archived and when, how to retrieve the archived data, and who is responsible for the process. I'll consider writing on this topic if there is sufficient interest from readers.
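For the archiving step mentioned above, here is a minimal sketch; the database, table and column names and the cut-off date are all hypothetical, and in practice you would move the rows in smaller batches to keep locking and log growth under control:

```sql
-- Hypothetical archive run: copy old transactions to the archive database,
-- then remove them from production within the same transaction.
BEGIN TRANSACTION;

INSERT INTO ArchiveDB.dbo.InventoryTransactions
SELECT *
FROM   ProductionDB.dbo.InventoryTransactions
WHERE  TransactionDate < '2008-01-01';

DELETE FROM ProductionDB.dbo.InventoryTransactions
WHERE  TransactionDate < '2008-01-01';

COMMIT TRANSACTION;
```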

5. The best way to see whether queries are run over non-indexed fields is to analyse individual queries. Microsoft SQL Server and IBM DB2 provide tools such as Query Analyser, debugging tools, client statistics and so on to facilitate this. If you find that queries over non-indexed fields are being run on a regular basis across large datasets, it would be worth creating additional indexes over those fields. These new indexes will help speed up the queries and use fewer resources.
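For example (the table and column names are made up), if a report regularly filters a large transaction table on columns that no existing index covers, an index over those columns is often all that is needed:

```sql
-- Hypothetical: a daily report filters millions of rows by item and date,
-- but no index covers these two columns.
CREATE INDEX IX_InventoryTransactions_Item_Date
    ON dbo.InventoryTransactions (ItemCode, TransactionDate);
```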

6. Having appropriate indexes will also help optimise queries that run across large volumes of data. Another alternative is to use views, which present only the selected pieces of information relevant to each query. This, in effect, reduces the need for the database to sift through masses of data for every query.
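A small sketch of the idea (the view, table and column names are illustrative): a view exposes only the open orders and the few columns the application needs, so report queries are written against a narrower slice of the data:

```sql
-- Hypothetical view exposing only open orders and the columns reports need.
CREATE VIEW dbo.vw_OpenSalesOrders
AS
SELECT OrderID, CustomerCode, OrderDate, TotalValue
FROM   dbo.SalesOrders
WHERE  Status = 'OPEN';
GO

-- Reports then query the view rather than the full table.
SELECT OrderID, TotalValue
FROM   dbo.vw_OpenSalesOrders
WHERE  OrderDate >= '2010-01-01';
```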

7. Reviewing and optimising which columns are returned by queries can massively improve your database's performance. For example, imagine a table has 100 columns, but most functions that use the table only make use of 20 of them. The remaining 80 columns are processed by the database and transmitted over the network for no reason at all. This may seem insignificant when there aren't many records, but as tables grow it can become a huge drain on resources.

My suggestion here is to review your queries that run against large database tables and see which columns can be dropped from these queries. Beware though, changing any functions or stored procedures could affect your software applications, so again work closely with the software developers before making any changes.
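A simple before-and-after sketch (the table and column names are hypothetical):

```sql
-- Before: every column in the table is read and sent across the network.
SELECT *
FROM   dbo.Customers;

-- After: only the columns the screen or report actually uses.
SELECT CustomerCode, CustomerName, CreditLimit
FROM   dbo.Customers;
```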

8. Custom reports are great, especially if users have the power to create their own. The downside is that these reports may not use the most optimised queries or SQL. Don't get me wrong, some tools are very smart about how they generate SQL, but not every tool is that good. Look at these user-generated reports and see if you can optimise the queries used behind the scenes. You would be amazed at how people find complicated ways to do simple things.
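As an example of the sort of rewrite involved (the tables and columns are hypothetical), report tools sometimes generate one correlated subquery per row where a single join and aggregate would do:

```sql
-- As sometimes generated by report tools: one subquery per customer row.
SELECT c.CustomerCode,
       (SELECT SUM(o.TotalValue)
        FROM   dbo.SalesOrders o
        WHERE  o.CustomerCode = c.CustomerCode) AS TotalSales
FROM   dbo.Customers c;

-- The same result as a single join and aggregate, which the database
-- can usually process more efficiently.
SELECT    c.CustomerCode, SUM(o.TotalValue) AS TotalSales
FROM      dbo.Customers c
LEFT JOIN dbo.SalesOrders o ON o.CustomerCode = c.CustomerCode
GROUP BY  c.CustomerCode;
```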

9. While I love using triggers, they can easily be overused. If several triggers fire when transaction records are updated, the application is in effect paused until all the processing within those triggers is completed. My suggestion in situations like this is to consolidate your triggers. This ensures that only one or two triggers fire for each update, which in turn takes less time to complete.

The other thing to watch out for is triggers that cause more triggers to fire. For example, a trigger fires and is programmed to update some other tables. If those tables have triggers of their own, they fire as well, causing the whole application to freeze while they complete. Again, triggers may need to be redesigned so that they don't end up being problems!
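Here is a rough sketch of what consolidation can look like (the trigger, tables and columns are all hypothetical): the work of two separate AFTER UPDATE triggers is folded into one, so each update fires a single trigger rather than a chain of them:

```sql
-- Hypothetical consolidated trigger replacing two smaller ones.
CREATE TRIGGER trg_InventoryTransactions_AfterUpdate
ON dbo.InventoryTransactions
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- Task 1 (previously its own trigger): flag affected items for a stock recalculation.
    UPDATE s
    SET    s.NeedsRecalc = 1
    FROM   dbo.StockSummary s
    JOIN   inserted i ON i.ItemCode = s.ItemCode;

    -- Task 2 (previously another trigger): record the change for auditing.
    INSERT INTO dbo.AuditLog (TableName, KeyValue, ChangedAt)
    SELECT 'InventoryTransactions', i.TransactionID, GETDATE()
    FROM   inserted i;
END;
```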

10. Having said that creating indexes will help process queries faster, I am now saying that too many indexes will slow down your database. Each time a record is modified (added, updated or deleted), all indexes linked to that table also get updated. This happens so that the database can continue to find records quickly through the use of the indexes. The flip-side is that the more indexes there are to update, the slower the record update becomes.

So is there an optimal number of indexes for any database table? The simple answer is no (unless someone else has worked out a formula for this). Some good judgement is required to strike a balance between faster queries and slower updates while keeping performance acceptable. If you suspect there could be too many indexes, see if they can be consolidated. I have seen installations with redundant indexes; for example, one index over columns A, B & C and another over columns A, B, C, D & E of the same table. Once more, before any changes are made to the indexing, check with the software developers for potential problems in making such changes.
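To make the redundant-index example concrete (the index and table names are made up): an index on (A, B, C) is a left-most prefix of one on (A, B, C, D, E), so queries that used the narrower index can normally be served by the wider one:

```sql
-- Two indexes on the same table, where the first is a prefix of the second:
--   IX_Orders_ABC   ON dbo.Orders (A, B, C)
--   IX_Orders_ABCDE ON dbo.Orders (A, B, C, D, E)

-- The wider index can satisfy the same queries, so the narrower one is
-- usually safe to remove once the developers confirm nothing relies on it directly.
DROP INDEX IX_Orders_ABC ON dbo.Orders;
```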

11. This is more of a change management issue, but it does have an impact on databases. Each time any database object, be it an index, trigger or something else, is updated, it takes database resources to complete the task. Changes like this should be scheduled for a time when the database is either not in use or likely to be under-utilised. While consuming valuable database resources is one issue, what happens if the new index or trigger has a problem and needs to be reversed? In some cases it is difficult to undo a database change while the database is in use, which means having to kick all the users off the system to fix the problem.

Good Change Management practices will help prevent this type of problem, and a good database catalog will help prevent issues like those associated with too many indexes (point 10) and triggers (point 9).

