
How to Speed Up MySQL Database Performance


MySQL Database Performance Diagnostics - Where to Start?

In most web applications, the biggest bottleneck you are going to have is the database. If you want good performance and room to scale, you need to know how to optimize it. Even before you worry about caching or replication, your queries need to be lightning fast; caching and replication help, but they do nothing for the very first query. So how do you fix your queries? First, find out where your problem areas are; second, figure out how to optimize each problem.

Find Out Where Your Problem Areas Are

The first thing you'll want to do is to get the Percona toolkit installed on your system (docs are here). This doesn't have to be the same computer that your database is on as long as you can get access to the slow query log. I've installed the tools on an EC2 instance of the Amazon Linux AMI using the following commands:

[get the tools]

> wget percona.com/get/percona-toolkit.rpm

[install it]
> rpm -i percona-toolkit.rpm

Turn On Your Slow Query Log

Why use the slow query log when the general query log is already available? The slow query log records additional information (execution time, lock time, rows examined) that never appears in the general query log, and the toolkit needs that detail to analyze your queries. By setting the slow query threshold to zero, we effectively log every query in the system. Make sure you have a file location specified for the slow query log in your configuration file.

[/etc/my.cnf]
 
log_slow_queries=/var/log/mysqld_slow.log
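Note that log_slow_queries is the pre-MySQL 5.6 option name. If you are running a newer server (an assumption about your setup), the equivalent settings look like this:

[/etc/my.cnf on MySQL 5.6 and later]

slow_query_log=1
slow_query_log_file=/var/log/mysqld_slow.log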

To enable slow queries, execute the commands below:

[log into mysql]
 
mysql> SET GLOBAL slow_query_log=ON;
mysql> SET GLOBAL long_query_time=0;
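If you want to confirm the settings took effect before generating load, a quick check with standard MySQL commands works:

[verify the slow query log configuration]

mysql> SHOW VARIABLES LIKE 'slow_query_log%';
mysql> SHOW VARIABLES LIKE 'long_query_time';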

Now load up your application and have it run all the queries you want to test. Start with the slowest responding actions, but make sure to hit all the different aspects of your system that you want to check out. In my /etc/my.cnf file, I have the slow query log pointing to /var/log/mysqld_slow.log.

Run the Analysis Tool - pt-query-digest

Run the Percona query analyzer, pt-query-digest, against the slow query log; it will summarize and rank the slowest queries, the ones that will benefit most from performance tuning and optimization (e.g., adding indexes, caching, or changes to your application code).
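A typical invocation (assuming the log path configured above) reads the slow query log and writes the report to a text file you can review later:

[run the digest and save the report]

> pt-query-digest /var/log/mysqld_slow.log > slow_sql.txt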

> cat slow_sql.txt

# 1.1s user time, 110ms system time, 28.59M rss, 225.45M vsz
# Current date: Thu Aug 29 17:12:17 2013
# Hostname: domU-12-31-39-10-2D-3C
# Files: /var/log/mysqld_slow.log
# Overall: 3.63k total, 21 unique, 106.82 QPS, 0.08x concurrency _________
# Time range: 2013-08-29 17:08:09 to 17:08:43
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time             3s    11us      3s   740us    49us    42ms    40us
# Lock time            2ms       0   121us       0       0     4us       0
# Rows sent            464       0     412    0.13       0    6.65       0
# Rows examine         466       0     412    0.13       0    6.65       0
# Query size       899.83k      14     490  253.70  299.03   71.92  246.02

# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ===============
#    1 0x5CBA2034458B5BC9 2.5270  93.9%     1 2.5270  0.00 SHOW DATABASES
#    2 0xB57AE8CACA7E9C87 0.1341   5.0%  3195 0.0000  0.00 SELECT testplan_iov_item_result testplan_iov_item
# MISC 0xMISC             0.0296   1.1%   436 0.0001  0.0  <19 ITEMS>

# Query 1: 0 QPS, 0x concurrency, ID 0x5CBA2034458B5BC9 at byte 0 ________
# This item is included in the report because it matches --limit.
....

Examine the Results

Run EXPLAIN on each of the slow queries to see how the database will execute them.

mysql> EXPLAIN SELECT * FROM testplan_iov_item_result iovr
INNER JOIN testplan_iov_item iov
ON ( iovr.iovr_item_num = iov.iov_id )
WHERE iovr_test_ref = 1 AND
iov.iov_item_num > 0;
 
id  select_type  table  type    possible_keys       key      key_len  ref                                 rows  Extra
 1  SIMPLE       iovr   ref     key_unique,index_1  index_1  4        const                                876
 1  SIMPLE       iov    eq_ref  PRIMARY             PRIMARY  4        cxw_project_372.iovr.iovr_item_num     1  Using where

Pay attention to the last column (it is labeled 'Extra'). It tells you whether you are using indexes or hitting the dreaded filesort. If you see 'Using filesort', that query is going to be slow, so fix it first: a filesort means the database has to make an extra sorting pass after the rows are retrieved, which really hurts when there are a large number of results. For the query above, two tables are involved (one per row of EXPLAIN output) and indexes are available for both. Since the Extra column does not say 'Using index', the data has to be read from the table rows rather than served entirely from the index. You want this column to show 'Using index'; when it does, you get optimal speed. Why doesn't it here? Because I am using SELECT *, the requested columns are not all contained in the index, so the table rows have to be read as well. If I change the query to SELECT iovr.iovr_id, the Extra column would show that the whole query is answered from the indexes, and it will be much faster.
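For example, you can rerun EXPLAIN with the narrower select (same tables as above) to see whether the query has become index-only; whether Extra actually reports 'Using index' depends on which columns your indexes cover:

mysql> EXPLAIN SELECT iovr.iovr_id FROM testplan_iov_item_result iovr
       INNER JOIN testplan_iov_item iov
       ON ( iovr.iovr_item_num = iov.iov_id )
       WHERE iovr_test_ref = 1 AND
       iov.iov_item_num > 0;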

Learn About Indexes and Add Them Where You Need Them

Know what indexes you need by looking at the type of queries you are running and running EXPLAIN on them. Those guys at Percona really know what they are talking about. There is a great presentation, "How to Design Indexes, Really" by Bill Karwin, that explains the where and how of using indexes. As he says, you design your database from the model, but you design your indexes from your queries. Watch his presentation and you will have a much better understanding of what to do. The Percona toolkit identified the horrible-looking query of mine shown below. Simply by adding indexes on the columns used for comparison (the columns in the ON clauses of the joins, the GROUP BY column, and especially the ORDER BY columns), I was able to speed up the query by 300% to 400% without any changes to application logic; a sketch of the corresponding index statements follows the query. The best thing about indexes is that you get their benefits without making any changes to the backend programming.

SELECT  u.*, c.Name AS auth_company_db,
        c.org_init AS auth_company_initial_db,
        GROUP_CONCAT(t.feature_name ORDER BY t.feature_name SEPARATOR ', ') AS auth_class_list
        FROM authorized_users AS u
        LEFT JOIN client AS c ON (c.ClientCode = u.auth_company_id)
        LEFT JOIN feature_sel_table AS s ON (s.feature_ref1 = u.num AND s.feature_ref2 = 0
             AND s.feature_type = 'feature_type_user_class')
        LEFT JOIN feature_type_table AS t ON (s.feature_id = t.feature_id)
        GROUP BY u.num
        ORDER BY last_name, first_name
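Here is a minimal sketch of those index additions, assuming the table and column names from the query above (and that last_name and first_name live on authorized_users); the index names are hypothetical and the exact column order should match how your queries filter and sort:

[hypothetical index names; adjust to your schema]

mysql> ALTER TABLE client ADD INDEX idx_client_code (ClientCode);
mysql> ALTER TABLE feature_sel_table ADD INDEX idx_feature_sel (feature_ref1, feature_ref2, feature_type);
mysql> ALTER TABLE feature_type_table ADD INDEX idx_feature_id (feature_id);
mysql> ALTER TABLE authorized_users ADD INDEX idx_name_sort (last_name, first_name);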

Change Your Application Code

Maybe you are generating a list of objects and you don't need the entire list at once. Look for areas of your application where you don't need to return the entire row set at the time of the query. When you just want to check whether something exists, you don't need to return all the rows at all. Changing a SELECT * to SELECT ID can make all the difference in the world in some applications.
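A minimal sketch of the existence-check idea, using a hypothetical orders table and customer_id column:

[pulls every matching row just to learn whether the customer has orders]
mysql> SELECT * FROM orders WHERE customer_id = 42;

[asks only the question you care about: one row, one column]
mysql> SELECT id FROM orders WHERE customer_id = 42 LIMIT 1;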

Add a Query Cache

This simple technique can really speed up what your users see, but it does nothing to help the first query. Enable the query cache in your /etc/my.cnf file and restart your MySQL server instance. I listed this last because the query cache is not the be-all savior of performance. Although it is easy to turn on, it does nothing to fix your individual queries. Figure out why they are slow first; the query cache is just a bonus.

[mysqld]
query-cache-type = 1
query-cache-size = 10M
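After the restart, you can verify the cache is enabled and watch whether it is actually being hit (these are standard MySQL variable and status names on versions that ship the query cache):

mysql> SHOW VARIABLES LIKE 'query_cache%';
mysql> SHOW STATUS LIKE 'Qcache%';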

Still not Fast Enough?

It's all about the caching. Your fastest response is the one you can serve without going to the database at all. Use an application cache like memcached to answer the request before it ever reaches your database. The downside is that you will need to manage cache lifetimes and decide how to handle invalidating and updating cached entries. You can also write to both memcached and the database on every update so that you don't have to worry about invalidation. For searching, now that MyISAM and InnoDB (as of MySQL 5.6) both support fulltext indexes, consider that option to speed up string searches. If that is not an option, an external alternative that works with most databases is the Sphinx search server.
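A minimal fulltext sketch, using a hypothetical articles table with title and body columns (for InnoDB this requires MySQL 5.6 or later):

[add the fulltext index, then search with MATCH ... AGAINST]
mysql> ALTER TABLE articles ADD FULLTEXT INDEX ft_title_body (title, body);
mysql> SELECT id, title FROM articles
       WHERE MATCH(title, body) AGAINST('slow query log' IN NATURAL LANGUAGE MODE);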

Related Articles

I have other posts related to tuning and speeding up web applications: PHP Performance Profiling, and MySQL Profiling in PHP, which describes a custom diagnostic for finding your slowest queries. The Percona toolkit usage covered here goes far beyond what I did in those original posts.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)