This was originally posted on the Keyhole Software blog on 11/25/2013.
This is the second blog in a two-part series on scaling PHP applications. The first blog in the series focused on replacing Apache+mod_php with Nginx+PHP-FPM. This blog will go into advanced topics that need consideration when moving a LAMP stack to a scaled architecture.
Step 3 – Use Percona XtraDB Cluster for Database
I’ll be honest here: this step is going to be much harder than the first two. There are still no code changes, but there are configurations and ways of thinking that will seem strange and hard to set up at first. Once you get over that hurdle, I promise you’ll never want to go back.
Let’s assume that we have a box called Box 1 with our application code and the MySQL database. If we want to add another box, Box 2, the initial approach would be to put both the application code and MySQL database on it. This is perfectly fine for reads, but for writes we have an issue. If Box 1’s application code runs an insert/update/delete query on its database, how does that query also run on Box 2? A bad approach would be to change the database interface class to run those queries on multiple servers. We need some way to do query replication across multiple servers.
MySQL has a way to set up Master-Slave replication so all queries run on the Master are also run on the Slave. This means that Box 2’s selects can run on its own database, but insert/update/delete queries need to point to Box 1. Again, this is a bad approach that requires code change. It also keeps us tied to a single point of failure.
The next iteration would be to try Master-Master replication. Queries run on one server propagate to the others behind the scenes. This is ultimately the ideal solution: we get rid of the single point of failure, and can talk to a database on the same box as the application code. Unfortunately, MySQL doesn’t have very good ways of implementing this replication.
Enter Percona XtraDB Cluster. Percona is a company focused on improving MySQL. They regularly branch the public MySQL code and enhance it to make it faster and better for scaling. Their solution to MySQL’s lack of effective Master-Master replication is the XtraDB Cluster. When you set up a cluster of boxes running the service, a query run on the local node blocks while it is sent to the others; the block is lifted after the other nodes successfully apply that query.
Percona XtraDB Cluster is cool. Seriously cool. Because it is a branch of MySQL, connecting to it and executing queries looks exactly like stock MySQL to the application code. All Percona MySQL branches are designed to be faster, which is icing on the cake. It also seamlessly handles auto incrementing IDs as nodes come up and down, so there will never be conflicts with IDs generated on the local node. The XtraBackup tool allows you to create backups nearly on the fly.
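To show what I mean, here is a minimal sketch of application code talking to the local cluster node through PDO; the host, credentials, schema, and table are placeholders for your own, not anything Percona requires. It is indistinguishable from code written against stock MySQL.

```php
<?php
// Host, credentials, and database name below are placeholders.
// Because the cluster speaks the MySQL protocol, this is exactly the
// same code you would write against stock MySQL.
$pdo = new PDO(
    'mysql:host=127.0.0.1;dbname=app;charset=utf8',
    'app_user',
    'app_password',
    [PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION]
);

// This write blocks until the other nodes have accepted it, then the
// data is visible cluster-wide.
$stmt = $pdo->prepare('INSERT INTO users (name) VALUES (?)');
$stmt->execute(['zach']);
```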
There are a few considerations to be made when using XtraDB Cluster. MyISAM tables are not replicated across the cluster; they need to be converted to InnoDB. That conversion could be an issue if your composite primary keys are out of order (the auto increment field is not first). I’ve read they plan to add MyISAM support in the future, but you really shouldn’t be using MyISAM in a production environment anyway: it uses table-level locking on insert/update/delete, whereas InnoDB locks at the row level.
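Converting is a one-time operation. Below is a hedged sketch of a script that finds every MyISAM table in a schema and rebuilds it as InnoDB; the connection details and the schema name ‘app’ are assumptions for illustration.

```php
<?php
// One-time conversion script. Connection details and the schema name
// ('app') are assumptions; adjust for your environment.
$pdo = new PDO('mysql:host=127.0.0.1', 'root', 'secret');

$tables = $pdo->query(
    "SELECT TABLE_NAME FROM information_schema.TABLES
     WHERE TABLE_SCHEMA = 'app' AND ENGINE = 'MyISAM'"
)->fetchAll(PDO::FETCH_COLUMN);

foreach ($tables as $table) {
    // This will fail on tables whose auto increment column is not the
    // first column of a composite key -- fix those by hand first.
    $pdo->exec("ALTER TABLE `app`.`{$table}` ENGINE=InnoDB");
}
```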
The next consideration is the number of nodes you’ll have. For a query to be successfully executed, a quorum (more than half) of the nodes must be able to execute it. A quorum can’t be achieved with one or two nodes, so three is the minimum, and it’s recommended to use an odd number of nodes, three or greater. The good news is that nodes going offline and coming back online is handled nearly seamlessly. The one catch: if the node that started the cluster goes down, its configuration file has to be changed slightly so that on restart it joins the existing cluster rather than creating a new one.
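For illustration, here is a rough sketch of the relevant my.cnf settings; the IPs, library path, and cluster name are placeholders and may differ in your install.

```ini
[mysqld]
wsrep_provider=/usr/lib/libgalera_smm.so
wsrep_cluster_name=my_app_cluster
# On the very first node, bootstrap the cluster with an empty address:
#   wsrep_cluster_address=gcomm://
# On every node afterwards (including that first node, once the cluster
# is running), list the peers instead, so a restart joins the existing
# cluster rather than creating a new one:
wsrep_cluster_address=gcomm://192.168.0.101,192.168.0.102,192.168.0.103
```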
The last consideration may not apply to all setups. I had some issues using prepared statements with mysqlnd as my connector: queries that had prepared variables inside a CONCAT() would silently fail when XtraDB Cluster was running as a single instance. I submitted it as a bug, but no one was able to reproduce it.
Step 4 – Change Session Storage
PHP’s sessions are a life saver. I was using a cookie-based approach before I realized PHP already supported saving user-specific data. PHP sessions work by checking a cookie (PHPSESSID by default). If that cookie is not set, PHP generates a unique one; once set, the browser sends it to the server on all subsequent requests, so the user can be uniquely identified. Session-specific data can be put into the $_SESSION variable.
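If you’ve never used them, the entire mechanism is just a few lines; this counter example is about as minimal as it gets:

```php
<?php
// session_start() reads the PHPSESSID cookie, or generates one on the
// user's first request. $_SESSION then holds that user's data.
session_start();

if (!isset($_SESSION['visits'])) {
    $_SESSION['visits'] = 0;
}
$_SESSION['visits']++;

echo "You have visited this page {$_SESSION['visits']} time(s).";
```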
The default mechanism PHP uses to save the session data is through the file system. This works great for the LAMP model, but not for an architecture where the user can hit different boxes. For sessions to be scaled, there needs to be a way to share session data across boxes.
My first pass at solving this problem was to use Memcached. I set up a Memcached cluster, installed a Memcached session plugin, and changed my php.ini to use it as the session handler. The biggest disadvantage with this approach was gracefully handling failure: Memcached does not persist to disk, so if a server shut down, all of its session data was essentially gone. In the applications I was dealing with, this was not an acceptable solution.
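For reference, that setup amounts to roughly the following, assuming the memcached session handler extension is installed; the hostnames are placeholders, and the same two settings can live in php.ini instead:

```php
<?php
// Hostnames are placeholders; requires the memcached PECL extension.
// These settings must be applied before session_start() is called.
ini_set('session.save_handler', 'memcached');
ini_set('session.save_path', 'mem1.example.com:11211,mem2.example.com:11211');

session_start();
$_SESSION['user_id'] = 42; // now stored in Memcached, shared by every box
```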
My next pass involved Couchbase, a server built on CouchDB whose interface is exactly the same as Memcached’s. It writes session data to disk, so shutdowns were not an issue. The disadvantage with Couchbase was its performance cost: it would quickly gobble up memory and CPU for no apparent reason. This wasn’t acceptable either.
My third pass involved using Memcached and MongoDB. Newer versions of PHP let you define your own session handler: you supply the methods to call when an open, read, write, close, or destroy comes in. I wrote my own class that read from Memcached but, on a cache miss, fell back to MongoDB. This had a very small footprint, but required me to code that session class perfectly, and it was really slow on cache misses.
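Here is a trimmed-down sketch of what such a class can look like, using the SessionHandlerInterface added in PHP 5.4 and the legacy Mongo driver of that era; the hostnames, database and collection names, and the one-hour TTL are all assumptions, and a production version needs real error handling:

```php
<?php
// Hostnames, database/collection names, and the TTL are assumptions.
class CachedSessionHandler implements SessionHandlerInterface
{
    private $memcached;
    private $mongo; // a MongoCollection from the legacy Mongo driver

    public function __construct(Memcached $memcached, MongoCollection $mongo)
    {
        $this->memcached = $memcached;
        $this->mongo     = $mongo;
    }

    public function open($savePath, $sessionName) { return true; }
    public function close()                       { return true; }

    public function read($id)
    {
        $data = $this->memcached->get("sess_$id");
        if ($data !== false) {
            return $data; // cache hit
        }
        // Cache miss: fall back to the persistent copy in MongoDB.
        $doc = $this->mongo->findOne(array('_id' => $id));
        return $doc ? $doc['data'] : '';
    }

    public function write($id, $data)
    {
        $this->memcached->set("sess_$id", $data, 3600);
        $this->mongo->update(
            array('_id' => $id),
            array('_id' => $id, 'data' => $data, 'ts' => time()),
            array('upsert' => true)
        );
        return true;
    }

    public function destroy($id)
    {
        $this->memcached->delete("sess_$id");
        $this->mongo->remove(array('_id' => $id));
        return true;
    }

    public function gc($maxlifetime)
    {
        $this->mongo->remove(array('ts' => array('$lt' => time() - $maxlifetime)));
        return true;
    }
}

$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$mongo      = new MongoClient('mongodb://127.0.0.1:27017');
$collection = $mongo->selectDB('app')->selectCollection('sessions');

session_set_save_handler(new CachedSessionHandler($memcached, $collection), true);
session_start();
```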
My fourth pass was to use Memcached with Redis. Redis allows for persistence to disk, and is much faster than MongoDB. The biggest disadvantage here was using two RAM-based technologies instead of one.
The solution I settled on was a custom Redis session save handler. Redis clustering only supports replication through sharding, and I needed the same set of data to live on each machine, so sharding wasn’t an option. I had to write my custom session save handler to replicate writes to all nodes in the cluster. The class was around 300 lines of code and required a lot of work, but it did the job very well.
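Below is a condensed sketch of that approach (not the actual 300-line class); the node addresses and TTL are placeholders, and it assumes the phpredis extension. Reads hit the node on the local box, while writes are repeated on every node so each box holds the full session data set:

```php
<?php
// Node addresses and the TTL are placeholders; requires phpredis.
class ReplicatedRedisHandler implements SessionHandlerInterface
{
    private $nodes = array(); // one connected Redis instance per box
    private $local;           // the node on this machine, used for reads

    public function __construct(array $addresses, $localAddress)
    {
        foreach ($addresses as $address) {
            list($host, $port) = explode(':', $address);
            $redis = new Redis();
            $redis->connect($host, (int) $port);
            $this->nodes[$address] = $redis;
        }
        $this->local = $this->nodes[$localAddress];
    }

    public function read($id)
    {
        $data = $this->local->get("sess_$id");
        return $data === false ? '' : $data;
    }

    public function write($id, $data)
    {
        // Fan the write out to every node in the cluster.
        foreach ($this->nodes as $redis) {
            $redis->setex("sess_$id", 3600, $data);
        }
        return true;
    }

    public function destroy($id)
    {
        foreach ($this->nodes as $redis) {
            $redis->del("sess_$id");
        }
        return true;
    }

    public function open($savePath, $sessionName) { return true; }
    public function close()                       { return true; }
    public function gc($maxlifetime)              { return true; } // TTLs expire keys
}

$handler = new ReplicatedRedisHandler(
    array('10.0.0.1:6379', '10.0.0.2:6379', '10.0.0.3:6379'),
    '10.0.0.1:6379' // this box's node
);
session_set_save_handler($handler, true);
session_start();
```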
There is a community-maintained Redis session save handler extension that can be used instead. I would have used it if sharding had been an option; it is faster than a custom handler because it is written in C rather than PHP.
Step 5 – Miscellaneous
The devil is in the details. This is the case with many things in life, including scaling. If you’ve followed the guide to this point, you now have a very solid foundation for your PHP application without any code changes. (Yay!) There are just a few more hurdles to clear, but thankfully the hardest part is over.
The LAMP model is nice because there are only three configuration files to maintain. In the architecture I’ve described above, you have to maintain configurations for Nginx, PHP-FPM, PHP, Percona XtraDB Cluster, and the PHP session save handler. Then multiply that by the number of machines in your stack. It’s a lot to maintain, and I wish I had some magic bean I could suggest to solve everything.
I haven’t found a good mechanism for managing all these files outside of building one specifically for it. Updating application code on all the boxes is also a challenge. CI tools like Jenkins and Capistrano can help in this area. I don’t have much experience setting this part up, so best of luck.
Code written in a LAMP model assumes that the application code and file system are always tied together. In a scaled model, this is not the case, which means that caches stored to the file system or uploaded files may not exist on every machine in the cluster. If you have a database record with the path to an uploaded file, it may work on one box but not another. I’m still on the fence about the best way to solve this issue. The best solution I can offer is a daemon that listens for changes in certain directories and synchronizes them with the other servers in the cluster; using lsyncd and csync2 is the recommended approach.
Conclusion
Moving from LAMP to a scaled model is part of the PHP life cycle. An application written well enough gets so popular that hosting it on a single server won’t work. The best time to implement a scaled architecture is before there’s a problem. It will allow you to scale horizontally, and should require minimal human intervention.
The architecture I’ve described in this blog is designed to accomplish all the goals outlined above. Even if the LAMP model seems like it will work forever, implement a few of the changes I’ve described. Run benchmarks; don’t just take my word for it. Be objective, and start thinking every day about how you can make your application better. Always be focused on ways to maximize output while minimizing code changes. When you get into that mindset and start truly caring about your infrastructure, great things will happen.