MySQL Performance: 5.0 vs 4.1

  1. If your applications use features which were well optimized in 4.1 you might see performance degradation (wtf?).
  2. As of now 4.1 will only get critical bug fixes.
  3. Amazingly, it appears that 5.0 is slightly slower than 4.1. I don’t know why that is.

I’m leaving this one to go to the dispersed storage engine tutorial as I’m bored to death with this. Nothing interesting.

MySQL Partitioning

MySQL’s new partitioning feature is a way to split tables up into chunks based on various criteria. For instance, you could break up a table based on a timestamp so that rows older than a specific date are stored in MySQL’s archive storage engine.
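A rough sketch of what the 5.1 syntax is supposed to look like, assuming a hypothetical table and the year-based /var/data layout from the notes below:

    CREATE TABLE measurements (
        id INT NOT NULL,
        taken DATE NOT NULL
    )
    PARTITION BY RANGE (YEAR(taken)) (
        -- everything before 1999 lives in /var/data/part1
        PARTITION p1 VALUES LESS THAN (1999) DATA DIRECTORY = '/var/data/part1',
        -- everything from 1999 through 2000 lives in /var/data/part2
        PARTITION p2 VALUES LESS THAN (2001) DATA DIRECTORY = '/var/data/part2',
        -- a catch-all partition for everything newer
        PARTITION p3 VALUES LESS THAN MAXVALUE
    );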

  1. Partitioning will first appear in version 5.1.
  2. Partitioning is a “Divide and Conquer” solution for large tables.
  3. You can partition data based on year. For instance you can put data from before 1999 into /var/data/part1, data from before 2001 into /var/data/part2, etc.
  4. You can move data from partition to partition as data becomes stale as well.
  5. Partitioning sits in its own layer above the storage engines and works with all table types.
  6. You can remove partitioning with ALTER TABLE t1 REMOVE PARTITIONING.
  7. Limits on partitioning include that the partition function must return an integer result, all partitions must use the same storage engine, and response time increases as the number of partitions grows.
  8. You can partition by odd and even values. For instance, you can put even values into partitions 2, 4, 6 and 8 and odd values into partitions 1, 3, 5 and 7.
  9. You can partition by hashing, which lets MySQL decide where to put the data. There isn’t a logical distribution between partitions, so it makes pruning more difficult. For instance, you never really know which partition the data is in (unlike the above example).
  10. You can create composite partitions as well, which are essentially sub-partitions.
  11. If a primary key is defined, no fields outside of the primary key are allowed in the partition function (wtf?).
  12. You can reorganize partitions whenever you want. For instance you can roll partition 2 into partition 3 and move their locations (see the sketch after this list). This, however, locks the table until the partition operations have completed.
  13. If you have both a PRIMARY KEY and a UNIQUE index in a single table you cannot do partitioning on that table.
  14. If a storage engine has a table size limit of 64GB (e.g. InnoDB) you can have 64GB in each partition. This essentially allows you to scale beyond your storage engine’s table size limit.
  15. Did I just hear something about foreign key constraints being put into the meta storage engine in the near future? This would allow foreign key constraints for all table types I believe.
  16. Creating, updating, altering, etc. of partitions is done atomically. Either the change to the partition happens or it’s rolled back and an error message is returned.
  17. If one of your partitions goes away all of your data goes away. Yikes!
  18. It doesn’t sound like you can partition federated tables.
  19. Partitioning does work with replication.
  20. You can use it with auto-increment primary keys.
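To make items 9 and 12 concrete, here’s a minimal sketch (table and partition names are hypothetical, and the REORGANIZE example assumes t1 is range-partitioned):

    -- Hash partitioning: MySQL picks the partition for each row,
    -- so you never really know which partition a row lands in.
    CREATE TABLE sessions (
        id INT NOT NULL
    )
    PARTITION BY HASH (id) PARTITIONS 8;

    -- Rolling partition p2 into p3 (locks the table while it runs):
    ALTER TABLE t1 REORGANIZE PARTITION p2, p3 INTO (
        PARTITION p3 VALUES LESS THAN (2003)
    );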

This is a very cool feature for people who have very large data sets and are looking to break those data sets up into smaller ones. For instance, if you have a table with 1m records in it you could split it up into two data sets of 500k records each, which would reduce the number of rows MySQL needs to scan for queries that prune down to one partition.

MySQL Cluster: New Features and Enhancements

This is specifically about new features in the 5.1 release. As much as I like Australia I think it’s ridiculous to be talking about that at a MySQL conference, though it is funny I suppose.

  1. They now support variable sized rows, which reduces memory usage. This equates to more rows per gigabyte.
  2. Add/Update Index has been optimized over 5.0. Before, it copied the entire table, added the new index and then moved the copy back over the old table. This all happened over the wire, so you can imagine how long it took. This optimization speeds things up by about a factor of four.
  3. 5.1 now allows you to replicate across clusters. This can be used for geographical redundancy, splitting the processing load (why not add more nodes to the cluster?), etc. You have to use row-based replication to enable this feature.
  4. Failover of replication channels is manual.
  5. They’re adding support for data on disk in 5.1 (a rough sketch of the proposed syntax follows this list) and indexes on disk in the future.
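For what it’s worth, the proposed disk-data syntax looks roughly like this (the file names and sizes are placeholders of mine):

    -- An undo log group and a tablespace backed by files on disk:
    CREATE LOGFILE GROUP lg1
        ADD UNDOFILE 'undo1.dat'
        INITIAL_SIZE = 64M
        ENGINE = NDB;

    CREATE TABLESPACE ts1
        ADD DATAFILE 'data1.dat'
        USE LOGFILE GROUP lg1
        INITIAL_SIZE = 256M
        ENGINE = NDB;

    -- Non-indexed columns of this table are stored on disk:
    CREATE TABLE events (
        id INT NOT NULL PRIMARY KEY,
        payload VARCHAR(255)
    )
    TABLESPACE ts1 STORAGE DISK
    ENGINE = NDB;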

This absolutely amazes me. The moral of the story at MySQL UC has been that clustering is a nightmare to set up, maintain and use. However, storing data on disk is a step in the right direction. The solution from Continuent seems to be infinitely more elegant than both clustering and replication.

MySQL Replication: New Features and Enhancements

  1. MySQL 5.0 supports auto-increment variables for bi-directional replication to avoid auto-increment collisions.
  2. MySQL 5.0 replicates character sets and time zones.
  3. 5.0 now replicates stored procedures, triggers and views.
  4. MySQL 5.1 introduces row-based logging and replication (RBR).
  5. Dynamic switching of the binary log format between ROW, STATEMENT and MIXED.
  6. 5.1 allows for cluster replication.
  7. Replication method cannot be configured per table, but since it’s dynamic you can change it from the client before a transaction, etc.
  8. With auto_increment_offset and auto_increment_increment you can change the starting point and how much you step between values. Works with most table types, including InnoDB and MyISAM. This is specifically for multi-master setups (see the sketch after this list).
  9. RBR allows clusters to replicate and also allows the server to replicate non-deterministic statements such as LOAD_FILE(). That being said you can’t have different table definitions on the slave as you can with SBR.
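To make items 5 and 8 concrete, a minimal sketch for two hypothetical masters:

    -- On master A: hand out ids 1, 3, 5, ...
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset = 1;

    -- On master B: hand out ids 2, 4, 6, ...
    SET GLOBAL auto_increment_increment = 2;
    SET GLOBAL auto_increment_offset = 2;

    -- 5.1's dynamic binary log format, switchable per session:
    SET SESSION binlog_format = 'ROW';  -- or 'STATEMENT' or 'MIXED'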

Wow, nothing super impressive here that makes me excited about new features in MySQL’s 5.0/5.1 replication. Their answer to the possible auto_increment collisions seems a bit simplistic and short-sighted, but then again I’m not a maintainer for MySQL’s replication code.

What annoys me greatly is MySQL’s refusal to add features to existing storage engines in favor of simply adding new storage engines. MyISAM doesn’t support transactions or foreign key constraints? Use InnoDB. InnoDB doesn’t support FULLTEXT? Use MyISAM. You need synchronous replication of data? Use NDB, but you’ll need to denormalize and reduce your use of JOINs.

It’s enough to make me switch to PostgreSQL.

Advanced MySQL Replication with Load Balancing

Today I thought I’d check out a session by the CTO of Continuent about their clustering solution. Before heading into the session we checked out Continuent’s booth in the exhibit hall. It certainly sounds like a great product; however, the $5,000 per CPU licensing seems a bit Oracle-ish to me.

  1. A shared-nothing architecture, split up into two layers: the controllers and the actual database nodes.
  2. Low latency.
  3. Single-copy equivalence for reads.
  4. Supports load balancing and heterogeneous databases (e.g. MySQL and SQL Server sharing the same cluster).
  5. 100% Java and based on Sequoia.
  6. They might have an open source version. Will need to check this out.
  7. Fully transparent (including failover) to the application.
  8. The controllers act as a proxy to the database. To the applications they appear as the actual database. Below these controllers is where the databases actually sit.
  9. You need to compile/load their specific driver, which I think would require a new PEAR DB driver.
  10. When a request comes in, the controller determines what type of request it is (read vs. write). Writes are then broadcast to all controllers, and all requests are executed in identical order. From there the request is sent to the scheduler, which makes sure the underlying databases remain in identical states. These requests can be sent in parallel to each database server.
  11. The request controller then aggregates the responses from the databases. If there was an error across all of the systems then it’s a bad query of some sort; if only a single node responds with an error then that system is dropped. This can be configured to return a success message once one of the underlying nodes responds with a success.
  12. Reads are simply load balanced across the underlying nodes to the node with the least number of requests (sounds like LVS’s weighted least connection algorithm).
  13. Works with MyISAM, InnoDB and heap table types.
  14. The commits are synchronous.
  15. Once you have a dump with a starting date you can apply the dump to the new node and then the cluster controllers apply the logs until it’s up to date.
  16. The open source solution only supports ANSI SQL, doesn’t come with their own group talk protocol (so the clusters can communicate with each other) and doesn’t come with the database-specific dump and load.
  17. The cluster controllers keep track of the position in the overall sequence of each underlying node and sends reads only to those nodes that are up-to-date with the current position.
  18. The largest cluster they have is four nodes. It has been tested with as many as 64 nodes and supports tiering.
  19. Failover between cluster controllers happens at the driver level.
  20. Stored procedures were a “challenge”. Their approach allows you to tell their controllers what is inside of the stored procedure (what tables it changes, etc.) so the controller knows how to handle each procedure.

After sitting in on the MySQL Clustering tutorial I can honestly say this approach is infinitely better. It supports writing to disk, InnoDB and MyISAM and, on top of that, allows you to tier the clusters. The only downside I see is that it’s written in Java, but these days I’m not sure that’s really a downside.

Replication for Scaling and High Availability

The second session today is replication for scaling and high availability, which I’m extremely interested in hearing about as we’ve been having problems with race conditions on our current replication setup.

  1. Replication works with all table types.
  2. Any “critical” reads must be done on the master as replication is asynchronous.
  3. The master will rotate binary logs automatically for every 1GB of log records.
  4. You must purge any old, unused logs yourself. See the PURGE MASTER LOGS command. Using file system commands is not recommended as there are index files that need to be updated.
  5. Master and slave must be on similar hardware as slaves generally do as much work as, and sometimes more work than, the master. In other words, don’t skimp on the slave.
  6. Keep a “spare” slave that you can take down and clone to bring up new slaves.
  7. Use a load balancer for managing access to slaves.
  8. There is no hard limit on the number of slaves that a master can host, but each slave takes up one connection, so you’re limited by max connections on the master.
  9. You can create “relay” slaves that relay data from the master to slaves sitting below them. Doing this increases replication delays though.
  10. If a slave will also be a master (see above) you need to enable log-slave-updates.
  11. Relying on the master-* lines in my.cnf can be problematic compared to using the CHANGE MASTER TO SQL statement, since the slave remembers master information on its own.
  12. Datacenter failover involves the www connecting through a proxy, which then decides who the current writeable master is.
  13. Should avoid DNS when doing inter-datacenter failover. DNS caching, propagation, etc.
  14. Using proxy allows instantaneous and complete switching of traffic between masters. Another option is stunnel.
  15. A connection that comes in through A goes directly to that DB server. One that comes in through B is forwarded to A instead of going to B. No connections connect directly to either A or B.
  16. Writes go to only one master at a time. The firewall rules ensure that.
  17. Both masters should use skip-slave-start and read-only. Entire setup should come up read-only at boot. Essentially, make sure they come up in the state they failed in.
  18. Need to handle “Server is read only” and “Connection refused” in your application when failover is occurring.
  19. Master<->Master replication is only a problem if you’re writing to both masters at the same time.
  20. They recommend switching traffic between datacenters manually as it’s difficult to determine when to actually failover and not actually automatically failing over.
  21. Steps for failing over (sketched in SQL after this list):
    1. Set current master to read-only
    2. Wait for writes to flush through, checking how far behind the other master is. One way to do this is to create a table on the master every second and see how long it takes to show up on the slave.
    3. Remove forward rules with old settings
    4. Add forwarding rules with new settings
    5. Remove read-only from the new master using SET GLOBAL read_only
  22. Make sure to STOP SLAVE on the new master when the master fails. When you bring the failed master back up, check the data first. If all is well then let replication on the failed master catch up from the new master. If not, restore from the new master.
  23. Reverse the above steps to bring the master back up (usually done during a planned failover later). You’ll remain live against the new master until then.
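Sketched in SQL, and assuming the forwarding rules live outside the database, the failover looks something like this (the heartbeat table name and binary log file name are made up):

    -- 1. On the old master: stop taking writes.
    SET GLOBAL read_only = 1;

    -- 2. Heartbeat trick: create a table on the master once a second and
    --    watch for it to appear on the other master.
    CREATE TABLE heartbeat_1145 (id INT);

    -- 3/4. Swap the forwarding rules (outside of MySQL).

    -- 5. On the new master: start taking writes.
    SET GLOBAL read_only = 0;

    -- Housekeeping from item 4: drop binary logs the slaves no longer need.
    PURGE MASTER LOGS TO 'mysql-bin.000042';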

MySQL Cluster Configuration, Tuning, and Maintenance

Today is the 1st day of MySQL UC and I’m sitting in on the clustering tutorial, which promises to be three hours of information on MySQL’s clustering feature. I’m sure you’re thrilled that I’m, ahem, live blogging this. In reality, this is just so I have a central place for my notes.

  1. Broken up into three parts. The MySQL servers sit separate from the NDB Storage Engine nodes, which are the storage nodes (NDB nodes). The third part is called the management server. The management server, oddly enough, isn’t required once the cluster is up and running unless you want to add another storage node (a minimal config sketch follows this list).
  2. Memory based storage engine. If you’re not using 5.1+ then you must have enough RAM in each storage node to store the data set. This means that if you have four machines with 4GB of RAM each you can store 8GB of data (16GB of total storage divided by two for two copies of the data set).
  3. Storage nodes are static and pre-allocate resources on startup.
  4. Supports transactions and row level locking.
  5. Should be noted this is a storage engine so you can’t create MyISAM or InnoDB tables inside of a cluster.
  6. Uses fixed-size records. This means if you have a varchar(255) and put a single byte into it, that field still uses 255 bytes.
  7. No foreign key constraint support.
  8. Replication across nodes is synchronous. I assume this means an INSERT completes once all of the nodes have completed the INSERT. This is different from regular replication, which is asynchronous and introduces race conditions.
  9. Tables are divided into fragments (one fragment for each storage node). Each storage node is responsible for its own fragment. Each fragment also has a secondary fragment, which is a copy of another node’s primary fragment. This data distribution happens automatically.
  10. NDB takes your primary key, creates a hash and then converts that to a fragment. So you’ll have various rows on each different storage node.
  11. A node group is a set of nodes that share the same fragment information. If you lose an entire node group you’ve lost half of the table and the cluster will not continue to operate. However, if one node in a node group fails there will still be enough data to keep the cluster up and running.
  12. If a node fails and its secondary counterpart takes over, the counterpart will essentially have to perform the job of two nodes. Until the failed node has fully recovered it will not rejoin the cluster.
  13. Backups are hot and non-locking. Each node writes its own set of backup files. No support for incremental backups.
  14. Because it’s memory based you could lose data on a system crash (you might have transactions sitting in RAM when a crash occurs). The COMMIT command does not write changes to disk, meaning you could have data sitting in memory that’s not on disk when a node crashes. The odd truth is that MySQL Cluster supports synchronous replication, but commits are not durable.
  15. NDB nodes will checkpoint data to disk (data + logs), which are used for system recovery. They write two logs, the UNDO and REDO logs.
  16. They recommend using TRUNCATE to delete all rows from a table.
  17. Modification operations are distributed to both the primary and secondary fragments (obviously).
  18. NDB will run on 64-bit machines. They recommend dual-CPU 64-bit machines as NDB is threaded. Application nodes (MySQL servers) can be whatever.
  19. SCI offers 30-100% better performance than gigabit Ethernet.
  20. They actually recommend avoiding joins and to denormalize your schemas. Are you kidding me? He actually said “Performance for joins sucks.”
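For reference, the three parts from item 1 map onto a config.ini on the management server, roughly like this (host names and memory sizes are placeholders of mine):

    # Two storage nodes, two copies of the data, two MySQL servers.
    [ndbd default]
    NoOfReplicas=2
    DataMemory=2G
    IndexMemory=256M

    [ndb_mgmd]
    HostName=mgm.example.com

    [ndbd]
    HostName=ndb1.example.com

    [ndbd]
    HostName=ndb2.example.com

    [mysqld]
    HostName=sql1.example.com

    [mysqld]
    HostName=sql2.example.com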

Overall, I’m underwhelmed by MySQL Clustering. You’re limited in storage with the RAM and you can’t optimize your schemas due to fixed field sizes. And any RDBMS “solution” that recommends you denormalize puts me off.

That being said the actual technology is pretty interesting and I suspect that in a few years we’ll see the clustering features in MySQL come into their own. As of now I suspect few people would be able to justify the sacrifices for the gains clustering allows.

Mt. Baker

Over a few pints at a pub in Ballard on Friday night with Garren, Julia and Sanna, Garren and I had the bright idea to head up to Bellingham on Saturday night and then go skiing/snowboarding on Mt. Baker all day Sunday for one last day on the slopes, and possibly do some fishing, since we had an ice fishing guide.

As it turns out the stars were perfectly aligned to not only make this trip happen, but to make it one of the best trips in recent memory. One of those trips where everything falls into place perfectly.

Garren and I headed out of Seattle around 7PM. About 15 miles outside of Bellingham is the Schmitts Casino which, after some discussion and a quick turnaround, we went to in search of the tri-state famous Schmitts Casino live blackjack. We found $3 tables and 10x odds, something you won’t find in even the cheapest Vegas casinos. It was here that Garren quite possibly had one of his luckiest runs at the table, and the luck appeared to spread as multiple players who came to the table won over the next four hours. It’s always nice to walk out of a casino with more money than you walked in with.

The next day the stars again aligned in our favor as we wound up with about two feet of fresh snow and very little competition for fresh tracks. I readily admit I was doing things I had absolutely no business doing on the mountain, finding myself in the trees, up to my waist in powder, on multiple occasions, giggling like a school girl.

My first WordPress plugin

So I finally broke down today and started working on my first WordPress plugin. I liked how recent links worked, but I wanted an aggregated list of both links I found interesting and my Flickr photos. I was going to hack the recent links plugin that I had used, but decided against it. In the end I decided it made sense to store my interesting links on del.icio.us and my photos on Flickr and then aggregate them into a single list, which is what you see now below this post.

This way my photos stay on Flickr and my links stay on del.icio.us, but they show up inline on my blog’s frontpage. I’ve got a few bugs and kinks to work out still, but it’s definitely ready for beta testing. If you are interested in playing with it then give me a shout and I’ll package it up for you to test out.

  • Integrates with Flickr and stores photo and tag information into a MySQL table.
  • Integrates with del.icio.us and stores link and tag information into a MySQL table.
  • Uses PEAR’s DB and HTTP_Request packages and PHP5’s SimpleXML extension to seamlessly fetch and cache new content every hour or every time an admin visits the website.