I’ve been increasingly questioning the current model of university education in the US: not only the value for the money, but the entire notion that it’s a good way to learn. I got my Bachelor’s in Computer Science from UVA, which has been going through utter facepalm-worthy madness recently, so that may be biasing my point of view.
MySQL 5.6 has a new option for innodb_flush_method. When O_DIRECT_NO_FSYNC is used, fsync is not called after writes. This was a response to feature request 45892. Unfortunately, the option as implemented and documented is not safe to use with XFS. That is too bad, because it can make performance much better for workloads that stall on fil_flush.
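For reference, the option is set in my.cnf like any other flush-method value; a minimal fragment (comments are my summary of the caveat above, not the manual's wording):

```ini
[mysqld]
# O_DIRECT_NO_FSYNC: use O_DIRECT for data files and skip the fsync
# after each write. Per the discussion above, avoid this on XFS,
# where skipping fsync is not safe as implemented in 5.6.
innodb_flush_method = O_DIRECT_NO_FSYNC
```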
I was discussing how to avoid surprising users and someone pointed out that what seems intuitive and rational to one person is often complete insanity for others. The mental gap between a developer and a user can often be a chasm far too wide to cross. Of all the bug reports I’ve filed against MySQL, here is my all-time favorite:
select * from t where a >= 1.0order by a;
Does not cause an error. I believe it should, because there should be a whitespace before ORDER BY.
This is part 2 in a 3 part series. In part 1, we took a quick look at some initial configuration of InnoDB full-text search and discovered a little bit of quirky behavior; here, we are going to run some queries and compare the result sets.
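For context, InnoDB full-text queries use the MATCH ... AGAINST syntax; a sketch of the kind of queries whose result sets get compared (table and column names here are hypothetical, not from the series):

```sql
-- Natural-language mode (the default)
SELECT id, title
FROM articles
WHERE MATCH(title, body) AGAINST('database' IN NATURAL LANGUAGE MODE);

-- Boolean mode, where operators such as + and - change the result set
SELECT id, title
FROM articles
WHERE MATCH(title, body) AGAINST('+database -myisam' IN BOOLEAN MODE);
```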
I was curious to see how Percona XtraDB Cluster behaves when it comes to MySQL replication latency, or better, data propagation latency. The interesting question was whether I could get stale reads from other cluster nodes after a write performed on one specific node. To test this I wrote a fairly simple script (you can find it at the end of the post) that connects to one node in the cluster, performs an update, and then immediately reads from a second node.
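The measurement logic can be sketched without a live cluster. This is a self-contained toy, not the author's script: the `Node` class and the fixed 50 ms replication delay stand in for two real MySQL connections, but the write-then-poll loop is the same shape as the test described above.

```python
import threading
import time

class Node:
    """Toy stand-in for one cluster node (a real test would hold a MySQL connection)."""
    def __init__(self):
        self.value = 0

def replicate(src, dst, delay):
    # Simulate asynchronous propagation: the destination sees the
    # source's value only after `delay` seconds.
    v = src.value
    timer = threading.Timer(delay, lambda: setattr(dst, "value", v))
    timer.start()
    return timer

node1, node2 = Node(), Node()

# Write to node1, then immediately poll node2 until the update is
# visible, counting stale reads and measuring propagation latency.
node1.value = 42
timer = replicate(node1, node2, delay=0.05)

start = time.monotonic()
stale_reads = 0
while node2.value != node1.value:
    stale_reads += 1
    time.sleep(0.001)
latency = time.monotonic() - start
timer.join()
print(f"stale reads: {stale_reads}, latency ~{latency * 1000:.0f} ms")
```

Because the read starts before the simulated replication completes, the loop observes at least one stale read, which is exactly the effect the cluster test is probing for.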
For this post on MySQL 5.6 performance I used sysbench with a cached, update-only workload. The previous post, on an update-only but uncached workload, is here. In this workload each query updates one row by primary key. The database size is ~32GB and the data was cached in a 64GB InnoDB buffer pool; the table was read into the buffer pool before the tests were run.
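For readers unfamiliar with sysbench's update-only workload, the statements it issues are single-row primary-key updates of roughly this shape (table name per sysbench convention; the exact SQL may differ by sysbench version):

```sql
UPDATE sbtest SET k = k + 1 WHERE id = ?;
```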
I recently had the chance to talk to the San Francisco MySQL Users Group, and any group that gets 80-plus regulars to attend meetings is impressive. That night they had 280 people RSVP, and a better-than-average percentage actually showed up. Thanks to the organizers, Theo, Mike, and Erin, for the invite. I also had the chance to ask Erin O’Neill the secrets of how they run their meetings.