On Wednesday May 16th, we ran a
webinar to provide an overview of all of the new replication features and
enhancements that are previewed in the MySQL 5.6 Development Release –
including Global Transaction IDs, auto-failover and self-healing,
multi-threaded, crash-safe slaves and more.
Collectively, these new capabilities enable
MySQL users to scale for next generation web and cloud applications.
Attendees posted a number of great
questions to the MySQL developers, providing additional insights
into how these new features are implemented. So I thought it would be useful to
post those questions and answers below, for the benefit of those unable to attend the live webinar
(note: you can listen to the on-demand replay, which is available now).
Before getting to the Q&A, here are a
couple of other resources that may be useful to those wanting to learn more
about replication in MySQL 5.6.
For more detail on any of the features
discussed below, be sure to check out the Developer Zone article: Replication
developments in MySQL 5.6.
So here is the Q&A from the event:
Multi-Threaded Slave
The results from recent benchmarking of the
Multi-Threaded Slave (MTS) enhancement were discussed, prompting the following questions:
Q. From 0 to 10 threads, did you notice any increase in I/O wait on the CPU?
A. In this case, yes. The actual
amount depends on a number of factors: your hardware, the query/transaction
distribution across databases, and the server's general and InnoDB-specific configuration.
Q. Will the multiple threads work on
different transactions in the same database, or does each thread work on a separate database?
A. Each worker thread works on a separate database.
Q. If I set slave-parallel-workers to
less than the number of databases, can I configure which databases use which worker threads?
A. There is no configuration option to
assign a certain worker thread to a database. Configuring slave-parallel-workers
to less than the number of databases is a good setup; a worker can handle multiple databases.
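As a sketch, the worker count is set on the slave, for example in my.cnf (the value below is illustrative; tune it to your database count and workload):

```ini
# my.cnf on the slave -- illustrative value
[mysqld]
# Number of worker threads applying replicated transactions in
# parallel, partitioned by database; 0 disables the multi-threaded slave
slave-parallel-workers=4
```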
Q. If I create 4 threads and I have 100
databases, can I configure which databases use which threads?
A. There won't be a mapping of exactly
25 databases to 1 worker, but the distribution will be very close to that.
Q. Thank you. I ask as we have about 8
databases that have a lot of transactions and about 30 that are used less
frequently. It would be nice to have the ability to create 9 workers: 1 for
each of the heavy databases and 1 for all the others.
A. The current design aims for a relatively
equal distribution of transactions across your databases, but it could be that
9 workers will fit well anyway.
Q. Does the multi-threaded slave still work if
there are foreign keys across databases?
A. MTS preserves slave consistency *within* a database, but not
necessarily *between* databases. With MTS enabled and replication ongoing, updates to one database can be
executed before updates to another, causing them to be temporarily out of sync
with each other.
Q. How is auto-increment managed with the multi-threaded slave?
A. MTS ensures that worker threads do not
execute concurrently on the same table, so auto-increment
is guaranteed to be handled in the same way as on the single-threaded
"standard" slave.
Q. Can you use semi-synchronous replication
on one of the slaves when using MTS?
A. Yes - semi-sync is friendly to MTS. Semi-sync is concerned with delivering events,
while MTS is about executing them in parallel, so they cooperate well.
Row Based Replication
Q. If you only store the PK column in the
Before Image for updates, does this mean you don't care about the slave's data
potentially being out-of-sync? Will we be able to control how much data is
stored in the binary logs?
A. The rule is that we ship a *PK
equivalent* so that the slave is always able to find a row. This means:
1. If the master table has a PK, then we ship the PK only.
2. If the master table does not have a PK, then we ship the entire row.
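On the related question of controlling how much data is stored in the binary logs, MySQL 5.6 also introduces the binlog_row_image system variable; a sketch of its use:

```sql
-- binlog_row_image controls how much of each row is written to the
-- binary log under row-based replication:
--   full    -- log all columns (the default)
--   minimal -- log the PK equivalent plus the changed columns
--   noblob  -- log all columns except unchanged BLOB/TEXT columns
SET GLOBAL binlog_row_image = 'MINIMAL';
```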
Transaction IDs and HA Utilities
Q. Would the failover utility need to sit
on a 3rd party host to allow arbitration?
A. The utility would typically run on a
client, not on the hosts it is monitoring.
Q. Can you explain the upgrade process to move
to MySQL 5.6? I am assuming that the slave(s) are upgraded first and that
replication would be backwards compatible. And after the slave(s) are upgraded,
the master would be next. But then how do you turn on the GTID?
A. Right: the slave(s) are upgraded first.
After upgrading Slaves they would be started with --gtid-mode=on (technically a
couple of other options are needed as well). And then the same process would be
followed for the upgraded Master.
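As a sketch, a 5.6 server enabling GTIDs might carry options along these lines in my.cnf; exact option names have varied across the 5.6 development releases, so treat this as illustrative:

```ini
[mysqld]
gtid-mode=ON
# GTIDs require binary logging, and slaves must log replicated
# transactions so their GTID history is complete
log-bin
log-slave-updates
# Reject statements that cannot be logged safely with GTIDs
# (option name may differ in earlier development milestones)
enforce-gtid-consistency
```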
Q. For failover functionality, if I had a
setup like this: Server1 replicates to Server2, Server2 replicates to Server3,
Server4, and Server5. If Server2 were to fail, can I have it configured so that
Server1 can become the new master for Server3/4/5?
A. Yes - you can either configure failover
to the most recent slave based on GTID, or list specific candidates. What will
happen is that Server1 will temporarily become a slave of Server3, Server4 and Server5 to ensure
it replicates any more recent transactions those slaves may have, and then it
will become the master.
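As an illustration, the mysqlfailover utility from MySQL Utilities can monitor a master and drive this kind of GTID-based failover; the hostnames and credentials below are hypothetical:

```shell
# Monitor server2 and, on failure, automatically promote the best
# slave by GTID position, preferring server1 as a candidate.
mysqlfailover --master=root:pass@server2:3306 \
              --discover-slaves-login=root:pass \
              --candidates=root:pass@server1:3306 \
              --failover-mode=auto
```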
Q. What is the overhead of replication?
A. It is minimal - we plan to publish
benchmarks over the summer to better characterize any overhead.
Q. Are you exposing the checksum to the
plugin API so we can add our own checksum types?
A. We have prepared some of the interfaces - for example,
checksum methods are identified by a specific code byte (values 1-127). But
it's not really a plugin at this point.
Q. Do checksums verify both replicated data
and the data on slave?
A. Yes - checksums are implemented across
the entire path, so you can check for issues in replication itself, or in the
hardware or network.
Q. I think it is better to turn on
checksums at the relay log to avoid overhead on the master, but if we do that
and a checksum fails (i.e. does not match the master's data), then what happens -
will the slave throw an error?
A. I agree, it's better to relax the
master, which verifies the checksum only optionally when the dump thread reads
from the binlog, prior to the replication event being sent out to the slave. The slave mandatorily checks checksummed
events when they arrive across the network, and optionally when they are read
from the relay log. In either case, an error is thrown.
Q. Are checksums optional? In some cases, such as with huge data volumes, we don't care about them.
A. Yes, checksums are optional.
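As a sketch, the relevant 5.6 options can be combined along the lines discussed above - checksums generated on the master, verification relaxed there and enforced on the slave (the values shown are illustrative):

```ini
[mysqld]
# Write a CRC32 checksum with every binlog event (NONE disables)
binlog-checksum=CRC32
# Optionally have the master re-verify events when the dump
# thread reads them from the binlog (off here to relax the master)
master-verify-checksum=0
# Have the slave SQL thread verify checksums read from the relay log
slave-sql-verify-checksum=1
```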
Q. Is time-delayed replication applied at the
database level only, or for the entire slave?
A. It is applied for the entire slave, as execution of all events is delayed.
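For illustration, the delay is configured per slave with CHANGE MASTER TO in 5.6; the one-hour value below is arbitrary:

```sql
-- Delay applying the master's events by one hour on this slave
STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY = 3600;
START SLAVE;
```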
Q. When we configure informational log events,
do they show a meaningful query log for any binlog format (row-based especially)?
A. When using row-based replication, you get
the original query in human-readable format - obviously you don't want to have to read
through all the row modifications for a table, which can be huge.
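This behavior is controlled by a 5.6 system variable that records the original statement alongside the row events; a sketch:

```sql
-- Write the original statement into the binary log as an
-- informational Rows_query log event next to the row events
SET GLOBAL binlog_rows_query_log_events = ON;
```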
Q. Will the binlog include the user id?
A. The user id is replicated in some cases for the
Query log event - namely, the user id of the invoker when a stored routine is executed.
Q. What level of access does the remote
binlog backup need?
A. The REPLICATION SLAVE privilege.
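As an illustration, a remote binlog backup with the 5.6 mysqlbinlog client might look like the following, using a hypothetical host and a user that has been granted REPLICATION SLAVE:

```shell
# Continuously stream the master's binary logs to the local
# directory as raw binlog files, starting from mysql-bin.000001
mysqlbinlog --read-from-remote-server --host=master1 \
            --user=backup --password \
            --raw --stop-never mysql-bin.000001
```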
As you can see, our Engineering team was
kept busy with questions over the course of the webinar. Be sure to check out the MySQL 5.6
Replication webinar replay and if
you have further questions, please don’t hesitate to use the comments below!