Thursday 22 December 2011

Eventual Consistency in MySQL Cluster - implementation part 3




As promised, this is the final post in a series looking at eventual consistency with MySQL Cluster asynchronous replication. This time I'll describe the transaction dependency tracking used with NDB$EPOCH_TRANS and review some of the implementation properties.

Transaction based conflict handling with NDB$EPOCH_TRANS

NDB$EPOCH_TRANS is almost exactly the same as NDB$EPOCH, except that when a conflict is detected on a row, the whole user transaction which made the conflicting row change is marked as conflicting, along with any dependent transactions. All of these rejected row operations are then handled using inserts to an exceptions table and realignment operations. This helps avoid the row-shear problems described here.

Including user transaction ids in the Binlog

Ndb Binlog epoch transactions contain row events from all the user transactions which committed in an epoch. However, there is no information in the Binlog indicating which user transaction caused each row event. To allow detected conflicts to 'rollback' the other rows modified in the same user transaction, the Slave applying an epoch transaction needs to know which user transaction was responsible for each of the row events in the epoch transaction. This information can now be recorded in the Binlog by using the --ndb-log-transaction-id MySQLD option. Logging Ndb user transaction ids against rows in turn requires a v2 format RBR Binlog, enabled with the --log-bin-use-v1-row-events=0 option. The mysqlbinlog --verbose tool can be used to see per-row transaction information in the Binlog.

User transaction ids in the Binlog are useful for NDB$EPOCH_TRANS and more. One interesting possibility is to use the user transaction ids and same-row operation dependencies to sort the row events inside an epoch into a partial order. This could enable recovery to a consistent point other than an epoch boundary. A project for a rainy day perhaps?

NDB$EPOCH_TRANS multiple slave passes

Initially, NDB$EPOCH_TRANS proceeds in the same way as NDB$EPOCH, attempting to apply replicated row changes, with interpreted code attached to detect conflicts. If no row conflicts are detected, the epoch transaction is committed as normal with the same minimal overhead as NDB$EPOCH. However if a row conflict is detected, the epoch transaction is rolled back, and reapplied. This is where NDB$EPOCH_TRANS starts to diverge from NDB$EPOCH.

In this second pass, the user transaction ids of rows with detected conflicts are tracked, along with any inter-transaction dependencies detectable from the Binlog. At the end of the second pass, prior to commit, the set of conflicting user transactions is combined with the user transaction dependency data to get a complete set of conflicting user transactions. The epoch transaction initiated in the second pass is then rolled-back and a third pass begins.

In the third pass, only row events for non-conflicting transactions are applied, though these are still applied with conflict detecting interpreted programs attached in case a further conflict has arisen since the second pass. Conflict handling for row events belonging to conflicting transactions is performed in the same way as NDB$EPOCH. Prior to commit, the applied row events are checked for further conflicts. If further conflicts have occurred then the epoch transaction is rolled back again and we return to the second pass. If no further conflicts have occurred then the epoch transaction is committed.

These three passes, and associated rollbacks are only externally visible via new counters added to the MySQLD server. From an external observer's point of view, only non-conflicting transactions are committed, and all row events associated with conflicting transactions are handled as conflicts. As an optimisation, when transactional conflicts have been detected, further epochs are handled with just two passes (second and third) to improve efficiency. Once an epoch transaction with no conflicts has been applied, further epochs are initially handled with the more optimistic and efficient first pass.
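To make the pass structure concrete, here is a rough Python sketch of the control flow described above. It is illustrative only - the real logic lives inside the Slave - and the event representation and callbacks (detect_conflict, expand_dependencies, handle_conflicts, commit, rollback) are invented stand-ins for the actual mechanisms.

  def apply_epoch_transaction(events, detect_conflict, expand_dependencies,
                              handle_conflicts, commit, rollback):
      # Pass 1 : optimistic apply, every event carries a conflict check
      conflicts = [e for e in events if detect_conflict(e)]
      if not conflicts:
          commit(events)             # common case : same cost as NDB$EPOCH
          return

      while True:
          rollback()
          # Pass 2 : re-apply, recording conflicting user transaction ids and
          # the inter-transaction dependencies visible in the Binlog
          in_conflict = {e["trans_id"] for e in events if detect_conflict(e)}
          in_conflict = expand_dependencies(events, in_conflict)

          rollback()
          # Pass 3 : apply only events of non-conflicting transactions, still
          # with conflict checks attached, and handle the rejected events
          ok_events = [e for e in events if e["trans_id"] not in in_conflict]
          handle_conflicts([e for e in events if e["trans_id"] in in_conflict])
          if not any(detect_conflict(e) for e in ok_events):
              commit(ok_events)
              return
          # further conflicts arose since the last pass : back to pass 2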

Dependency tracking implementation

To build the set of inter-transaction dependencies and conflicts, two hash tables are used. The first is a unique hashmap mapping row event tables and primary keys to transaction ids. If two events for the same table and primary key are found in a single epoch transaction then there is a dependency between those events, specifically the second event depends on the first. If the events belong to different user transactions then there is a dependency between the transactions.

Transaction dependency detection hash :
{Table, Primary keys} -> {Transaction id}

The second hash table is a hashmap of transaction id to an in-conflict marker and a list of dependent user transactions. When transaction dependencies are discovered using the first dependency detection hash, the second hash is modified to reflect the dependency. By the end of processing the epoch transaction, all dependencies detectable from the Binlog are described.

Transaction dependency tracking and conflict marking hash :
{Transaction id} -> {in_conflict, List}

As epoch operations are applied and row conflicts are detected, the operation's user transaction id is marked in the dependency hash as in-conflict. When marking a transaction as in-conflict, all of its dependent transactions must also be transitively marked as in-conflict. This is done by a traverse through the dependency tree of the in-conflict transaction. Due to slave batching, the addition of new dependencies and the marking of conflicting transactions is interleaved, so adding a dependency can result in a sub-tree being marked as in-conflict.

After the second pass is complete, the transaction dependency hash is used as a simple hash for looking up whether a particular transaction id is in conflict or not :

Transaction in-conflict lookup hash :
{Transaction id} -> {in_conflict}

This is used in the third pass to determine whether to apply each row event, or to proceed straight to conflict handling.
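Here is a small self-contained Python model of these hashes and the transitive conflict marking. The class and method names are invented for the example (the real structures live in the Slave's C++ code), but the behaviour follows the description above: add_row_event is called per row event in the second pass, mark_in_conflict is called when a row conflict is detected, and in_conflict is the third pass lookup.

  class TransactionDependencies:
      def __init__(self):
          # {(table, primary key)} -> transaction id of the last event on that row
          self.row_to_trans = {}
          # {transaction id} -> {'in_conflict' : bool, 'dependents' : set of ids}
          self.trans_info = {}

      def _entry(self, trans_id):
          return self.trans_info.setdefault(
              trans_id, {'in_conflict': False, 'dependents': set()})

      def add_row_event(self, table, pk, trans_id):
          # A later event on the same row depends on the earlier one; if the
          # transactions differ, record an inter-transaction dependency
          self._entry(trans_id)
          prev = self.row_to_trans.get((table, pk))
          if prev is not None and prev != trans_id:
              self._entry(prev)['dependents'].add(trans_id)
              if self.trans_info[prev]['in_conflict']:
                  self.mark_in_conflict(trans_id)
          self.row_to_trans[(table, pk)] = trans_id

      def mark_in_conflict(self, trans_id):
          # Mark a transaction and, transitively, all dependent transactions
          stack = [trans_id]
          while stack:
              info = self._entry(stack.pop())
              if not info['in_conflict']:
                  info['in_conflict'] = True
                  stack.extend(info['dependents'])

      def in_conflict(self, trans_id):
          # Third pass lookup : apply the event, or go straight to handling?
          return self._entry(trans_id)['in_conflict']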

The size of these hashes, and the complexity of the dependency graph is bounded by the size of the epoch transaction. There is no need to track dependencies across the boundary of two epoch transactions, as any dependencies will be discovered via conflicts on the data committed by the first epoch transaction when attempting to apply the second epoch transaction.

Event counters

Like the existing conflict detection functions, NDB$EPOCH_TRANS has a row-conflict detection counter called ndb_conflict_epoch_trans.

Additional counters have been added which specifically track the different events associated with transactional conflict detection. These can be seen with the usual SHOW GLOBAL STATUS LIKE syntax, or via the INFORMATION_SCHEMA tables.

  • ndb_conflict_trans_row_conflict_count
    This is essentially the same as ndb_conflict_epoch_trans - the number of row events with conflict detected.
  • ndb_conflict_trans_row_reject_count
    The number of row events which were handled as in-conflict. It will be at least as large as ndb_conflict_trans_row_conflict_count, and will be higher if other rows are implicated by being in a conflicting transaction, or by being dependent on a row in a conflicting transaction.
    A separate ndb_conflict_trans_row_implicated_count could be constructed as ndb_conflict_trans_row_reject_count - ndb_conflict_trans_row_conflict_count
  • ndb_conflict_trans_reject_count
    The number of discrete user transactions detected as in-conflict.
  • ndb_conflict_trans_conflict_commit_count
    The number of epoch transactions which had transactional conflicts detected during application.
  • ndb_conflict_trans_detect_iter_count
    The number of iterations of the three-pass algorithm that have occurred. Each set of passes counts as one. Normally this would be the same as ndb_conflict_trans_conflict_commit_count. Where further conflicts are found on the third pass, another iteration may be required, which would increase this count. So if this count is larger than ndb_conflict_trans_conflict_commit_count then there have been some conflicts generated concurrently with conflict detection, perhaps suggesting a high conflict rate.


Performance properties of NDB$EPOCH and NDB$EPOCH_TRANS

I have tried to avoid getting involved in an explanation of Ndb replication in general, which would probably fill a terabyte of posts. Comparing replication using NDB$EPOCH and NDB$EPOCH_TRANS relative to Ndb replication with no conflict detection, what can we say?

  • Conflict detection logic is pushed down to data nodes for execution
    Minimising extra data transfer + locking
  • Slave operation batching is preserved
    Multiple row events are applied together, saving MySQLD <-> data node round trips, using data node parallelism
    For both algorithms, one extra MySQLD <-> data node round-trip is required in the no-conflicts case (best case)
  • NDB$EPOCH : One extra MySQLD <-> data node round-trip is required per *batch* in the all-conflicts case (worst case)
  • NDB$EPOCH : Minimal impact to Binlog sizes - one extra row event per epoch.
  • NDB$EPOCH : Minimal overhead to Slave SQL CPU consumption
  • NDB$EPOCH_TRANS : One extra MySQLD <-> data node round-trip is required per *batch* per *pass* in the all-conflicts case (worst case)
  • NDB$EPOCH_TRANS : One round of two passes is required for each conflict newly created since the previous pass.
  • NDB$EPOCH_TRANS : Small impact to Binlog sizes - one extra row event per epoch plus one user transaction id per row event.
  • NDB$EPOCH_TRANS : Small overhead to Slave SQL CPU consumption in no-conflict case

Current and intrinsic limitations

These functions support automatic conflict detection and handling without schema or application changes, but there are a number of limitations. Some limitations are due to the current implementation, some are just intrinsic in the asynchronous distributed consistency problem itself.

Intrinsic limitations
  • Reads from the Secondary are tentative
    Data committed on the secondary may later be rolled back. The window of potential rollback is limited, after which Secondary data can be considered stable. This is described in more detail here.
  • Writes to the Secondary may be rolled back
    If this occurs, the fact will be recorded on the Primary. Once a committed write is stable it will not be rolled back.
  • Out-of-band dependencies between transactions are out-of-scope
    For example direct communication between two clients creating a dependency between their committed transactions, not observable from their database footprints.

Current implementation limitations

  • Detected transaction dependencies are limited to dependencies between binlogged writes (Insert, Update, Delete)
    Reads are not currently included.
  • Delete vs Delete+Insert conflicts risk data divergence
    Delete vs Delete conflicts are detected, but currently do not trigger conflict handling, so a Delete vs Delete + Insert sequence can result in data divergence.
  • With NDB$EPOCH_TRANS, unplanned Primary outages may require manual steps to restore Secondary consistency
    With multiple pending, time-spaced, non-overlapping transactional conflicts, an unexpected Primary failure may require some Binlog processing to restore consistency.

Want to try it out?

Andrew Morgan has written a great post showing how to setup NDB$EPOCH_TRANS. He's even included non-ascii art. This is probably the easiest way to get started. NDB$EPOCH is slightly easier to get started with as the --ndb-log-transaction-id (and Binlog v2) options are not required.

Edit 23/12/11 : Added index

Monday 19 December 2011

Eventual consistency in MySQL Cluster - implementation part 2




In previous posts I described how row conflicts are detected using epochs. In this post I describe how they are handled.

Row based conflict handling with NDB$EPOCH


Once a row conflict is detected, as well as rejecting the row change, row based conflict handling in the Slave will :
  • Increment conflict counters
  • Optionally insert a row into an exceptions table
For NDB$EPOCH, conflict detection and handling operates on one Cluster in an Active-Active pair designated as the Primary. When a Slave MySQLD attached to the Primary Cluster detects a conflict between data stored in the Primary and a replicated event from the Secondary, it needs to realign the Secondary to store the same values for the conflicting data. Realignment involves injecting an event into the Primary Cluster's Binlog which, when applied idempotently on the Secondary Cluster, will force the row on the Secondary Cluster to take the supplied values. This requires either a WRITE_ROW event, with all columns, or a DELETE_ROW event with just the primary key columns. These events can be thought of as compensating events used to revert the original effect of the rejected events.

Conflicts are detected by a Slave MySQLD attached to the Primary Cluster, and realignment events must appear in Binlogs recorded by the same MySQLD and/or other Binlogging MySQLDs attached to the Primary Cluster. This is achieved using a new NdbApi primary key operation type called refreshTuple.

When a refreshTuple operation is executed it will :
  1. Lock the affected row/primary key until transaction commit time, even if it does not exist (much as an Insert would).
  2. Set the affected row's author metacolumn to 0
    The refresh is logically a local change
  3. On commit
    - Row exists case : Set the row's last committed epoch to the current epoch
    - Cause a WRITE_ROW (row exists case) or DELETE_ROW (no row exists) event to be generated by attached Binlogging MySQLDs.

Locking the row as part of refreshTuple serialises the conflicting epoch transaction with other potentially conflicting local transactions. Updating the stored epoch and author metacolumns ensures that the refreshed row will conflict with any further replicated changes arriving while the realignment event is 'in flight'. The compensating row events are effectively new row changes originating at the Primary cluster, which need to be monitored for conflicts in the same way as normal row changes.

It is important that the Slave running at the Secondary Cluster where the realignment events will be applied, is running in idempotent mode, so that it can handle the realignment events correctly. If this is not the case then WRITE_ROW realignment events may hit 'Row already exists' errors, and DELETE_ROW realignment events may hit 'Row does not exist' errors.
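As a simple illustration of the realignment choice described above, the compensating event depends only on whether the conflicting row currently exists on the Primary. This Python fragment is just a sketch of that decision; the event representation is invented.

  def build_realignment_event(table, primary_key, primary_row):
      # primary_row is the Primary's committed row value, or None if no row exists
      if primary_row is not None:
          # Row exists on the Primary : force the Secondary's row to the Primary's values
          return {'type': 'WRITE_ROW', 'table': table, 'values': primary_row}
      # No row on the Primary : delete the corresponding Secondary row
      return {'type': 'DELETE_ROW', 'table': table, 'key': primary_key}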

Observations on conflict windows and consistency

When a conflict is detected, the refresh process results in the row's epoch and author metacolumns being modified so that the window of potential conflict is extended, until the epoch in which the refresh operation was recorded has itself been reflected. If ongoing updates at both clusters continually conflict then refresh operations will continue to be generated, and the conflict window will remain open until a refresh operation manages to propagate with no further conflicts occurring. As with any eventually consistent system, consistency is only guaranteed when the system (or at least the data of interest) is quiescent for a period.

From the Primary cluster's point of view, the conflict window length is the time between committing a local transaction in epoch n, and the attached Slave committing a replicated epoch transaction indicating that epoch n has been applied at the Secondary. Any Secondary-sourced overlapping change applied in this time is in-conflict.

This Cluster conflict window length is comprised of :

  • Time between commit of transaction, and next Primary Cluster epoch boundary
    (Worst = 1 * TimeBetweenEpochs, Best = 0, Avg = 0.5 * TimeBetweenEpochs)
  • Time required to log event in Primary Cluster's Binlogging MySQLDs Binlog (~negligible)
  • Time required for Secondary Slave MySQLD IO thread to
    - Minimum : Detect new Binlog data - negligible
    - Maximum : Consume queued Binlog prior to the new data - unbounded
    - Pull new epoch transaction
    - Record in Relay log
  • Time required for Secondary Slave MySQLD SQL thread to
    - Minimum : Detect new events in relay log
    - Maximum : Consume queued Relay log prior to new data - unbounded
    - Read and apply events
    - Potentially multiple batches.
    - Commit epoch transaction at Secondary
  • Time between commit of replicated epoch transaction and next Secondary Cluster epoch boundary
    (Worst = 1 * TimeBetweenEpochs, Best = 0, Avg = 0.5 * TimeBetweenEpochs)
  • After this point a Secondary-local commit on the data is possible without conflict
  • Time required to log event in Secondary Cluster's Binlogging MySQLDs Binlog (~negligible)
  • Time required for Primary Slave MySQLD IO thread to
    - Minimum : Detect new Binlog data
    - Maximum : Consume queued Binlog data prior to the new data - unbounded
    - Pull new epoch transaction
    - Record in Relay log
  • Time required for Primary Slave MySQLD SQL thread to
    - Minimum : Detect new events in relay log
    - Maximum : Consume queued Relay log prior to new data - unbounded
    - Read and apply events
    - Potentially multiple batches.
    - For NDB$EPOCH_TRANS, potentially multiple passes
    - Commit epoch transaction
    - Update max replicated epoch to reflect new maximum.
  • Further Secondary sourced modifications to the rows are now considered not-in-conflict

From the point of view of an external client with access to both Primary and Secondary clusters, the conflict window only extends from the time transaction commit occurs at the Primary to the time the replicated operations are applied at the Secondary, and its commit time Secondary epoch ends. Changes committed at the Secondary after this will clearly appear to the Primary to have occurred after its epoch was applied on the Secondary and therefore are not in-conflict.

Assuming that both Clusters have the same TimeBetweenEpochs, we can simplify the Cluster conflict window to :
  Cluster_conflict_window_length = EpochDelay +
                                   P_Binlog_lag +
                                   S_Relay_lag +
                                   S_Binlog_lag +
                                   P_Relay_lag

Where
    EpochDelay minimum is 0
    EpochDelay avg     is TimeBetweenEpochs
    EpochDelay maximum is 2 * TimeBetweenEpochs


Substituting the default value of TimeBetweenEpochs of 100 millis, we get :
    EpochDelay minimum is 0
    EpochDelay avg     is 100 millis
    EpochDelay maximum is 200 millis


Note that TimeBetweenEpochs is an epoch-increment trigger delay. The actual experienced time between epochs can be longer depending on system load. The various Binlog and Relay log delays can vary from close to zero up to infinity. Infinity occurs when replication stops in either direction.

The Cluster conflict window length can be thought of as both
  • The time taken to detect a conflict with a Primary transaction
  • The time taken for a committed Secondary transaction to become stable or be reverted

We can define a Client conflict window length as either :
 Primary->Secondary

  Client_conflict_window_length = EpochDelay +
                                  P_Binlog_lag +
                                  S_Relay_lag +
                                  EpochDelay

 or

 Secondary->Primary

  Client_conflict_window_length = EpochDelay +
                                  S_Binlog_lag +
                                  P_Relay_lag

Where EpochDelay is defined as above.
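To make the arithmetic concrete, here is a small Python sketch that simply encodes the formulas above, using made-up lag values in milliseconds.

  TIME_BETWEEN_EPOCHS = 100   # ms, the default

  # EpochDelay as defined above : min 0, avg TimeBetweenEpochs, max 2 * TimeBetweenEpochs
  def epoch_delay(kind='avg'):
      return {'min': 0, 'avg': TIME_BETWEEN_EPOCHS, 'max': 2 * TIME_BETWEEN_EPOCHS}[kind]

  def cluster_conflict_window(p_binlog, s_relay, s_binlog, p_relay, kind='avg'):
      return epoch_delay(kind) + p_binlog + s_relay + s_binlog + p_relay

  def client_window_p_to_s(p_binlog, s_relay, kind='avg'):
      return epoch_delay(kind) + p_binlog + s_relay + epoch_delay(kind)

  def client_window_s_to_p(s_binlog, p_relay, kind='avg'):
      return epoch_delay(kind) + s_binlog + p_relay

  # Illustrative 20 ms lag in each Binlog / Relay log stage :
  print(cluster_conflict_window(20, 20, 20, 20))   # 180 ms on average
  print(client_window_p_to_s(20, 20))              # 240 ms on average
  print(client_window_s_to_p(20, 20))              # 140 ms on average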


These definitions are asymmetric. They represent the time taken by the system to determine that a particular change at one cluster definitely happened-before another change at the other cluster. The asymmetry is due to the need for the Secondary part of a Primary->Secondary conflict to be recorded in a different Secondary epoch. The first definition considers an initial change at the Primary cluster, and a following change at the Secondary. The second definition is for the inverse case.

An interesting observation is that for a single pair of near-concurrent updates at different clusters, happened-before depends only on latencies in one direction. For example, an update to the Primary at time Ta, followed by an update to the Secondary at time Tb will not be considered in conflict if:

 Tb - Ta > Client_conflict_window_length(Primary->Secondary)


Client_conflict_window_length(Primary->Secondary) depends on the EpochDelay, the P_Binlog_lag and S_Relay_lag, but not on the S_Binlog_lag or P_Relay_lag. This can mean that high replication latency, or a complete outage in one direction does not always result in increased conflict rates. However, in the case of multiple sequences of near-concurrent updates at different sites, it probably will.

A general property of the NDB$EPOCH family is that the conflict rate has some dependency on the replication latency. Whether two updates to the same row at times Ta and Tb are considered to be in conflict depends on the relationship between those times and the current system replication latencies. This can remove the need for highly synchronised real-time clocks as recommended for NDB$MAX, but can mean that the observed conflict rate increases when the system is lagging. This also implies that more work is required to catch up, which could further affect lag. NDB$MAX requires manual timestamp maintenance, and will not detect incorrect behaviour, but the basic decision on whether two updates are in-conflict is decided at commit time and is independent of the system replication latency.

In summary :
  • The Client_conflict_window_length in either direction will on average not be less than the EpochDelay (100 millis by default)
  • Clients racing against replication to update both clusters need only beat the current Client_conflict_window_length to cause a conflict
  • Replication latencies in either direction are potentially independent
  • Detected conflict rates partly depend on replication latencies

Stability of reads from the Primary Cluster

In the case of a conflict, the rows at the Primary Cluster will tentatively have replicated operations applied against them by a Slave MySQLD. These conflicting operations will fail prior to commit as their interpreted precondition checks will fail, therefore the conflicting rows will not be modified on the Primary. One effect of this is that a read from the Primary Cluster only ever returns stable data, as conflicting changes are never committed there. In contrast, a read from the Secondary Cluster returns data which has been committed, but may be subject to later 'rollback' via refresh operations from the Primary Cluster.

The same stability of reads observation applies to a row change event stream on the Primary Cluster - events received for a single key will be received in the order they were committed, and no later-to-be-rolled-back events will be observed in the stream.

Stability of reads from the Secondary Cluster

If the Secondary Cluster is also receiving reflected applied epoch information back from the Primary then it will know when its epoch x has been applied successfully at the Primary. Therefore a read of some row y on the Secondary can be considered tentative while Max_Replicated_Epoch(Secondary) < row_epoch(y), but once Max_Replicated_Epoch(Secondary) >= row_epoch(y) then the read can be considered stable. This is because if the Primary were going to detect a conflict with a Secondary change committed in epoch x, then the refresh events associated with the conflict would be recorded in the same Primary epoch as the notification of the application of epoch x. So if the Secondary observes the notification of epoch x (and updates Max_Replicated_Epoch accordingly), and row y is not modified in the same epoch transaction, then it is stable. The time taken to reach stability after a Secondary Cluster commit will be the Cluster conflict window length.

Perhaps some applications can make better use of the potentially transiently inconsistent Secondary data by categorising their reads from the Secondary as either potentially-inconsistent or stable. To do this, they need to maintain Max_Replicated_Epoch(Secondary) (by listening to row change events on the ndb_apply_status table) and read the NDB$GCI64 metacolumn when reading row data. A read from the Secondary is stable if all the NDB$GCI64 values for all rows read are <= the Secondary's Max_Replicated_Epoch.
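A sketch of that read classification might look like the following Python, assuming the application tracks Max_Replicated_Epoch(Secondary) itself and reads each row's NDB$GCI64 value alongside the data. The row representation is invented for the example.

  def classify_secondary_read(rows, max_replicated_epoch):
      # rows : iterable of (row_data, ndb_gci64) pairs read from the Secondary
      if all(gci64 <= max_replicated_epoch for _, gci64 in rows):
          return 'stable'
      # One or more rows were committed in an epoch the Primary has not yet
      # acknowledged, so they may still be 'rolled back' by a refresh
      return 'potentially-inconsistent'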

In the next post (final post I promise!) I will describe the implementation of the transaction dependency tracking in NDB$EPOCH_TRANS, and review the implementation of both NDB$EPOCH and NDB$EPOCH_TRANS.

Edit 23/12/11 : Added index

Thursday 8 December 2011

Eventual consistency in MySQL Cluster - implementation part 1




The last post described MySQL Cluster epochs and why they provide a good basis for conflict detection, with a few enhancements required. This post describes the enhancements.

The following four mechanisms are required to implement conflict detection via epochs :
  1. Slaves should 'reflect' information about replicated epochs they have applied
    Applied epoch numbers should be included in the Slave Binlog events returning to the originating cluster, in a Binlog position corresponding to the commit time of the replicated epoch transaction relative to Slave local transactions.
  2. Masters should maintain a maximum replicated epoch
    A cluster should use the reflected epoch information to track which of its epochs has been applied by a Slave cluster. This will be the maximum of all epochs applied by the Slave.
  3. Masters should track commit-time epoch per row
    To allow per-row detection of conflicts
  4. Masters should track commit-authorship per row
    To differentiate recent epochs due to replication or conflicting activity.

'Reflecting' epoch information and maintaining the maximum replicated epoch

Every epoch transaction in the Binlog contains a special WRITE_ROW event on the mysql.ndb_apply_status table which carries the epoch transaction's epoch number. This is designed to give an atomically consistent way to determine a Slave cluster's position relative to a Master cluster. Normally these WRITE_ROW events are applied by the Slave but not logged in the Slave's Binlog, even when --log-slave-updates is ON. A new MySQLD option, --ndb-log-apply-status causes WRITE_ROW events applied to the mysql.ndb_apply_status table to be binlogged at a Slave, even when --log-slave-updates is OFF. These events are logged with the ServerId of the Slave MySQLD, so that they can be applied on the Master, but will not loop infinitely.

Allowing this applied epoch information to propagate through a Slave Cluster has the following effects :
  1. Downstream Clusters become aware of their position relative to all upstream Master clusters, not just their immediate Master cluster.
    They gain extra mysql.ndb_apply_status entries for all upstream Masters.
  2. Circularly replicating clusters become aware of which of their epochs, and epoch transactions, have been applied to all clusters in the circle.
    They gain extra mysql.ndb_apply_status entries for all Binlogging MySQLDs in the loop

Effect 1 is useful for replication failover with more than two replication-chained clusters where an intermediate cluster is being routed-around (A->B->C) -> (A->C). Cluster C knows the correct Binlog file and position to resume from on A, without consulting B.

Effect 2 could be used to allow clients to wait until their writes have been fully replicated and are globally visible, a kind of synchronous replication. More relevantly, effect 2 allows us to maintain a maximum replicated epoch value for detecting conflicts.

The visible result of using --ndb-log-apply-status on a Slave is that the mysql.ndb_apply_status table on the Master contains extra entries for the Binlogging MySQLDs attached to its Cluster. The maximum replicated epoch is the maximum of these epoch values.

   Cluster 1 Epoch transactions in flight in
   a circular configuration
   (Ignoring Cluster 2 epochs)

                     39    38    37
               ->---->----->----->----->--
              /                           \  (Queued epochs 36-26)
      Cluster 1                        Cluster 2
              \                           /
               -<---<------<----<----<----
                     25    26    27
      (Queued epochs 23,24)

   Current epoch = 40
   Max replicated epoch = 22


A MySQLD acting as a conflict detecting Slave for a cluster needs to know the attached cluster's maximum replicated epoch for conflict detection. On Slave start, before the Slave starts applying replicated changes to the Ndb storage engine, it scans the mysql.ndb_apply_status table to find the highest reflected epoch value. Rows in mysql.ndb_apply_status whose server ids are in the CHANGE MASTER TO IGNORE_SERVER_IDS list, or match the Slave's own server id, are considered to belong to local servers, and the maximum replicated epoch is the maximum epoch value from these rows.

@ Slave start

max_replicated_epoch = SELECT MAX(epoch)
FROM mysql.ndb_apply_status
WHERE server_id IN @@IGNORE_SERVER_IDS;



Once the Max_replicated_epoch has been initialised at slave start, it is updated as each reflected epoch event (WRITE_ROW event to mysql.ndb_apply_status) arrives and is processed by the Slave SQL thread. The current Max_replicated_epoch can be seen by issuing the command SHOW STATUS LIKE 'Ndb_slave_max_replicated_epoch';. Note that this is really just a cached copy of the current result of the SELECT MAX(epoch) query from above. One subtlety is that the max_replicated_epoch is only changed when the Slave commits an epoch transaction, as it is only at this point that we know for sure that any event committed on the other cluster before the replicated epoch was applied has been handled.
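The bookkeeping can be sketched in Python as below. The structures and method names are invented for the example, but the behaviour follows the description: initialise from mysql.ndb_apply_status at Slave start, track reflected epoch events as they are applied, and only advance the value used for conflict detection when the epoch transaction commits.

  class MaxReplicatedEpoch:
      def __init__(self, apply_status_rows, local_server_ids):
          # apply_status_rows : iterable of (server_id, epoch) from mysql.ndb_apply_status
          self.local_server_ids = set(local_server_ids)   # own id + IGNORE_SERVER_IDS
          epochs = [epoch for server_id, epoch in apply_status_rows
                    if server_id in self.local_server_ids]
          self.committed = max(epochs, default=0)
          self.pending = self.committed

      def on_reflected_epoch_event(self, server_id, epoch):
          # WRITE_ROW on mysql.ndb_apply_status seen by the Slave SQL thread
          if server_id in self.local_server_ids:
              self.pending = max(self.pending, epoch)

      def on_epoch_transaction_commit(self):
          # Only at commit do we know the other cluster's prior changes are handled
          self.committed = max(self.committed, self.pending)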

Per row last-modified epoch storage

Each row stored in Ndb has a built-in hidden metadata column called NDB$GCI64. This column stores the epoch number at which the row was last modified. For normal system recovery purposes, only the top 32 bits of the 64 bit epoch, called the Global Checkpoint Index or GCI, are used. NDB$EPOCH needs further bits to be stored per-row. Epoch values only use a few of the bits in the bottom 32 bits of the epoch, so by default 6 extra bits per row are used to enable a full 64 bit epoch to be stored for each row. The actual number of bits used can be controlled by a parameter to NDB$EPOCH. Where some epoch is not fully expressible in the number of bits available, the bottom 32 bits are saturated, which again errs on the side of safety, potentially causing false conflicts, but ensuring no real conflicts are missed. The ndb_select_all tool has a --gci64 option which shows each row's stored epoch value.

A conflict detecting slave detects conflicts between transactions already committed, whose rows have their commit-time epoch numbers, and incoming operations in an epoch transaction, which are considered to have been committed at the epoch given by the current Maximum Replicated Epoch. An incoming operation is considered to be in-conflict if the row it affects has a last-committed epoch that is greater than the current Maximum Replicated Epoch.

  in_conflict = (ndb_gci64 > max_replicated_epoch)


In other words, at the time the change was committed on the other Cluster, that other Cluster was only aware of our changes as-of our epoch (max_replicated_epoch). Therefore it was unaware of any changes committed in more recent epochs. If the row being changed has been locally modified since that epoch then there have been concurrent modifications and a conflict has been discovered.

Note that this mechanism is purely based on monitoring serialisation of updates to rows. No semantic understanding of row data, or the meaning of applied changes is attempted. Even if both clusters update some row to contain exactly the same value it will be considered to be a conflict, as the updates were not serialised with respect to each other.

Per row hidden Author metacolumn

One advantage of reusing the row's last-modified epoch number for conflict detection is that it is automatically set on every commit. However the downside is that when a replicated modification is found to not be in conflict, and is applied, the row's epoch is automatically set to the current value at commit time as normal. By definition, the current epoch value is always greater than the maximum replicated epoch, and so if a further replicated modification to the same row were to arrive, it would find the row's epoch to be higher than the current maximum replicated epoch, and detect a false conflict.

In theory we could consider the current maximum replicated epoch to be the row's commit time epoch, but as the per-row epoch is used for other more critical DB recovery purposes it's not safe to abuse it in this way. Instead we use the observation that if we found a previous row update from some other cluster to be not-in-conflict, then further updates from it are also not-in-conflict.

To detect this, a new hidden metadata column is introduced called NDB$AUTHOR. This column is set to zero when a row is modified by any unmodified NdbApi client, including MySQLD, but when a row is modified by the MySQLD Slave SQL thread, it is set to one. More generally, NDB$AUTHOR could be set to a non-zero identifier of which other cluster sourced an accepted change. Just setting to one limits us to having one other cluster originating potentially conflicting changes. The ndb_select_all tool has a --author option which shows each row's stored Author value.

By extending the conflict detecting function to examine the NDB$AUTHOR value, we avoid the problem of falsely detecting conflicts when applying consecutive replicated changes.
  in_conflict = (ndb$author != change_author) && (ndb_gci64 > max_replicated_epoch)


We are currently just using 1 to mean 'other author', so this simplifies to :
  in_conflict = (ndb$author != 1) && (ndb_gci64 > max_replicated_epoch)
              = (ndb$author == 0) && (ndb_gci64 > max_replicated_epoch)


This conflict detection function is encoded in an Ndb interpreted program and attached to the replicated DELETE and UPDATE NdbApi operations so that it can be quickly and atomically executed at the Ndb data nodes as a predicate prior to applying the operation.
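In plain Python the predicate amounts to the following sketch. The real check runs as an interpreted program on the data nodes, so this is purely illustrative.

  def in_conflict(row_author, row_gci64, max_replicated_epoch):
      # Reject if the row was last modified locally (author == 0) in an epoch
      # more recent than the one the other cluster has acknowledged seeing
      return row_author == 0 and row_gci64 > max_replicated_epoch

  # e.g. a row last written locally in epoch 41 conflicts with a replicated
  # change from a cluster that has only acknowledged our epoch 39 :
  assert in_conflict(row_author=0, row_gci64=41, max_replicated_epoch=39)
  # ...but a row whose last write came from the Slave (author == 1) does not :
  assert not in_conflict(row_author=1, row_gci64=41, max_replicated_epoch=39)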

Ndb binlog row event ordering and false conflicts

The happened-before relationship between reflected epoch events (WRITE_ROW to mysql.ndb_apply_status) and incoming row events is used to determine whether a conflict has occurred. As described in the last post, Ndb offers limited ordering guarantees on the row events within an epoch transaction. The only guarantee is that multiple changes to the same row will be recorded in the order they committed. This implies that the relative ordering of the reflected epoch WRITE_ROW event, on some row in mysql.ndb_apply_status, and other row events on other tables sharing the same epoch transaction is meaningless. The only ordering guarantees between different rows exist at epoch boundaries.

This means that if we see a reflected epoch WRITE_ROW event somewhere in replicated epoch j, then we can only safely assume that this happened before incoming row events in epoch j+1 and later. The row events appearing before and after the reflected epoch WRITE_ROW event in epoch j may have committed before or after the reflected epoch event.

The relaxed relative ordering gives us reduced precision in determining happened-before, and to be safe, we must err on the side of assuming that a conflict exists rather than that it does not. Consider a Master committing a change to row X, recorded in epoch N. This is then applied on the Slave in Slave epoch S. If the Slave then commits a local change affecting the same row X in the same epoch S, this will be returned to the Master in the same Slave epoch transaction, and the Master will be unable to determine whether it occurred before or after its original write to X, so must assume that it occurred before and is therefore in conflict. If the Slave had committed its change in epoch S+1 or later, the happened-before relationship would be clear and the change would not be considered in conflict.

These potential false conflicts are the price paid here for the lack of fine grained event ordering in the Ndb Binlog.

I'm lost

There's been a lot of information, or at least a lot of words. Let's summarise how NDB$EPOCH and NDB$EPOCH_TRANS detect row conflicts by following a row change around the replication loop :

  • @Cluster A
    Transactions modify rows, automatically setting their hidden NDB$GCI64 column to the current epoch and their NDB$AUTHOR column to 0

    Binlogging MySQLDs record modified rows in epoch transactions in their Binlogs, together with MySQLD generated mysql.ndb_apply_status WRITE_ROW events

  • @Cluster B
    Slave MySQLDs apply replicated epoch transactions along with their generated mysql.ndb_apply_status WRITE_ROW events

    Other clients of Cluster B commit transactions against the same data.

    Binlogging MySQLDs 'reflect' the applied-replicated epoch information by recording the mysql.ndb_apply_status WRITE_ROW events in their Binlogs as a result of --ndb-log-apply-status.

    Binlogging MySQLDs also record the row changes made by local clients.

  • @Cluster A
    Slave MySQLDs track the incoming reflected epoch mysql.ndb_apply_status WRITE_ROW events to maintain their ndb_slave_max_replicated_epoch variables

    Slave MySQLDs attach NdbApi interpreted programs to UPDATE and DELETE operations as they are applied to the database, comparing the row's stored NDB$GCI64 and NDB$AUTHOR columns with constant values supplied in the program.

    If there are no conflicts, the UPDATE and DELETE operations are applied, and the rows' NDB$AUTHOR column is set to one, indicating a successful Slave modification

    If there are conflicts then conflict handling for the conflicting rows begins.

Now does that make any sense? Assuming it does, then next we look at how detected conflicts are handled.

Once again, another wordy endurance test and we're not finished. Surely the end must be near?

Edit 23/12/11 : Added index

Wednesday 7 December 2011

Eventual Consistency in MySQL Cluster - using epochs




Before getting to the details of how eventual consistency is implemented, we need to look at epochs. Ndb Cluster maintains an internal distributed logical clock known as the epoch, represented as a 64 bit number. This epoch serves a number of internal functions, and is atomically advanced across all data nodes.

Epochs and consistent distributed state

Ndb is a parallel database, with multiple internal transaction coordinator components starting, executing and committing transactions against rows stored in different data nodes. Concurrent transactions only interact where they attempt to lock the same row. This design minimises unnecessary system-wide synchronisation, enabling linear scalability of reads and writes.

The stream of changes made to rows stored at a data node are written to a local Redo log for node and system recovery. The change stream is also published to NdbApi event listeners, including MySQLD servers recording Binlogs. Each node's change stream contains the row changes it was involved in, as committed by multiple transactions, and coordinated by multiple independent transaction coordinators, interleaved in a partial order.

   Incoming independent transactions
   affecting multiple rows

      T3        T4        T7
      T1        T2        T5

       |         |         |
       V         V         V

   --------  --------  --------
   |  1   |  |  2   |  |  3   |
   |  TC  |  |  TC  |  |  TC  |   Data nodes with multiple
   |      |--|      |--|      |   transaction coordinators
   |------|  |------|  |------|   acting on data stored in
   |      |  |      |  |      |   different nodes
   | DATA |  | DATA |  | DATA |
   --------  --------  --------

       |         |         |
       V         V         V

      t4        t4        t3
      t1        t7        t2
      t2        t1        t7
      t5

   Outgoing row change event
   streams by causing
   transaction


These row event streams are generated independently by each data node in a cluster, but to be useful they need to be correlated together. For system recovery from a crash, the data nodes need to recover to a cluster-wide consistent state. A state which contains only whole transactions, and a state which, logically at least, existed at some point in time. This correlation could be done by an analysis of the transaction ids and row dependencies of each recorded row change to determine a valid order for the merged event streams, but this would add significant overhead. Instead, the Cluster uses a distributed logical clock known as the epoch to group large sets of committed transactions together.

Each epoch contains zero or more committed transactions. Each committed transaction is in only one epoch. The epoch clock advances periodically, every 100 milliseconds by default. When it is time for a new epoch to start, a distributed protocol known as the Global Commit Protocol (GCP) results in all of the transaction coordinators in the Cluster agreeing on a point of time in the flow of committing transactions at which to change epoch. This epoch boundary, between the commit of the last transaction in epoch n, and the commit of the first transaction in epoch n+1, is a cluster-wide consistent point in time. Obtaining this consistent point in time requires cluster-wide synchronisation, between all transaction coordinators, but it need only happen periodically.

Furthermore, each node ensures that all events for epoch n are published before any events for epoch n+1 are published. Effectively the event streams are sorted by epoch number, and the first time a new epoch is encountered signifies a precise epoch boundary.

   Incoming independent transactions

      T3        T4        T7
      T1        T2        T5

       |         |         |
       V         V         V

   --------  --------  --------
   |  1   |  |  2   |  |  3   |
   |  TC  |  |  TC  |  |  TC  |   Data nodes with multiple
   |      |--|      |--|      |   transaction coordinators
   |------|  |------|  |------|   acting on data stored in
   |      |  |      |  |      |   different nodes
   | DATA |  | DATA |  | DATA |
   --------  --------  --------

       |         |         |
       V         V         V

    t4(22)    t4(22)    t3(22)    Epoch 22
    ......    ......    ......
    t1(23)    t7(23)    t2(23)    Epoch 23
    t2(23)    t1(23)    t7(23)
    ......
    t5(24)                        Epoch 24

   Outgoing row change event
   streams by causing transaction
   with epoch numbers in ()



When these independent streams are merge-sorted by epoch number we get a unified change stream. Multiple possible orderings can result; one partial ordering is shown here :

    Events      Transactions
                contained in epoch

    t4(22)
    t4(22)      {T4, T3}
    t3(22)

    ......

    t1(23)
    t2(23)
    t7(23)
    t1(23)      {T1, T2, T7}
    t2(23)
    t7(23)

    ......

    t5(24)      {T5}



Note that we can state from this that T4 -> T1 (Happened before), and T1 -> T5. However we cannot say whether T4 -> T3 or T3 -> T4. In epoch 23 we see that the row events resulting from T1, T2 and T7 are interleaved.

Epoch boundaries act as markers in the flow of row events generated by each node, which are then used as consistent points to recover to. Epoch boundaries also allow a single system wide unified transaction log to be generated from each node's row change stream, by merge-sorting the per-node row change streams by epoch number. Note that the order of events within an epoch is still not tightly constrained. As concurrent transactions can only interact via row locks, the order of events on a single row (Table and Primary key value) signifies transaction commit order, but there is by definition no order between transactions affecting independent row sets.

To record a Binlog of Ndb row changes, MySQLD listens to the row change streams arriving from each data node, and merge-sorts them by epoch into a single, epoch-ordered stream. When all events for a given epoch have been received, MySQLD records a single Binlog transaction containing all row events for that epoch. This Binlog transaction is referred to as an 'Epoch transaction' as it describes all row changes that occurred in an epoch.
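The merge itself is conceptually simple. Here is an illustrative Python version using heapq.merge; the real work happens inside MySQLD's event handling, and knowing when an epoch is complete relies on per-node epoch boundary signalling, which this sketch glosses over.

  import heapq
  from itertools import groupby

  def epoch_ordered_stream(node_streams):
      # node_streams : per-data-node iterables of (epoch, row_event), each
      # already ordered by epoch; the merge preserves each node's internal order
      return heapq.merge(*node_streams, key=lambda ev: ev[0])

  def epoch_transactions(node_streams):
      # Yield (epoch, [row_events]) - one Binlog 'epoch transaction' per epoch
      for epoch, events in groupby(epoch_ordered_stream(node_streams),
                                   key=lambda ev: ev[0]):
          yield epoch, [row_event for _, row_event in events]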

Epoch transactions in the Binlog

Epoch transactions in the Binlog have some interesting properties :
  • Efficiency : They can be considered a kind of Binlog group commit, where multiple user transactions are recorded in one Binlog (epoch) transaction. As an epoch normally contains 100 milliseconds of row changes from a cluster, this is a significant amortisation.
  • Consistency : Each epoch transaction contains the row operations which occurred when moving the cluster from epoch boundary consistent state A to epoch boundary consistent state B
    Therefore, when applied as a transaction by a slave, the slave will atomically move from consistent state A to consistent state B
  • Inter-epoch ordering : Any row event recorded in epoch n+1 logically happened after every row event in epoch n
  • Intra-epoch disorder : Any two row events recorded in epoch n, affecting different rows, may have happened in any order.
  • Intra-epoch key-order : Any two row events recorded in epoch n, affecting the same row, happened in the order they are recorded.

The ordering properties show that epochs give only a partial order, enough to subdivide the row change streams into self-consistent chunks. Within an epoch, row changes may be interleaved in any way, except that multiple changes to the same row will be recorded in the order they were committed.

Each epoch transaction contains the row changes for a particular epoch, and that information is recorded in the epoch transaction itself, as an extra WRITE_ROW event on a system table called mysql.ndb_apply_status. This WRITE_ROW event contains the binlogging MySQLD's server id and the epoch number. This event is added so that it will be atomically applied by the Slave along with the rest of the row changes in the epoch transaction, giving an atomically reliable indicator of the replication 'position' of the Slave relative to the Master Cluster in terms of epoch number. As the epoch number is abstracted from the details of a particular Master MySQLD's binlog files and offsets, it can be used to failover to an alternative Master.

We can visualise a MySQL Cluster Binlog as looking something like this. Each Binlog transaction contains one 'artificially generated' WRITE_ROW event at the start, and then RBR row events for all row changes that occurred in that epoch.

  BEGIN
  WRITE_ROW mysql.ndb_apply_status server_id=4, epoch=6998
  WRITE_ROW ...
  UPDATE_ROW ...
  DELETE_ROW ...
  ...
  COMMIT   # Consistent state of the database

  BEGIN
  WRITE_ROW mysql.ndb_apply_status server_id=4, epoch=6999
  ...
  COMMIT   # Consistent state of the database

  BEGIN
  WRITE_ROW mysql.ndb_apply_status server_id=4, epoch=7000
  ...
  COMMIT   # Consistent state of the database
  ...


A series of epoch transactions, each with a special WRITE_ROW event for recording the epoch on the Slave. You can see this structure using the mysqlbinlog tool with the --verbose option.

Rows tagged with last-commit epoch

Each row in a MySQL Cluster stores a hidden metadata column which contains the epoch at which a write to the row was last committed. This information is used internally by the Cluster during node recovery and other operations. The ndb_select_all tool can be used to see the epoch numbers for rows in a table by supplying the --gci or --gci64 options. Note that the per-row epoch is not a row version, as two updates to a row in reasonably quick succession will have the same commit epoch.

Epochs and eventual consistency

Reviewing epochs from the point of view of my previous posts on eventual consistency we see that :
  • Epochs provide an incrementing logical clock
  • Epochs are recorded in the Binlog, and therefore shipped to Slaves
  • Epoch boundaries imply happened-before relationships between events before and after them in the Binlog

These properties mean that epochs are almost perfect for monitoring conflict windows in an active-active circular replication setup, with only a few enhancements required.

I'll describe these enhancements in the next post.

Edit 23/12/11 : Added index

Friday 25 November 2011

Speaking at Oracle UK User Group conference


I will be speaking in the MySQL track of the UK Oracle User Group conference on 5th December in Birmingham UK. The title of the session is "Building Highly Available and Scalable, Real Time Services with MySQL Cluster" - full details here.

I'm not a regular conference attendee, never mind speaker. However I'm looking forward to meeting current and potential MySQL users, and also attending some of the talks in the MySQL and other tracks. Maybe I can learn something about RAC, or Exadata?

If you are attending and want to talk about MySQL or MySQL Cluster then please track me down and say hello.

Note that this is the first picture I have included in 3 years of posts - maybe I shouldn't wait 3 years for the next one?

Thursday 20 October 2011

Eventual Consistency - detecting conflicts




In my previous posts I introduced two new conflict detection functions, NDB$EPOCH and NDB$EPOCH_TRANS, without explaining how these functions actually detect conflicts. To simplify the explanation I'll initially consider two circularly replicating MySQL Servers, A and B, rather than two replicating Clusters, but the principles are the same.

Commit ordering

Avoiding conflicts requires that data is only modified on one Server at a time. This can be done by defining Master/Slave roles or Active/Passive partitions etc. Where this is not done, and data can be modified anywhere, there can be conflicts. A conflict occurs when the same data is modified at both Servers concurrently, but what does concurrently mean? On a single server, modifications to the same data are serialised by locking or MVCC mechanisms, so that there is a defined order between them. e.g. two modifications MX and MY are committed either in order {MX, MY} or {MY, MX}.

For the purposes of replication, two modifications MX and MY on the same data are concurrent if the order of commit is different at different servers in the system. Each server will choose one order, but if they don't all choose the same order then there is a conflict. Having a different order means that the last modification on each server is different, and therefore the final state of the data can be different on different servers.

One way to avoid conflicts is to get all servers to agree on a commit order before processing an operation - this ensures that all replicas process operations in the same order, waiting if necessary for missing operations to arrive to ensure no commit-order variance.

Note that commit-ordering is only important between modifications affecting the same data - modifications which do not overlap in their data footprint are unrelated and can be committed in any order. A system which totally orders commits may be less efficient than one which only orders conflicting commits.

Happened before

For the NDB$EPOCH asynchronous conflict detection functions, commit orders are monitored to detect when two modifications to the same data have been committed in different orders.

Given two modifications MX and MY to the same data, each server will decide a happened before (denoted ->) relationship between them :

  1. MX -> MY (MX happened before MY)

    or

  2. MY -> MX (MY happened before MX)

If all servers agree on order 1, or all servers agree on order 2 then there is no conflict. If there is any disagreement then there is a conflict.

In practice, disagreement arises because the same data is modified at both Server A and Server B before the Server A modification is replicated to B and/or vice-versa.

Sometimes when reading about commit ordering, the reason why commit orders should not diverge is lost - the only reason to care about commit ordering is because it is related to conflicting modifications and the potential for data divergence.

Determining happened before from the Binlog


We assume a steady start state, where both Server A and Server B agree about the state of their data, and no modifications are in-flight. If a client of Server A then commits modification MA1 to row X, then from Server A's point of view, MA1 happened before any future modification to row X.

MA1 -> M*

If a client of Server B commits modification MB1 to row X around the same time (before, or after, or thereabouts), from Server B's point of view, MB1 happened before any future modification to row X.

MB1 -> M*

Both Servers are correct, and content with their world view. Note that in general, when committing a modification Mj, a server naturally asserts that from its point of view the modification happened before any as-yet-unseen modification Mk.

Some time will pass and the replication mechanisms will pull Binlogged changes across and apply them. When Server B pulls and applies Server A's Binlogged changes, modification MA1 will be applied to row X. Server B will then naturally be of the opinion that :

MB1 -> MA1

Independently, Server A will pull Server B's binlogged changes and apply modification MB1 to row X, and will come to the certain opinion that :

MA1 -> MB1

These happened before relationships are contradictory so there is a conflict. If nothing is done then A and B will have diverged, with Server A storing the outcome of MB1, and Server B storing the outcome of MA1.

Note that if the --log-slave-updates server option were on, then Server A's Binlog would have recorded {...MA1...MB1...}, whereas Server B's Binlog would have recorded {...MB1...MA1...}. By recording when the Slave applies replicated updates in the Binlog, we record the commit order of the replicated updates relative to other local updates, and encode the happened before relationship in the relative positions of events in the Binlog.

The Binlog is of course transferred between servers, so in a circular replication setup, Server A can become aware of the happened before information from Server B and vice-versa by examining the received Binlogs. The Slave SQL thread examines Binlogs as it applies them, so can be extended to extract happened before information, and use it to detect conflicts.

Recall that Server A asserts that its committed modification to row X (MA1) happened before any as-yet-unseen replicated modification :

MA1 -> M*

Therefore, to detect a conflict, Server A only needs to detect the case where the incoming Binlog from Server B infers that some modification MB* to row X happened before server A's already committed modification MA1.

If Server B Binlog implies MB* -> MA1 then there has been a conflict

This is in essence how the NDB$EPOCH functions work - the Binlog is used to capture happened before relationships which are checked to determine whether conflicting concurrent modifications have occurred.


Conflict Windows

In the previous example, Server A commits MA1 modifying row X, and Server B commits MB1 also modifying row X. From Server A's point of view, as soon as it commits MA1, there is potential for a replicated modification from B such as MB1 to be found in-conflict with MA1. We say that from Server A's point of view a window of potential conflict on row X has opened when MA1 was committed. Server A monitors Server B's Binlog as it is applied and when it reaches the point where the commit of MA1 at Server B is recorded, Server A knows that any further MB* recorded in Server B's Binlog after this cannot have happened before MA1, therefore the window of potential conflict on row X has closed.

We define the window of potential conflict on a row X as the time between the commit of a modification M1, and the Slave processing of an event in a replicated Binlog indicating that modification M1 has been applied on the other server(s) in the replication loop.

Any incoming replicated modification M2 also affecting row X while it has an open conflict window is in conflict with M1, as it must appear to have happened-before M1 to the server which committed it.

Observations about the window of potential conflict :
  • It is defined per committed modification per disjoint data set
  • It can be extended by further modifications to the same data from the same server
    The window does not close until all further modifications have been fully replicated
  • Window duration is dependent on the replication round-trip delay
    Which can vary greatly
  • Once it closes, further modifications to the same data from anywhere are safe, but will each open their own window of potential conflict.
  • From the point of view of one Server, conflicts can occur at any time until the conflict window is closed
  • From the point of view of one Server, the duration of the window of potential conflict is similar to

    Replication Propagation Delay A to B + Replication Propagation Delay B to A

    These delays may not be symmetric.
  • From the point of view of an external observer/actor, the system will detect two modifications MA1 and MB1 committed at times tMA1 and tMB1 as in-conflict if

    tMB1 - tMA1 < Replication Propagation Delay A to B

    ( A before B, but not by enough to avoid conflict )

    or

    tMA1 - tMB1 < Replication Propagation Delay B to A

    ( B before A, but not by enough to avoid conflict )
  • The window of potential conflict can only be as short as the replication propagation delay between systems, which can tend towards, but never reach zero.

Tracking conflict windows with a logical clock

A row's conflict window opens when a modification is committed to it, and closes when the Slave processes an event indicating that the modification was committed on the other server(s). How can we track all of these independent conflict windows? If only we had a database :)

This is solved by maintaining a per-server logical clock, which increments periodically. Each modification to a row sets a hidden metacolumn of the row to the current value of the server's logical clock. This gives each row a kind of coarse logical timestamp. When the logical clock increments, an event is included in the Binlog to record the transition. Further, all row events for modifications with logical clock value X are stored in the Binlog before any row events for modifications with logical clock value X+1.

 Server A Binlog events     ClockVal stored in DB by Modification

 ...
 MA1                        39
 MA2                        39
 MA3                        39
 ClockVal_A = 40
 MA4                        40
 MA5                        40
 ClockVal_A = 41
 MA6                        41
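
A small Python sketch of this stamping scheme (plain Python, not the NDB implementation; the Server class and its fields are invented for illustration) :

class Server:
    def __init__(self, name, clock_start=0):
        self.name = name
        self.clock = clock_start
        self.rows = {}       # row key -> stored clock value (the hidden metacolumn)
        self.binlog = []     # ordered Binlog entries

    def tick(self):
        """Periodic logical clock increment, recorded as an event in the Binlog."""
        self.clock += 1
        self.binlog.append(('ClockVal', self.name, self.clock))

    def commit(self, row_key):
        """Commit a local modification, stamping the row with the current clock value."""
        self.rows[row_key] = self.clock
        self.binlog.append(('Mod', self.name, row_key, self.clock))

a = Server('A', clock_start=39)
a.commit('X')                   # modification stamped 39
a.commit('Y')                   # modification stamped 39
a.tick()                        # ClockVal_A = 40 written to the Binlog
a.commit('X')                   # modification stamped 40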


When a Slave applies the Binlog, the ClockVal events are passed through into its Binlog, and are then made available to the original server in a circular configuration.

 Server B Binlog events

...
MB1
MB2
ClockVal_A = 40
MB3
MB4
ClockVal_B = 234
MB5
MB6
ClockVal_A = 41
MB7
...



Using the Binlog ordering, we can see that ClockVal_A = 40 happened before MB3 and MB4 at Server B. This implies that MA1, MA2 and MA3 happened before MB3 and MB4 at server B.

When applying Server B's Binlog to Server A, the Slave at Server A maintains a maximum replicated clock value, which increases as it observes its ClockVal_A events returned. When applying a row event originating from Server B, the affected row's stored clock value is first compared to the maximum replicated clock value to determine whether the row event from B conflicts with the latest committed change to the row at Server A.

The two modifications are in conflict if the stored row's clock value is greater than or equal to the maximum replicated clock value.

in_conflict = row_clockval >= maximum_replicated_clockval
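
Continuing the Server sketch from above (again invented, not the actual implementation), the Slave-side processing might look like this; note that the real handling of the stored clock value when applying replicated changes is more involved than shown here :

def apply_remote_binlog(local, remote_binlog):
    """Apply another server's Binlog at `local`, returning conflicting row keys."""
    max_replicated_clockval = float('-inf')    # none of our ClockVals returned yet
    conflicts = []
    for event in remote_binlog:
        if event[0] == 'ClockVal' and event[1] == local.name:
            # One of our own clock events has come back around the replication
            # loop : local modifications stamped below this value are now safe.
            max_replicated_clockval = max(max_replicated_clockval, event[2])
        elif event[0] == 'Mod':
            _, _origin, row_key, _remote_clockval = event
            row_clockval = local.rows.get(row_key)
            if row_clockval is not None and row_clockval >= max_replicated_clockval:
                conflicts.append(row_key)      # conflict window still open
            else:
                local.rows[row_key] = local.clock   # apply the replicated change
    return conflicts

# Mirroring the Server B Binlog pictured above : a B modification to row X
# arrives before ClockVal_A = 40, and one to row Y arrives after it.
remote = [('Mod', 'B', 'X', 233), ('ClockVal', 'A', 40), ('Mod', 'B', 'Y', 233)]
print(apply_remote_binlog(a, remote))   # ['X'] : X's window is still open, Y's has closed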

Using a logical clock to track conflict windows has the following benefits :
  • Automatic update on commit of row modification, opening conflict window
  • Automatic extension of conflict window on further modification on row with open conflict window.
  • Automatic closure of conflict window on maximum replicated clock value exceeding row's stored value
  • Efficient storage cost per row - one clock value.
  • Efficient runtime processing cost - inequality comparison between maximum replicated clock value and row's stored clock value.

As you might have guessed, NDB$EPOCH uses the MySQL Cluster epoch values as a logical clock to detect conflicts. The details of this will have to wait for yet another post. In my first two posts on this subject I thought, 'one more post and I can finish describing this', but here I am at three posts and still not finished. Hopefully the next will get more concrete and finally describe the mysterious workings of NDB$EPOCH. We're getting closer, honest.

Edit 23/12/11 : Added index

Wednesday 12 October 2011

Some MySQL projects I think are cool - Shard-Query

I've already described Justin Swanhart's Flexviews project as something I think is cool. Since then Justin appears to have been working more on Shard-Query which I also think is cool, perhaps even more so than Flexviews.

On the page linked above, Shard-Query is described using the following statements :

"Shard-Query is a distributed parallel query engine for MySQL"
"ShardQuery is a PHP class which is intended to make working with a partitioned dataset easier"
"ParallelPipelining - MPP distributed query engines runs fragments of queries in parallel, combining the results at the end. Like map/reduce except it speaks SQL directly."

The things I like from the above description :
  • Distributed
  • Parallel
  • MySQL
  • Partitioned
  • Fragments
  • Map/Reduce
  • SQL
The things that scare me :
  • PHP
My fear of PHP is most likely groundless, based on experiences circa 1998. I suspect it runs much of the real money-earning web, and perhaps brings my scribblings to you. However, the applicability of Shard-Query seems so general that to actually (or apparently) limit it to web-oriented use cases seems a shame. In any case I am not hipster enough to know which language would be better - OCaml? Dart? Only joking. I suppose that if the MySQL Proxy could do something along these lines then the language debate would be moot.

I am likely to fall foul of the lack-of-original-content test if I quote too much from the Shard-Query website, but the How-it-works section seems relevant here.

How it works

  • The query is parsed using http://code.google.com/p/php-sql-parser
  • A modified version of the query is executed on each shard.
  • The queries are executed in parallel using http://gearman.org
  • The results from each shard are combined together
  • A version of the original query is then executed over the combined results
  • All aggregation is done on the slaves (pushed down)
  • Queries with inlists can be made into parallel queries.
  • A callback can be used for QueryRouting. You provide a partition column, and a callback which returns information pointing to the correct shard. The most convenient way to do this is with Shard-Key-Mapper

Query rewriting rules

The core of Shard-Query is its set of query rewriting rules, which Justin introduces in a blog post entitled 8 substitution rules for running SQL in parallel. These transforms and substitutions allow Shard-Query to execute a user-supplied query across multiple database shards. A single query (SELECT) is mapped into a query to be applied to some or all shards, plus further queries used to merge the per-shard results into a final result. A toy example of this style of rewrite is sketched below.
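
This is an invented illustration, not Shard-Query's actual output : the per-shard query decomposes AVG into SUM and COUNT so that the partial aggregates can be merged correctly (table and column names are made up) :

original  = "SELECT customer_id, COUNT(*), AVG(amount) FROM orders GROUP BY customer_id"

# Sent to every shard : AVG is decomposed into mergeable partial aggregates
per_shard = ("SELECT customer_id, COUNT(*) AS cnt, SUM(amount) AS total "
             "FROM orders GROUP BY customer_id")

# Run over the union of the per-shard result sets to produce the final answer
combine   = ("SELECT customer_id, SUM(cnt), SUM(total) / SUM(cnt) "
             "FROM shard_results GROUP BY customer_id")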

Compared to a single system query, the consistency of the view that the sharded query executes against is less well defined, but this may well be acceptable for some applications.

On a single server

A single MySQL instance offers inter-query parallelism, but currently has very limited intra-query parallelism. Shard-Query can circumvent this by splitting a single query into multiple sharded sub-queries which can run in different MySQLD threads (as they are each submitted by different clients), giving intra-query parallelism. To me this seems more of a cool side effect, and proof of reasonable implementation efficiency, than a real reason to use Shard-Query. Perhaps someone out there has the perfect use case for this.

Across multiple servers

The big-name MySQL users scale-out with MySQL, storing subsets of data on separate MySQL instances. Shard-Query allows SQL queries spanning all shards to be executed. This is what scaled-out MySQL has been waiting for.

I don't think it would be a good idea to run heavy traffic through Shard-Query to access a set of sharded MySQL instances yet, but Shard-Query gives a great way to perform occasional queries across all shards. This could be great for reporting and perhaps some light mining for patterns, trends etc. The ability to query across live real time data may be a real gain.

Loose coupling and availability


Scaling out via sharding standalone database servers has many difficulties, but the independence of each shard can benefit availability relative to a more tightly coupled distributed system. The loose coupling of the MySQL instances means that it's far less likely that the failure of one shard will drag others down with it, increasing system availability. Shard-Query can give the loosely coupled shards a smoother facade. The limited set of capabilities that Shard-Query gives over the set of shards may well be more than good enough. Note that 'good enough' is a recurring theme in this 'MySQL projects I think are cool' series. Often 'best' results in expensive or unnecessary compromises as a side-effect of trying to please everybody all the time.

Yet another MySQL sharded scaleout design

Looking at Shard-Query, and the recent MySQL-NoSQL Api developments, it seems like a modern MySQL sharded scaleout design might make use of :
  • MySQL SQL Apis (PHP, JDBC, ODBC, Ruby, Python, ....)
  • NoSQL access mechanisms
    (HandlerSocket, Memcached(1,2))
  • ShardQuery for SQL reporting / analysis
Per-instance efficiency can be maximised by using the NoSQL access Apis, single-instance SQL is still available if required for the application, and a global SQL view is also available.

This combination of scalability, efficiency and SQL query-ability could be a sweet spot in the increasingly confusing multi-dimensional space of high throughput distributed databases.

Monday 10 October 2011

Eventual consistency with transactions




In my last post I described the motivation for the new NDB$EPOCH conflict detection function in MySQL Cluster. This function detects when a row has been concurrently updated on two asynchronously replicating MySQL Cluster databases, and takes steps to keep the databases in alignment.

With NDB$EPOCH, conflicts are detected and handled on a row granularity, as opposed to column granularity, as this is the granularity of the epoch metadata used to detect conflicts. Dealing with conflicts on a row-by-row basis has implications for schema and application design. The NDB$EPOCH_TRANS function extends NDB$EPOCH, giving stronger consistency guarantees and reducing the impact on applications and schemas.

Concurrency control in a single synchronous system

MySQL Cluster is a relational system. Data is stored in tables with defined schemas of typed columns. As with any relational system, real-world concepts can be modelled in a number of ways with different trade offs. One such consideration is the level of normalisation applied to a data model. Transactions and concurrency control ensure that some data modelled using multiple tables, rows and columns, appears to any external observer to move instantaneously between stable, self consistent states. This is a powerful simplification, and eases the complexity burden on application writers. Each transaction provides the illusion of serialised access to the database. Multiple transactions can execute in parallel, so long as they do not interfere by accessing the same data. Where transactions do interfere, some real serialisation can occur. In practice, applications depend on the serialisation and atomicity guarantees given by transactions, often in ways not fully made explicit or understood by the application designers.

Concurrency control in independent, asynchronously replicated systems

Asynchronously replicating writes between two independent systems erodes the guarantees given by single system concurrency control. Each system maintains its transactional guarantees in parallel, and incorporates modifications from the other system asynchronously, at some time after they were originally committed. Where the same row is modified on both systems concurrently, two versions of the same row are produced, and there is no longer a single history of values for the given row. This can cause replicas to diverge. Note that the window of 'concurrency', or 'potential conflict' is related to the time taken for a committed update to be applied on all replicas. This is similar, or equivalent to the commit delay experienced by a synchronous 2-phase commit system.

Some form of conflict detection can identify these cases. On detecting a conflict, steps can then be taken to avoid divergence, and to resolve any unwanted effects of the concurrent writes.

Replica divergence, external effects and cascading impacts

Divergence can be avoided if conflicting writes can be merged in some way. Some write conflicts may be equivalent, associative or otherwise mergeable, especially if the operations are replicated rather than their resulting states. However merging requires specific schema and application knowledge to determine how to merge conflicting writes.

More generally, divergence can be avoided by rejecting one or both conflicting writes. This is the approach we have taken, with handling of rejected writes delegated to the application, where the knowledge exists to handle them via the exceptions table mechanism.

However write conflicts are handled, it is important to consider :
  1. Cascading impacts on dependent operations
    Operations based on the results of conflicting operations may themselves require handling to avoid divergence.
  2. Real world / other system effects based on conflicting writes
    Maintaining database consistency does not guarantee that real world effects have been correctly compensated.

A database system does not exist in a vacuum. Operations are performed to reflect external-world or external-system events. When the effects of an operation are later reverted, the real-world effects may also require some compensating actions. These external-world compensating actions are beyond the scope of any DBMS and are application specific. In a real application of this technology, this is probably the most important part of the design.

Any particular conflict originates between two concurrent operations, but once a conflicting operation is committed, other operations can read its results, and commit their own, expanding the impact of the original conflict. Conflicts are discovered asynchronously, some time after the original operations are committed, so there can be a large number of subsequent operations in the replication pipeline which depend on the conflicting operations at the point they are discovered. All of the invalidated subsequent operations must be handled.

Row based conflict detection and data shearing

Using row-based conflict detection and re-alignment can counteract data divergence so that rows become consistent eventually, but this comes at the cost of eroding the atomicity of committed transactions. For example, a committed transaction which writes to three rows may, after conflict handling, have none, one, two or all three row changes reverted.

Within a single system, the two potentially visible states were :
  1. Before transaction (All rows at version 1 : Av1, Bv1, Cv1)
  2. After transaction (All rows at version 2 : Av2, Bv2, Cv2)

With row based conflict detection, ignoring the row variants we're actually conflicting with, there could be :

Before transaction : (All rows at version 1 : Av1, Bv1, Cv1)

After transaction
  1. Av2, Bv2, Cv2 (All rows at version 2)
  2. Av2, Bv2, Cv1 (Cv2 reverted)
  3. Av2, Bv1, Cv2 (Bv2 reverted)
  4. Av2, Bv1, Cv1 (Bv2, Cv2 reverted)
  5. Av1, Bv2, Cv2 (Av2 reverted)
  6. Av1, Bv2, Cv1 (Av2, Cv2 reverted)
  7. Av1, Bv1, Cv2 (Av2, Bv2 reverted)
  8. Av1, Bv1, Cv1 (Av2, Bv2, Cv2 reverted)

Depending on the concept that the distinct rows A,B,C represented, this can vastly increase the complexity of understanding the data. If A, B and C model entirely separate entities, which just happened to be transactionally updated together then there may be no problem if they fare differently in conflict detection. If they model portions of the state of a larger entity then reasoning about the state of that entity becomes complex.

This potential chopping up of changes committed in a transaction can be described as shearing of the data model represented by the schema. In practice, the potential for shearing between rows implies that for tables with conflicts handled on a row basis, cross row consistency is not available. This in turn implies that the schema must be modified to ensure that data items which cannot tolerate relative shear are placed in the same row so that they share the same fate and remain self-consistent. This single-row limit to consistency is native and natural to some NoSQL / key-value / wide column store products, but is a weakening of the normal guarantees in a transactional system.

Requiring that schemas and applications using conflict detection can tolerate shear between any two rows is quite a heavy burden to place on applications, especially those not written with eventual consistency in mind. Is there some way to support optimistic conflict detection without breaking up committed transactions, and shearing rows?

Transaction based conflict detection


One way to avoid inter-row shearing is to perform conflict detection on a row-by-row basis, but on discovering a conflict, take action on a transaction basis. More concretely, when a row conflict is discovered, any other rows written as part of the same transaction should also be considered in-conflict by implication. This reduces the set of stable states back to the original case - all rows at version 1 or all rows at version 2.

Where a row is found to be in-conflict with some replicated row operation, a further replicated row operation on the same row should also be found to be in-conflict, until the conflict condition has been cleared. This property is implicitly implemented in the existing row based conflict detection functions.

When the scope of a conflict is extended to include all row modifications in a transaction, this implies that all following replicated row operations which affect the same rows, must also be in conflict by implication. To avoid row shearing, these implied-in-conflict rows must implicate the other rows in their transactions, and those rows may in-turn implicate other rows. The overall effect is that a single row conflict must cause its transaction, and all dependent transactions to be considered to be in conflict.

From our database centric point of view, transactions can only become dependent on each other through the data they access in the database. If transaction X updates rows A and B, and transaction Y then reads row B and updates row C, then we can say that transaction Y has a read-write dependency on transaction X via row B. We cannot tell whether there is some other out-of-band communication between transactions.

By tracking this transaction 'footprint' information, and looking for row overlaps, we can determine transaction dependencies. This is how the new NDB$EPOCH_TRANS function provides transactional conflict detection.

NDB$EPOCH_TRANS conflict detection function

The NDB$EPOCH_TRANS conflict function uses the same mechanism as the NDB$EPOCH function to detect concurrent updates to the same row across two clusters. However, once a row conflict has been detected in an operation which is part of a replicated transaction, all other operations in that replicated transaction are considered to be in conflict. Furthermore, any transactions found to be dependent on that transaction are also considered in conflict. Once the full set of in conflict transactions has been determined, the set of affected rows are handled in the same way as in NDB$EPOCH.

Specifically :
  • The replicated operations are not applied
  • The exceptions table(s) are populated with the affected primary keys
  • The affected row epochs are updated
  • Realignment Binlog events are generated to (eventually) realign the Secondary cluster

As with NDB$EPOCH, NDB$EPOCH_TRANS is asymmetric, so the Primary Cluster always wins when a conflict is detected. As with NDB$EPOCH, this allows applications needing pessimistic properties to obtain them by accessing the Primary Cluster. Applications which can handle the relaxed consistency of optimism can access either Cluster. With NDB$EPOCH_TRANS, transactions committed on the Secondary Cluster are guaranteed to be atomic, whether or not they are later found to be in conflict. Each committed transaction will either be unaffected by conflict detection, or be completely reverted. There will be no row shear.

This slightly stronger optimistic consistency guarantee may ease the implementation of relaxed consistency / eventually consistent applications. For example, where some concept is modelled by a number of different rows in different tables, any transactional modification will either be atomically applied, or not applied at all, so the relationships between the rows affected by a transaction will be preserved. The need to flatten a schema into single-row entities is reduced, although careful design is still required to get a good understanding of transaction boundaries, and the behaviour of the overall system when transactions are reverted.

Transaction dependency tracking


NDB$EPOCH_TRANS is built into the MySQL Cluster Storage Engine. It is active in the normal MySQL Slave SQL thread, as part of the normal table handler calls made when applying a replicated Binlog. The NDB$EPOCH_TRANS code in the Ndb storage engine tracks transaction dependencies based on the primary keys accessed by row events in the Binlog, and their transaction ids. If two row events have the same table and primary key values, then they affect the same row. If two events affect the same row, and are in different transactions, then the second transaction depends on the first. In this way, a transaction dependency graph is built by the MySQL Cluster Storage Engine as row events are applied by the Slave from a replicated Binlog. This graph is then used to find dependencies when a conflict is detected.
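
The dependency tracking itself can be sketched roughly as follows (a simplified Python model, not the actual Ndb storage engine code; names are invented). Row events are keyed by (table, primary key); a later transaction writing a row already written by an earlier transaction depends on that transaction, and when a conflict is found the dependents are implicated transitively :

from collections import defaultdict

def build_dependencies(row_events):
    """row_events : ordered list of (transaction_id, table, primary_key) tuples."""
    last_writer = {}                  # (table, primary key) -> last writing transaction
    depends_on = defaultdict(set)     # transaction id -> transactions it depends on
    for trans_id, table, pk in row_events:
        prev = last_writer.get((table, pk))
        if prev is not None and prev != trans_id:
            depends_on[trans_id].add(prev)
        last_writer[(table, pk)] = trans_id
    return depends_on

def implicated(conflicting, depends_on):
    """Expand the set of in-conflict transactions with their dependent transactions."""
    result = set(conflicting)
    changed = True
    while changed:
        changed = False
        for trans_id, deps in depends_on.items():
            if trans_id not in result and deps & result:
                result.add(trans_id)
                changed = True
    return result

events = [(101, 't1', 1), (101, 't1', 2),   # transaction 101 writes rows 1 and 2
          (102, 't1', 2), (102, 't1', 3)]   # transaction 102 rewrites row 2, writes row 3
deps = build_dependencies(events)
print(implicated({101}, deps))              # {101, 102} : 102 is implicated via row 2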

A Binlog only contains WRITE_ROW, UPDATE_ROW and DELETE_ROW events. This means that we only detect dependencies between transactions which write the same rows. We do not currently track dependencies between writers and readers. For example :

Transaction A : {Write row X, Write row Y}
Transaction B : {Read row Y, Write row Z}

Binlog : {{Tx A : Wr X, Wr Y}, {Tx B : Wr Z}}

In this example, the dependency of Transaction B on Transaction A is not recorded in the Binlog, and so the Slave is not aware of it. This would result in the write to row Z not being considered in conflict, when it should be.

A future improvement is to add selective tracking of reads to the Binlog, so that Write -> Read dependencies will implicate reading transactions when a conflict is discovered.

There's more to come


Another long dry post, best consumed with your favourite drink in hand. As I mentioned last time, these functions are pushed, and available in the latest releases of MySQL Cluster. I'd be happy to hear from anyone who wants to try them out and give feedback. I've been deliberately light with implementation details thus far, as I'm saving those for yet another posting. I think that some of the implementation details are interesting from a replication point of view, even if you're not interested in these particular conflict detection algorithms. You may disagree :)

Edit 23/12/11 : Added index