Sunday, December 15, 2013

Secrets of the ANTs Data Server 06 -- Phased Execution


In the section on MVCC, I said that on a commit or rollback, ADS goes through a list of updates and modifies them. The question you might be asking is, “How do we avoid concurrency issues when making these changes?” Also, I only described how values in rows get changed; indexes also have to change when values change, and I didn’t describe how that is done.


There are MVCC ways to handle both of these issues, but ADS didn’t use them. For both tasks, ADS used traditional single-threaded algorithms. It was able to do that through a mechanism called phased execution. In brief, phased execution is a strategy where you break the work into separate phases and require all task threads to be in the same phase at the same time. In other words, threads block at a phase change until all threads are ready to enter that phase.
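
To make the “all threads block at the phase change” idea concrete, here is a minimal sketch of such a barrier in C++, built from a counter and a condition variable. The PhaseBarrier name and all of its details are my own illustration; the post does not show ADS’s actual synchronization code.

    // A minimal sketch of the "everyone waits at the phase boundary" idea.
    // Illustrative only; not ADS's actual code.
    #include <condition_variable>
    #include <mutex>

    class PhaseBarrier {
    public:
        explicit PhaseBarrier(int num_threads) : num_threads_(num_threads) {}

        // Block until all task threads have called arrive_and_wait().
        void arrive_and_wait() {
            std::unique_lock<std::mutex> lock(mutex_);
            int my_generation = generation_;
            if (++arrived_ == num_threads_) {
                // Last thread to arrive releases everyone and resets the counter.
                arrived_ = 0;
                ++generation_;
                cv_.notify_all();
            } else {
                cv_.wait(lock, [&] { return generation_ != my_generation; });
            }
        }

    private:
        const int num_threads_;
        int arrived_ = 0;
        int generation_ = 0;
        std::mutex mutex_;
        std::condition_variable cv_;
    };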


ADS had three phases:


a session phase, where we did the MVCC processing described previously
a commit phase, where we updated indexes and did commits and rollbacks
a cleanup phase, where we cleaned up rolled-back update blocks and expired history blocks


In session phase, we used MVCC and mostly non-locking algorithms to modify potentially shared data. In the other two phases, we allocated work to the tasks in such a way that there was no potentially shared data. For example, if an index needed to be updated, we would assign that index to a single task to do all of the work; none of the index’s data would be shared with other tasks.
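
As a rough sketch of that kind of work assignment, the pending index updates could be bucketed so that all updates for a given index land with a single task thread during commit phase. The IndexUpdate struct and the modulo assignment below are assumptions for illustration, not ADS’s actual scheme.

    // Give every index to exactly one task thread, so the single-threaded
    // index-update code never sees concurrent access. Names are made up.
    #include <cstddef>
    #include <vector>

    struct IndexUpdate {
        std::size_t index_id;   // which index this update touches
        // key, row id, etc. omitted
    };

    std::vector<std::vector<IndexUpdate>>
    partition_by_index(const std::vector<IndexUpdate>& pending,
                       std::size_t num_task_threads) {
        std::vector<std::vector<IndexUpdate>> per_thread(num_task_threads);
        for (const IndexUpdate& u : pending) {
            // All updates for index N land in the same bucket, hence the same thread.
            per_thread[u.index_id % num_task_threads].push_back(u);
        }
        return per_thread;
    }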


In session phase, each task thread would cycle through a list of messages called the session list until a global flag was set that said to switch to commit phase. The thread would then increment a counter and block until the counter equaled the number of task threads. At that point, each task thread would start reading and executing work from another list of messages called the commit list until a flag was set telling it to change back to session phase. Again, it would increment a counter and block until the counter equaled the number of task threads. Once it did, all of the threads would go back to executing work from the session list.
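
Putting those pieces together, one session/commit cycle of a task thread might look roughly like the sketch below, reusing the PhaseBarrier from the earlier sketch. The Message type, the list accessors, and the phase-switch flags are placeholders I invented; ADS’s real message lists and phase coordination were certainly more involved.

    #include <atomic>

    struct Message { /* parsed work item; details omitted */ };

    // Stub hooks standing in for ADS's real message lists and executors.
    bool next_session_message(Message&) { return false; }    // session list
    bool next_commit_message(Message&)  { return false; }    // commit list
    void execute_session_work(const Message&) {}             // MVCC row updates
    void execute_commit_work(const Message&)  {}             // index updates, commit/rollback

    // Flipped by some coordinator when it is time to change phase;
    // resetting them between cycles is omitted here.
    std::atomic<bool> switch_to_commit{false};
    std::atomic<bool> switch_to_session{false};

    // One session-phase / commit-phase cycle for a single task thread.
    void run_one_cycle(PhaseBarrier& barrier) {
        Message msg;

        // Session phase: MVCC, mostly non-locking work on shared data.
        // (Busy-polling here only to keep the sketch short.)
        while (!switch_to_commit.load()) {
            if (next_session_message(msg)) execute_session_work(msg);
        }
        barrier.arrive_and_wait();   // block until every task thread is ready

        // Commit phase: each index is owned by exactly one thread here.
        while (!switch_to_session.load()) {
            if (next_commit_message(msg)) execute_commit_work(msg);
        }
        barrier.arrive_and_wait();   // everyone re-enters session phase together
    }

The property that matters is that no thread starts commit-phase work until every thread has finished its session-phase work, which is what makes the traditional single-threaded index and commit code safe.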


About every hundred or so of these session phase/commit phase loops, the threads would all drop into cleanup phase to do various cleanup work.
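
The cleanup cadence can then be nothing more than a cycle counter wrapped around the loop above. Again a sketch: the kCleanupInterval constant and the function names are placeholders, and the contents of cleanup phase are only hinted at in a comment.

    // Hypothetical outer loop adding the occasional cleanup phase on top of
    // run_one_cycle() from the previous sketch.
    constexpr int kCleanupInterval = 100;    // "about every hundred or so" cycles

    void run_cleanup_phase(PhaseBarrier& barrier) {
        // ... clean up rolled-back update blocks and expired history blocks ...
        barrier.arrive_and_wait();           // leave cleanup phase together as well
    }

    void task_thread_main(PhaseBarrier& barrier) {
        for (int cycle = 1; ; ++cycle) {
            run_one_cycle(barrier);          // session phase, then commit phase
            if (cycle % kCleanupInterval == 0) {
                run_cleanup_phase(barrier);
            }
        }
    }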


Phased execution trades time spent waiting on locks for time spent waiting at phase changes. It lets you use serial algorithms on shared structures by ensuring that, at certain times during execution, the structures are not shared. That is pretty much what a lock does as well: by getting an exclusive lock on a data structure, you know that as long as you hold the lock, the structure is not shared.
