Cont’d from Building a database in C# – Part 1
So with the B+ tree semantics out of the way, it was time to start looking at what was missing. Obviously it was all in memory and not yet on disk, and once it is on disk I'll certainly need a cache. Yet there was still one thing I wanted to address early on, before I started playing with persistence… synchronization.
Tackling synchronization and concurrency seemed like it was going to be a no-brainer. Take a simple reader-writer lock housed with each node, and lock accordingly… right? Well, it turns out there were two problems with this. First, what lock to use? I want a lock that exhibits fairly consistent characteristics regardless of the type of load (heavy read, heavy write, or balanced read and write). Apparently that simple request was 'shooting for the moon'…
ReaderWriterLock – This was my first stop; I already had an interface I'd wrapped around it some time ago. Though at the time I quickly discovered that the implementation I had based on Monitor was more efficient, I never really dug any deeper. Surely the ReaderWriterLock would be more efficient than exclusive access, right? Well, it really depends on a couple of things: how long it takes to acquire the lock, and how long you are going to hold it. After some initial testing I found that the ReaderWriterLock does work wonders for very read-heavy use. When 9 out of 10 operations or more are read locks, the ReaderWriterLock performs reasonably well. At a 50/50 mix of reads and writes this lock degrades notably, and at 90% writes it's almost unusable. So if you need a reader-writer lock and will only take the write lock infrequently, this is an option. However, it's completely unusable for my needs in this project… next…
ReaderWriterLockSlim – This one is a total WTF; I can't find anything this implementation is good at. The cost was consistently worse than using any other type of lock, even a Monitor. According to MSDN: “In addition, the performance of ReaderWriterLockSlim is significantly better than ReaderWriterLock. ReaderWriterLockSlim is recommended for all new development.” I call Bull$h1t on that one.
FastResourceLock – I found this one online, and although it's GPL-licensed, I gave it a try. As with most of these it performs very well at 1% to 10% writes; however, it degrades significantly as that percentage increases. Of all the locks, its performance acquiring the read lock was far superior, but with the GPL license it's really not worth considering for my use.
System.Threading.Monitor – Frankly speaking, this thing really moves. Even with a low percentage of writes, 10%, this lock is so efficient that it performs very close to the best reader/writer lock out there. Only when writes drop below that number do you start to have a performance issue. At 1% writes this guy is significantly (nearly an order of magnitude) slower than the plain old ReaderWriterLock. A bummer really, since I sure wouldn't mind just being able to use the lock keyword; however, since I can't know the ratio of reads to writes beforehand, I need a lock that will perform well in all cases.
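For reference, the acquire/release patterns being compared above look roughly like this. This is just illustrative usage of the BCL locks, not my actual test harness:

static void LockUsageExamples()
{
    // ReaderWriterLock: separate read and write acquisition.
    var rwl = new System.Threading.ReaderWriterLock();
    rwl.AcquireReaderLock(System.Threading.Timeout.Infinite);
    /* read */
    rwl.ReleaseReaderLock();
    rwl.AcquireWriterLock(System.Threading.Timeout.Infinite);
    /* write */
    rwl.ReleaseWriterLock();

    // ReaderWriterLockSlim: same idea, different API.
    var slim = new System.Threading.ReaderWriterLockSlim();
    slim.EnterReadLock();
    /* read */
    slim.ExitReadLock();
    slim.EnterWriteLock();
    /* write */
    slim.ExitWriteLock();

    // Monitor (the lock keyword): every reader and writer is exclusive.
    object sync = new object();
    lock (sync)
    {
        /* read or write */
    }
}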
Given that none of the above really worked out, the sheer performance of the Monitor started me thinking: “Can I use this as a building block for my own lock?” So off I went to build my own reader-writer lock. The concept is simple, really: for readers I need to acquire a Monitor to ensure there are no writers, increment a count of readers, and release the Monitor. For writers I only need to acquire the Monitor, wait for the readers to decrement the count to zero, and then return. That sounds simple enough, so let's call it a SimpleReadWriteLock and get started:
SimpleReadWriteLock – So we only need a single piece of state, the reader count (int _readers), and we can write the lock as shown here in its most basic form:
public class SimpleReadWriteLock
{
    int _readers;

    public bool TryRead(int timeout)
    {
        // Acquire the Monitor to ensure no writer is active, register the reader, then let go.
        if (!System.Threading.Monitor.TryEnter(this, timeout))
            return false;
        System.Threading.Interlocked.Increment(ref _readers);
        System.Threading.Monitor.Exit(this);
        return true;
    }

    public void ExitRead()
    {
        System.Threading.Interlocked.Decrement(ref _readers);
    }

    public bool TryWrite(int timeout)
    {
        // Acquire the Monitor to block out other writers and any new readers,
        // then wait for the existing readers to drain.
        if (!System.Threading.Monitor.TryEnter(this, timeout))
            return false;
        while (_readers > 0)
        {
            System.Threading.Thread.SpinWait(1000); /* some kind of wait and/or sleep */
        }
        return true;
    }

    public void ExitWrite()
    {
        System.Threading.Monitor.Exit(this);
    }
}
I've over-simplified the wait-for-readers loop for this post; however, it demonstrates just how simple the idea really is. In the version I'm actually using, I spin-wait for a few iterations before falling back to an AutoResetEvent's WaitOne. Still, it should suffice to demonstrate the SimpleReadWriteLock concept and the basic idea of the implementation. Test after test has shown this performs very reliably and very fast. At the low end of writes, 1% to 10%, this outperforms the Monitor nicely. At the high end of writes, 50% or more, it performs consistently with the Monitor, and never worse. So finally I have a lock that performs well enough at the low end, and very well at the high end of writes.
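To give a slightly more concrete picture of that fallback, here is roughly what the fuller version looks like. The field names, spin counts, and wait intervals below are illustrative, not the exact code I'm using:

public class SimpleReadWriteLock
{
    int _readers;
    readonly System.Threading.AutoResetEvent _readersDone = new System.Threading.AutoResetEvent(false);

    public bool TryRead(int timeout)
    {
        if (!System.Threading.Monitor.TryEnter(this, timeout))
            return false;
        System.Threading.Interlocked.Increment(ref _readers);
        System.Threading.Monitor.Exit(this);
        return true;
    }

    public void ExitRead()
    {
        // The last reader out signals any writer waiting for the count to reach zero.
        if (System.Threading.Interlocked.Decrement(ref _readers) == 0)
            _readersDone.Set();
    }

    public bool TryWrite(int timeout)
    {
        if (!System.Threading.Monitor.TryEnter(this, timeout))
            return false;
        // The Monitor is now held, so no new readers can enter; just drain the existing ones.
        int spins = 0;
        while (System.Threading.Thread.VolatileRead(ref _readers) > 0)
        {
            if (spins++ < 100)
                System.Threading.Thread.SpinWait(100);  // spin briefly, readers usually exit quickly
            else
                _readersDone.WaitOne(1);                 // then block, re-checking every millisecond
        }
        return true;
    }

    public void ExitWrite()
    {
        System.Threading.Monitor.Exit(this);
    }
}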
This chart's vertical axis is the average time in milliseconds over 3 runs; the horizontal axis is the percentage of write locks out of the 100,000 total lock acquisitions, with 1% at the far left edge. It's worth noting that inside each lock (read or write) the same workload was performed, and that work was minimal. Had this been an actual usage of the lock, where the lock is held during an expensive operation, the results would vary drastically, especially where the Monitor is concerned.
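For anyone wanting to reproduce the numbers, the shape of the test is roughly this. It is a simplified sketch only; the thread count and the use of a Random to pick read vs. write are my own assumptions here, not details of the actual harness, which also rotates through the other lock types and averages three runs:

using System;
using System.Diagnostics;
using System.Threading;
using System.Threading.Tasks;

static class LockBenchmark
{
    // Times 100,000 lock acquisitions where a given percentage take the write lock.
    public static long Run(SimpleReadWriteLock lck, int writePercent)
    {
        const int operations = 100000;
        const int threads = 4;   // assumed worker count for illustration
        var sw = Stopwatch.StartNew();
        Parallel.For(0, threads, t =>
        {
            var rnd = new Random(t);
            for (int i = 0; i < operations / threads; i++)
            {
                if (rnd.Next(100) < writePercent)
                {
                    if (lck.TryWrite(Timeout.Infinite))
                    {
                        /* minimal work under the write lock */
                        lck.ExitWrite();
                    }
                }
                else if (lck.TryRead(Timeout.Infinite))
                {
                    /* minimal work under the read lock */
                    lck.ExitRead();
                }
            }
        });
        return sw.ElapsedMilliseconds;
    }
}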
So what's the final conclusion on the lock for the database I'm building? Well, I started off using a variation of the SimpleReadWriteLock shown above. However, at present I'm using an implementation that completely ignores the read-lock aspect and uses the Monitor for write locks only. Basically, the tree supports MVCC (Multi-Version Concurrency Control) to allow readers to perform reads without being blocked by write locks. This was a little tricky to get working, but I'll cover that in the next post as I start to unravel the caching scheme.
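The general idea (the concept only, not my tree's actual node structure, which I'll get to next time) is that readers work against an immutable, already-published version while a writer prepares and then atomically publishes a replacement, so a read never has to wait on a write:

// A generic illustration of the MVCC idea: readers just grab whatever version is
// currently published; the writer (already holding the write lock) swaps in a new one.
public class VersionedValue<T> where T : class
{
    volatile T _current;   // readers always see some fully-built version

    public VersionedValue(T initial) { _current = initial; }

    public T Read() { return _current; }                         // readers never block

    public void Publish(T newVersion) { _current = newVersion; } // writer publishes a new immutable version
}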