So to start with, what does “LurchTable” stand for?

Least
Used
Recently
Concurrent
Hash
Table

Of course I am dyslexic, so I can get away with swapping the U and R around. My dyslexia aside, it’s a fair representation of what the class does. It is essentially a ConcurrentDictionary that maintains a linked list of its nodes in a specific order. It does differ significantly from ConcurrentDictionary in a few very important ways…

  1. The number of hash buckets available cannot change after construction. This keeps the code simple and maintainable while providing all the thread-safety goodness. This is also going to be the biggest change from the ConcurrentDictionary. The LurchTable is typically used with a hard limit on items. For example, a cache might allow no more than 10k items, so we probably want at least 5k hash buckets. Each ‘bucket’ is just an integer, so 5k buckets pre-allocate 20 KB of memory as an int[] (5,000 × 4 bytes). On the extreme end, if you were to supply int.MaxValue as the hash size you would exceed .NET’s ability to allocate the memory required.
  2. Key/value pairs (entries) are allocated and expanded dynamically as needed, and unused entries are tracked in a separate linked list. Unlike the .NET Framework’s Dictionary, which stores all entries in a single array and can therefore fail with an OutOfMemoryException when adding the ‘nth’ entry, the LurchTable does not allocate all entries as a single array. The allocation size is specified (or derived) at construction, and the LurchTable will allocate a new array of this size and ‘append’ it to the existing allocated entries. This preserves time and memory as the contents of the dictionary fluctuate and prevents the LurchTable from attempting excessively large allocations.
  3. The linked lists, for both used and unused entries, are maintained via lockless linked lists. The use of non-blocking linked lists allows a high degree of concurrent modification to the contents of the collection. This approach also greatly reduces the contention on the ‘free’ list when finding an empty entry to use for a new item. In addition, the LurchTable maintains several ‘free’ lists to reduce contention even further. A minimal sketch of this technique appears below.
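
To make the third point concrete, here is a minimal sketch (assuming an index-linked entry array; none of these names come from the LurchTable source) of how a lockless free list can pop an entry index with a compare-and-swap loop:

    using System.Threading;

    // Hypothetical sketch only; the type and field names are invented and do
    // not match the LurchTable source.
    struct Entry { public int Next; /* plus key, value, and hash-chain fields */ }

    class FreeListSketch
    {
        private readonly Entry[] _entries = new Entry[1024];
        private int _freeHead = -1;   // index of the first free entry, -1 when empty

        // Pops the index of a free entry, or returns -1 when none are available.
        public int PopFree()
        {
            while (true)
            {
                int head = Volatile.Read(ref _freeHead);
                if (head < 0)
                    return -1;                    // free list is empty
                int next = _entries[head].Next;   // proposed new head
                // Atomically swing the head from 'head' to 'next'; retry if
                // another thread won the race.
                if (Interlocked.CompareExchange(ref _freeHead, next, head) == head)
                    return head;                  // this thread now owns entry 'head'
            }
        }
    }

(A production implementation also has to contend with the classic ABA problem; the sketch above ignores that detail.)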

Linking Entries

Entries can be non-linked, linked by insertion, linked by insertion or modification, or linked by insertion, modification, or access. The internal linking strategy is determined at construction by specifying one of the LurchTableOrder enumeration members (Access, Insertion, Modified, None). When ordering by anything other than None, a hard limit on items can also be specified at construction. When a hard limit is set, the insertion of the n+1th item will cause the ‘oldest’ item in the ordered list to be removed. Typically all of this is specified at construction by using the more verbose constructor on the LurchTable:

    public LurchTable<TKey, TValue>(
        LurchTableOrder ordering,          // One of Access, Insertion, Modified, or None
        int limit,                         // Hard limit on the number of items, or int.MaxValue for no limit
        int hashSize,                      // The number of hash buckets to allocate (adjusted up to a prime number)
        int allocSize,                     // The number of items to allocate at once (a power of 2 that is <= allocSize)
        int lockSize,                      // The number of locks to allocate (adjusted up to a prime number)
        IEqualityComparer<TKey> comparer   // The comparer to use for the keys
    )
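
For example, a 10k-item cache ordered by access might be constructed as follows (the specific values are only illustrative):

    // Illustrative values only.
    var cache = new LurchTable<string, byte[]>(
        LurchTableOrder.Access,    // maintain links by most-recent access
        10000,                     // hard limit of 10k items
        5000,                      // hash buckets (adjusted up to a prime)
        1000,                      // entries allocated per expansion
        64,                        // locks (adjusted up to a prime)
        StringComparer.Ordinal);   // key comparer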

Other Deltas from the ConcurrentDictionary

New methods not typically found on a dictionary include the ability to Peek, Dequeue, or TryDequeue the oldest item in the collection. Peek returns the oldest entry based upon the link order. TryDequeue attempts to remove the oldest entry based upon the link order and can optionally take a predicate to control removal. Lastly, Dequeue is a tight polling loop over TryDequeue that returns only after an item is successfully dequeued (note: for obvious reasons Dequeue should be used cautiously).
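
Continuing the cache example above, usage might look roughly like this (the exact signatures shown are assumptions based on the description, not copied from the source):

    KeyValuePair<string, byte[]> oldest;

    // Inspect the oldest linked entry without removing it (assumed out-parameter style):
    if (cache.Peek(out oldest))
        Console.WriteLine("Oldest key: " + oldest.Key);

    // Remove the oldest entry only when it matches a condition (assumed predicate overload):
    if (cache.TryDequeue(kv => kv.Value.Length == 0, out oldest))
        Console.WriteLine("Removed empty entry: " + oldest.Key);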

All enumerations of the LurchTable are thread-safe and in hash order. Due to the lockless nature of the internal linking, it is not possible to enumerate the contents of the collection in linked order. Enumeration of the links could easily be added; however, it would not be thread-safe and thus is not currently implemented.

All methods on the collection are thread-safe except the Initialize() method. Initialize() recreates the internal ‘entries’ collection, essentially removing all items in O(1), whereas the Clear() method is an O(n) operation.

Events for ItemAdded, ItemUpdated, and ItemRemoved are exposed from the collection and are executed in-lock with the operation. While I don’t like events that execute from synchronized code, there really isn’t a better way to deal with this. Since I rely heavily on ItemRemoved events to flush data from a LurchTable cache to a primary store, the event must act as an atomic operation from the perspective of the consumer of the collection. All events are fired after the internal structures have been modified and just prior to lock release. Care should be taken not to raise exceptions from these events, as they will propagate to the caller attempting to add/update/remove the item.
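
As an example, a cache that flushes evicted items to a primary store might hook ItemRemoved roughly like this (the handler shape and the FlushToPrimaryStore helper are assumptions for illustration):

    // Assumed: the handler receives the removed key/value pair. Remember this
    // runs inside the collection's lock, so keep it short and never throw.
    cache.ItemRemoved += item =>
        FlushToPrimaryStore(item.Key, item.Value);   // hypothetical persistence call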

Uses for LurchTable

  • LRU Cache An LRU (Least Recently Used) cache is the most common application for this structure. Simply configure a hard limit on items and use LurchTableOrder.Access at construction. Then, from the caller, use GetOrAdd to either pull from cache or fetch from the primary store and add to cache in a single call (see the sketch after this list).
  • Write-through Cache Another common usage is a write-behind/delayed-write strategy for an external storage mechanism. Constructing with a hard limit and using LurchTableOrder.Modified gives you a read-through/write-through cache: use GetOrAdd to fetch, any of the available set operations to write, then hook ItemRemoved to flush to external storage.
  • Throttling Work Queue An interesting use (or abuse?) of the LurchTable is using it to distribute work across several threads. When a producer of work can outpace the ability of the consumers, the producer needs to be throttled. One way is simply to sleep the producer until more work can be queued; another is to let the producer also consume work. Construct with LurchTableOrder.Insertion and a hard limit on the backlog to allow. Hook the ItemRemoved event to implement the work-item processing. Start one or more threads producing work items by adding them to the dictionary. Finally, start one or more threads consuming data by calling Dequeue/TryDequeue. This works well if the work to be performed is small (since the consumer can block the producer); mileage may vary for longer-running tasks.
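
As promised in the first bullet, here is a minimal sketch of the LRU-cache pattern; the factory overload of GetOrAdd and the LoadFromDatabase helper are assumptions for illustration:

    var lru = new LurchTable<int, string>(
        LurchTableOrder.Access, 10000, 5000, 1000, 64,
        EqualityComparer<int>.Default);

    // One call either returns the cached value or fetches and caches it
    // (assumed ConcurrentDictionary-style factory overload):
    string value = lru.GetOrAdd(42, key => LoadFromDatabase(key));   // hypothetical fetch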

I’ve used this collection extensively for the past year and have been very pleased with the results and its ease of use. It’s probably one of the more versatile and well-crafted pieces of code I’ve put together in a while. Give it a try by including the source directly (/browse/src/Library/Collections/LurchTable.cs) or by using the NuGet package CSharpTest.Net.Library. You can also find the online help at http://help.csharptest.net.
