UPDATE
BookSleeve has now been succeeded by StackExchange.Redis, for lots of reasons. The API and intent are similar, but the changes are significant enough that we had to reboot. All further development will be in StackExchange.Redis, not BookSleeve.
ORIGINAL CONTENT
At Stack Exchange, performance is a feature we work hard at. Crazy hard. Whether that means sponsoring load-balancer features to reduce system impact, or trying to out-do the ORM folks on their own turf.
One of the many tools in our performance toolkit is Redis, a highly performant key-value store that we use in various ways:
- as our second-level cache
- for various tracking counters and the like, which we really don’t want to bother SQL Server with
- for our pub/sub channels
- for various other things that don’t need to go direct to SQL Server
It is really fast; we were using the redis-sharp bindings, and they served us well. I owe redis-sharp a lot of thanks, and my intent here is not to critique it at all, but rather to highlight that in some environments you might need that extra turn of the wheel. First, some context:
- Redis itself is single-threaded, but supports multiple connections
- the Stack Exchange sites work in a multi-tenancy configuration, and in the case of Redis we partition (mainly) into Redis databases
- to reduce overheads (both handshakes etc and OS resources like sockets) we re-use our Redis connection(s)
- but since redis-sharp is not thread-safe we need to synchronize access to the connection
- and since redis-sharp is synchronous we need to block while we get each response
- and since we are split over Redis databases we might also first have to block while we select database
Now, LAN latency is low; most estimates put it at around 0.3ms per call, but it adds up, especially when you might be blocking other callers queued behind you. Even more so given that you might not even care what the response is (yes, I know we could offload that work somewhere so it doesn’t impact the current request, but we would still end up adding blocking for the requests that do care).
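To make that concrete, here is a minimal sketch of the pattern described above; the client interface shown is hypothetical (redis-sharp’s real API differs), but the shape of the problem is the same: one shared connection, one lock, and two blocking round trips per operation.

```csharp
// Hypothetical synchronous client, for illustration only.
interface ISyncRedisClient
{
    void Select(int db);        // choose the database for this connection (blocks for the reply)
    long Increment(string key); // INCR, blocking until the server responds
}

class SharedCounter
{
    private readonly ISyncRedisClient client;
    private readonly object syncLock = new object();

    public SharedCounter(ISyncRedisClient client) { this.client = client; }

    public long Increment(int db, string key)
    {
        // Every caller on every thread queues behind this lock, pays a SELECT
        // round trip if the database needs switching, then blocks again for
        // the INCR reply: roughly two lots of LAN latency per operation.
        lock (syncLock)
        {
            client.Select(db);
            return client.Increment(key);
        }
    }
}
```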
Enter BookSleeve
Seriously, what now? What on earth is BookSleeve?
As a result of the above, we decided to write a bespoke Redis client with specific goals around solving these problems. Essentially it is a wrapper around Redis dictionary storage; and what do you call a wrapper around a dictionary? A book-sleeve. Yeah, I didn’t get it at first, but naming stuff is hard.
And we’re giving it away (under the Apache License 2.0)! Stack Exchange is happy to release our efforts here as open source, which is groovy.
So; what are the goals?
- to operate as a fully-functional Redis client (obviously)
- to be thread-safe and non-blocking
- to support implicit database switching to help with multi-tenancy scenarios
- to be on par with redis-sharp in like-for-like scenarios (i.e. a complete request/response cycle)
- to allow absolute-minimum-cost fire-and-forget usage, for when you don’t care what the reply is and errors will be handled separately (sketched after this list)
- to allow use as a “future”, i.e. request some data from Redis, start some other work while it is on the wire, and merge in the Redis reply when it is available
- to allow use with callbacks for when you need the reply, but not necessarily as part of the current request
- to allow C# 5 continuation usage (aka async/await)
- to allow fully pipelined usage, i.e. issue 200 requests before we’ve even got the first response
- to allow fully multiplexed usage, i.e. handle routing the responses back to the correct originator when callers on different threads, using different databases, all share the same connection
(actually, Stack Exchange didn’t strictly need the C# 5 scenario; I added that while moving it to open-source, but it is an excellent fit)
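As a rough illustration of the fire-and-forget and multi-tenancy goals (a sketch only: RedisConnection is BookSleeve’s connection type, but the method names and signatures here are indicative rather than exact):

```csharp
using System;
using System.Threading.Tasks;
using BookSleeve;

static void Demo()
{
    using (var conn = new RedisConnection("localhost"))
    {
        conn.Open(); // in the real API, opening may itself hand back a Task to wait on

        // Fire-and-forget: simply don't keep the returned Task, so nothing blocks;
        // the command is queued onto the shared, multiplexed connection.
        conn.Increment(3, "page-views"); // the database index travels with the call, so no blocking SELECT

        // Caring about the answer just means holding on to the Task.
        Task<long> views = conn.Increment(3, "page-views");
        Console.WriteLine(views.Result); // .Result blocks only if the reply isn't back yet
    }
}
```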
Where are we? And where can I try it?
It exists; it works; it even passes some of the tests! And it is fast. It still needs some tidying, some documentation, and more tests, but I offer you BookSleeve:
http://code.google.com/p/booksleeve/
The API is very basic and should be instantly familiar to anyone who has used Redis; documentation will follow.
In truth, the version I’m open-sourcing is more like the offspring of the version we’re currently using in production – you tend to learn a lot the first time through. But as soon as we can validate it, Stack Exchange will be using BookSleeve too.
So how about some numbers?
These are based on my dev machine, running Redis on the same machine, so I also include estimates using the 0.3ms-per-request LAN latency mentioned above.
In each test we are doing 5000 INCR commands (purely as an arbitrary test); spread over 5 databases, in a round-robin in batches of 10 per db – i.e. 10 on db #0, 10 on db #1, … 10 on db #4 – so that is an additional 500 SELECT commands too.
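In rough code, the test loop has the following shape (a sketch, not the actual benchmark; the increment delegate stands in for whichever client is under test):

```csharp
using System;

// 5000 INCRs over 5 databases, 10 per database per pass: that is 500 batches,
// hence roughly 500 database switches (i.e. SELECTs for a conventional client).
static void RunTest(Action<int, string> increment) // (db, key) => issue one INCR
{
    const int totalOps = 5000, databases = 5, batchSize = 10;
    int sent = 0, db = 0;
    while (sent < totalOps)
    {
        for (int i = 0; i < batchSize && sent < totalOps; i++, sent++)
        {
            increment(db, "test-counter"); // INCR against the current database
        }
        db = (db + 1) % databases; // the next batch of 10 moves to the next database
    }
}
```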
redis-sharp:
- to completion 430ms
- (not meaningful to measure fire-and-forget)
- to completion assuming 0.3ms LAN latency: 2080ms
BookSleeve:
- to completion 391ms
- 2ms fire-and-forget
- to completion assuming 0.3ms LAN latency: 391ms
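(For reference, the latency-adjusted redis-sharp estimate is simple arithmetic: 5500 round trips (5000 INCR plus 500 SELECT) at 0.3ms each add roughly 1650ms of waiting on top of the measured 430ms, giving about 2080ms. The BookSleeve figure does not grow, because its requests are pipelined and multiplexed, so the wire time overlaps rather than accumulating per call.)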
The last two are the key; note in particular that the time we are no longer spending blocked on LAN latency is time we have handed back to other callers (web servers tend to have more than one thing happening…), and the fire-and-forget performance lets us perform a lot of operations without blocking the current caller at all.
As a bonus, we have added the ability to do genuinely parallel work on a single caller, by starting a Redis request first, doing the other work (TSQL, typically), and then asking for the Redis result. And let’s face it: while TSQL is versatile, Redis is so fast that it would be quite unusual for the Redis reply not to already be there by the time you get around to looking.
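A sketch of that pattern (the GetString call and key are illustrative, not necessarily the exact BookSleeve signature):

```csharp
using System.Threading.Tasks;
using BookSleeve;

static string GetTitle(RedisConnection conn, int db)
{
    // Kick the Redis request off first; it goes onto the wire straight away,
    // and the Task completes whenever the reply arrives.
    Task<string> cached = conn.GetString(db, "question:1234:title");

    DoTheSqlWork(); // placeholder for the TSQL (or other) work done in the meantime

    // Only now do we need the answer; Redis is usually so fast that the reply
    // is already sitting there, so .Result rarely has to block at all.
    return cached.Result;
}

static void DoTheSqlWork() { /* the slower, relational part of the request */ }
```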
Wait – did you say C# 5?
Yep; because the API is task-based, it can be used in any of 3 ways without needing separate APIs: block on the task when you want it to behave like a simple “future”, hook up a continuation/callback, or await it with C# 5.
As an example of the last:
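Something along these lines (a sketch: the Increment call is illustrative, and at the time this needed the C# 5 / Async CTP compiler):

```csharp
using System.Threading.Tasks;
using BookSleeve;

static async Task<long> BumpCounter(RedisConnection conn, int db)
{
    // Control returns to the caller at the await; the rest of the method runs
    // as a continuation once the Redis reply is available.
    long newValue = await conn.Increment(db, "hit-count");
    return newValue;
}
```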
IMPORTANT: in the above “await” does not mean “block until this is done” – it means “yield back to the caller here, and run the rest as a callback when the answer is available” – or for a better definition see Eric Lippert’s blog series.
And did I mention…
…that a high-performance binary-based dictionary store works well when coupled with a high-performance binary serializer? ;p
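The nudge here is presumably towards protobuf-net; as a hedged sketch (the Redis side is omitted, since any client that can store a byte[] will do), the pairing is simply: serialize to bytes, store the bytes, deserialize on the way back out.

```csharp
using System.IO;
using ProtoBuf; // protobuf-net

[ProtoContract]
class CacheEntry
{
    [ProtoMember(1)] public int Id { get; set; }
    [ProtoMember(2)] public string Title { get; set; }
}

static class CacheCodec
{
    public static byte[] Serialize(CacheEntry value)
    {
        using (var ms = new MemoryStream())
        {
            Serializer.Serialize(ms, value); // compact binary payload
            return ms.ToArray();             // ready to hand to a Redis SET
        }
    }

    public static CacheEntry Deserialize(byte[] payload)
    {
        using (var ms = new MemoryStream(payload))
        {
            return Serializer.Deserialize<CacheEntry>(ms); // bytes straight back from a Redis GET
        }
    }
}
```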