Friday 11 July 2014

Securing redis in the cloud

Redis has utility in just about any system, but when we start thinking about “the cloud” we have a few additional things to worry about. One of the biggest issues is that the cloud is not your personal space (unless you have a VPN / subnet setup, etc) – so you need to think very carefully about what data is openly available to inspection at the network level. Redis does have an AUTH concept, but frankly it is designed to deter casual access: all commands and data remain unencrypted and visible in the protocol, including any AUTH requests themselves. What we probably want, then, is some kind of transport-level security.
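To make that concrete: on the wire, an AUTH command is just a plain-text RESP message, so anyone sniffing the connection sees the password directly (the password shown here is obviously a placeholder):

```
*2
$4
AUTH
$12
s3cretPa55wd
```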
Now, redis itself does not provide this; there is no standard encryption, but you could configure a secure tunnel to the server using something like stunnel. This works, but requires configuration at both client and server. To make our lives a bit easier, though, some of the redis hosting providers are beginning to offer encrypted redis access as a supported option. This applies to both Microsoft “Azure Redis Cache” and Redis Labs “redis cloud”. I’m going to walk through both of these, discussing their implementations, and showing how we can connect.
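By way of illustration, a client-side stunnel configuration might look something like this (the host name is a placeholder, and 6380 is just a conventional choice for the remote TLS port – this is a sketch, not a complete setup):

```
; stunnel.conf (client side, sketch): the app talks plaintext to 127.0.0.1:6379,
; and stunnel forwards that traffic over TLS to the remote server
[redis-tls]
client = yes
accept = 127.0.0.1:6379
connect = myredis.example.com:6380
```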

Creating a Microsoft “Azure Redis Cache” instance

First, we need a new redis instance, which you can provision at https://portal.azure.com by clicking on “NEW”, “Everything”, “Redis Cache”, “Create”:
image
image
There are different sizes of server available; they are all currently free during the preview, and I’m going to go with a “STANDARD” / 250MB:
image
Azure will now go away and start creating your instance:
image
This could take a few minutes (actually, it takes surprisingly long IMO, considering that starting a redis process is virtually instantaneous; but for all I know it is running on a dedicated VM for isolation etc; and either way, it is quicker and easier than provisioning a server from scratch). After a while, it should become ready:
image

Connecting to a Microsoft “Azure Redis Cache” instance

We have our instance; let’s talk to it. Azure Redis Cache uses a server-side certificate chain that should be valid without having to configure anything, and uses a client-side password (not a client certificate), so all we need to know is the host address, port, and key. These are all readily available in the portal:
image
image
Normally you wouldn’t post these on the internet, but I’m going to delete the instance before I publish, so; meh. You’ll notice that there are two ports: we only want to use the SSL port. You also want either stunnel, or a client library that can talk SSL; I strongly suggest that the latter is easier! So; Install-Package StackExchange.Redis, and you’re sorted (or Install-Package StackExchange.Redis.StrongName if you are still a casualty of the strong name war). The configuration can be set either as a single configuration string, or via properties on an object model; I’ll use a single string for convenience – and my string is:
mgblogdemo.redis.cache.windows.net,ssl=true,password=LLyZwv8evHgveA8hnS1iFyMnZ1A=

The first part is the host name without a port; the middle part enables ssl, and the final part is either of our keys (the primary in my case, for no particular reason). Note that if no port is specified, StackExchange.Redis will select 6379 if ssl is disabled, and 6380 if ssl is enabled. There is no official convention on this, and 6380 is not an official “ssl redis” port, but: it works. You could also explicitly specify the ssl port (6380) using standard {host}:{port} syntax. With that in place, we can access redis (an overview of the library API is available here; the redis API is on http://redis.io)
// requires: using StackExchange.Redis;
var muxer = ConnectionMultiplexer.Connect(configString);
var db = muxer.GetDatabase();
db.KeyDelete("foo");
db.StringIncrement("foo");
db.StringIncrement("foo");
db.StringIncrement("foo");
int i = (int)db.StringGet("foo");
Console.WriteLine(i); // 3

and there we are; readily talking to an Azure Redis Cache instance over SSL.

Creating a new Redis Labs “redis cloud” instance and configuring the certificates


Another option is Redis Labs; they too have an SSL offering, although it makes some different implementation choices. Fortunately, the same client can connect to both, giving you flexibility. Note: the SSL feature of Redis Labs is not yet available directly through the UI, as they are still gauging uptake; but it exists, works, and is available upon request – here’s how:

Once you have logged in to Redis Labs, you should immediately have a simple option to create a new redis instance:

image

Like Azure, a range of different levels is available; I’m using the Free option, purely for demo purposes:

image

We’ll keep the config simple:

image

and wait for it to provision:

image

(note; this only takes a few moments)

Don’t add anything to this DB yet, as it will probably get nuked in a moment! Now we need to contact Redis Labs; the best option here is support@redislabs.com; make sure you tell them who you are, your subscription number (blanked out in the image above), and that you want to try their SSL offering. At some point in that dialogue, a switch gets flipped, or a dial cranked, and the Access Control & Security changes from password:

image

to SSL; click edit:

image

and now we get many more options, including the option to generate a new client certificate:

image

Clicking this button will cause a zip file to be downloaded, which has the keys to the kingdom:

image

The pem file is the certificate authority; the crt and key files are the client key. They are not in the most convenient format for .NET code like this, so we need to tweak them a bit; openssl makes this fairly easy:
c:\OpenSSL-Win64\bin\openssl pkcs12 -inkey garantia_user_private.key -in garantia_user.crt -export -out redislabs.pfx

This converts the two parts of the user key into a pfx, which .NET is much happier with. The pem can be imported directly by running certmgr.msc (note: if you don’t want to install the CA, there is another option, see below):

image

Note that it doesn’t appear in any of the pre-defined lists, so you will need to select “All Files (*.*)”:

image

After the prompts, it should import:

image

So now we have a physical pfx for the client certificate, and the server’s CA is known; we should be good to go!
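Before wiring the pfx into code, it is worth a quick sanity check that the conversion round-trips; openssl can list the contents of the file (this assumes no export password was set, matching the empty password used in the code below):

```shell
# list the contents of the pfx without extracting anything;
# a zero exit code means the file parsed as valid PKCS#12
openssl pkcs12 -info -in redislabs.pfx -passin pass: -noout
```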

Connecting to a Redis Labs “redis cloud” instance


Back on the Redis Labs site, select your subscription, and note the Endpoint:

image

We need a little bit more code to connect than we did with Azure, because we need to tell it which certificate to load; the configuration object model has events that mimic the callback methods on the SslStream constructor:
var options = new ConfigurationOptions();
options.EndPoints.Add(
    "pub-redis-16398.us-east-1-3.1.ec2.garantiadata.com:16398");
options.Ssl = true;
options.CertificateSelection += delegate {
    return new System.Security.Cryptography.X509Certificates.X509Certificate2(
    @"C:\redislabs_creds\redislabs.pfx", "");
};

var muxer = ConnectionMultiplexer.Connect(options);
var db = muxer.GetDatabase();
db.KeyDelete("foo");
db.StringIncrement("foo");
db.StringIncrement("foo");
db.StringIncrement("foo");
int i = (int)db.StringGet("foo");
Console.WriteLine(i); // 3

This is the same smoke test we did for Azure. If you don’t want to import the CA certificate, you could also use the CertificateValidation event to provide custom certificate checks (return true if you trust it, false if you don’t).
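For example, a validation callback that pins the server certificate to a known thumbprint might look like the sketch below (the thumbprint value is a placeholder; pinning also has its own maintenance trade-offs when the server rotates certificates):

```csharp
// sketch: accept the server certificate only if it matches a known thumbprint
// (the thumbprint here is a placeholder, not a real value)
const string expectedThumbprint = "A1B2C3D4E5F60718293A4B5C6D7E8F9012345678";
options.CertificateValidation += (sender, certificate, chain, sslPolicyErrors) =>
{
    var cert2 = certificate
        as System.Security.Cryptography.X509Certificates.X509Certificate2;
    return cert2 != null && cert2.Thumbprint == expectedThumbprint;
};
```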

Way way way tl;dr


Cloud host providers are happy to let you use redis, and happy to provide SSL support so you can do it without being negligent. StackExchange.Redis has hooks to let this work with the two SSL-based providers that I know of.

Thursday 3 July 2014

Dapper gets type handlers and learns how to read maps

A recurring point of contention in dapper has been that it is a bit limited in terms of the types it handles. If you are passing around strings and integers: great. If you are passing around a DataTable – that’s a bit more complicated (although moderate support was added for table valued parameters). If you are passing around an Entity Framework spatial type: forget it.
Part of the problem here is that we don’t want dapper to take a huge pile of dependencies on external libraries that most people aren’t using – and often don’t even have installed or readily available. What we needed was a type handler API. So: I added a type handler API! Quite a simple one, really – dapper still deals with most of the nuts and bolts, and to add your own handler all you need to provide is some basic parameter setup.
For example, here’s the code for DbGeographyHandler; the only interesting thing that dapper doesn’t do internally is set the value – but the type-handler can also do other things to configure the ADO.NET parameter (in this case, set the type name). It also needs to convert between the Entity Framework representation of geography and the ADO.NET representation, but that is pretty easy:
public override void SetValue(IDbDataParameter parameter, DbGeography value)
{
    parameter.Value = value == null ? (object)DBNull.Value
        : (object)SqlGeography.Parse(value.AsText());
    if (parameter is SqlParameter)
    {
        ((SqlParameter)parameter).UdtTypeName = "GEOGRAPHY";
    }
}

and… that’s it. All you need to do is register any additional handlers (SqlMapper.AddTypeHandler()) and it will hopefully work. We can now use geography values in parameters without pain – i.e.
conn.Execute("... @geo ...",
    new { id = 123, name = "abc", geo = myGeography });
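For completeness, here is what a minimal handler of your own might look like; the Money type and MoneyHandler here are hypothetical, purely to illustrate the shape of the API:

```csharp
public class Money { public decimal Amount { get; set; } }

// hypothetical handler: stores Money as a plain decimal parameter
public class MoneyHandler : SqlMapper.TypeHandler<Money>
{
    public override void SetValue(IDbDataParameter parameter, Money value)
    {
        parameter.Value = value == null ? (object)DBNull.Value : value.Amount;
    }
    public override Money Parse(object value)
    {
        return value == null || value is DBNull
            ? null : new Money { Amount = Convert.ToDecimal(value) };
    }
}

// registered once at startup:
SqlMapper.AddTypeHandler(new MoneyHandler());
```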

Plugin packages

This means that handlers for exotic types with external dependencies can be shipped separately to dapper, meaning we can now add packages like Dapper.EntityFramework, which brings in support for the Entity Framework types. Neat.

Free to a good home: better DataTable support

At the same time, we can also make a custom handler for DataTable, simplifying a lot of code. There is one slight wrinkle, though: if you are using stored procedures, the type name of the custom type is known implicitly, but a lot of people (including us) don’t really use stored procedures much: we just use raw command text. In this situation, it is necessary to specify the custom type name along with the parameter. Previously, support for this has been provided via the AsTableValuedParameter() extension method, which created a custom parameter with an optional type name – but dapper now internally registers a custom type handler for DataTable to make this easier. We still might need the type name, though, so dapper adds a separate extension method for this, exploiting the extended-properties feature of DataTable:
DataTable table = ...
table.SetTypeName("MyCustomType");
conn.Execute(someSql, new { id = 123, values = table });

That should make things a bit cleaner! Custom type handlers are welcome and encouraged - please do share them with the community (ideally in ready-to-use packages).

Tuesday 1 July 2014

SNK and me work out a compromise

and bonus feature: our build server configuration
A few days ago I was bemoaning the issue of strong names and nuget. There isn’t a perfect solution, but I think I have a reasonable compromise now. What I have done is:
  • Copy/pasted the csproj, with the duplicates referencing a strong name key
  • Changed those projects to emit an assembly named StackExchange.Redis.StrongName
  • Copy/pasted the nuspec, with the new spec creating the StackExchange.Redis.StrongName package from the new assemblies
  • PINNED the assembly version number; this will not change unless I am introducing breaking changes, which will also coincide with major/minor version number changes – or maybe also for reasonably significant feature additions; not for every point-release, is the thing
  • Ensured I am using [assembly:AssemblyFileVersion] and [assembly:AssemblyInformationalVersion] to describe the point-release status
This should achieve the broadest reach with the minimum of fuss and maintenance.
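Concretely, the attribute split looks something like this (the version numbers are illustrative):

```csharp
// AssemblyInfo.cs (sketch); requires: using System.Reflection;
[assembly: AssemblyVersion("1.0.0.0")]               // pinned: only changes for breaking changes
[assembly: AssemblyFileVersion("1.0.316.0")]         // bumped every build
[assembly: AssemblyInformationalVersion("1.0.316")]  // human-readable point-release
```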
Since that isn’t enough words for a meaningful blog entry, I thought I’d also talk about the build process we use, and the tooling changes we needed for that. Since we are using TeamCity for the builds, it is pretty easy to double everything without complicating the process – I just added the 2 build steps for the new csproj, and told it about the new nuspec:
image
Likewise, TeamCity includes a tool to automatically tweak the assembly metadata before the build happens – the “AssemblyInfo patcher” – which we can use to pin the one value (to change manually based on human decisions), while keeping the others automatic:
image
Actually, I’ll probably change that to use a parameter to avoid me having to dig 3 levels down. After that, we can just let it run the build, and out pop the 2 nupkg as artefacts ready for uploading:
image
If you choose, you can also use TeamCity to set up an internal nuget feed – ideal for both internal use and dogfooding before you upload to nuget.org. In this case, you don’t need to grab the artefacts manually or add a secondary upload step – you can get TeamCity to publish them by default, making them immediately available in-house:
image
It isn’t obvious in the UI, but this search is actually restricted to the “SE” feed, which is my Visual Studio alias to the TeamCity nuget server:
image
Other build tools will generally have similar features – simply: this is what works for us.