Tuesday 24 November 2015

The road to DNX–part 3

In part 1, we looked at an existing library that we wanted to move to core-clr; we covered the basics of the tools, and made the changes required just to move to the project.json build approach, targeting the same frameworks.

In part 2, we looked at “dnxcore50”, and how to port a library to support this new framework alongside existing .net frameworks. We looked at how to set up and debug tests. We then introduced “dnx451”: the .net framework running inside DNX.

In part 3, we dive deeper still…

Targeting Hell

You could well be thinking that all these frameworks (dnx451, dnxcore50, net35, net40, etc) could start to become tedious. And you’d be right! FastMember only targets a few, but as a library author, you may know the …. joy … of targeting a much wider set of .net frameworks. Here’s the build tree for protobuf-net r668 (pre core-clr conversion):

image_thumb1

These are incredibly hard to build currently (often requiring per-platform tools). It is a mess. Adding more frameworks isn’t going to make our life any easier. However, many of these frameworks have huge intersections. Rather than having to explicitly target 20 similar frameworks, how about if we could just target an entire flavor of similar APIs? That is what the .NET Platform Standard (aka: netstandard) introduces. This is mainly targeting a lot of the newer frameworks, but then… it is probably about time I dropped support for Silverlight 3. With these new tools, we can increase our target audience very quickly, without overly increasing our development burden.

Important: the names here are moving. At the current time (rc1), the documentation talks about “netstandard1.4” etc; however, the tools recognise “dotnet5.5” to mean the same thing. (edited: I originally said dotnet5.4===netstandard1.4; I had an off-by-one error - the versions are 4.1 off from each other!) Basically, the community made it clear that “dotnet” was too confusing for this purpose, so the architects wisely changed the nomenclature. So “netstandard1.1” === “dotnet5.2” – savvy?

Great! So how do we do this? It is much easier than it sounds; we just change our project.json from “dnxcore50” to “dotnet5.4”:

image
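
In case the screenshot above doesn’t convey it, the change amounts to something like this in the “frameworks” section of project.json – a rough sketch only, sitting alongside the existing net35/net40 entries; the dependencies are whatever carried over from the dnxcore50 block (versions via auto-complete), and the extra “COREFX” define is explained below:

    "dotnet5.4": {
      "compilationOptions": { "define": [ "COREFX" ] },
      "dependencies": {
        "System.Runtime": "(version via auto-complete)"
      }
    }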

netstandard1.4 (dotnet5.5) is the richest variant – the intersection of DNX Core 5.0 and .NET Framework 4.6.*. We’re going to target dotnet5.4 (netstandard1.3). If you go backwards (1.3, 1.2, 1.1, etc) you can target a wider audience, but using a narrower intersection of available APIs. netstandard1.1, for example, includes Windows Phone 8.0. There are various tables on the .NET Platform Standard documentation that tell you what each name targets. Notice I added a “COREFX” define. That is because the compiler is now including “DOTNET5_4” as a build symbol, not the more specific “DNXCORE50”. To avoid confusion (especially since I know it will change in the next tools drop), I’ve changed my existing “#if DNXCORE50” to “#if COREFX”, for my convenience:

image

We don’t have to stop with netstandard1.3, though; what I suggest library authors do is:

  1. get it working on what they actively want to support
  2. then try wider (lower number) versions to see what they can support

For example, changing to 1.2 (dotnet5.3) only gives me 13 errors, many of them duplicates and trivial to fix:

image

And interestingly, these are the same 13 errors that I get for 1.1 (dotnet5.2). If I try targeting dotnet5.1, I lose a lot of things that I absolutely depend on (TypeBuilder, etc), so perhaps draw the line at dotnet5.2; that is still a lot more potential users than 1.4. With some minimal changes, we can support dotnet5.2 (netstandard1.1); the surprising bits there are:

  • the need to add a System.Threading dependency to get Monitor support (aka: the “lock” keyword)
  • the need to explicitly specify the System.Runtime version in the test project (both are sketched below)
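
In project.json terms that ends up along these lines – a sketch only (versions are whatever auto-complete offers at the time, and the “COREFX” define is the same convenience symbol as before):

    "dotnet5.2": {
      "compilationOptions": { "define": [ "COREFX" ] },
      "dependencies": {
        "System.Runtime": "(version via auto-complete)",
        "System.Threading": "(version via auto-complete)"
      }
    }

The test project similarly gains an explicit “System.Runtime” entry under its own dependencies.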

We can test this with “dnx test” / “dnx perf”, in both core-clr and .net under dnx, and it works fine. We don’t need the dnx451-specific build any more.

Observations:

  • I have seen issues with dotnet5.4 projects trying to consume libraries that expose dotnet5.2 builds; this might just be because of the in-progress tooling
  • At the moment, xunit targets dnxcore50, not dotnet*/netstandard* – so you’ll need to keep your test projects targeting dnxcore50 and dnx451 for now; however, your library code should be able to just target the .NET Platform Standard without dnx451 or dnxcore50:

image

That’s pretty much the key bits of netstandard; it lets you target a wider audience without having a myriad of individual frameworks defined. But you can use this in combination with more specific targets if you want to use specific features of a particular framework, when available.

Packaging

As this is aimed at library authors, I’m assuming you have previously deployed to nuget, so you should be familiar with the hoops you need to jump through, and the maintenance overhead. When you think about it, our project.json already defines quite a few of the key things nuget needs (dependencies, etc). The dnx tools, then, introduce a new way to package our libraries. What we need to do first is fill in some extra fields (copying from an existing nuspec, typically):

image
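
The extra fields are the usual nuspec-style metadata, expressed in project.json; roughly this shape (all the values below are placeholders, and the exact set of supported fields may differ slightly at rc1):

    {
      "version": "1.0.0-*",
      "description": "(package description)",
      "authors": [ "(author name)" ],
      "projectUrl": "(project URL)",
      "licenseUrl": "(license URL)",
      "tags": [ "(some)", "(tags)" ]
    }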

Now all we need to do is “dnu pack --configuration release”:

image

and … we’ve just built our nupkg (or two). Aside: does anyone else think “dnu pack” should default to release builds? Or is that just me? We can go in and see what it has created:

image

The nupkg is the packed contents, but we can also see what it was targeting. Looking at the above, it occurs to me that I should probably go back and remove the explicit dnx451 build, deferring to dotnet5.2, but… meh. It’ll all change again in rc2 ;p

I wish there was a “dnu push” for uploading to nuget, but for now I’ll just use the manual upload:

image

The details are as expected, so: library uploaded! (and repeated for the strong-name version; don’t get me started on the “strong-name or don’t strong-name” debate; it makes me lose the will to live).

We have now built, tested, packaged and deployed our multi-targeting library that uses the .NET Platform Standard 1.1 and above, plus .NET 3.5 and .NET 4.0. Hoorah for us!

I should also note that Visual Studio offers the ability to create packages; this is hidden in the project properties (this is manipulating the xproj file):

image

and if you build this way, the outputs go into “artifacts” under the solution (not the project):

image

Either way: we have our nupkg, ready to distribute.

Common Problems

The feature I want isn’t available in core-clr

First, search dotnet/corefx; it is, of course, entirely possible that it isn’t supported, especially if you are doing WPF over WCF, or something obscure like … DataTable ;p Hopefully you find it tucked away in a different package; lots of things move.

The feature is there on github, but not on nuget

One thing you could do here is to try using the experimental feed. You can control your package feeds using NuGet.config in your solution folder, like in this example, which disregards whatever package feeds are defined globally, and uses the experimental feed and official nuget feed. You may need to explicitly specify a full release number (including the beta marker) if it is pre-release. If that still doesn’t work, you could perhaps enquire with the corefx team on why/when.
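
I can’t reproduce the linked example here, but the shape of such a NuGet.config is roughly this – note that the experimental feed URL shown is from memory and may well have moved, so verify it against the current corefx documentation:

    <?xml version="1.0" encoding="utf-8"?>
    <configuration>
      <packageSources>
        <!-- ignore any feeds configured at the machine level -->
        <clear />
        <!-- experimental feed (URL from memory; check before relying on it) -->
        <add key="dotnet-experimental" value="https://www.myget.org/F/dotnet-core/api/v3/index.json" />
        <!-- the official nuget.org v3 feed -->
        <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
      </packageSources>
    </configuration>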

The third party package I want doesn’t support core-clr

If it is open source, you could always throw them a pull-request with the changes. That depends on a lot of factors, obviously. Best practice would be to communicate with the project owner first, and check for branches. If it is available in source but not on NuGet, you could build it locally, and (using the same trick as above) add a local package source – all you need to do is drop the nupkg in a folder on the file-system, and add the folder to the NuGet.config file. When the actual package gets released, remember to nuke any temporary packages from %USERPROFILE%/.dnx/packages.

I don’t like having the csproj as well as the project.json

Long term, we can probably nuke those csproj; they are handy to keep for now, though, to make it easy for people to build the solution (minus core-clr support).

The feature I want isn’t available in my target framework / Platform Standard

Sometimes, you’ll be able to work around it. Sometimes you’ll have to restrict what you can support to more forgiving configurations. However, sometimes there are cheeky workarounds. For example, RegexOptions.Compiled is not available on a lot of Platform Standard configurations, but really it is still there at runtime. You can cheat by checking if the enum is defined at runtime, and use it when available; here’s a nice example of that. There are uglier things you can do, too, such as using reflection to see if types and methods are actually available, even if they aren’t there in the declared API – you should try to minimize these things. As an example, protobuf-net would really like to use FormatterServices.GetUninitializedObject() when it is available. Just… be careful. This trick won’t work on things like universal applications, but then: neither will hardly any of what protobuf-net does, so that is a moot point.
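
To make the RegexOptions.Compiled idea concrete, here’s a minimal sketch of the runtime probe – the class and member names are mine, purely for illustration (the linked example does it more thoroughly):

    using System;
    using System.Text.RegularExpressions;

    static class RegexCompilation
    {
        // illustrative helper, not from any particular library: if the running framework
        // defines RegexOptions.Compiled, use it; otherwise quietly fall back to None
        public static readonly RegexOptions CompiledIfAvailable = Probe();

        private static RegexOptions Probe()
        {
            RegexOptions value;
            return Enum.TryParse("Compiled", out value) ? value : RegexOptions.None;
        }
    }

Callers then just combine it as usual – for example new Regex(pattern, RegexOptions.IgnoreCase | RegexCompilation.CompiledIfAvailable) – and get compilation whenever the platform offers it.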

I’m having a problem with the tooling

The various teams are very open to feedback. I confess that I sometimes struggle to know what should go to the corefx team vs the asp.net team (some of the boundaries are largely arbitrary and historical), but it’ll probably find a receptive ear.

Conclusion

The core-clr project moves a lot of pieces, and a lot of things are still in flux. But: it is now stable enough that many library authors should be more than capable of porting their projects, and quite possibly simplifying their build process at the same time.

Happy coding.

The road to DNX - part 2

In part 1 I gave a brief introduction to the core-clr project and the key tools involved, from the perspective of a library author with existing .net libraries that they want to migrate to core-clr. I took a sample project (FastMember), and made some tooling changes to take it from a csproj-based build (targeting .net 3.5 and .net 4.0), to a build using project.json (again, targeting .net 3.5 and .net 4.0). Because the core-clr tools are not yet stable (rtm) or mainstream, I have retained the ability to build everything from the csproj – so that any arbitrary developer who clones the repo can build right away without having to install a pile of unfamiliar, unreleased tools.

In part 2, we start exploring what we can now do with the new tools.

Target Platforms

Here we’re going to look at adding a new core-clr build, and making the necessary code changes to make it compile.

The first thing we probably want to do is start playing with core-clr. At the current time (rc1), in terms of build tools this is “dnxcore50”. Our framework dependencies change from being “frameworkAssemblies” to “dependencies”: we’re now going to be pulling down each set of libraries from nuget separately (cached in %USERPROFILE%/.dnx/packages), rather than a monolithic platform install. All we need to do to start, then, is add a “dnxcore50” token into our project.json (for those who have done a lot of core-clr work: we’ll be changing this later, don’t worry), including a dependency on “System.Runtime” (the version number was chosen with the help of auto-complete, so I didn’t need to go out of the editor to look this up):

image
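
For anyone following along without the screenshot, the new block is along these lines (the version is illustrative – take whatever auto-complete suggests):

    "dnxcore50": {
      "dependencies": {
        "System.Runtime": "(version via auto-complete)"
      }
    }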

Since we’ve changed our dependencies, we need to use “dnu restore”. This looks at what project.json requires, and compares it to project.lock.json, which tracks what has been resolved already. If anything is missing, it looks in .dnx/packages, and if they are missing it uses our defined package sources to go and get them. This works identically for framework dependencies and 3rd-party libraries – it makes the entire thing painless.

Having added “dnxcore50” and the “System.Runtime” dependency, we can try to build – although we’re not actually expecting it to compile yet. In fact, “dnu build” reports 270 errors, and Visual Studio lights up like the Blackpool illuminations:

image

At this point, we have a small job to do of going through the errors and figuring out what packages we are missing. Granular framework packages make deployment more convenient (and hopefully more frequent), but mean we need a few more dependencies.

Let’s look at the second one – Hashtable; one way of resolving this is to play auto-complete pot-luck by guessing that this is probably in System.Collections.Something:

image

Since Hashtable is non-generic, it is probably in System.Collections.NonGeneric; we can add this and that error goes away:

image

We can go through a few obvious ones this way, getting rid of almost two-thirds of the fail:

image

I still have errors relating to:

  • TypeBuilder / MethodBuilder / ILGenerator etc (the library does metaprogramming)
  • members of Type / MemberInfo: IsValueType, IsDefined, MemberType (the library does a lot of reflection)
  • IDataReader and DataTable

Which – broadly – covers some of the more subtle hurdles you need to jump.

finding rogue types

(edited: this tool does exactly this! - thanks for pointing that out, TIL).

It isn’t necessarily obvious where to find something like ILGenerator, especially since we’ve included System.Reflection. There may be better tools, but my usual go-to place is the dotnet/corefx repo; by searching for “class ILGenerator” you can usually quickly determine whether something exists. Often you’ll hit the actual definition first time, and the path tells you the package. In this case we get the tests, which is probably enough:

image

but if you’re still stuck, you can click into the test, then go back a few levels until you find the test’s project.json, and look at what it referenced:

image

So I’m probably going to want System.Reflection.Emit and System.Reflection.Emit.ILGeneration.
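
In project.json terms that just means a couple of extra entries under the dnxcore50 dependencies – something like this (versions via auto-complete, as before):

    "System.Reflection.Emit": "(version via auto-complete)",
    "System.Reflection.Emit.ILGeneration": "(version via auto-complete)"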

working with refactored APIs

You’ll find a lot of places where the API available to you has been changed. For example, if you work with reflection – everything changes; System.Type ceases (or rather: ceased, quite a long time ago on a lot of frameworks) being the rich “I know everything” type. The idea is that Type is a lightweight token that allows you to identify and compare types, and if you want to know more you use System.TypeInfo; there are methods to translate between the two (GetTypeInfo() and AsType(), respectively). Likewise, MemberTypes no longer exists. As far as I know, there is no single master list of these changes – you just kinda need to tease each one out separately.

Some changes you can make in a way that works satisfactorily on all frameworks; for example, rather than doing a switch on MemberType:

image

we can use some combination of “is”/”as”/cast:

image
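
As a concrete – if simplified – sketch of that kind of change (the helper and its name are invented for illustration; this is not FastMember’s actual code):

    using System;
    using System.Reflection;

    static class MemberHelpers
    {
        // instead of switching on member.MemberType (not available on core-clr),
        // test the concrete type of the member directly
        public static Type GetMemberType(MemberInfo member)
        {
            var field = member as FieldInfo;
            if (field != null) return field.FieldType;

            var prop = member as PropertyInfo;
            if (prop != null) return prop.PropertyType;

            throw new NotSupportedException(member.GetType().Name);
        }
    }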

In some places, there are extension methods in utility libraries you can use to bridge the gap; for example, to add a lot of familiar methods back onto Type, you can add a dependency on “System.Reflection.TypeExtensions”, and ensure that you have “using System.Reflection;” in the code-file (because these are extension methods added by the System.Reflection.TypeExtensions type).

In other places, like IsValueType, IsPublic, etc - there is no single common API we can use; it is fundamentally different (and “extension properties” aren’t a thing). The good news is that the project.json build chain makes it easy to use #if sections to switch between different implementations. The upper-case name of the target framework is automatically added as a build symbol - so we can check using “#if DNXCORE50”. If the impacted API is only used in one place, you can just use #if in-situ, but for frequently recurring things like IsValueType (which is often all over your code), I do not recommend polluting all your code with constant #if. Rather, my strategy is to create a utility class that bridges the gap (usually, but not always, via extension methods), and have just that class deal with different implementations.
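
A minimal sketch of what such a bridge class might look like – the names are mine, not FastMember’s; the point is simply that only this one file knows about the platform difference:

    using System;
    using System.Reflection;

    internal static class TypeHelpers
    {
        // one place to hide the reflection-surface differences between frameworks
        public static bool IsValueType(Type type)
        {
    #if DNXCORE50
            return type.GetTypeInfo().IsValueType;
    #else
            return type.IsValueType;
    #endif
        }

        public static bool IsPublic(Type type)
        {
    #if DNXCORE50
            return type.GetTypeInfo().IsPublic;
    #else
            return type.IsPublic;
    #endif
        }
    }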

The really nice thing is that the IDE helps us here: in the top left corner, it now tells us all the frameworks we are targeting, and we can switch between them in the context of a file; look in particular at which sections are greyed / colorized as we switch between frameworks:

image

image

Here’s the commit with everything compiling except System.Data, which I have just excised for now.

System.Data is a much more complicated story; while System.Data has been migrated, there are significant changes:

  • DataTable (and the schema APIs built around it) simply isn’t there
  • the interfaces (IDataReader, etc) give way to the abstract base classes (DbDataReader, etc)

I’m going to look at this System.Data reference in a bit more detail, but the takeaway here is not System.Data: it is how we can investigate problems.

The first is a big problem if you’re using DataTable; this is a controversial change (see the linked thread), but in reality it is often misused. There are times when it is genuinely the right tool, but for most scenarios you should really have moved to an ORM or micro-ORM approach by now (did I mention that Dapper is available for core-clr? – 1.50 and upwards). It is also (mis)used as part of the reader metadata API. Even if we don’t ever see DataTable, we will need (soon) a new API that has similar aims to GetSchemaTable(). This is all a bit of an aside, but the point I’m trying to emphasize is that: some APIs have irreconcilable differences. If your library depends on these features, you’re going to have some soul searching to do.

Ignoring the DataTable difference, we can still access much of the rest of System.Data; but we need to move from interfaces to abstract base types (DbDataReader instead of IDataReader). In this case, we have a little grunt work to do, but afterwards: we again have a single implementation with minimal #if. The one interesting bit is that DbEnumerator doesn’t seem to be in the current packaging:

image

It is in dotnet/corefx in a branch, but not in “master”. This looks like some “work in progress” in the conversion, since that API is meant to talk in terms of IDataRecord (or DbDataRecord), and neither of those is supported in core-clr currently, so it isn’t clear to me what this enumerator is meant to do on core-clr! You will occasionally find pockets like this; to seek clarification, I could look at what SqlDataReader does, or I could ask the developers. Checking github, it looks like it uses a copied implementation in System.Data.SqlClient. And despite being declared “public”, this simply isn’t in the currently published assemblies. In this case, it all looks a bit of a mess, and it isn’t critical, so I’ll ask the developers, and throw an exception in that scenario for now. Here’s my eventual core-clr conversion of the System.Data-related code.

Testing Against Multiple Frameworks

Woohoo! We now have a project that compiles against .net 3.5, .net 4.0 and dnxcore50. That’s a great start, but we haven’t actually done anything except get it to compile yet. We want to run our tests, too! If you remember from part 1, FastMember has tests that use NUnit. I’m a pragmatist when it comes to test tools. I’ll be honest: at the current time, the easiest way to test on core-clr is via xunit. I’m sure the other tools will catch up, though.

Now, you’re probably thinking “but I don’t want to change all my tests”. I agree with you. Which is why I don’t do that. Instead: I cheat. We can make a final decision whether to migrate the tests more formally when everything is RTM. Right now, we just want things to work. What we’re going to do is:

  • add a “dnxcore50” framework block to our test projects
  • use xunit from the new framework
  • add a bridge file (only active when targeting xunit) that shims between the two (sketched below)
  • #if out any tests that won’t compile on core-clr due to missing features
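
To give a flavour of that bridge file – this is my own illustration of the idea, not FastMember’s actual shim, and an Assert wrapper that forwards to Xunit.Assert is needed in the same way – the trick is to declare look-alike attributes in the NUnit.Framework namespace, so the existing test code compiles unchanged while xunit does the actual discovery:

    #if DNXCORE50
    namespace NUnit.Framework
    {
        using System;

        // class-level marker; xunit doesn't need it, but the existing tests use it
        public class TestFixtureAttribute : Attribute { }

        // xunit discovers anything derived from FactAttribute, so [Test] methods just work
        public class TestAttribute : Xunit.FactAttribute { }
    }
    #endif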

With these changes, we can start moving to testing. A key piece here is learning about dnx commands; our project.json can actually declare multiple named commands (which map to assemblies in which to locate an entry-point) that dnx can then invoke. For xunit, the one we want is “xunit.runner.dnx”, but as it happens FastMember.Tests already declares a Main() entry-point that does some performance tests. As such, we can declare multiple commands:

image
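
The commands section itself is tiny – just a map from a command name to the assembly that provides the entry-point; something along these lines (the “perf” mapping is my reconstruction from the prose):

    "commands": {
      "test": "xunit.runner.dnx",
      "perf": "FastMember.Tests"
    }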

The IDE even updates to let us choose very conveniently what the “play” button should do:

image

If we’re going to use the IDE, we also need to make sure that we’re targeting .NET Core, since we haven’t enabled DNX tools for regular .NET yet (DNXCORE50 is .NET Core):

image

Finally, we can hit play, and amazingly our tests pass first time (this is the exception, not the rule):

image

We can switch to the “perf” command and run that:

image

To do the same thing at the command-line, the really really important thing to remember is to switch framework to core-clr via dnvm (here, c64 is an alias for “rc1 64-bit core-clr on windows” that I created in part 1):

image
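
In other words, something like this from the test project’s folder, using the aliases set up in part 1:

    dnvm use c64
    dnx test
    dnx perf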

In the IDE, you can debug tests with breakpoints in the ways you would hope. The test tooling is “functional” right now, but is improving at a rate.

But what about regular .net?

This is where it starts getting fun! Remember that dnxcore50 is core-clr using the dnx tools. net40 and net35 are regular .net; the dnx tools don’t really do much other than compile them. But! The dnx tools themselves allow you to run .net apps! There is a different build we use for this: dnx451. This is .net 4.5.1 on dnx. Because this is consuming framework assemblies (not nuget feeds), it is basically the same configuration as net40 – except we can now use the up-to-date xunit bits (which support dnx451); we can add a new build to all of our projects in the project.json:

image

The IDE now lets us successfully run tests targeting the .NET Framework using the dnx tools:

image

And again, we can use the command-line to run our tests from dnx, making sure to use dnvm to switch to the .NET Framework (I have an n64 alias for this):

image

Oops! I broke something, and it only impacts the .NET Framework version. This is easiest to diagnose in the IDE, where pressing F5 quickly tells me it is something to do with the DbDataReader.Dispose method:

image

Fair enough; that’s just my own brain-dead implementation. I fixed this, and another error (a boolean inversion, along with committing the dnx451 builds, here; Nick Craver is going to laugh at me for this…) that I had introduced, but: this shows the importance of testing any changes you make to your existing codebase! Our tests now pass for dnxcore50 and dnx451:

image

End of part 2

That got longer than I expected. We’ve now got as far as targeting dnxcore50 and dnx451 (alongside regular net35 and net40), running test suites, debugging, etc. We’ve actually seen something happen, and we’ve seen things go wrong.

Coming up in the unplanned part 3 (part 2 got too big):

  • Targeting hell: what the hell is netstandard, and why should I care?
  • Packaging and deployment
  • Common problems

Continue to part 3

Monday 23 November 2015

The road to DNX – part 1

Target audience: library authors who want to get into this “dnx” thing.

Part 2; Part 3

Unless you have been asleep at the wheel, you probably know that Microsoft have been working really really hard at moving forward with the “corefx” / “core-clr” / “dnx” / “asp.net 5” stream of work (all broadly related and often used interchangeably, whether correctly or incorrectly) – their effort to make .net a truly open-source, cross-platform open technology. An awesome set of aims. A few days ago saw Release Candidate 1, and it is now becoming very capable.

Any platform is only as rich as the ecosystem – the libraries available for it. In the case of core-clr (which, right or wrong, I will now use to mean the set of things mentioned above), this is a combination of the framework libraries (which are now open source; this is the “corefx” piece) and the third party community libraries – which often, but not exclusively, means nuget.

It is my judgement that there are a large number of library authors and contributors who want to start exploring the tools, but find the current state confusing and overwhelming. My aim here, then, is to try to demystify what you need to do. I’m going to assume that you already have some .net libraries that you want to migrate to core-clr. I’m drawing on the involvement I’ve had working on the core-clr conversions of Dapper, protobuf-net, Sigil, Jil (all now available for core-clr on nuget), SimpleSpeedTester (PR not yet taken), and StackExchange.Redis (I’m still working through a big PR kindly contributed by some awesome Microsoft folks). Topics I hope to cover:

Part 1 – running fast to stand still
  • the tools of the trade – what you need to get started, what each does, and where to find things
  • our sample project: FastMember
  • say hello to project.json and package structure
Part 2 – learning to fly
  • understanding target platforms / monikers
  • more on package management
  • changing your code to fit the platform; what is going to hurt?
  • testing your code
  • packaging and deployment
I’m not going to cover application code such as MVC applications.

Caveat emptor

All of these things are evolving; I hope it is all correct at the moment (rc1), but many details may have subtle changes by rtm and beyond. Such is the life of the software developer. Expect your cheese to be moved.

The tools of the trade

In the past, the .net framework has been huge system-wide installs of the entire framework library, with many upgrades over-the-top (meaning: once you’ve installed 4.6.1 or whatever: all 4.6 apps get those changes). In dnx, everything is much more granular; this makes for a much better upgrade cadence – System.Some.Component has changes they want to get out? Sure thing: they just deploy it to nuget, and you pick it up when you choose. The tool-chain is very different, and you need some new pieces; so… let’s go get them.

dnvm

The first thing you need is “dnvm” – the “.NET Version Manager”. This tool is in charge of installing and managing as many different runtimes as you like, including cross-targeting reference runtimes. Essentially, when people talk about “1.0.0-rc1-final” (the current release), that is the runtime version. You can install this for windows, mac or linux. If you’re using Visual Studio, be sure to install the appropriate bits (look just above “runtime and tooling”). In particular: don’t get confused by the fact that the page is talking about “ASP.NET 5”. Even if you are a pure library author with no interest in ASP.NET, this is the right stuff. I told you the names were largely interchangeable!

So what does dnvm do? Once it is installed, the first thing you want to do is “dnvm update-self” (to ensure you have the latest dnvm tooling), then “dnvm upgrade”, to update to the latest runtime.
dnvm is basically a tool that manages and switches between any number of runtimes, where runtimes are just folders under %USERPROFILE%/.dnx/runtimes; here are mine:

image

which is exactly what I get if I type “dnvm list”:

image

If I decide I don’t need those beta 4-8 bits, I can just delete the folders any way I choose, and type “dnvm list” again:

image

(I could also use “dnvm uninstall” for this)

I can install additional runtimes with “dnvm install”, and I can give runtimes aliases – for example, in my examples I’ve added “c64” to mean “rc1 coreclr on x64 targeting windows”. I did this with:

dnvm alias c64 1.0.0-rc1-final -r coreclr -a x64

(and likewise for the others). I can now switch my command-line tools between runtimes by entering “dnvm use c64” or “dnvm use n86”. Note that a runtime comprises not just the core pieces to make .net work at all, but also includes, per runtime, our other main tools – “dnu” and “dnx”.

dnu

Our next tool is “dnu”, the “Microsoft .NET Development Utility”. This acts as a wrapper including:

  • package management (for obtaining and managing our dependencies)
  • build tools (the compiler)
  • packaging and deployment tools (think: “nuget pack”)

To see this tool in action we really need to have a project, but perhaps the most important thing to remember about a lot of what it does is: %USERPROFILE%/.dnx/packages. In the same way that dnvm owns /runtimes, dnu owns /packages – the local cache of dependencies we have on our local machine.

dnx

The last of our tools is “dnx”, the “Microsoft .NET Execution environment”; basically, it runs stuff! There are ways of bootstrapping things, but for most dev purposes, dnx is your friend (unless you’re using an IDE to do the same thing).
Again, we can’t really show dnx doing much without a project, so we’ll come back to it.
All of these are command-line tools; almost everything can also be done in the IDE (with the right tools); but it is worth understanding what is going on.

What is FastMember?

Frankly, it is a little project I wrote ages ago and haven’t changed in ages (I didn’t even migrate it from google-code until recently). In fact, I even lost the snk password and had to break the identity (well damn, that’s embarrassing). What it does isn’t particularly important – just that it is an existing real-world library that I want to move to core-clr (one of the things it does very nicely is allow you to expose an IEnumerable<T> as an IDataReader for SqlBulkCopy). It does a few non-trivial things, but we’ll burn that bridge when we get to it.

Say hello project.json

Foreword:

This is actually one of the hardest bits. Once you have the project structure working, most other things are relatively easy! This step is awkward for existing projects. It would be nice if the tools made this a little less messy.

You may have heard mention of project.json; this is the new format that can be used as an alternative to a csproj file. It is clean, human maintainable, and relatively versatile. So how do we get one? There is a way to do this with “dnu wrap”, but I’m not personally very satisfied with the hybrid beast that results from that – I’m going to focus instead on a complete transition to core-clr tooling. The good news is that a project.json file is very simple – and it is opinionated, with a lot of assumptions made implicitly (like: “include all the .cs files in the sub-tree”). One of the opinions it currently holds very strongly is that the folder name defines identity. So since I want my package to be FastMember, it needs to be in a folder called FastMember. This fits my existing file structure, except currently my .csproj is also in the FastMember folder:

image

This is actually slightly problematic, and is a current pain point – because project.json and csproj do not play nicely in the same folder. While transitioning (as in: while you wait for the tools to stabilise so that the entire team are familiar with working in dnx), you probably want to have both csproj and project.json builds side-by-side. Your mileage may vary, but what has worked best for me is to move the csproj.

You should apply your own thoughts, but since a csproj targets a single framework (and a project.json doesn’t), I’m using a _Net40, _Net35 etc suffix for each of my csproj folders, with the project.json going into the main folder. So for each of my projects I’m going to:

  • Relocate the csproj file (and packages.config) into FastMember_Net40 (the original FastMember/FastMember.csproj targets .net 4.0) – don’t worry about multi-targeting – that will be covered later
  • Manually edit the csproj to pick up code files from the existing location – there’s a trick you can do here with the same “everything under the sub-tree” approach; basically, from the new location I can tell it to include “..\FastMember\**\*.cs” (see the snippet after these lists)
  • Create a minimal project.json; for starters, “{ }” will let us at least get to the next step

And:

  • Rename the FastMember_Signed folder to FastMember.Signed, because the nuget package is called FastMember.Signed
  • Update the existing sln with the new csproj project locations
  • Create a new sln that has the project.json projects
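
The csproj trick mentioned above is just a wildcard include – something like this inside the relocated FastMember_Net40 csproj (illustrative; your item groups will vary):

    <ItemGroup>
      <Compile Include="..\FastMember\**\*.cs" />
    </ItemGroup>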

Here’s the outcome:

image

The _Net35, _Net40 etc just contain the various csproj files for different builds. My actual code is in the FastMember and FastMember.Tests folders, so that I can move the project.json into each. I split this into two commits – one that refactored the existing csproj / sln, and one that created the project.json and corresponding sln.

IMPORTANT: as soon as you add a project.json to a sln you get a second file per project.json for free: {YourProject}.xproj; these should be included in source control.

IMPORTANT: as soon as you try to build (which first invokes package restore), or explicitly run “dnu restore” – you get a project.lock.json file per project.json; this is an internal tracking file and does not need to be included in source control.

So after this, I have:

  • actual code (.cs) in FastMember and FastMember.Tests
  • a project.json (and lock file, and xproj) in FastMember, FastMember.Tests, FastMember.Signed, FastMember.Signed.Tests (note that the .Signed ones will be identical once complete, but with strong names included) – linked by FastMember.DNX.sln
  • csproj / packages.config in each of the _Net35 / _Net40 folders – linked by FastMember.sln

Now, after all that, I can load *either* of the two solutions.
At this point, my project.json is just a dummy “{ }”, but we can fill it out; this bit is alarmingly simple to get a minimal build that mirrors the existing .net (pre core-clr) setup:

  • FastMember and FastMember.Signed should each target .net 3.5 and .net 4.0
  • they all need to reference System.Data from the BCL
  • the 3.5 build needs to define NO_DYNAMIC to compile without “dynamic”
  • the two test projects should reference NUnit 3 and their corresponding main project file
  • the .Signed versions should use the SNK, and need to obtain their .cs files from the parallel non-signed versions

The project.json for the above is very simple (I’ll sketch it below); and if you use Visual Studio the IDE will automatically prompt you in all the right places (for other editors, the schema is published here). Note that BCL references come under “frameworkAssemblies”, whereas packages from our package manager come under “dependencies”. One really nice thing the Visual Studio tools do for us here is auto-completion on package sources – on both names and versions:

image

image
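
Pulling the bullet points above together, the FastMember project.json ends up shaped roughly like this – a sketch only; the assembly versions and exact schema details are illustrative rather than copied from the real file, and the signing/NUnit pieces are omitted:

    {
      "frameworks": {
        "net35": {
          "compilationOptions": { "define": [ "NO_DYNAMIC" ] },
          "frameworkAssemblies": { "System.Data": "2.0.0.0" }
        },
        "net40": {
          "frameworkAssemblies": { "System.Data": "4.0.0.0" }
        }
      }
    }

The test projects look similar, with NUnit added under “dependencies”.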

A quick shout out to “Visual Studio Code”: it should be noted that the folder-based approach used by core-clr works very well in this cut-down (but fast and well-featured) editor. And for extra awesome, it includes all these same abilities (just by typing “code .” from the command-line in the folder of choice):

image

In addition to building in Visual Studio, we can also build at the command line (after running “dnu restore”) by using “dnu build” or “dnu build --configuration release” in any of the folders with a project.json:

image

As you can see, building a single project.json can build for multiple targets – in this case .NET 3.5 and .NET 4.0. This results in the usual dll, pdb and xml outputs under bin/debug/net35 and bin/debug/net40 – or bin/release/net35 and bin/release/net40 if we specified release. So far, so good.

End of Part 1

At this point, you would be well within your rights to be underwhelmed. We’ve taken quite a bit of effort to get back to exactly where we started from: a project we can build that targets .net 3.5 and .net 4.0 and can be compiled to binaries - but using a project.json instead of a csproj. Everything so far has been just tooling changes. In part 2, we’ll get into what this enables. It goes uphill from here, honest! See you soon.

Continue to part 2

Thursday 26 February 2015

First thoughts on Matias Ergo Pro


Today my Matias Ergo Pro finally arrived and I thought I’d record my initial reactions. I should first make clear – this was bought retail over a year ago; this is not a “thanks for the free stuff” post. I’m writing it because I care about keyboards. First, here’s my old setup – a Goldtouch v1:
IMG_20150226_094001[1]
This provides excellent “tent” support of arbitrary angles, a definite split left/right section, abandons a dedicated numeric segment, and uses a non-traditional layout for the secondary keys. I own several of these, and they have served me very well. As you can see from the photo, you may need a wrist strip to avoid strain (and for any serious tenting this will barely reach), but I can’t think of any way they could have avoided this. The keys feel fine. The one thing it doesn’t let you do is separate the left/right panels very much (the ball-joint connects them rigidly), which some posture folks maintain would help us avoid wrist strain. But: it has worked great. I love my Goldtouch keyboards.

…trades tenting for separation…

So: what is the Matias Ergo Pro?
IMG_20150226_093936[1]
Pictured above we see my new Ergo Pro, freshly unboxed. Like the Goldtouch it has distinct left/right sections, abandons a dedicated numeric segment, and uses a non-traditional layout for the secondary keys. However, it trades tenting for separation; instead of a ball joint, we get a cable connection. Limited tenting is provided by feet clips, which feel very solid:
IMG_20150226_100233[1]
This limited flexibility allows them to include inbuilt (removable, IIRC) wrist pads, which feel far better than the separate gel strip I used with the Goldtouch. But most importantly: YOU CAN MOVE THE PANELS APART. The connecting cable resizes for most reasonable separations, keeping the cabling tidy, which is a nice touch. It feels really nice to have your arms facing more forwards than diagonal.

What about the keys?

I believe they are ALPS, and they feel and sound great. It is great to have quality switches on keyboards. The alternative layout is interesting, but fine. I have no real need for dedicated “copy” etc, so I’ll probably try to re-map those through 3rd party software to something more useful.

Any issues?

I’ve only had the keyboard an hour, so I’m still getting used to it; however, I have had a few issues so far:
  • so far, it is only available in US layouts; this isn’t a huge issue to me and I knew this when I ordered, but you might care more about this than I do
  • (SEE UPDATE BELOW) the “num lock” key is spectacularly badly positioned IMO – I keep tripping this when looking for “n” – and I’m not sure, but I think they might be doing something on-device with this, as I have been unable (so far) to disable it through software; I’ve also had some false “hits” on the num-lock (when pressing “t”), which worries me more than a little
But: I really like it. Worth checking out if you are a keyboard person. Note that the first run is sold out, but the second run is taking orders.

EDIT: UPDATE ON THE NUM-LOCK


The "num lock" shown on the keyboard is not the OS-level "num lock"; it is a device-level key that switches the behavior of the block of keys "7890uiopjkl;nm,." - a bit like how some laptop keyboards work. The OS-level "num lock" is actually toggled by "fn"+"t". Because the "num lock" button is device-level, it cannot be mapped/disabled by the OS. I fixed it by removing the key-cap!