Cheap Start FPV

So if you have been on the Internet at all lately, you have probably noticed someone saying something about drones, quadcopters, or some other form of RC ’copter. They are in the news when they cause mayhem, and I am sure someone you know has at least one. Certain fields of technology take a lurch forward occasionally. Sometimes a technology drops in price enough to be available to nearly anyone interested (a quad for under $20, anyone?). In rare and beautiful circumstances, both happen at the same time.

Some fascinating things have happened in RC and FPV (first-person view) in the last couple of years. Antennas, for example, were for years little more than a sprig of wire sticking out of radio gear. Enter the circularly polarized or ‘clover leaf’ antenna. This $10 part marks a fascinating, simple step forward in technology. (Watch Flitetest play with some.) And when did cameras get so small, like the Ultra Micro Camera by Spektrum?

While things have become very affordable and attainable, it can be a challenge to piece it all together into a functioning setup. It took quite a bit of research before I was comfortable with batteries alone. So I thought I would share an affordable entry-level setup that I recommend to friends. I am not a professional, or even that good; I just like to have fun flying, and that was something priced out of my reach just a couple of years ago.

The Basics

I recommend a really small quad (nano-quad if you will) even though it is hard to fly outdoors in wind. Flying indoors made me careful and let me fly without thinking about the weather outside. It is easy to take someplace and fly without a bunch of setup. Nano-quads like the Blade Nano QX can crash repeatedly and not break, and parts are cheap when you eventually do break something (frame, props CW/CCW, motors CW/CCW, electronics, even the canopy).

I also recommend starting out RTF (Ready To Fly) to avoid the cost of extra control electronics. Quality nano-quads will allow you to later bind to a higher quality transmitter for finer control that will extend the fun and further hone your flying skills when you are ready.

Goggles for FPV are all over the place when it comes to quality and features. If you are going to go cheap, go all the way cheap, because the midrange is a mangled mess of paying too much for what you get. There is a sweet spot in the high-mid range, before the jump to crazy expensive, that you will learn to appreciate as you go.


The first option I can attest to being great is the ready-to-fly Blade Nano QX FPV all-in-one package. You get everything you need without having to learn about batteries and transmitters. The Teleporter headset that comes with it is an entry-level video receiver that will build your appreciation for better gear, should you decide to upgrade in the future. While this will get you flying in a crash-resilient way fast, it comes with some limitations.

What I like about this option is that it is real 5.8GHz video without the huge lag of WiFi video or the slow HD of the low end. It can also teach you the basics of acro flight. You can turn off SAFE on this tiny little quad and learn to fly like the pros… although a little slower.

Be aware that you don’t have many options for flexibility and gradual upgrades. The goggles can be used with other video transmitters, but you won’t like them much with better, wider field-of-view cameras down the road. You can get a better transmitter to work with the little quad, but the quad itself has a limited scope and lifetime.

The upgrade path + DIY

So if you have confidence that you will want to upgrade over a period of time, there are a couple substitutions to the above setup that you can make. I would also recommend buying separate pieces if you like a little DIY. Much can be gained with some solder and a little tape down the road.

The first step is to learn to fly without FPV. To do that I would highly recommend the Nano QX ready-to-fly package without a camera (more on cameras later). The documentation is good. Search YouTube for some ‘how to fly’ exercises (links below) and learn with SAFE off. Don’t let learning to fly in acro mode hold you back from having fun; fly with SAFE on for fun right away.


The next evolution in your flying is FPV. To assemble a working FPV kit you need a camera, transmitter, receiver, and a screen to view it on. There are endless brands and options here, but if you start cheap you can learn as you go. I would recommend starting with goggles because the experience is frankly more addictive.

I have talked to people and tried a couple of things, and an easy screen/goggle combo to start with is the Quanum DIY Goggle Kit V2. It may look bulky, but it is very light. Its primary structure is a foam box that you can cut to fit your face like no other. Plus, you can use the monitor outside the headset. You just can’t beat the value of these things. It is also a great solution if you wear glasses, as the goggles can fit right over the top. The kit supplies everything but a battery, transmitter/receiver, and camera.

Batteries are a complex subject. The best thing to do is not over-think it: buy a battery with the right connector and a charger (this charger is a good start), then learn more at your leisure.

The reason I wouldn’t necessarily recommend the Nano QX FPV is that the camera is hardwired. You can find it without the goggles and extras, and it is the best option if you want to avoid soldering completely. A reusable camera that makes it possible to turn almost anything into FPV is the Ultra Micro Camera by Spektrum that I mentioned earlier. A cheaper option that I have not tried is the Blueskysea Super Mini. They look like they are the same thing, but the key is that they have a built-in transmitter on the 5.8GHz band with channels that are supported by many newer receivers.

To make the third party camera work you have to either piggyback on the power going to the Nano’s board, or connect another battery. Either way, know that you will get shorter flight times. Having a bunch of batteries handy isn’t too expensive. If I get a chance I will make a post about one of those options.

Then, to receive the signal before it can be shown on a screen, you need a video receiver. There are plenty of receivers out there, but a cheap place to start is the Boscam RC832. Some people will argue that there are much better receivers, but I have not found anything as cheap. The next step up in my mind would be the Quanum RC540R, if you are willing to wait for it to ship internationally.

An obvious con to buying everything separately is the assembly and debugging step. To start right, you should charge your batteries. There are great videos on YouTube to get you going. Plug all the electronics in before you put it all together. Also, mind the video channels: a channel number on one device may not correspond directly to the same number on another.

If you are still wondering whether you should try the hobby, check out a couple of videos to whet your appetite.

The Shopping List


Next Steps

So I have been tinkering with flying for a little while, but I am a programmer at heart. What interests me is the next level of custom firmware, programmable things, and IoT. I would love to find a Software Defined Radio-based transmitter and a flight control firmware project built in .NET. Know of any? Or have better ideas for a cheap entry setup? Let me know below…


Deployment Driven Micro-Modularity

In a previous post I mentioned that I am styling my current work in ‘micro-modularity’. You may be thinking I just made that term up. I did. The reason I use this expression is that it sums up a portion of my programming style.

Like an indie band, my work tends to be inspired by many different patterns, styles and methodologies. I don’t fully embrace a genre like DDD, TDD, UXDD, XP, Scrum, Hack, Portal, SOA, Microservices… Instead I use the ‘tool that fits the job’ (TTF if you want an acronym, or maybe TTFDD).

The point is that because of the increased accessibility of tooling and increase in Internet bandwidth, one of the most significant changes in software development in the last 10 years has been how deployment happens.

Software Permafrost

To remind you of, or instill an appreciation for, this change: in the late 90s I had the privilege of being involved in what we warmly referred to as the ‘Internet Install Kit’. (Which reminds me of an episode of The IT Crowd.)

This was software on a CD that was primarily used to configure your computer for the Internet and install browsers to save you from having to download over dial-up. The CD also housed some utilities that helped us support people with various problems.

The issue with this was that it was hard to keep up with new issues. Each time a new outbreak of issues happened, we could put a solution on the CD. Then we had to roll up the massive monolith of software onto a ‘Golden CD’, ship it for pressing, and wait for the postal service before a customer could see a resolution. And once the discs were out the door, if they contained any bugs, there was no fixing it. One bad release could be preserved for years in the plastic and chrome permafrost that is CD media.

With the publicity of browser security holes over the years, you can understand that such a model is simply not feasible. With a generally accepted shelf life of seven years, the fear of those security holes finding their way onto an adventurous user’s computer is hopefully well past. Although, I can tell you that some of the ‘gold’ CDs are still readable.

‘Going Gold’ was a pucker point that was celebrated as the completion of a long cycle of rigorous, upfront requirements gathering, development, and testing. The heavy stress preceding such an event led to many a heart attack, divorce, or other breakdown. Veterans of the ‘Going Gold’ release cycle can attest to the fortitude of their peers, perhaps while scoffing at how modern programmers have ‘gotten soft’.

Let Fossils Lie

In the software world, hardcopy discs are well on their way to being relegated to the role of cryptographic license key. For an example, look at how the Xbox One uses them. After reflecting on past deployments, we should all thank everyone ever involved in ridding homes of dial-up. Evergreen software is here to stay.

Over the years we have seen various cycles of central computing, various ‘thicknesses’ of clients, and varying numbers of layers. Most of these reflect the state of deployment methods. As bandwidth increased, architecture responded by taking advantage of what could be deployed and where. These were not necessarily new architectures; they were just in new places.

There are plenty of new buzzwords, don’t get me wrong. Trainers and tool makers need to keep up the appearance of change somehow or they stop making money. Also, it never hurts to expand your horizons and indulge the explanation of newly coined terms to extract the finer points that the wordsmith uses to differentiate them from like patterns.

Testing is Not a Fossil

Because deployments can be done more often, it is easy to forget that bugs are just as disruptive as in the past. This means that testing is as important as ever. Tools for automated testing are even more plentiful than the hosts that offer the service. Since the barrier to entry has been lowered in this area, we are all able to unburden ourselves of much of our manual testing.

If we allow these tools to be integrated into our daily workflow, we can have added confidence in our deployments. I wish I could say this alone has yielded better quality software. However, I can say that if you wish to produce better software, leaning heavily on testing can certainly help.

Automated and Continuous Deployment

Continuous Deployment (CD in this context, because I hate typing ‘continuous’) is a methodology that lets a software product live in various states. These live off-stage and outside the view of the user, deployed to various testing and development environments.  These various states stand in a line waiting to be proven worthy by quality control. They can then be plucked up at any moment and promoted to production.

Most of us can appreciate the server-oriented nature of most software these days. We can make almost any type of change, and the only ‘code roll’ that needs to be done is to the server. This deployment can happen without the user’s knowledge or consent, or even so much as a download on their part.

To take things a step further, the whole workflow can be automated. As documented previously I make a change to code and push the changes to the server. The Continuous Integration server responds to the change by building new artifacts. This kicks off an intelligent validation process with automated tests and warm-body signoff. This further results in the most worthy bits being allowed to fight for their users in the production arena.

The ‘Micro’ in My Modularity

For all of its usefulness, CD is still a linear process. Releasing features is still a juggling act. However, because of the advancement of deployment tools and the flexibility surrounding modern deployments, we can click deeper into the realm of modularity.

Domain Driven Design has proven the usefulness of drawing lines between business functions, even when they may cause minor duplication. Even though many applications do not embrace this idea, I seldom find myself writing software that does not integrate with one or more other systems to leverage functionality that would otherwise need to be reinvented and maintained. Because of this, maintaining mappings to what customers and items mean in various systems has become part of the common trappings of software. Treating these as modules seems natural.
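As a rough illustration of what treating one of those mappings as its own module might look like, here is a tiny hypothetical interface; the names are made up for this post and not pulled from any of my actual libraries.

    // Hypothetical sketch: a module whose only concern is translating
    // one system's notion of an item to another's.
    public interface IItemMap
    {
        // The ERP's item number for a given eCommerce product id.
        string ToErpItemNumber(string ecommerceProductId);

        // The eCommerce product id for a given ERP item number.
        string ToEcommerceProductId(string erpItemNumber);
    }

Keeping that translation behind one small seam means the ERP module and the eCommerce module never need to know about each other’s identifiers.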

Applying the ‘Tool That Fits’ mentality and taking a look at various ways to express Separation of Concerns, you may be reminded to modularize vertically as well as horizontally. When I am running through the various patterns and architectural spinoffs, I like to keep in mind that many of them favor a certain environment, resource set and customer. If you buy fully into one over another, you are also buying into its pitfalls in full. If you have enough experience, hopefully you are confident enough to think for yourself.

Baby’s First Burrito

So if you are following me and you have turned your software into a bag of peas and diced carrots you may well be looking for a way to serve it up hot. The answer I proposed to my own situation was all about how I deploy. What I have done to myself is organized my repositories by target system, subdomain, and yes, layer.

As I mentioned above, my CD process results in bits in production after several types of testing. If you pinch out to see more of the picture in your mind, imagine this process duplicated many times. On the most basic layers the CD process ends with packages on my NuGet server.

On top of this layer, there are packages built that cross the first layer’s boundaries. They might do things like push inventory information from my ERP to an instance of an eCommerce site using Azure Service Bus. Such a library would depend on my ERP Item library to get information, an Inventory Service Contract library to push the info into the cloud, or an eCommerce Product Client library to send it directly to that system.
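To make that concrete, here is a minimal sketch of what one of those cross-boundary libraries might look like. The class name, queue name and message shape are invented for illustration, and it assumes the WindowsAzure.ServiceBus package; my real libraries differ in the details.

    using Microsoft.ServiceBus.Messaging; // WindowsAzure.ServiceBus NuGet package

    // Hypothetical cross-boundary library: it leans on the ERP Item library for
    // the data and only concerns itself with pushing inventory onto Service Bus.
    public class InventoryPublisher
    {
        private readonly QueueClient _client;

        public InventoryPublisher(string serviceBusConnectionString)
        {
            _client = QueueClient.CreateFromConnectionString(serviceBusConnectionString, "inventory-updates");
        }

        public void Publish(string itemNumber, int quantityOnHand)
        {
            // Message properties keep the payload filterable by subscribers.
            var message = new BrokeredMessage();
            message.Properties["ItemNumber"] = itemNumber;
            message.Properties["QuantityOnHand"] = quantityOnHand;
            _client.Send(message);
        }
    }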

If you jump all the way to the top layers, there are OData API projects that gather up these packages and are deployed on servers. There are also Web Applications that do the same in combination with consuming the OData APIs. If you have read about UXDD or MVVM you may understand the benefit of expressing models optimized for the user experience, so those exist on a layer below the UI and progressively influence the underlying layers with the goal of performance.

No methodology is bulletproof, and times and resources seem to flex around me all the time. I always work to do the best with what I have and leverage the best of the best methodologies. My release mechanisms are an intricately choreographed heavy metal opera that is driven by what is possible with cutting edge deployment automation and good bandwidth. In the vein of keeping my mind open to new ideas, what do you like to do with your deployment or for managing modularity?


Release Management Using VSTS

If you have been tracking Azure at all lately you know that it is growing at breakneck pace. Microsoft has really turned up the volume on their enterprise cloud at all levels.  Just diving in can sometimes be a rough experience. Azure is a wide open field with many paths to solve your problem. I would like to show you the path I have found to get my release management up and running for my complex micro-modularity and microservices.

In the last post we created an ASP.NET Project for Visual Studio Team Services (VSTS) Release in a minimal way.  Now we will check ‘the check box of continuous integration’ and ‘push the button of continuous deployment’. Then we will add a second deployment environment to get your creative juices flowing.


So to be clear, there is some configuration you need to do upfront that I won’t cover here. Some of the setup is clearly part of the awkward dance of preview services. But if you want to get ahead of the curve and take advantage of these services, I can attest to it being well worth it. They only need to be set up once.

We will:

  • Create an Azure Build definition
  • Create an Azure Release definition
  • Deploy an Azure Resource Group containing a Web Site
  • Configure a second environment

For the purpose of this tutorial you need the following:

  • Project to Deploy with Azure Resource Templates (follow tutorial, or fork it)
  • A VSTS account with a project, the RELEASE hub enabled, and a version control system (VCS); it is easy enough to use the hosted Git, GitHub, or most any Git repo visible to the Internet.
  • A service end point configured in Team Services or TFS with rights to connect to your Azure subscription.
  • An agent that is capable of running the deployment tasks. I am just using the one on VSTS.
  • If you are using GitHub for version control, you can connect it using a personal access token

Once you have the above resources you should be able to walk through the rest of this Release Management tutorial for Visual Studio Team Services.

Azure Build Definition

In your project, click over to Build. You will see a nice little green plus on the left-hand side that will create a new build definition. Select the Visual Studio template and select the repo you are going to use.

new build def
New Build



new build def - repo
Connect Version Control System


build def - edit
Edit Build


If you are using GitHub you can specify it later when you edit the configuration. This template gives you a nice list of tasks that will handle our simple situation. It is easy to add and remove tasks; I would encourage you to do so when you finish this tutorial. The build config is very robust, able to trigger on-prem builds and even use crazy automation tools like Ant, Gradle, Grunt, Gulp, Maven… go nuts.

Version Control

The easiest thing to do is use the Git repo built into the project, but I am going to build the example code from GitHub. You can follow the link above to connect it to your VSTS account for use. The configuration looks like the following when using the built-in VCS.

build def - Repository

Web Deploy Package

To make the deployment package, MSBuild basically deploys to a zip file. To trigger this you need to add ‘MSBuild Arguments’ on the Visual Studio Build step. In that field type:

/p:DeployOnBuild=true;PublishProfile="Web Deploy Package"

After you make the change, click Save. A great feature of the system is that build definitions are version controlled. You are allowed to rename the build here and add a comment. If you comment on every save you create a rich history of changes, each of which can be scrutinized on the History tab.

Save Dialog

Continuous Integration (CI)

At this point verify you have checked ‘the check box of continuous integration’. Note that you can also schedule builds if you like ‘Nightly’ instead of ‘Continuous’.


check the check box of continuous integration
the check box of continuous integration



Now comes the fun part: click Queue Build and watch it do its thing. When it is done you will find that the logs are saved, and you can browse the artifacts created or download them as a zip.

Azure Release definition

Now click over to the Release tab. Release uses the same engine as Build, so you will see many similarities, and you can do many of the same tasks. The layout should be familiar, so click the green plus sign over on the left that says it is used to ‘Create a release definition’. Choose the Azure Website Deployment template and click OK. This starts you out with a single environment that has a couple of tasks.

Here the term ‘environment’ means no more than a single set of tasks, variables and sign-offs that can be strung together to make an automated release process.

You will note that you are encouraged to test often since the second task you are given is a test task. I personally run Unit Tests during build and Integration Tests after a release. If you wanted you could even have a little sanity test to run when you release to production.

First it is good to configure the Azure Web App Deployment task so you can save. Simply select your Azure Subscription endpoint, give your app a name, and select a location near you for now. Give the release config a name and click save.

Infrastructure Deployment

Right now you could click deploy and the thing would just go and use defaults and you would get a website in your Azure account.  However, you would not have any real control over how it would be billed and it would use whatever web.config you checked in. So in this step we will take control of all of this with a couple little JSON files.

Click Add Task. Under Deploy, click Add next to Azure Resource Group Deployment. I will admit that when I first saw this I thought it sounded grand and complex. However, a couple of clicks later I was elated with how simple it is.

When you click close you will notice the task is incompletely configured by default and at the end of your tasks.  Make it the first task by dragging the task to the start of the list. Then select your Azure Subscription service endpoint (I use a Service Principal for this). Then name the resource group. Later when you master variables you can name everything by convention using an environment specific variable. I always add ‘-resource’ to the resource group name for additional clarity.

Now click the ellipsis on the Template line.  It will tell you that you don’t have any build definitions linked.  Click ‘Link to a build definition’ and select the build definition you made earlier as the Source and click the Link button.

Now you have a tree of the artifacts created by that build when you ran it last.  The files you want will be under ‘drop’ and the folder name you gave your Resource Group project. Then under bin/Release/Templates select the WebSite.json file and click OK. Assign the Template Parameters value by clicking the ellipsis again, browsing to the same location and selecting the WebSite.parameters.json.

Now you are to the point where the magic starts and where things really started to click for me. Because you defined the website name and connection string as a parameter you can assign those via the Override Template Parameters field.  In this field set your values like this:

-hostingPlanName "myPlanName" -websiteName "myWebsiteName" -connectionString ",1433;Database=myDbName;User ID=myUsername;Password=MyPassword;Encrypt=True;"


One last thing to check before you release is that your Locations match between the Resource Group Deployment and the Web App Deployment. Then click Save.

Now the point of all this is to get you to continuous deployment. To enable this click the Triggers tab and check the check box, select your artifact source and select your environment. Then click save.  Now when you check in a change, it will build. When done building it will release.

Release Trigger

To give it a test without checking anything in, click the Release dropdown and select Create Release. You can choose the artifacts from the build you ran earlier, select the environment you just configured, and click Create. If the process succeeds you can verify by viewing the resource groups in the Azure Portal. The great thing about resource groups is that, to remove everything you just released to Azure, you simply delete the resource group. To deploy it again, make a commit or release it manually.

The great thing about using the resource template is that if you make changes, the environment will be updated to reflect them, while the progression of the environment stays under version control.

Configure a second environment

To understand why I think Release environments are cool, click the ellipsis on the Default Environment and choose ‘Configure variables’. You will notice there are a variety of them predefined. Let’s create one of our own.


At the bottom of the list click Add Variable. Name it ‘websiteName’ and give it a value you like. Click OK, and go back to your Azure Resource Group Deployment task. In the Resource Group field type:

$(websiteName)-resources

Then in your Override Template Parameters put $(websiteName) after -websiteName. Select the Azure Web App Deployment task and put the variable $(websiteName) in Web App Name field.

For a second, imagine you have a much more complex release environment configured. Now, click the ellipsis next to Default Environment again, click ‘Clone environment’, and call it something like QA.


Now configure variables on that environment and change the value next to websiteName, perhaps appending ‘-qa’ or something. Click OK and save the definition. I don’t know about you, but the first time I did something like this I giggled enough to make everyone in the cubes around me feel uneasy.

Where to go from here

From here you can add more variables, parameters and infrastructure. Add a database or other services and make a self-contained set of services so you can spin up tests of all sorts. There are many tutorials out there for expanding on your JSON templates, and great functionality built into Visual Studio (maybe Code too) to help you edit these configurations. I would be interested in where you personally take things from here.

In a future post I will dig a little deeper into how I have overcome the issues of managing many tiny libraries using private NuGet repositories and multiple Git repositories.

After looking at VSTS Release, how does this compare to other tools you have used?



ASP.NET Project for VSTS Release

So you have discovered the intense desire to manage your infrastructure as code and continuously deploy with your eyes closed. It is true: continuous integration and continuous deployment, once implemented, open a whole new set of opportunities. For me personally, the most attractive of these is micro-functionality. Microservices are all the rage, but on a more fundamental level, it is hard to manage complex modularity without CI and CD.

For this reason I am going to assemble a reference ASP.NET project that demonstrates a common scenario in which the developer and Visual Studio are removed from the deployment process and endless deployments are made possible through Azure Resource Group template parameters. In a second post I will walk through setting up the Azure side of things.

Connection Strings and Migrations with EF6

Entity Framework 7 seems to be shaping up as a key part of my development stack. However, there are a host of things it does not yet do, so I have stuck with Entity Framework 6 for most of my current development. One of the flaws I have found in EF6 is that it is difficult to get it to use a specific connection string for code migrations. This makes automated migrations with VSTS Release even more difficult. There are even Stack Overflow questions about it that venture down some dark paths to get it right.

The method I have chosen when using EF6 is to make sure my migrations are clean and to selectively enable automatic migrations. Then I assign a connection string per deployment as described here. So far this is the best and only way I have found that makes sense. If you have other ideas, let me know in the comments.
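Since the link above glosses over the mechanics, here is a minimal sketch of the idea, assuming the usual EF6 pieces. The context and connection string names (ShopContext, DefaultConnection) are made up for illustration; the point is that the named connection string in web.config is overridden at runtime by the Azure Web App connection string of the same name, which the release supplies per environment.

    using System.Data.Entity;
    using System.Data.Entity.Migrations;

    public class ShopContext : DbContext
    {
        // Bound to a named connection string. Locally it comes from web.config;
        // in Azure, the Web App connection string with the same name (set from
        // the release's template parameters) overrides it, so each deployment
        // migrates against its own database.
        public ShopContext() : base("name=DefaultConnection") { }

        public DbSet<Item> Items { get; set; }
    }

    public class Item
    {
        public int Id { get; set; }
        public string Number { get; set; }
    }

    internal sealed class Configuration : DbMigrationsConfiguration<ShopContext>
    {
        public Configuration()
        {
            // Off by default; only enable once the migrations are known to be clean.
            AutomaticMigrationsEnabled = false;
        }
    }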

Looking at Infrastructure as Code, it can seem like a daunting task, especially if you are like me and have had a brush with the sickening array of tools that are needed in some environments to get the job done. Tools like Jenkins, Chef, Puppet, Ansible… sure, they are OSS, so they deserve support. On Azure, all you need for IaC are a couple of JSON files: no extra software and no admin involvement to get the right packages installed or make sure the OS is configured correctly.

So if you want to skip to the end, grab the JSON and jump to the next post to see where VSTS uses them. I will try my best to give you a concise walk through from here on out so you can understand the things I like about this solution.

Visual Studio Solution

I will assume a level of comfort with Visual Studio, and save bits on the Internet by being terse here. From VS, choose to create a new project. From the New Project dialog select ASP.NET Web Application. For fun, select MVC in the New ASP.NET Project dialog and then check Web API. Leave ‘Host in the cloud’ unchecked because we will be taking a different path. Click OK to create the solution.

Next we need to configure the web project with a publish profile to create a deployment package when built on VSTS. Right-click the web application project and select ‘Publish’. On the Publish Web dialog select ‘Custom’. When prompted for a name I always use ‘Web Deploy Package’ so I can reuse the same build template over and over without a bunch of reconfiguring. On the next screen select a publish method of ‘Web Deploy Package’. For a package location you should choose somewhere that will be picked up by the Artifacts step(s) in your build task. This usually defaults to a mask like “**\bin\$(BuildConfiguration)\**”, so if you choose your “bin\Release” directory you can get going quickly. Then go ahead and click Publish to see what happens. When you check things in, it is best to leave the bin directory out of your commit, so this configuration will save you there too.

You will see that deploying a Web API or OData project is as easy as deploying a website. Using this method you can add any service offered by Azure, like Service Fabric, or set up VMs if you like mucking about at the OS level. I hope that after you take this first step you will try some other crazy things. Just remember to delete the resource group after you play with it. As a side note, deleting a resource group in Azure removes everything in it. So when you get down to deploying this, you can simply delete the resource group afterward and not worry about unknown things hanging around to nickel-and-dime you through the month.

After you have your solution with a website, right-click it and add a new project. In the Add New Project dialog select Azure Resource Group. In the Select Azure Template list select ‘Web app’ and click OK. There are other interesting options here, but for this demo I want to show that there is already a DB up and running that this app will connect to.

Infrastructure as Code (IaC)

Don’t be afraid. As I have written in the past, there is no need for name soup or acronym mayhem. Just a JSON file and a mouse.

At the time of writing, the Azure Resource Group template that I have installed uses API version ‘2015-08-01’. You can verify this by opening the WebSite.json in the Azure Resource Group project. The default templates can certainly get you going as is. However, there is no way to pass in a connection string or application configuration. By default the template also makes some junked-up website name that I dislike, so we will tweak that too.

In the WebSite.json we want to add the website name and connection string so we can assign them during release. First you want to add parameters for `websiteName` and `connectionstring` as shown in this Gist. You can simply delete the websiteName variable and replace all instances of `variables('webSiteName')` with `parameters('webSiteName')`. Then you need to add the section that inserts the values into the Web App environment in the `Microsoft.Web/sites` section, as seen in the Gist.
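For reference, the shape ends up roughly like the following. This is a trimmed-down sketch rather than the full template (the schema, hosting plan and most site properties are omitted), and the connection string name ‘DefaultConnection’ is only an example:

    "parameters": {
      "webSiteName": { "type": "string" },
      "connectionstring": { "type": "string" }
    },
    "resources": [
      {
        "apiVersion": "2015-08-01",
        "type": "Microsoft.Web/sites",
        "name": "[parameters('webSiteName')]",
        "location": "[resourceGroup().location]",
        "resources": [
          {
            "apiVersion": "2015-08-01",
            "type": "config",
            "name": "connectionstrings",
            "dependsOn": [ "[resourceId('Microsoft.Web/sites', parameters('webSiteName'))]" ],
            "properties": {
              "DefaultConnection": {
                "value": "[parameters('connectionstring')]",
                "type": "SQLAzure"
              }
            }
          }
        ]
      }
    ]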


I hope this gets you on your way, and perhaps lets you see the potential of parameterized Azure IaC. Next you should commit this to your project on VSTS. (Although you could just as easily use GitHub with VSTS Build and Release) In the next article I will walk you through configuring Azure to deploy the same code to different environments with their own connection strings.

git the code here


VSTS Release “no vms found in resource group”

I have spent a little time trying to figure out why my deployment using an all-but-stock Azure Website ARM template was failing with the error “No VMs Found in Resource Group”.  You see, I am trying to set up my Visual Studio Team Services (VSTS) Release environments. Currently the ‘Release’ extension for VSTS is in preview. As I have learned and come to accept, when things are in preview it usually means the documentation isn’t complete and the workflow may not entirely make sense.

Occasionally I have issues that there are no answers for via searching or asking on Stack Overflow. I will try to record some of those here.

One thing I recently discovered on the Azure Resource Group Deployment task is a funny little box called Output with an input field labeled ‘Resource Group’. When you hover over the information icon, it tells you it is simply there for you to provide a variable name for the resource group, for later use in another task. Being a forward thinker, I gave it a name since it didn’t cost me more than a second or two. Later maybe I could use it… or it would cause me issues. OK, you most likely figured out that this caused the error “No VMs Found in Resource Group”. I searched all over and eventually found an issue on GitHub for it. Also, now that I think of it, it doesn’t make sense for this to be output to a variable anyway, since it should be a variable from the start. So:

SOLVED: VSTS – Release – Azure Resource Group Deployment – “No VMs Found in Resource Group”

There seems to be a bug when you fill in the Output > ‘Resource Group’ field and you are not using any VMs. Simply clear this field. If you need the name, I would suggest placing it in an environment variable and using that as input from the start. Then it will also be available in other tasks.


Test & Release Control

“Our contractor just gave us a merge request for X, Y and Z. We need to release feature X and Z. Feature Y failed testing and it is holding up the rest from being released!”

Have you been in this situation before?  You are responsible for a release that has several features, and one of them did not pass UAT or other testing. The business wants to release the other features because fixing the one will take too long, or it has been aborted completely. You may be using a version control system (VCS). Even so, you may have found that trying to pluck the feature out for release made a big mess.

Leading to this issue, it is all too common to make a single branch in version control for a ‘sprint’ or a single project or SOW.  If one feature or bug fix fails, the whole thing is held up.  Even worse, if this ‘atomic change’ (as in bomb) was merged to master/trunk the whole train would be derailed. That is, nothing would be able to move forward until the culprit is fixed.  This is a great way to put yourself at the mercy of a contractor.

Changing the way things are developed can be challenging, however the rewards go beyond customer satisfaction. Getting a handle on this situation can bring personal satisfaction back to your job. Take a look at the diagram and I will introduce you to

The ‘Feature Branch’ Method

version control drawing

Chances are you may already be doing this. If you are, it may be good to reflect on the benefits.

The basic idea is to give each feature, task or ticket its own branch.  This allows for concise testing and uncluttered code audits.  This also gives you a measure of control over when to release the feature. It also adds the bonus of ignoring the feature if need be. You can also be free to experiment and prototype.

To keep the diagram simple, I left out some possible complexities. The easiest way to circumvent VCS complexity is to architect the code to isolate change. With such an architecture you may use multiple repositories. Multiple repositories move some complexity to deployment, but if you are using CI/CD the automation can be made to address it.

Using Your VCS

Most modern version control systems can handle this method quite well if they handle merging well. Why is merging so important? Why not use a VCS that is able to remove features from code? Because only a programmer can decide what code belongs to a feature. They need to specify this for anyone not acquainted with the code, and for whatever tools are being used. Once they have done that, most modern VCSs are able to address the rest of the issue.

All version control systems require coders to act in a certain way to get the most out of them. There are a few general guidelines that will help keep you away from the basic, common issues related to version control. The Feature Branch method outlined here works best if you operate under a couple of specific philosophies, or guidelines.


The first thing I will mention should be thought of as a rule and not a guideline: only a programmer should merge. Even if GitHub says you can merge without conflict, the outcome isn’t always so clean. If the person doing the merge is not familiar with and responsible for the code, there will be issues. This means your IT or QA person, however skillful at coding, should not be clicking the button. Also, the person merging should test before they push the merge. This may seem menial and obvious, but you would be surprised how often it is skipped.

Also, you will notice the term “other activity” on the continuity line in the diagram. This is there to remind you to always act as though there have been other changes to the branch you are merging into. This will make your activity and results consistent so you can focus on the code.

The Mainline Model

The first guideline that will lend to your sanity is to keep a dedicated branch for continuity.  This is sometimes referred to as The Mainline Model. This makes it possible for all kinds of changes to be performed and pushed to the central repository without throwing stumbling blocks for the rest of the team. Managing continuity is a hallmark of enterprise systems built to last.

Gelatin Model

Next, use the Gelatin Model (sometimes referred to as the Tofu Scale). Think of your favorite wiggly dessert molded into a mound of glistening goodness. If you poke or shake it, the top jiggles more than the bottom. It could be said that the top is less stable. From here on out we will build on this base assumption: top to bottom = risky to stable.

To build on that, I will mention the idea of ‘merge down, copy up’. This is really useful when branching from branches which is common in branching models other than Mainline.  This practice emerged from the difficulty of backporting changes using merging. I will save this discussion for another day, but if you are curious, I encourage you to investigate further.

In the diagram, notice the branching above the mainline. This is considered less stable as well as out of continuity. To make it stable and part of continuity, it must be merged into the mainline. Further, to stabilize for production, you cannot freeze the mainline because it is needed to stabilize other branches. Once ready for release, branch or tag.

This last step is important and you should already be doing it.  When it comes to bugs and hotfixes, if you kept track of the code you released, you can make a branch (if it isn’t already), make the fix, test and deploy. Then you can worry about getting it back into continuity. Fixing the issue further down the timeline may require different changes to fix the issue, and you don’t want new features holding up hotfixes.

Bonus Practices

There are two bonus practices that can make life more sane. First, consider git squash.  Squashing your changes compresses them all into one changeset so the person merging can see exactly what is going to change.
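With plain Git, a squashed merge of a finished feature looks something like this (the branch names here are made up):

    # On the continuity branch, bring the feature in as a single changeset.
    git checkout master
    git merge --squash feature/inventory-sync
    git commit -m "Add inventory sync (squashed from feature/inventory-sync)"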

The second is to take advantage of pull requests.  This adds a little process flow to your lifecycle, and a checkpoint to control the merges. This is also recorded in the history.  If you are not using Git, your VCS may have similar functionality by different terminology. I encourage you to leverage it.


Depending on how you are testing now, this may add testing points. The truth is that there are no shortcuts when it comes to testing. The baseline for testing when using a ‘Feature Branch’ approach is: test the branch and then test continuity after it is merged and before it is pushed/committed.  This also means that the merges to mainline are very controlled, tested and tagged.

“Breathe. It’s just software, we’re not saving babies here.” – SCOTT HANSELMAN

When you first start this process, it will seem like more testing and merging, more work. However, it will result in fewer fire drills, fewer nagging customers and more bandwidth to focus on what matters. You are probably not saving babies, so the quantity of testing is up to you.

Also, the more changes that are made to continuity after a branch is created, the more difficult it will be to merge back into continuity. That is, a branch or pull request can go “stale”. If your code does not lend itself to compartmentalized changes, letting a branch get too far out of date can cause headaches when it comes time to merge. The person managing the branch should get a feel for when to synchronize with continuity (that is, from the mainline). Some VCSs offer other options for dealing with this. However, it is best to keep changes small and iterative so they don’t hang around for long periods. This is better for reporting purposes and more agile, but accept that in reality there will be troublesome features. On a long-living branch, synchronizing from the mainline may need to be done on a regular basis.

The sad truth is that some VCSs simply do not handle merging well. You can get better tools for fixing merge conflicts and diffs, but ultimately, if you are on one of these systems, you may need to change. If you haven’t noticed, the industry standard is Git. It is free as in beer and speech, so the only excuse left is not having learned it yet.


If your features are each on their own branch, they can be included in releases when they are ready. This can let you release more often to defuse go-fever. This also exposes testing points to prevent sketchy code from holding up releases. These may also help keep contractors in check.

Feature branching makes me happy. It is the best way I have found to manage collaboration on code. It took some effort to get into the habit of branching and merging often. However, the benefits include a smoother flowing deployment cadence, cleaner code in production and better customer satisfaction.



Implementing IQueryable on a Business Logic Layer – Part 2

In Part 1 of this series I explained what sort of situation I am building towards.  In short I am building a highly modularized scaffolding system geared toward a microservice architecture.

Central to the functionality is a data object decorator and a gateway class that completely abstract away Entity Framework. It was trivial to implement Take() and Skip(), passing the values to an internal IQueryable from EF. Then I implemented ToList() to do a select into the decorator. As I mentioned in the previous post, that is great for doing LINQ against the gateway, but you can’t cast it to an IQueryable as is expected in various places like Web API.
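As a rough sketch of that gateway shape (the type names here are hypothetical stand-ins for my entity and decorator types, not the real code):

    using System.Collections.Generic;
    using System.Linq;

    public class ItemEntity { public int Id { get; set; } }

    public interface IMySpecificDecorator { int Id { get; } }

    public class ItemDecorator : IMySpecificDecorator
    {
        private readonly ItemEntity _entity;
        public ItemDecorator(ItemEntity entity) { _entity = entity; }
        public int Id => _entity.Id;
    }

    public class ItemGateway
    {
        // Starts as the EF query and accumulates LINQ calls as they are made.
        protected IQueryable<ItemEntity> CurrentQuery { get; set; }

        public ItemGateway(IQueryable<ItemEntity> source)
        {
            CurrentQuery = source;
        }

        public ItemGateway Take(int count)
        {
            CurrentQuery = CurrentQuery.Take(count);
            return this;
        }

        public ItemGateway Skip(int count)
        {
            CurrentQuery = CurrentQuery.Skip(count);
            return this;
        }

        // The execution point: run the query and project each entity into a decorator.
        public List<IMySpecificDecorator> ToList()
        {
            return CurrentQuery.AsEnumerable()
                .Select(e => (IMySpecificDecorator)new ItemDecorator(e))
                .ToList();
        }
    }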

Querying about the web, I found a project with great potential called re-linq. The project claims to be used by Entity Framework 7 and NHibernate, after all. However, as I read through the introduction page I noted the phrase “Specification framework” and realized I might be reaching too deep by doing this first. So I kept looking.

Moving up the stack I found an interesting repository containing what was called Query Interceptor. The goal of the QueryInterceptor is to intercept the query just before Execute().  This is an interesting concept.  Even so, I decided I would keep looking for something simpler.  I may need to optimize later, so I am keeping these projects bookmarked.

Several times I came across a project called Linq To Anything. I will be honest and say that the name prompted me to overlook it on several occasions. When I was not finding what I knew must exist, I took a closer look.  Linq To Anything is built on QueryInterceptor and System.Linq.Dynamic. This prompted me to try an implementation and to take a look at the code.  It seems deceptively easy to use, but in my experiments so far it seems to fit my needs.

Looking at unit tests is my favorite way to see how a project is meant to be used.  What I found was I could simply implement the properties needed for IQueryable and use LinqToAnything.QueryProvider<T> to expose a protected method I called DataAccessMethod(). Inside this method I specified how to pass Where, OrderBy, Take and Skip.

It ended up looking something like this:

#region IQueryable implementation
public System.Linq.Expressions.Expression Expression => System.Linq.Expressions.Expression.Constant(this);
public Type ElementType => typeof(IMySpecificDecorator);
public IQueryProvider Provider => new LinqToAnything.QueryProvider<IMySpecificDecorator>(this.ApplyQueryInfo, qi => this.ApplyQueryInfo(qi).Count());

protected IEnumerable<IMySpecificDecorator> ApplyQueryInfo(LinqToAnything.QueryInfo queryInfo)
{
    // Hand each Where clause to the gateway's existing Where().
    foreach (var clause in queryInfo.Clauses.OfType<LinqToAnything.Where>())
        this.Where((System.Linq.Expressions.Expression<Func<IMySpecificDecorator, bool>>)clause.Expression);

    // OrderBy is not implemented on the gateway, so append it to the current
    // query (string-based ordering via System.Linq.Dynamic).
    if (queryInfo.OrderBy != null)
    {
        var orderBy = queryInfo.OrderBy.Name;
        if (queryInfo.OrderBy.Direction == LinqToAnything.OrderBy.OrderByDirection.Desc)
            orderBy += " descending";
        this.CurrentQuery = this.CurrentQuery.OrderBy(orderBy);
    }

    // Paging goes through the gateway's own Take() and Skip().
    if (queryInfo.Take != null && queryInfo.Take.Value > 0) this.Take(queryInfo.Take.Value);
    if (queryInfo.Skip > 0) this.Skip(queryInfo.Skip);

    return this.ToList();
}
#endregion
The key to what is here is the this.CurrentQuery property. It keeps a reference to the IQueryable as its various LINQ methods are called. Notice I have not implemented OrderBy(), so I simply add it to the current query and apply it before Take() and Skip().

I have to admit, I was headed for a complex implementation before this.  This will work well for my first implementation.  At some point I will re-evaluate it for possible refactoring.  When I do you can be sure I will post about it here.

What do you think of my solution? What has been your experience with IQueryable?
