Deployment Driven Micro-Modularity

In a previous post I mentioned that I am styling my current work as ‘micro-modular’. You may be thinking I just made that term up. I did. I use the expression because it sums up a portion of my programming style.

Like an indie band, my work tends to be inspired by many different patterns, styles, and methodologies. I don’t fully embrace a genre like DDD, TDD, UXDD, XP, Scrum, Hack, Portal, SOA, or Microservices. Instead I use the ‘tool that fits the job’ (TTF if you want an acronym, or maybe TTFDD).

The point is that, thanks to the increased accessibility of tooling and the growth of Internet bandwidth, one of the most significant changes in software development over the last 10 years has been how deployment happens.

Software Permafrost

To remind you of, or perhaps instill, an appreciation for this change: in the late 90s I had the privilege of being involved in what we warmly referred to as the ‘Internet Install Kit’. (Which reminds me of an episode of The IT Crowd.)

This was software on a CD that was primarily used to configure your computer for the Internet and install browsers to save you from having to download over dial-up. The CD also housed some utilities that helped us support people with various problems.

The issue with this was that it was hard to keep up with new issues. Each time a new outbreak of problems occurred, we could put a solution on the CD. Then we had to roll up the massive monolith of software onto a ‘Golden CD’, ship it for pressing, and wait for the postal service before a customer could see a resolution. And once the disks were out the door, if they contained any bugs, there was no fixing them. One bad release could be preserved for years in the plastic and chrome permafrost that is CD media.

With the publicity of browser security holes over the years, you can understand that such a model is simply not feasible. With a generally accepted shelf life of seven years, the danger of those security holes finding their way onto an adventurous user’s computer has hopefully long passed. Although, I can tell you that some of the ‘gold’ CDs are still readable.

‘Going Gold’ was a pucker point that was celebrated as the completion of a long cycle of rigorous, upfront requirements gathering, development, and testing. The heavy stress preceding such an event led to many a heart attack, divorce, or other breakdown. Veterans of the ‘Going Gold’ release cycle can attest to the fortitude of their peers, perhaps while scoffing at how modern programmers have ‘gotten soft’.

Let Fossils Lie

In the software world, hardcopy disks are well on their way to being relegated to the role of cryptographic license key. For an example, look at how the Xbox One uses them. After reflecting on past deployments, we should all thank everyone ever involved in ridding homes of dial-up. Evergreen software is here to stay.

Over the years we have seen various cycles of central computing, various ‘thicknesses’ of clients, and varying numbers of layers. Most of these reflect the state of deployment methods at the time. As bandwidth increased, architecture responded by taking advantage of what could be deployed and where. These were not necessarily new architectures; they were just in new places.

There are plenty of new buzzwords, don’t get me wrong. Trainers and tool makers need to keep up the appearance of change somehow, or they stop making money. Also, it never hurts to expand your horizons and indulge the explanation of a newly coined term to extract the finer points the wordsmith uses to differentiate it from similar patterns.

Testing is Not a Fossil

Because deployments can be done more often, it is easy to forget that bugs are just as disruptive as they ever were. This means that testing is as important as ever. Tools for automated testing are plentiful, as are the hosts that offer them as a service. Since the barrier to entry has been lowered in this area, we are all able to unburden ourselves of much of our manual testing.

If we integrate these tools into our daily workflow, we can have added confidence in our deployments. I wish I could say this alone has yielded better quality software. What I can say is that if you wish to produce better software, leaning heavily on testing can certainly help.
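As a minimal illustration of the kind of test that can run automatically on every push, here is a sketch. The mapping function and its rule are hypothetical, not from any real system:

```python
# A hypothetical mapping function and its automated test.
# In a CI workflow, tests like this run on every push, before any deployment.

def map_sku(erp_item_code: str) -> str:
    """Map an ERP item code to an eCommerce SKU (illustrative rule only)."""
    return erp_item_code.strip().upper().replace(" ", "-")

def test_map_sku():
    # The test encodes the expected behavior so a regression fails the build.
    assert map_sku("  ab 123 ") == "AB-123"

if __name__ == "__main__":
    test_map_sku()
    print("all tests passed")
```

The point is not the function itself but that the check is cheap, repeatable, and runs without a human in the loop.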

Automated and Continuous Deployment

Continuous Deployment (CD in this context, because I hate typing ‘continuous’) is a methodology that lets a software product exist in various states. These live off-stage, outside the view of the user, deployed to various testing and development environments. These states stand in line waiting to be proven worthy by quality control, from which they can be plucked at any moment and promoted to production.

Most of us can appreciate the server-oriented nature of most software these days. We can make almost any type of change, and the only ‘code roll’ that needs to happen is to the server. This deployment can be done without the user’s knowledge or consent, or even so much as a download on their part.

To take things a step further, the whole workflow can be automated. As documented previously, I make a change to code and push it to the server. The Continuous Integration server responds to the change by building new artifacts. This kicks off a validation process with automated tests and warm-body signoff, which results in the most worthy bits being allowed to fight for their users in the production arena.
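The stage-by-stage promotion described above can be sketched as a tiny state machine. The stage names here are illustrative and not tied to any particular CI server:

```python
# Sketch of a promotion pipeline: a release moves one stage at a time,
# and only a release that clears every stage reaches production.

STAGES = ["commit", "build", "automated-tests", "signoff", "production"]

def promote(release: dict) -> dict:
    """Advance a release to the next stage; production is terminal."""
    i = STAGES.index(release["stage"])
    if i + 1 >= len(STAGES):
        return release  # already in production, nothing to do
    return {**release, "stage": STAGES[i + 1]}

release = {"version": "1.4.2", "stage": "commit"}
while release["stage"] != "production":
    release = promote(release)
    print(f"{release['version']} -> {release['stage']}")
```

In a real pipeline each `promote` call would be gated by a build result, a test run, or a human signoff rather than happening unconditionally.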

The ‘Micro’ in My Modularity

For all of its usefulness, CD is still a linear process. Releasing features is still a juggling act. However, because of the advancement of deployment tools and the flexibility surrounding modern deployments, we can push deeper into the realm of modularity.

Domain-Driven Design has proven the usefulness of drawing lines between business functions, even when they cause minor duplication. Even though many applications do not embrace this idea, I seldom find myself writing software that does not integrate with one or more other systems to leverage functionality that would otherwise need to be reinvented and maintained. Because of this, maintaining mappings between what ‘customers’ and ‘items’ mean in various systems has become part of the common trappings of software. Treating these as modules seems natural.

Applying the ‘Tool That Fits’ mentality and looking at the various ways to express Separation of Concerns, you may be reminded to modularize vertically as well as horizontally. When I am running through the various patterns and architectural spinoffs, I keep in mind that many of them favor a certain environment, resource set, and customer. If you buy fully into one over another, you are also buying into its pitfalls in full. If you have enough experience, hopefully you are confident enough to think for yourself.

Baby’s First Burrito

So if you are following me, and you have turned your software into a bag of peas and diced carrots, you may well be looking for a way to serve it up hot. The answer I proposed to my own situation was all about how I deploy. What I have done is organize my repositories by target system, subdomain, and, yes, layer.

As I mentioned above, my CD process results in bits in production after several types of testing. If you pinch out to see more of the picture in your mind, imagine this process duplicated many times over. For the most basic layers, the CD process ends with packages on my NuGet server.

On top of this layer, there are packages built that cross the first layer’s boundaries. They might do things like push inventory information from my ERP to an instance of an eCommerce site using Azure Service Bus. Such a library would depend on my ERP Item library to get the information, an Inventory Service Contract library to push it into the cloud, or an eCommerce Product Client library to send it directly to that system.
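The shape of such a cross-boundary module can be sketched as follows. The `ErpItemClient` and `ServiceBusPublisher` classes and the message shape are hypothetical stand-ins for the real ERP Item and Inventory Service Contract libraries:

```python
# Sketch of a glue module that crosses layer boundaries: it depends on a
# lower-layer ERP client for data and a messaging client for delivery.
# All names and message shapes here are illustrative assumptions.

class ErpItemClient:
    """Stand-in for an ERP Item library: returns on-hand quantities."""
    def on_hand(self, sku: str) -> int:
        return {"AB-123": 7, "CD-456": 0}.get(sku, 0)

class ServiceBusPublisher:
    """Stand-in for an Inventory Service Contract library over a message bus."""
    def __init__(self):
        self.sent = []
    def publish(self, message: dict) -> None:
        self.sent.append(message)

def push_inventory(skus, erp: ErpItemClient, bus: ServiceBusPublisher) -> int:
    """The glue: read quantities from the ERP, publish them to the bus."""
    for sku in skus:
        bus.publish({"sku": sku, "on_hand": erp.on_hand(sku)})
    return len(bus.sent)

bus = ServiceBusPublisher()
count = push_inventory(["AB-123", "CD-456"], ErpItemClient(), bus)
print(count)  # 2 messages published
```

Because the glue module only touches the two client libraries, each side can be versioned, tested, and deployed as its own package.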

If you jump all the way to the top layers, there are OData API projects that gather up these packages and are deployed to servers. There are also Web Applications that do the same, in combination with consuming the OData APIs. If you have read about UXDD or MVVM, you may understand the benefit of expressing models optimized for the user experience; those exist on a layer below the UI and progressively influence the underlying layers with the goal of performance.
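The idea of a UX-optimized model, as UXDD and MVVM describe it, can be sketched like this. The field names and formatting rules are illustrative assumptions, not from any real system:

```python
# Sketch of a UX-optimized view model: the UI layer receives only what the
# screen needs, already formatted, instead of the raw domain model.

from dataclasses import dataclass

@dataclass
class Product:
    """Lower-layer domain model, shaped for the business."""
    sku: str
    name: str
    price_cents: int
    on_hand: int

@dataclass
class ProductViewModel:
    """Model shaped for the screen, living one layer below the UI."""
    title: str
    display_price: str
    availability: str

def to_view_model(p: Product) -> ProductViewModel:
    # The mapping layer does the formatting so the UI stays dumb and fast.
    return ProductViewModel(
        title=f"{p.name} ({p.sku})",
        display_price=f"${p.price_cents / 100:.2f}",
        availability="In stock" if p.on_hand > 0 else "Out of stock",
    )

vm = to_view_model(Product("AB-123", "Widget", 1999, 7))
print(vm.display_price)  # $19.99
```

Keeping this mapping in its own layer is what lets it "progressively influence the underlying layers": when a screen needs data in a new shape, the pressure lands on the mapping first, not on the UI.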

No methodology is bulletproof, and times and resources seem to flex around me all the time. I always work to do the best with what I have and leverage the best of the best methodologies. My release mechanisms are an intricately choreographed heavy metal opera, driven by what is possible with cutting-edge deployment automation and good bandwidth. In the vein of keeping my mind open to new ideas: what do you like to do with your deployments, or for managing modularity?
