Implementing IQueryable on a Business Logic Layer – Part 1

You might be thinking this is a bad approach from the start, that this sort of functionality belongs in the DAL or at least in a Repository.  However, in this age of microservices and in the context of complex applications, business logic will exist in several layers.  Think of it in MVVM or N-Tier style, where complex validations and business logic run faster when closer to the DAL in a multi-tier environment. In this particular instance I am exposing this sort of module via OData and as an Actor via Azure Service Fabric.

Getting the internals right is particularly important for me because I am using extensive code generation via T4. If I get it right, it quickly extends to my whole infrastructure of microservices.

Early on I thought I could implement just parts of IQueryable and get away with it. I tried implementing only Where(), Skip() and Take() without the host of classes needed for an actual IQueryable.  This worked great as long as my code was loosely coupled to the implementation and blindly executed only these operations.

The catch is that I couldn’t just cast a partial implementation to IQueryable for things like Web API to use. It would be great to implement just these three operations and have some sort of generic decorator that bridges the implementation to an IQueryable. Alas, there is no such bridge in native .NET, so we must help ourselves.

Poking around the web you will find an ancient Microsoft secret: Walkthrough: Creating an IQueryable LINQ Provider.  Many posts on this subject fail to address the fact that you may not be using any sort of IQueryable under the hood. The MSDN walkthrough shows you how without addressing that case directly.

At a high level: during the Execute() phase you need to figure out what you can pass on to the underlying source, do so, and then execute the underlying query to return your list.  That list then becomes the subject of the remainder of the query.

The posts that follow will walk through my implementation thought process.  As a taste of where it lands, below is a minimal sketch of the shape such a provider takes.
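Fair warning: every name here (BllQueryable, BllQueryProvider, the fetchPage delegate) is hypothetical and stands in for whatever your business layer exposes, and a real provider would translate far more of the expression tree.  This is the pattern, not production code.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;

// Hypothetical names throughout: fetchPage stands in for whatever
// paged fetch operation your business layer exposes.
public class BllQueryable<T> : IQueryable<T>
{
    public BllQueryable(Func<int, int, IEnumerable<T>> fetchPage)
    {
        Provider = new BllQueryProvider<T>(fetchPage);
        Expression = Expression.Constant(this);
    }

    internal BllQueryable(IQueryProvider provider, Expression expression)
    {
        Provider = provider;
        Expression = expression;
    }

    public Type ElementType => typeof(T);
    public Expression Expression { get; }
    public IQueryProvider Provider { get; }

    public IEnumerator<T> GetEnumerator() =>
        Provider.Execute<IEnumerable<T>>(Expression).GetEnumerator();

    IEnumerator IEnumerable.GetEnumerator() => GetEnumerator();
}

public class BllQueryProvider<T> : IQueryProvider
{
    private readonly Func<int, int, IEnumerable<T>> _fetchPage;

    public BllQueryProvider(Func<int, int, IEnumerable<T>> fetchPage) =>
        _fetchPage = fetchPage;

    public IQueryable<TElement> CreateQuery<TElement>(Expression expression) =>
        new BllQueryable<TElement>(this, expression);

    // Sketch shortcut: assumes the element type never changes mid-query.
    public IQueryable CreateQuery(Expression expression) => CreateQuery<T>(expression);

    public TResult Execute<TResult>(Expression expression) =>
        (TResult)Execute(expression);

    public object Execute(Expression expression)
    {
        int skip = 0, take = int.MaxValue;
        var node = expression;

        // Recognize the canonical ".Skip(n).Take(m)" shape with constant
        // arguments; in that shape Take is the outermost call.
        if (node is MethodCallExpression t && t.Method.Name == "Take" &&
            t.Arguments[1] is ConstantExpression tc)
        {
            take = (int)tc.Value;
            node = t.Arguments[0];
        }
        if (node is MethodCallExpression s && s.Method.Name == "Skip" &&
            s.Arguments[1] is ConstantExpression sc)
        {
            skip = (int)sc.Value;
            node = s.Arguments[0];
        }

        // If we bottomed out at the root, the whole query was translatable.
        if (node is ConstantExpression)
            return _fetchPage(skip, take).ToList();

        // Otherwise fall back: fetch everything and replay the original
        // expression over the in-memory list via LINQ to Objects.
        var all = _fetchPage(0, int.MaxValue).AsQueryable();
        var rewritten = new RootSwapVisitor(Expression.Constant(all)).Visit(expression);
        return all.Provider.Execute(rewritten);
    }
}

// Swaps the BllQueryable root constant for the in-memory queryable.
internal class RootSwapVisitor : ExpressionVisitor
{
    private readonly Expression _newRoot;
    public RootSwapVisitor(Expression newRoot) => _newRoot = newRoot;

    protected override Expression VisitConstant(ConstantExpression node) =>
        node.Value is IQueryable ? _newRoot : base.VisitConstant(node);
}

The payoff is in the fallback: the wrapper really is an IQueryable<T>, so things like Web API can consume it, and any query shape beyond the supported Skip/Take simply degrades to in-memory evaluation instead of failing.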

Security – We Are Doing It All Wrong

A while back I was at a conference where a question to a panel got me thinking.  If ever there was a good question during a panel, the best are the ones that stump the panel and make everyone think.  The participant basically acknowledged that all the testing and best practices aren’t working, because our software is still insecure.  He stated that we are doing security all wrong, asked when we are going to realize this and change, and then asked: ‘What can we change to get it right?’

The panel were respected authorities on programming and platforms, all well versed in handling hard questions without inserting foot in mouth.  Thus most of the immediate answers focused on testing and best practices.  The moderator made a good point about a recent hack of an insurance company that exposed personal information: we should trust them for insurance, not for identity.

Occasionally a comment is made that sticks with me permanently.  This is one of them.  After spending two days on identity authentication, it was easy for me to come to the conclusion that the fundamental flaw in security is trusting the wrong people with the wrong thing. Also: are we really taking it seriously enough when we provide identity services ourselves?

We should not be trusting every random site with our personal identity. Do we really want or need to trust social networks with identity?  This reminded me of one of my favorite quotes:

FYI man, alright. You could sit at home, and do like absolutely nothing, and your name goes through like 17 computers a day. 1984? Yeah right, man. That’s a typo. Orwell is here now. He’s livin’ large. We have no names, man. No names. We are nameless! – Cereal Killer – Hackers (1995)

If only this were the case with the Internet at large.  If we were nameless to the Internet, then no one would have the opportunity to mishandle our identity.  It seems to me that reasonable anonymity is possible now. Some credit card issuers have even toyed with temporary card numbers.

It also seems to me that anonymity would not even harm retailers who depend on statistics and tracking. Why does anyone need my real name?  What about guys named Bob Smith?  What value does that name really have? In this time of near-instant flow of information, aren’t there better ways of identifying people than names or any other personal information? You have to reason that tracking a person requires some sort of arbitrary identifier anyway.

I feel like there is an issue with attitude regarding this.  I used to listen to a technology buzz podcast.  What made me stop listening was a particular episode where all of the participants were ostracizing a listener because he didn’t want to give his real identity to a particular social network.  They made grandiose statements about how they are celebrities, their identities are all over the web, and they have no trouble with identity theft.

Today there are many sites that push hard for concrete personal information.  Many of them prompt for a cell phone number every time you log in until they get it.  They all want your date of birth, and then require security questions like ‘What is your mother’s maiden name?’ Individually these details mean little. Together, in the right hands, they could be quite handy.

The other attitude that prevails when it should not is about trust.  “Do you trust site X to keep your identity safe?” is the wrong question.  The real questions are: What do they do, and do I trust them with that? When they are compromised (yes, ‘WHEN‘, not ‘if’), how will it affect me?  Are they required to provide restitution, and of what?  If your social network leaks enough information to compromise your identity and credit, I doubt it has a plan for restitution.

We, as creators of software, need to start the change.  If your startup thinks it needs my real name, address, and cell number in addition to my email address, I can say ‘you are wrong’ and be correct in nearly all circumstances.  Identity should be left to specific trusted entities. We also need to stop treating our programmers like they are the only point of security failure.

Stop and think: What is the quickest route to PCI compliance? Don’t store or transmit credit card numbers, ever.

To illustrate: have you ever tent camped where you need to be bear-safe?  The easiest way to be safe is not to try to defend food and trash out in the open. You let the camp facility keep it from the bears by placing your trash in the proper receptacle and simply leaving food locked in your car. You don’t advertise bacon by pouring your bacon grease in the grass near your tent.  You don’t leave trash in the fire ring and expect to chase animals away when they come. Yet this tactic seems all too common in eCommerce.

By asking your users for all their personal information you are sleeping with a slab of bacon on your chest.

When was the last time you got an email from a company that said ‘We got hacked, but we don’t store any information about you so you are safe.  Just wanted to let you know.’  How refreshing that would be.

How far are we from just being a number? Not close enough, because we are doing security all wrong.

Custom Visual Studio Solution Template with Subfolders

Creating a project template in Visual Studio can save a lot of time if you are planning to create a bunch of similar projects.  I have chosen to build a series of projects that resemble a Microservice architecture, which I am calling Service Stacks.  Creating Visual Studio solution templates seems like one of those things that is so easy there isn’t much documentation or many tutorials to help you do it. Elizabeth Andrews has a nice post that helped me take the initial step beyond a single project template to creating solution templates that contain several projects.

One of the first things I wanted to do was organize the projects into solution folders.  I have seen other solution templates that divide projects into subfolders named things like Web, Backend, and Winforms.  The way you do this: in the <ProjectCollection> section, you simply wrap the <ProjectTemplateLink> tags in a <SolutionFolder> tag, like this:

<TemplateContent>
    <ProjectCollection>
        <SolutionFolder Name="My Folder">
            <ProjectTemplateLink ProjectName="$safesolutionname$.Domain">
                Domain\Domain.vstemplate
            </ProjectTemplateLink>
        </SolutionFolder>
    </ProjectCollection>
</TemplateContent>

This results in the solution containing a folder called “My Folder” that contains the project “Domain”.  Give it a try and let me know if it works for you.

Chain of Application Responsibility: a way of keeping sane

Imagine a legacy application that processes orders.  The application grew from a small LOB utility; if you asked the original developer, he would be surprised that it is still in use.  The application is made up of nearly 50,000 lines of code with little organization, and no standard has been defined for interactions with other systems.  It was a straightforward application at inception, after all. But this means there is no defined place to start a task.  A programmer is reduced to keyword searches, hoping to find some fringe piece of functionality that happens to contain descriptive naming, or comments that match the vocabulary one might ascribe to the item that needs fixing.

Imagine you had to find and fix a tax calculation.  What if you were told that someone thought there may be three or more places where this is done, and as you talk to people it quickly becomes apparent that no one really knows where it is happening.  There is some code that sends an email confirmation to the customer that uses a calculation. There is also some code that puts the order in the database, as well as other code that actually charges the customer.  Each one may do its own calculation, and they all need to match, or the customer might lose confidence and shop somewhere else.

Addressing the situation requires starting at the most fundamental levels and enhancing up the stack from code to the system level.

The Code

In an architectural void, duplicate functionality is free to run rampant.  Analysis can shed light on which system is best suited to ensure an operation is carried out in a safe manner.  However, being unclear about where to expect certain types of operations in the first place stifles analysis.  This hampers problem resolution and increases technical debt.  Analysis should be repeated periodically to ensure clarity, performance and maintainability.

On the other hand, when code has a specific purpose, that is, it deals with the entirety of a function or a business subject, its functionality can be located in a single module or subset of modules.  Assigning responsibility to a module makes it clear which code belongs and which does not, significantly reducing the amount of code within that module.  Further, it certainly makes it easier to find code to enhance. Even a system loosely organized this way would be much easier to support.

The System

Rarely is a system grown in a void, without communicating with other systems and exchanging data and functionality.  When this is the case it is prudent to analyze each system to assign appropriate responsibility and reduce redundant functionality as much as makes sense.  When landing new functionality, it is also important to first scrutinize which, if any, existing systems should own that responsibility.

Benefit

Analysing responsibility at the module and system level not only reduces time investment, it magnifies transparency to the business when enhancement activity must take place.

Doing this takes planning, the tradeoff being that an organized system has more flexibility. Modularising by responsibility could provide a way of assembling chains of application functionality that are not brittle.  This architectural style could also scale horizontally if each of the modules could be distributed and operations were asynchronous.

Ultimately placing responsibility in clear meaningful buckets and segregating the code will help keep this programmer sane in the most complicated systems.  Is this how you do it now?  How do you ensure there is no duplication in your system?

T4 Templates in Visual Studio

There are not many places on the web to learn T4, so as I have been poking around T4 lately, here are a few notes I have gathered together.  One thing of interest is Razor Generator, which I have talked about before.  It can be used as a T4-like preprocessor template, although I feel it is less mature.

Output Multiple Files

Generating multiple files from a single template seems to be a fundamental need in the community.  I personally prefer multiple files because you can see inclusions and deletions quickly in the git log instead of having to take the extra step of a diff.  Some folks think it shouldn’t be done because it doesn’t work well.  Despite being such a simple thing, it seems everyone has a solution, or at least an opinion.  As always, I want to reuse the components closest to the core, so I start by reusing the EF file manager.

Entity Framework has a nifty file manager that you can use even if you are not using EF in the assembly you are generating into.  You start by adding `<#@ include file="EF.Utility.CS.ttinclude" #>` to the beginning of your .tt file.  Then create an instance via `var fileManager = EntityFrameworkTemplateFileManager.Create(this);`.  Every time you want to start a new file you simply call `fileManager.StartNewFile(newFileName);`.  At the end, tell it to process by calling `Process()` on the `fileManager` object.
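Put together, a template using the EF file manager looks roughly like the sketch below.  The entity names and output file names are hypothetical; the manager calls are the ones described above.

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ output extension=".cs" #>
<#@ include file="EF.Utility.CS.ttinclude" #>
<#
    // Create the manager, then start a new output file before each chunk of content.
    var fileManager = EntityFrameworkTemplateFileManager.Create(this);

    foreach (var name in new[] { "Customer", "Order" }) // hypothetical type names
    {
        fileManager.StartNewFile(name + ".generated.cs");
#>
// Generated file for <#= name #>
public partial class <#= name #> { }
<#
    }

    // Write all of the started files to disk.
    fileManager.Process();
#>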

This method works fine, but I had trouble with my files disappearing every other time I hit save.  That is, the first time I execute the script, it makes the file.  The second time I execute the script, the file is deleted.  The third time, it is generated again.  If anyone has a solution to that, leave me a comment.

To investigate further I decided to try the Tangible T4 template called TemplateFileManagerV2.1.  It is intended to operate similarly: include `TemplateFileManagerV2.1.ttinclude`, instantiate `var fileManager = TemplateFileManager.Create(this);`, and use it the same way.  The downside: it had the same problem.

Then I tried the DamienG (GitHub) solution, with the same results.  That version really only adds a `fileManager.EndBlock()` after each new file’s content.

The next step was to go completely low tech with Oleg’s SaveOutput() function.  The issue there is that it doesn’t add/remove the file to/from the solution.

Since T4 Toolbox conflicts with the Tangible T4 editor I was using at the time, the T4 Toolbox option was the last I tried.  It is also the most different.  The documentation pushes you to use its template class method.  This isn’t much of an issue, but it is a change.  The first step is to create a T4 class that extends `Template`, with your output encapsulated within a method called `TransformText()`.  Because of scope, you need to pass any values you need into the constructor of the class or otherwise assign them to the instance.  Once your class is created, to send output to a particular file you set the property `<instance>.Output.File` to your file name, then call `Render()` on your instance.  This method was predictable, but it took away my T4 editor, since Tangible is the only T4 editor for VS 2015.
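A sketch of that arrangement, with hypothetical names throughout, looks something like this:

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ include file="T4Toolbox.tt" #>
<#
    // One Render() per output file; Output.File routes where the content lands.
    foreach (var name in new[] { "Customer", "Order" }) // hypothetical type names
    {
        var template = new EntityTemplate(name);
        template.Output.File = name + ".generated.cs";
        template.Render();
    }
#>
<#+
// Values come in through the constructor because the class feature
// block cannot see variables declared in the template body.
public class EntityTemplate : Template
{
    private readonly string name;

    public EntityTemplate(string name)
    {
        this.name = name;
    }

    public override string TransformText()
    {
        WriteLine("// Generated class for " + name);
        return this.GenerationEnvironment.ToString();
    }
}
#>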

Since I want an independent solution, and I like my T4 editor at the moment (I will investigate others as they become available for VS 2015), I returned to Oleg’s post, downloaded the source for `MultiOutput.tt` and tried it.  To my delight it worked by simply calling `SaveOutput(<filename>)` at the end of the file content, as described in the blog post mentioned above.  But did it have the issue where the file disappeared every other time I hit save? Not at all, even after extensive testing.  Thus this is the solution I am using currently, and it seems to be rock solid.
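For reference, usage amounts to writing a file’s content and then calling SaveOutput with the name you want (the file name here is hypothetical):

<#@ template debug="false" hostspecific="true" language="C#" #>
<#@ include file="MultiOutput.tt" #>
// Content for the first generated file goes here.
public partial class FirstClass { }
<# SaveOutput("FirstClass.generated.cs"); #>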

I like the template method used in the T4 Toolbox.  It seems a little cumbersome at first, but it brings the added benefit of template reuse, and it decouples the template from the manner in which you acquire the objects you feed into it.  One could also create a custom class to pass into the template, further insulating the output from how you acquire the source data.  At some point in the future I may do this and start putting my templates up on GitHub to share.  If you are interested, let me know.

Generation Based on Existing Classes

Often you may want to generate factories, decorators, bridges, facades or just interfaces for existing classes.  One proven method is to use System.Reflection to iterate over the types.  This requires a compiled assembly; you can use the one in the current project or an external one.  Once you have the path to the assembly, you can call `Assembly.LoadFile(<dll.path>)` to load it and then call `GetExportedTypes()` to iterate over the public types.  This looks something like this:

var assembly = Assembly.LoadFile(location);
foreach (var type in assembly.GetExportedTypes()) // public types only
{
    // ... do stuff here with the type
}

Get The Current Namespace

Something commonly done is outputting namespace information into a generated class.  To get the current namespace:

string currentNamespace = System.Runtime.Remoting.Messaging.CallContext.LogicalGetData("NamespaceHint").ToString();

More

As I find more I will share them here.  If you have any you would like to share, leave them in the comments below.

Testing WordPress on Azure

It is hard to keep up on all the different ways to get an application out to a public-facing host.  You hear names like Chef, Vagrant, Puppet, Ansible, Docker, Cloudstuff… At the end of the day you just have to pick the one that makes sense for you.  Hopefully you can do that without going all the way down the road of fully implementing your production environment before you can decide.  Here I will show you one way it can be done without too much fuss and without a 3rd-party tool family so big you can’t remember all their names.

Some of the old-timers may remember “poor man’s” IT tools.  My flavor I like to call ‘Ghetto Hacks’, or when they are elegant, a ‘Dixie Mix’.  This solution isn’t exactly the most elegant, but it is possible to run a production environment from it, as it does not introduce any non-production-worthy dependencies.  It basically does its thing and can be thrown out.  In fact, I run a bunch of domains that I have set up this exact way.

Some people like to adopt a bunch of cute tools so they can talk ‘name soup’ at the water cooler.  My first goal was to sidestep this for a more unified front.  I also like to keep environments pure.  What I mean is I don’t like to use a Ruby script to drive a Python script that installs a PHP script.  This does not mean I won’t use shell script or OS built-ins, but I like to minimize the external installs and dependencies and use native tools.

Breathe

So now let’s take a deep breath before we take this shallow dive.  I will introduce the stack and then show you how easy it is to create a WordPress Website and use WP-CLI to configure it from CMD, even if you use a Mac.

The Stack

So let’s start with the host. As the title states, we are using Azure, which is a Windows-ish environment.  I say this because many users unfamiliar with the OS over the last 10 years may be surprised how quickly they can feel at home.  You may be familiar with ipconfig on Windows sharing use cases with ifconfig on unix.  What you may not know is that there are many other things that can make you feel at home; for example, in the CMD shell on Azure you can type ls -la and get a directory listing. As an interesting side note, there is a shell for Azure written in Node.js called KuduExec, if that makes you feel at home.  We obviously won’t be using that here.

Now is where you would normally get into ‘Cheppet’ or ‘Vagrible’. I have instead chosen to use ‘Websites’, a name I can remember.  No OS management necessary.  You may be asking, based on the name, what the difference is between Websites and your $1.35/month shared hosting.  Without editorializing more than I already have, think “all the power and none of the ceremony” of a fully virtualized environment, a non-“Ceremonialized” environment.

I have chosen WordPress because it has a low barrier to entry for users, and I think it is somewhat of a common tool.  I also chose it because it is PHP and isn’t the first thing you would think to run on Azure.  I must point out that programming language discussions sometimes degrade into unproductive arguments, where all sides expose their ignorance by arguing against negatives that existed 20 years ago (often before most participants were born or could talk).  “ET was the worst game ever.” So PHP on Azure is somewhat of a ‘PB and ketchup sandwich’ in the world of cloud.  To set you at ease: Windows and Azure have run PHP well for years, so I hope you like the simplicity of the combination.

STEPS: The Instance

The obvious first step is to get a single instance up and running.  Preferably this would be a free operation so we can be sure of the option and the path.  Gladly, Azure gives us a free tier to make it happen, and scale when we are ready.

  1. In this case I will walk you through using the portal.  At a later date I will do this same tutorial via shell (cmd or PowerShell).

  2. From the portal home you will select ‘New’ > ‘Web + Mobile’.
  3. At the bottom select ‘Azure Marketplace’.
  4. On the right side you will see a search box where it says ‘Search Everything’.  Type “wordpress” and hit enter.
  5. There will be two items at the top for WordPress; we want the one that is “WordPress” only, with a Publisher of “WordPress” and a Category of “Web Apps” (not Scalable WordPress).
  6. When you click on that you will get a description. Click the ‘Create’ button at the bottom.
  7. The next frame will show what you want to call the resource group and some configurations.  For the sake of this experiment, start with ‘Web App - Configure required settings’.
    • URL: you need to pick something unique
    • Pricing Tier: to select the free tier:
      • click on this option and in the next frame click ‘View all’.
      • At the bottom click ‘F1 Free’ and ‘Select’.
      • This will be good enough for testing.
    • Click ‘OK’ at the bottom of that frame.
  8. Give your database a descriptive name by clicking ‘Database’ and changing it to something like “MyWpDb” at the top.
    • To proceed you need to click ‘Legal Terms’ and accept the terms if you agree.
  9. Click ‘OK’ at the bottom of the Database frame.  This will return you to the ‘WordPress’ frame.
  10. Click Create.

You will see ‘1’ next to notifications on the left-hand menu.  Clicking it will show that Azure is creating your instance.

Kudu Services

When you create an Azure Website, a second site is created in the background at http://<sitename>.scm.azurewebsites.net, which exposes the Kudu Console (Kudu is also the deployment engine for Azure Websites).  This is where you will gain access to PowerShell and CMD; more on that later. (But if you are anxious, click ‘Debug console’ > ‘CMD’, type ‘ls -la’, then ‘cd’ around and notice how things above the shell window change.)

If you click ‘Home’ and then the tile that was created there, you will see a panel that describes the resources you have created: a Web app, an Application Insights profile and a MySQL Database.  You also see various other information about the site, including ‘estimated spend’, which will help you keep on top of the ‘for pay’ sites should you decide to upgrade for any additional features in the future.

STEPS: WP-CLI Install

Since we are going to use WP-CLI to configure the site, you should install it before hitting the site.  Click on your web app, and right at the top click “All Settings”.  Scroll down to “Extensions” and click it.  Then click the add button at the top.  You will be presented with a list, from which you should choose WordPress CLI by Cory Fowler.

Now, to show you it works, visit the Diagnostic Console via http://<sitename>.scm.azurewebsites.net.  Click ‘Debug console’, and for the purposes of this post, click ‘CMD’.  To make sure everything is working, type 'wp cli version'.  This should display the version of WP-CLI you are using.

To make things easy, I like to make a wp-cli.yml file in the site root.  This allows me to do operations directly on the root site without specifying it every time.  So create a wp-cli.yml file with the following contents and upload it to your d:\home\site\wwwroot:

url: <sitename>.azurewebsites.net

core config:
  dbname: <dbname>
  dbuser: <dbuser>
  dbpass: <dbpass>
  extra-php: |
    define( 'WP_DEBUG', true );
    define( 'WP_POST_REVISIONS', 50 );

Obviously, replace the content in the angle brackets with your own.  This information can be obtained from the portal (All settings > Application settings > Connection strings [you may need to click “Show connection strings”]).  Note that the WP_DEBUG constant should only be used while testing.

STEPS: CMD Configuration

Now, to set up the first site, it takes just one command (replacing the %name% placeholders with your values):

wp --path=%install_dir% core multisite-install --subdomains --title=%siteTitle% --admin_user=%admin% --admin_password=%adminpw% --admin_email=%adminaddr%

Breaking down this command a little: we know that %install_dir% is going to be D:\home\site\wwwroot.  We are doing an install for WordPress Multisite, so we can point several domains at the same piece of code, which makes updates and maintenance quite a bit easier.  For the rest of the options, the documentation is quite good.

WP-CLI makes installing extensions easy too.  For example, if you are planning on importing your previous site, I would recommend generating thumbnails separately from the import process with the regenerate-thumbnails extension, which is installed like this:

wp plugin install regenerate-thumbnails --activate

Then say you want to install and activate a great Bootstrap based theme named after a different part of the shoe:

wp theme install toebox --activate

Perhaps you want to install the WP Theme Unit Test data to test your own theme.  You could do the following:

php -r "readfile('https://raw.githubusercontent.com/manovotny/wptest/master/wptest.xml');" > wptest.xml
wp import wptest.xml --authors=create --skip="image_resize"
wp media regenerate --yes

Now, visit your site and see what you have.  Hit the /wp-admin URL, log in with the username and password you specified in the install command above, and see that everything is there.

Summary

You may think that this tutorial ended abruptly. Hopefully you only reach this summary if you were not following along on your own free account.  I hope you were totally distracted from this post when you started playing around with the Azure Debug console and started reading the WP-CLI documentation.  But in the event that you find yourself back here to finish reading this post, thank you.

As with any platform, Azure has its quirks, ones that you probably aren’t used to. It is developing at a break-neck pace, and they are highly attuned to their users.  Managing websites without the ceremony of VMs is achievable now, and I hope you see this.  Personally, I have moved away from these “Ceremonialized” environments because I don’t have the time to invest in DevOps, or in finding and paying someone else to do it for me.  I build and tear down whole sites daily for testing using little more than Composer, WP-CLI and some batch files.  I hope you can remember those names.  I might not be one of the cool kids speaking the slur of name soup, but my lean-running, massively scalable site makes my customers money and keeps their customers happy and coming back.  I hope for the same or better success in your endeavor.  What have you had success with? Leave a comment.

Why I am Switching From Paperless To Papered

We all get browbeaten about paper waste on a daily basis.  At some grocery stores, the checker will act like you just pulled a gun on them if you answer ‘paper’ to the paper-or-plastic question.  There have been reasons, from time to time, that made the switch away from paper make sense.

Recently, however, more and more people are recognizing the value of paper as a renewable resource. Take boxed water as an example: it saves on disposable plastic bottles, simple math.

The move to ‘the cloud’, IMHO, has been blindly implemented in true viral fashion. Because of this, going paperless is often a step backward in efficiency and more work for the consumer. Going paperless on your pay-stub has long been a possibility.  However, many payroll systems are only able to send notifications, not your actual pay-stub; you are required to log in and manually download the file.  At first glance this may seem like a benefit: they store the digital version of your pay-stub.  The issue lies in archival and accessibility.  Change employers, and more than likely you will find yourself with more than one archive of pay-stubs.  I keep all finance-related items handy for three years and keep another four years in storage.  Am I likely to be able to access seven-year-old pay-stubs?  Maybe.

My bank was eager to offer a paperless option.  How it works: they send me a notification that my statement is ready, and I have to go to their website, log in and manually download the statement. You may see where this is going.

A while back, I decided to go entirely paperless.  The first opinion I formed was that all financial websites suck.  Logging in is a chore, finding the document is a pain, and downloading it is slow.  Due to my archival habits, I found myself blowing hours tracking down documents that previously were handed to me and that I simply turned around and dropped in a file cabinet.  After three months, I am now fully back to paper statements on everything, including things I had kept paperless for years.

This made me ask myself: how could it work?  OneDrive, Dropbox or Google Drive support was my first thought, before I quickly dismissed it.  It would be problematic, since each institution would support one or the other, leaving you without the option of using only one online storage.  Plus, who would trust Dropbox with their bank statements?  The simplest solution would be to simply send the statements via email.  The problem there is that the world is devoid of easy-to-use email encryption.  My thoughts then go to a pass-phrase-protected zip file.  Given adequate encryption, I would simply organize the files and let them sit in my Gmail.

I see paper as a renewable resource.  I live near fast-growing lodgepole pine forests. Managed correctly, forests can be harvested and replanted in a manner that has much less impact than oil drilling.

What do you think?  Personally, until a proper solution presents itself, I will be ‘Going Papered’.
