Release Management Using VSTS

If you have been tracking Azure at all lately you know that it is growing at a breakneck pace. Microsoft has really turned up the volume on their enterprise cloud at all levels.  Just diving in can sometimes be a rough experience. Azure is a wide-open field with many paths to solve your problem. I would like to show you the path I have found to get my release management up and running for my complex micro-modularity and microservices.

In the last post we created an ASP.NET Project for Visual Studio Team Services (VSTS) Release in a minimal way.  Now we will check ‘the check box of continuous integration’ and ‘push the button of continuous deployment’. Then we will add a second deployment environment to get your creative juices flowing.

Pre-requisites

So to be clear, there is some configuration you need to do upfront that I won’t cover here.  Some of the setup is clearly part of the awkward dance of preview services.  But if you want to get ahead of the curve and take advantage of these services, I can attest to it being well worth it. They only need to be set up once.

We will:

  • Create an Azure Build definition
  • Create an Azure Release definition
  • Deploy an Azure Resource Group containing a Web Site
  • Configure a second environment

For the purpose of this tutorial you need the following:

  • Project to Deploy with Azure Resource Templates (follow tutorial, or fork it)
  • A VSTS account with a project, the RELEASE hub enabled, and a version control system (VCS). It is easy enough to use the hosted Git, GitHub, or most any Git repo visible to the Internet.
  • A service end point configured in Team Services or TFS with rights to connect to your Azure subscription.
  • An agent that is capable of running the deployment tasks. I am just using the one on VSTS.
  • If you are using GitHub for version control you can connect it using a personal access token.

Once you have the above resources you should be able to walk through the rest of this Release Management tutorial for Visual Studio Team Services.

Azure Build Definition

In your project click over to Build.  You will see a nice little green plus on the left-hand side that will create a new build definition.  Select the Visual Studio template and select the repo you are going to use.

[Screenshot: New Build]

[Screenshot: Connect Version Control System]

[Screenshot: Edit Build]

If you are using GitHub you can specify it later when you edit the configuration.  This template gives you a nice list of tasks that will handle our simple situation.  It is easy to add and remove tasks; I would encourage you to do so when you finish this tutorial. The build config is very robust, able to trigger on-prem builds and even use automation tools like Ant, Gradle, Grunt, Gulp, Maven… go nuts.

Version Control

The easiest thing to do is use the Git repo built into the project, but I am going to build the example code from GitHub. You can follow the link above to connect it to your VSTS account for use. The configuration looks like the following using the built-in VCS or GitHub.

[Screenshot: Repository configuration]

Web Deploy Package

To make the deployment package, MSBuild essentially publishes the site to a zip file.  To trigger this you need to add ‘MSBuild Arguments’ on the Visual Studio Build step. In that field type:

/p:DeployOnBuild=true;PublishProfile="Web Deploy Package"

After you make the change, click Save. A great feature of the system is that the builds themselves are version controlled. You are allowed to rename the build here and add a comment. If you comment on every save you create a rich history of changes, each of which can be scrutinized on the History tab.

[Screenshot: Save dialog]

Continuous Integration (CI)

At this point verify you have checked ‘the check box of continuous integration’. Note that you can also schedule builds if you would prefer ‘Nightly’ to ‘Continuous’.

 

[Screenshot: the check box of continuous integration]

 

Build

Now comes the fun part: click Queue Build and watch it do its thing. When it is done you will find that the logs are saved and you can browse the artifacts created or download them as a zip.

Azure Release definition

Now click over to the Release tab. Release uses the same engine as Build, so you will see many similarities, and you can do many of the same tasks. The layout should be familiar, so click the green plus sign over on the left that says it is used to ‘Create a release definition’.  Choose the Azure Website Deployment template and click OK. This starts you out with a single environment that has a couple of tasks.

Here the term ‘environment’ means no more than a single set of tasks, variables and sign-offs that can be strung together to make an automated release process.

You will note that you are encouraged to test often since the second task you are given is a test task. I personally run Unit Tests during build and Integration Tests after a release. If you wanted you could even have a little sanity test to run when you release to production.

First it is good to configure the Azure Web App Deployment task so you can save. Simply select your Azure Subscription endpoint, give your app a name, and select a location near you for now. Give the release config a name and click Save.

Infrastructure Deployment

Right now you could click deploy and the thing would just go and use defaults and you would get a website in your Azure account.  However, you would not have any real control over how it would be billed and it would use whatever web.config you checked in. So in this step we will take control of all of this with a couple little JSON files.

Click Add Task. Under Deploy, click Add next to Azure Resource Group Deployment. I will admit when I first saw this I thought it sounded grand and complex. However, a couple clicks later I was elated with how simple it is.

When you click Close you will notice the task is incompletely configured by default and sits at the end of your tasks.  Make it the first task by dragging it to the start of the list. Then select your Azure Subscription service endpoint (I use a Service Principal for this). Then name the resource group. Later, when you master variables, you can name everything by convention using an environment-specific variable. I always add ‘-resources’ to the resource group name for additional clarity.

Now click the ellipsis on the Template line.  It will tell you that you don’t have any build definitions linked.  Click ‘Link to a build definition’ and select the build definition you made earlier as the Source and click the Link button.

Now you have a tree of the artifacts created by that build when you ran it last.  The files you want will be under ‘drop’ and the folder name you gave your Resource Group project. Then under bin/Release/Templates select the WebSite.json file and click OK. Assign the Template Parameters value by clicking the ellipsis again, browsing to the same location and selecting the WebSite.parameters.json.
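
As an aside, here is a minimal sketch of what a WebSite.parameters.json can look like. This is an assumption based on the standard deployment-parameters format rather than the exact file the Visual Studio template generates, and the value is a placeholder:

{
  "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "hostingPlanName": {
      "value": "myPlanName"
    }
  }
}

Anything not set here (or overridden later) falls back to the defaults in WebSite.json, which is why the Override Template Parameters field described next is so handy.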

Now you are to the point where the magic starts and where things really started to click for me. Because you defined the website name and connection string as parameters you can assign them via the Override Template Parameters field.  In this field set your values like this:

-hostingPlanName "myPlanName" -websiteName "myWebsiteName" -connectionString "Server=tcp:mydatabase.database.windows.net,1433;Database=myDbName;User ID=myUsername;Password=MyPassword;Encrypt=True;"

Release

One last thing to check before you release is that your Locations match between the Resource Group Deployment and the Web App Deployment. Then click Save.

Now the point of all this is to get you to continuous deployment. To enable this click the Triggers tab, check the check box, select your artifact source and select your environment. Then click Save.  Now when you check in a change, it will build. When the build is done it will release.

[Screenshot: Release Trigger]

To give it a test without checking anything in, click the Release drop-down and select Create Release. You can choose the artifacts from the build you ran earlier, select the environment you just configured and click Create. If the process succeeds you can verify by viewing the Resource groups in the Azure Portal.  The great thing about resource groups is that, to remove everything you just released to Azure, you simply delete the resource group. To deploy it again, make a commit or release it manually.

The great thing about using the resource template is that if you make changes, the environment will be updated to reflect them while you keep the progression of the environment under version control.

Configure a second environment

To understand why I think Release environments are cool, click the ellipsis on the Default Environment and choose ‘Configure variables’.  You will notice there are a variety of them predefined. Let’s create one of our own.

[Screenshot: environment ellipsis menu]

At the bottom of the list click Add Variable. Name it ‘websiteName’ and give it a value you like. Click OK, and go back to your Azure Resource Group Deployment task. In the Resource Group field type:

$(websiteName)-resources

Then in your Override Template Parameters put $(websiteName) after -websiteName. Select the Azure Web App Deployment task and put the variable $(websiteName) in the Web App Name field.

For a second, imagine you have a much more complex release environment configured. Now, click the ellipsis next to Default Environment again, click ‘Clone environment’ and call it something like QA.

[Screenshot: Clone environment]

Now configure variables on that environment and change the value next to websiteName, perhaps appending ‘-qa’ or something. Click OK and save the definition.  I don’t know about you, but the first time I did something like this I giggled enough to make everyone in the cubes around me feel uneasy.

Where to go from here

From here you can add more variables, parameters and infrastructure. Add a database or other services and make a self-contained set of services so you can spin up tests of all sorts.  There are many tutorials out there for expanding on your JSON templates, and great functionality built into Visual Studio (maybe Code too) to help you edit these configurations. I would be interested in where you personally take things from here.

In a future post I will dig a little deeper into how I have overcome the issues of managing many tiny libraries using private NuGet repositories and multiple Git repositories.

After looking at VSTS Release, how does this compare to other tools you have used?

 


ASP.NET Project for VSTS Release

So you have discovered the intense desire to manage your infrastructure as code and continuously deploy with your eyes closed. It is true: continuous integration and continuous deployment, once implemented, open a whole new set of opportunities. For me personally, the most attractive of these is micro-functionality. Microservices are all the rage, but on a more fundamental level, it is hard to manage complex modularity without CI and CD.

For this reason I am going to assemble a reference ASP.NET project that will demonstrate a common scenario in which the developer and Visual Studio are removed from the deployment process and endless deployments are made possible through Azure Resource Group template parameters. In a second post I will walk through setting up the Azure side of things.

Connection Strings and Migrations with EF6

Entity Framework 7 seems to be shaping up as a key part of my development stack.  However, there are a host of things it does not yet do, so I have stuck with Entity Framework 6 for most of my current development.  One of the flaws I have found in EF6 is that it is difficult to get it to use a specific connection string for code migrations. This makes automated migrations with VSTS Release even more difficult. There are even Stack Overflow questions about it that venture down some dark paths to get it right.

The method I have chosen when using EF6 is to make sure my migrations are clean and selectively enable automatic migrations. Then I assign a connection string per deployment as described here. So far this is the best and only way I can find that makes sense.  If you have other ideas let me know in the comments.

Looking at Infrastructure as Code, it seems like a daunting task, especially if you are like me and have had a brush with the sickening array of tools that are needed in some environments to get the job done.  Tools like Jenkins, Chef, Puppet, Ansible… sure, they are OSS so they deserve support. On Azure, all you need for IaC are a couple JSON files, no extra software and no Admin involvement to get the right packages installed or make sure the OS is configured correctly.

So if you want to skip to the end, grab the JSON and jump to the next post to see how VSTS uses it. I will try my best to give you a concise walkthrough from here on out so you can understand the things I like about this solution.

Visual Studio Solution

I will assume a level of comfort with Visual Studio, and save bits on the internet by being terse here.  From VS, choose to create a new project. From the New Project dialog select ASP.NET Web Application. For fun select MVC in the New ASP.NET Project dialog and then check Web API.  Leave ‘Host in the cloud’ unchecked because we will be taking a different path. Click OK to create the solution.

Next we need to configure the web project with a publish profile to create a deployment package when built on VSTS. Right-click the web application project and select ‘Publish’. On the Publish Web dialog select ‘Custom’. When prompted for a name I always name it ‘Web Deploy Package’ so I can use the same build template over and over without a bunch of reconfiguring. On the next screen select a Publish method of ‘Web Deploy Package’. For a package location you should choose somewhere that will be picked up by the Artifacts step(s) in your build task. This usually defaults to a mask like “**\bin\$(BuildConfiguration)\**”, so if you choose your “bin\Release” directory you can get going quickly. Then go ahead and click Publish to see what happens. When you check things in, it is best to leave the bin directory out of your commit, so this configuration will save you that way too.

You will see that deploying a Web API or OData project is as easy as deploying a website. Using this method you can add any service offered by Azure, like Service Fabric, or set up VMs if you like mucking about at the OS level.  I hope that after you take this first step you will try some other crazy things. Just remember to delete the resource group after you play with it. As a side note, deleting a resource group in Azure removes everything in it. So when you get around to deploying this, you can simply delete the resource group afterwards and not worry about unknown things hanging around to nickel-and-dime you through the month.

After you have your solution with a website, right-click it and add a new project.  In the Add New Project dialog select Azure Resource Group. In the Select Azure Template list select ‘Web app’ and click OK.  There are other interesting options here, but for this demo I want to show that there is already a DB up and running that this app will connect to.

Infrastructure as Code (IaC)

Don’t be afraid. As I have written in the past, there is no need for name soup or acronym mayhem. Just a JSON file and a mouse.

At the time of writing, the Azure Resource Group template that I have installed is using API version ‘2015-08-01’.  You can verify this by opening the WebSite.json in the Azure Resource Group project. The default templates can certainly get you going as is. However, there is no way to pass in a connection string or application configuration.  By default the template also makes up a junked-up website name that I dislike, so we will tweak that too.

In the WebSite.json we want to add the website name and connection string as parameters so we can assign them during release. First you want to add parameters for `websiteName` and `connectionString` as shown in this Gist. You can simply delete the webSiteName variable and replace all instances of `variables('webSiteName')` with `parameters('webSiteName')`. Then you need to add the section that injects the values into the Web App's configuration in the `Microsoft.Web/sites` section, as seen in the Gist.
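
I cannot embed the Gist here, but a rough sketch of the idea looks like this. The parameter names, the DefaultConnection key and the SQLAzure type are illustrative assumptions, so match them to your own template and to whatever you pass in from the release definition. First, the new entries in the template's parameters block:

"parameters": {
  "websiteName": {
    "type": "string"
  },
  "connectionString": {
    "type": "string"
  }
}

Then, nested inside the `Microsoft.Web/sites` resource's own "resources" array, a config resource injects the connection string into the Web App at deployment time:

{
  "apiVersion": "2015-08-01",
  "type": "config",
  "name": "connectionstrings",
  "dependsOn": [
    "[resourceId('Microsoft.Web/sites', parameters('websiteName'))]"
  ],
  "properties": {
    "DefaultConnection": {
      "value": "[parameters('connectionString')]",
      "type": "SQLAzure"
    }
  }
}

Because these values arrive as parameters, the next post's release definition can override them per environment instead of relying on whatever web.config you checked in.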

NEXT

I hope this gets you on your way, and perhaps lets you see the potential of parameterized Azure IaC. Next you should commit this to your project on VSTS (although you could just as easily use GitHub with VSTS Build and Release). In the next article I will walk you through configuring Azure to deploy the same code to different environments with their own connection strings.

git the code here


Chain of Application Responsibility: a way of keeping sane

Imagine a legacy application that processes orders.  The application grew from a small LOB utility.  If you asked the original developer he would be surprised that it is still in use.  The application is made up of nearly 50,000 lines of code with little organization, and no standard has been defined for interactions with other systems.  It was a straightforward application at inception, after all. But this means there is no defined place to start a task.  A programmer is reduced to keyword searches in hopes of finding a fringe piece of functionality that, in some rare case, contains descriptive naming or, on the off chance, has comments that match the vocabulary one might ascribe to the item that needs to be fixed.

Imagine if you had to find and fix a tax calculation.  What if you were told that someone thought there may be three or more places where this is done, and as you talk to people it quickly becomes apparent that no one really knows where it is happening?  There is some code that sends an email confirmation to the customer that uses a calculation. There is also some code that puts the order in the database, as well as other code that actually charges the customer.  Each one may do its own calculation and they all need to match, or the customer might lose confidence and shop somewhere else.

Addressing the situation requires starting at the most fundamental levels and enhancing up the stack from code to the system level.

The Code

In an architectural void duplicate functionality is free to run rampant.  Analysis can shed light on which system is best suited to ensure an operation is carried out in a safe manner.  However, being unclear about where to expect certain types of operations in the first place stifles analysis.  This hampers problem resolution  and increases technical debt.  Analysis should be repeated periodically to ensure clarity, performance and maintainability.

On the other hand, when the code has a specific purpose, that is, to deal with the entirety of a function or a business subject, its functionality can be located in a single module or subset of modules.  Assigning responsibility to a module makes it clear what code belongs there and what does not, significantly reducing the amount of code within that module.  Further, it certainly makes it easier to find code to enhance. Even a system loosely organized this way would be much easier to support.

The System

Rarely is a system grown in a void without communicating with other systems and exchanging data and functionality.  When this is the case it is prudent to analyze each system to assign appropriate responsibility and reduce redundant functionality as much as it makes sense.  When landing new functionality it would also be important to first scrutinize which, if any, existing systems should own that responsibility.

Benefit

Analysing responsibility at the module and system level not only reduces time investment, it magnifies transparency to the business when enhancement activity must take place.

Doing this takes planning. The tradeoff is that an organized system has more flexibility. Modularizing by responsibility could provide a way of assembling chains of application functionality that are not brittle.  This architectural style could also scale horizontally if each of the modules could be distributed and operations were performed asynchronously.

Ultimately placing responsibility in clear meaningful buckets and segregating the code will help keep this programmer sane in the most complicated systems.  Is this how you do it now?  How do you ensure there is no duplication in your system?


Schedule with Dante and PHP

Many PHP programmers are comfortable with the Unix/Linux server world.  While PHP has certainly flourished as part of a LAMP stack, one must not forget its portability.  PHP melds well with a Unix tool chain of pipes.  Something that is often overlooked is how it melds with a Windows environment.  What if you want to express your creativity with Internet applications, but find yourself working in an entirely Microsoft environment?  PHP should find its way high on your list of technologies.  What if you are already using PHP and have access to Windows servers?  You should be taking advantage of their hidden potential.

One of the features of the Windows operating system that deserves exploiting is Task Scheduler 2.  Task Scheduler 2 was introduced with Windows Vista and Windows Server 2008.  At first glance you may just see Cron with a GUI.  It isn’t like they dedicated a Microsoft Certification to it, and you would be hard-pressed to find Task Scheduler training.  However, Task Scheduler 2 is much more than ‘run Foo at the given date and time’.  You have a wide variety of triggers, conditions and options surrounding the execution of the given command; time is just one trigger of many, which include immediate queuing.   This single ‘hidden’ potential could be reason enough to have a single Windows Server in your environment.  Fortunately for us, Task Scheduler 2 comes with all versions of Windows, desktop and server.  Licensing is often the pivot point in many IT organizations, and Windows Web Server 2008 comes at a modest price and does include the two important pieces we will be taking a look at now: Task Scheduler 2 and IIS.

From a programmer’s perspective, using pieces of technology as different as Task Scheduler and COM may seem to require more effort than they are worth.  Here is where I would like to introduce Dante Job Scheduler.  A project in development, Dante is meant as the broker to your Windows Server’s job scheduling and queuing abilities.  Dante is built on the Crossley Framework, which contains a growing number of classes that wrap COM and .NET functionality.  You can easily take advantage of this in your application.  For now, I would like to show you the early functionality built into Dante’s UI and REST-style application services.

To get started, download Dante and uncompress it on your Windows computer in a place you can point an IIS site at.  The directory structure is simple, but the directory we will be concerned with first is called ‘public’.  Open Internet Information Services Manager and create a new site.  Use the path to the public directory as your value for the ‘Physical path’.

When you point your browser at the site, you will be prompted for a user name and password.  The easiest one to use is one for a known profile on the box. Administrator can work fine but you may want to consider creating a dedicated account for Dante.  If you are working on a development machine it would be great for the sake of this demo if you just logged in as the user you are logged in as now.  This will allow you to see things in action.

The first thing you see is a list of scheduled tasks that may already be there for that profile.  If you logged in with a regular user account you may see things like Google Update listed.  To create a new task click the Create tab at the top of the page.  In this form, you should give it a recognizable unique name.  Then in the ‘Command’ field, enter the path to notepad.exe (c:\Windows\notepad.exe worked for me).  Enter a run date of the present date, and then a time in the near future.  Click submit and then you can view it under the ‘List’ tab.

While you are waiting for it to run (it may not show up the second it is scheduled; it may need a moment to show the GUI), click on some of the tasks to see their details.  You may notice the XML link at the bottom of the details list.  This is the XML definition of the task.  An application can create such an XML document and pass it to the scheduler as a task envelope.  The ‘Run’ and ‘Stop’ links do just as you would assume: run and stop an executing task.

You can experiment on the task you just created.  Click the details of the task and click ‘Run’ to see the notepad GUI appear.  Then click the ‘Stop’ link to see it close.  Not overly exciting unless you understand the potential.  If you have Firebug running you may have noticed the XHR calls being made via jQuery.  Each of these can be imitated from a remote computer.  Say you want to run a task called “MyTest”; you would simply make a request to https://<username>:<password>@<server_address>/tasks/run/MyTest.  The same can be said of creating a new task.  To create a task simply form a request with your favorite HTTP client to: https://<username>:<password>@<server_address>/tasks/new/?tskname=<taskname>&cmd=<command>&rundate=<date>&runtime=<time>.

As mentioned before, there are many triggers and options available in Task Scheduler 2, most of which will find their way into Dante and the Crossley Framework.  Soon a client class will also be available to further simplify your development.  Exposing COM functionality with PHP takes some patience.  Even so, if you are running PHP on Windows, great benefits can be gleaned.

I hope this taste has piqued your interest.  If you are so motivated I encourage you to experiment with Dante and share your thoughts or even code.


Code Generation

Code Generation has long been an interest of mine.  No matter if it is PHP, C# or now VB.NET, code generation can get me up to speed with the basics fast.  RAD is something that left a bad taste in my mouth.  In fact, most anyone who actually supports their work cringes when you say RAD or mention code generation.  With the Framework craze these days, though, there are many places where code isn’t generated, but the same problems are encountered with the convention-over-configuration crowd.  So I approach code generation with caution.  The sad fact is that there are things that we repeat over and over again.

How many times have you typed “SELECT * FROM foo”?  The key is keeping code generation extensible.  Where technology in general falls apart is where it tries to think for you and has its own preference where preference is the only deciding factor.  Frameworks that are good but require you to conform to their school of thought only find acceptance amongst those who already think that way or don’t have enough experience under their belt to have an opinion yet.  The more generic and customizable the better.

Enter T4, the new code template system in Visual Studio 2008.  This is a great technology. As I learn it I find myself weaving more repetitive code, but with better quality.  It makes CRUD controllers a dream to deal with, and holds the potential for saving a lot of time.  As I come across useful T4 resources I will post them here.

I am interested to see how this changes my perception of my own code generation.  The T4 template usage is interesting.  They really allow for a lot of logic within the template, making complicated generation possible.  Already I see this making its way into my own generator.


OOD In an MVC Land

If you are familiar with ASP you are probably familiar with the concept of code-behind and, more recently, MVC. Over the years I have bounced around using many different languages, frameworks and technologies. Using the pattern in another language emphasizes that this is best implemented as a conceptual pattern. OOD is best done first if you have any logic at all. If you are just editing a list and it IS just a CRUD app, an MVC up-front approach might work fine. Everywhere I look MVC is pushed as THE solution, when, like any other design pattern, it is never more than a piece. Try to think outside the MVC box and do what works. Don’t try to cram everything you want to do into the MVC or you will then need to create classes called ‘helpers’; then the only business object you really have is the MVC, and you might as well have not done OOP in the first place and should have stuck with Classic ASP or PHP4 and your procedural, monolithic methodologies.

Remember to keep your business logic out of your display logic. Whether you call it a template or a view is up to you. Just don’t be caught with your pants down when you find out your model is hard-coded to your business logic. Or worse, you find your application has become a big string of classes that are untestable and codependent on your control structure. I always keep focused on being loosely coupled and dynamic without being magic.
Which brings me to the ASP MVC.NET implementation. As with many things in .NET, it seems to be a situation of ‘we want that buzzword tech too!’. There are many advantages, and as usual I get the feeling it is a step in the right direction. Also not a surprise, it feels like it is not what it is supposed to be, like the point was lost in the translation. The .NET crowd wanted Ajax; they got the word but missed the point. Their websites were freed from having to load the whole page, but they were still cursed with throwing the state string back and forth and never got the benefit of good, efficient client-side processing with JavaScript.
Oh man, I don’t want to get started on JavaScript in the MS camp. Almost all the MS shops I have talked to in the last year think ‘Java’ and ‘JavaScript’ are terms that are interchangeable. To their credit, for all they knew of either they might as well be.
Then the .NET crews whined about not having Flash, so they got Silverlight. (I like what I have seen of it, but why can’t everyone just love SVG???)
My largest point of contention with ASP.NET is that all the training and documentation pushes you to use the built-in controls. The built-in controls are obviously not meant for the Internet, but for LAN apps. Thus ASP.NET is best suited for application developers moving into the Web Interface realm. I once saw an ASP.NET application that fed 6 MB of HTML (not including graphics) at the user. This was the worst case, but it was far from atypical.
Enter the ASP MVC. None of the state-dependent controls work. Pat them on the back for destroying the only thing the rest of the web development world envies them for. Wait, this isn’t a totally negative rant. ASP MVC encourages you to be conscious and intentional with your HTML. It also opens the door for using jQuery instead of Microsoft Ajax. Thus, as far as the client/browser side goes, you are free to do as you please.
But then you may think that all is well, until you start dealing with the server side. I am not going to get down on it, since it is still in development. Although, as with any software, if it is out of development it is dead, so take that as you will. The real problem that I do see is that it tries too hard to be an MVC. Your Controllers end up getting huge with ‘actions’ (and where is that /really/ in the pattern?). There is only one layer of organization in the controller. This means you can’t go beyond controller/action. Granted, you can customize your routing, which feels like you are configuring someone else’s app instead of building your own. Then all your views tend to be one per action, which defeats at least the bulk of the benefit of application and presentation separation.
Here again, another MS team tries to step in and save the day. The Entity Framework (EF) is the official flavor of the day. This essentially is your ORM model to supply your CRUD, none of which lends itself to good OOD. You are still encouraged to draw up an action that feels oddly procedural, mashing your EF object lists into a view. EF does not allow for any type of static extension, so there is no natural way to build business objects. The solution to this is to just build your business object facade over the EF ORM. But no examples encourage this, nor does the community publicize it. So you should shove your monolithic processing in an action and pray you can rewrite instead of maintain. Now, to add to the sadness of it all, there is no straightforward way to call and render an action from another controller. So now that you have a huge controller class you start copying and pasting code between actions. I have seen QuickBasic apps that are better.
Maybe I am just jumping the gun. It is still in development after all. Maybe the ASP MVC community will get there.
Currently the way I am dealing with everything is having a nice thick jQuery-driven client. I load partial views in via Ajax and call JSON services. No community so far has dodged the MVC fail bullet. I just want to know when everyone is going to move on from MVC and realize OOP != OOD.


Remote Error Messages IIS

By default IIS does not display detailed errors to anything but a browser on the local server.  This can be a pain if you are using a VM or server while your IDE and browser are on a different machine.  Changing this should obviously only be done on a development machine.  That being said, open IIS Manager and navigate to Error Pages.

[Screenshot: IIS Error Pages]

[Screenshot: Detailed Errors setting]

Then click ‘Edit Feature Settings’. Select ‘Detailed Errors’ to display them to the client.  Click OK and restart your web server service to make sure, and develop quicker.  For more information look on the official IIS Blog.

UPDATE: When using Windows 7 and IIS 7 with PHP Manager, make sure you open PHP Manager and select ‘Configure error reporting’ and set it to “Development machine”.  This will allow output of PHP errors to the browser.

 
