Testing in Modern App Development

I’m back to writing blogs after a small break. Things have changed for me personally since then: I recently made a career move. I hope to be more regular going forward.

In my new team, we follow a Kanban flow. Our application is customer facing, and due to the nature of our project we need to do frequent releases. This means we need to deliver features faster, and since it is an external application, the code quality and UI need to be of the highest standard.

While the quality of the product has been the centrepiece of all my projects, I have lately been discussing our approach to testing with my manager. The project complexity is growing each day and we are adding new features to the application at a very fast pace, but the team size remains constant. Hence, there has been an increased need to change the way we look at our development and especially our QA process.

In modern app development it has become increasingly important to write ready-to-ship code. While no one can claim to write bug-free code, ensuring the quality of the product is no longer just the responsibility of QA. The responsibility of writing reliable code lies equally with the developers. In the past, Dev and QA used to be different departments in an organization. But this trend is changing. Dev and QA are now part of one project, one team. The line between a developer and a QA is diminishing. As a developer you should be prepared to test your own and your peers’ code.

In this blog, I’m not trying to explain the meaning of each level of testing; I believe every developer knows that already. I’m trying to explain their significance in app development and how they can help you develop features faster, prevent bugs and improve quality. I also try to give examples from my own experience.

Unit Testing

Unit testing is the first and probably the most important pillar of resilient code. Many times it may seem that unit testing is imposed by the organization rather than the developer understanding its true value. The result is bad unit tests. I feel a bad unit test is even worse than no unit test. If as a developer you are writing tests just to reach the code coverage target, or because someone else has asked you to do so, you need to think again. Writing unit tests just to achieve code coverage gives the team false assurance that each line of code has been unit tested. Code coverage is a necessary but not a sufficient condition; the quality of the unit tests matters more. Additionally, the name of a unit test should reflect the intention of the test. An example of a bad unit test name is PopTest; a good name could be PopStackWithNoItemShouldFail.
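To make the naming point concrete, here is a minimal sketch of what such a test could look like, using xUnit syntax as an illustration (the framework choice and the use of the built-in Stack&lt;int&gt; as the class under test are my assumptions, not from the original post):

```csharp
using System;
using System.Collections.Generic;
using Xunit;

public class StackTests
{
    [Fact]
    public void PopStackWithNoItemShouldFail()
    {
        var stack = new Stack<int>();

        // Pop on an empty Stack<T> throws InvalidOperationException
        Assert.Throws<InvalidOperationException>(() => stack.Pop());
    }

    [Fact]
    public void PopStackWithOneItemShouldReturnThatItem()
    {
        var stack = new Stack<int>();
        stack.Push(42);

        Assert.Equal(42, stack.Pop());
    }
}
```

The test name alone tells a reader what behaviour broke when it fails, which is exactly what a coverage-driven name like PopTest cannot do.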

While I do not have any strong preference for Test Driven Development (TDD), I do feel it is one of the better ways to unit test your application. Even if you do not follow TDD, your unit tests should be written after you have developed a part of your logic or feature, rather than after writing the entire functionality.

Consider this scenario: you are working on a web application with a significantly complex flow. Now, you need to make a small change, but one which has a huge impact on your entire application. If you test these changes directly from the web app, you would end up spending a significant amount of time validating all the basic test cases like null checks, input validation, etc. And even then, you cannot be sure you have covered them all. That’s where unit tests come in handy. You just test your unit of work. You can test all the flows and conditions in your code much faster. Once you are satisfied with the changes, you may proceed to test your app at a higher level. Remember, even if it is time consuming to write unit tests, it is far cheaper to make changes to the code at this stage.

Integration Testing

Integration testing is the second line of defence for your code. Consider the same example as before: your small change has an indirect impact on some other module of your web application, and you may not even be aware of it. Since you have tested your part, you happily deliver the code to the test team. But what do you get? Regression bugs. The cost of a regression bug is very high in the software development lifecycle, and integration testing helps you prevent them. Many times, developers do not understand the importance of integration tests. It might look like a waste of time to write and maintain them. But it is not. Integration tests give the team confidence that the code changes they have made do not break other functionality.

You can also bind the business requirements to your code (BDD) through integration tests. SpecFlow is one of the most widely used frameworks in .NET for defining business behaviour in your code. It bridges the gap between business and technology. The granularity of test cases can vary from project to project. In one of my previous projects, we used to have a test case for every acceptance criterion (AC) of a user story. This helped us validate and confirm the requirements well before the story reached testing. If your integration tests are good enough and complete, you can significantly reduce the chances of functional bugs surfacing at QA time.
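As an illustration of how SpecFlow binds an acceptance criterion to code, the business-readable side is a Gherkin feature file. The scenario below is invented for illustration, not from a real project:

```gherkin
Feature: Shopping cart checkout
  As a customer
  I want to check out my cart
  So that I can complete my purchase

  Scenario: Checkout with an empty cart should fail
    Given an empty shopping cart
    When the customer attempts to check out
    Then the checkout should be rejected with the message "Cart is empty"
```

Each Given/When/Then line maps to a C# step-definition method in a SpecFlow [Binding] class, so the scenario the business signed off on is what actually executes against your code.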

UI Automation Testing

Automation testing, I believe, has never got enough love from developers. It is not considered part of development and many times it is left to QA, who in turn keep it restricted to Build Verification Tests/Smoke Tests. The dynamics of teams have changed rapidly as organizations move towards Agile. The number of testers per developer is lower than in the traditional development lifecycle. Yet the complexity of code has increased significantly, and you are supposed to deliver high-quality, ready-to-ship code. UI automation plays a very important role in this. I believe the responsibility of writing the UI automation tests for a feature should lie with the developers. This may seem overkill initially, and it may look like it reduces team velocity. But again, once your initial setup is done, it becomes much faster to write automation tests. Selenium and Coded UI are two UI automation frameworks used with .NET. In our team, while we already have very good UI automation, we are undergoing a shift in our approach to automation testing. We intend to automate complex workflows and scenarios, making them data-driven using an external data source like Excel. Additionally, we plan to run the UI tests as part of the pull request build (we use Git as source control). This essentially means the code cannot be merged to master if the UI tests are failing. This also means our master branch is always ready to ship. Well… almost!! 🙂
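For illustration, a minimal Selenium WebDriver check in C# might look like the sketch below. The URL and element IDs are hypothetical placeholders, and running it requires the Selenium.WebDriver and ChromeDriver NuGet packages plus a local Chrome install:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginSmokeTest
{
    static void Main()
    {
        // Requires the chromedriver executable to be available on the PATH
        using (IWebDriver driver = new ChromeDriver())
        {
            // Hypothetical app URL and element IDs, for illustration only
            driver.Navigate().GoToUrl("https://example.com/login");

            driver.FindElement(By.Id("username")).SendKeys("testuser");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("login-button")).Click();

            // A basic assertion on the post-login page
            if (!driver.Title.Contains("Dashboard"))
            {
                throw new Exception("Login flow failed: unexpected page title " + driver.Title);
            }
        }
    }
}
```

In practice you would wrap checks like this in your test framework of choice and parameterise the inputs from an external data source, which is exactly the data-driven direction described above.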

Manual Testing

The last pillar of testing before the code is ready to ship or move to UAT is manual testing. If you have followed all the previous steps thoroughly, then manual testing becomes more of a validation. If a tester is able to find the most basic or obvious bugs in the application, then there is something wrong with the process you are following as a team.

Personally, I prefer the QA to be part of the development team and not a separate department. When QA comes from a different department within an organization, their end goals can be different. Instead of developers and QAs debating the priority or severity of a bug, or whether it is a regression or an existing bug, the conversation needs to move to what it takes to provide a stable, quality build.

Please share your experiences of testing in your projects. 🙂

 


Visual Studio Team Services = Git/TFS + JIRA + Team City + Octopus

Warning: The content of this post is highly opinionated. Please exercise caution. :)

A bit of Background

Back in 2012-13, the term DevOps was unheard of in my team. We were still living in the dark ages, where a developer would develop the code, then write unit tests more to make the code coverage look good than to actually “test” the code. The code would then go through a review and be queued for check-in. Some magic, which the developer never really cared about, would then tell us whether the code was successfully checked in or had failed. That magic was the TFS XAML build. The build was managed by a dedicated team, and we never felt it was part of the development process. At the end of a sprint iteration, a huge code base would go to the test team, who would start testing against their test cases and log bugs to the dev team. More often than not, a developer would talk to a tester only during that phase. This resulted in hundreds of bugs in a big team. A few of these bugs would be invalid due to a lack of understanding of the requirements; a few others would be basic bugs which should have been handled by the developer in the first place. All the requirements, tasks, bugs, issues, etc. were logged in TFS. But there was no dashboard, so every developer and dev lead had to be an Excel expert to track the items effectively. It is said that ignorance is bliss, and the same was true for us. We were happy in our shell, delivering code in this fashion, and never felt the need for change.

Around the same time, I got an opportunity to work at a customer location. This is when I got a rude shock. The customer used tools I had not heard of before: Confluence to collaborate, JIRA to track work items, Team City for builds, and a source control that was not TFS :). Suddenly, I realized that development was more than just writing code. I fell in love with these tools immediately and realized there is more in the world than TFS.

However, there was still one big pain point. We were using just too many tools and plugins: TortoiseSVN, JIRA, Team City, some other tool for deployment, etc. Each of these tools looked different and worked differently.

Visual Studio Team Services

Microsoft was late to join the party, and TFS was surely lacking the features that Agile projects and modern development practices demand. In 2013, Microsoft introduced Visual Studio Online (VSO). At launch, VSO appeared to be nothing more than TFS on the cloud. However, Microsoft kept adding more and more features to VSO, many of them “inspired” by competing tools. Over a period of time, VSO was aptly renamed Visual Studio Team Services (VSTS). Today, VSTS has become a one-stop shop for all our development.

VSTS has solved one big problem: fragmentation. Its features today are on par with, if not ahead of, JIRA, Team City and Octopus. With VSTS you do not need multiple accounts; a single account gives you access to everything you need for software development. Additionally, VSTS now offers full support for more and more non-Microsoft services. That means you are a first-class citizen irrespective of your editor, source control (TFVC, Git) and technology. If you do not want to use VSTS for everything, you still have the choice of what to use. For example, you can choose Octopus over VSTS Release Management and push packages from a VSTS build to Octopus directly. It works seamlessly.

To know more about the features of VSTS, go here. What tools do you use for your development?

VS 2017 – Revamped Start Page

When you open VS, the first thing you notice is the Start Page. In VS 2015 the Start Page provided a useful way to open recent projects and catch up on tech news, but that is where it stopped.

VS 2017 has totally revamped the Start Page experience. It is visually more appealing and offers more options to improve developer’s productivity.

Visual Design and Layout

VS 2017 has an improved visual design and layout, as you can see below.

Improved Start Page.PNG
VS 2017 Start Page

 

The first thing you immediately notice is that the news section no longer takes up more than three-quarters of the page. It is in fact a toggle panel on the right-hand side. This makes a lot of sense to me, since as a developer, most of the time I open Visual Studio just to start development. I do not open VS just for the news. 🙂

Recent

The Recent section now offers more options to the developer. You now have the option to pin your favorite projects, and the projects are arranged neatly in chronological order, making the section far more intuitive and easy to use.

Visual Studio Recent.PNG
Start Page – Recent Section

One important thing to note here is that your Visual Studio settings go with you wherever you sign in. For example, let us say you created a project and committed it to source control on one machine. If you then sign in and open VS 2017 on some other machine, that project will be available in your recent list. You can simply click the project and VS will give you the option to set up the source control on your new machine.

Open

The Open section now lets you check out a project from source control directly from the Start Page. It shows not only Visual Studio Team Services but also third-party source control providers like GitHub.

Visual Studio Open.PNG
Connecting to your source control is just a click away

In addition to this, you can also open a project/solution, folder or website directly from here.

New Project

The New Project section saves you a few clicks by showing your recent project templates to help you quickly start development. Again, recent templates move with you to any machine where you sign in.

New Project template - Recent.PNG
Recent Project Templates

In addition to this, you can search project templates directly from here, by template name, type or language.

New Project template - Search.PNG
Search Project Templates from Start Page

Developer News

As mentioned earlier, the developer news section now appears as a toggle panel on the right of the Start Page, which you can keep open or collapsed as you prefer.

Developer News.PNG
Developer News section

If the news section is collapsed, you do not need to worry about missing out on news: a badge on the top right corner of the toggle icon will notify you.

Developer News Notification
Developer news – Badge

I hope this helps you improve your productivity. Please do share your comments. Happy coding!!!

Visual Studio 2017 – The best IDE ever

Visual Studio 2017 was launched with much fanfare yesterday (March 7, 2017). I started exploring Visual Studio 2017 from the RC, and I must say, after using VS 2017 I felt I had been living in the stone age. It is so much better.

In short, Visual Studio 2017 is equivalent to the following:

VS 2017 = VS 2015 + loads of 3rd-party plugins (like NCrunch, a few ReSharper features, etc.) + improved tooling, performance, experience, productivity, etc.

Below, I have tried to highlight the major features of Visual Studio 2017. This is not an exhaustive list, but only the few features which have helped me improve my productivity significantly.

Faster Installation – Choose your workload

The first thing you will notice while installing Visual Studio is that you get to choose what you want. Are you just a web developer? No worries, install only the web workload. In fact, you can even choose which individual components to install within that workload. That means less space and faster install time.

01-VS Installer
Visual Studio 2017 installer

Faster Load Time – Increase Productivity

One of the major pain points with previous versions of VS was that it used to take an eternity to load a solution with a lot of projects. You could actually launch VS, go for a coffee, come back, and it would still be loading. VS 2017 loads these projects very fast. So you no longer need the coffee break: just open the project, start coding and leave home early 🙂

From my own experience: my solution contained around 92 projects. Opening it in Visual Studio 2015 could take anywhere between 2 to 3 minutes, or even more. Sometimes it would hang and I would need to start all over again. Worse still, with ReSharper installed like me, I could go for lunch along with the coffee and come back before it loaded.

 

VS 2015 - Preparing solution1
VS 2015 – Preparing Solution (You can go for a coffee)

With VS 2017, the same solution opens in less than 30 seconds. No more coffee breaks!

C# 7.0 Support

Visual Studio 2017 comes with C# 7.0, which has introduced a lot of new features like tuples, switch/case improvements, pattern matching, local functions, etc.

// Example of the tuple feature of C# 7.0
public (int sum, int difference) GetSumAndDifference(int a, int b)
{
    return (a + b, a - b);
}

You can get more details on C# 7.0 here.
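Alongside tuples, here is a short sketch of two more of the C# 7.0 features mentioned above, pattern matching and the improved switch. The Describe method is my own illustrative example:

```csharp
using System;

public class PatternMatchingDemo
{
    public static string Describe(object value)
    {
        // "is" pattern: type test and variable declaration in one expression
        if (value is int number && number > 0)
            return $"positive int: {number}";

        // switch with type patterns, "when" guards, and a null case
        switch (value)
        {
            case string s when s.Length == 0:
                return "empty string";
            case string s:
                return $"string of length {s.Length}";
            case null:
                return "null";
            default:
                return "something else";
        }
    }

    static void Main()
    {
        Console.WriteLine(Describe(42));      // positive int: 42
        Console.WriteLine(Describe("hello")); // string of length 5
    }
}
```

Before C# 7.0, each branch would have needed a separate cast after an `is` or `as` check; the patterns fold the test and the cast into one step.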

Live Unit Testing

If you have used NCrunch, you probably know what I’m talking about. Visual Studio 2017 brings in support for live unit testing: VS 2017 runs your unit tests in the background as you write your code. You simply write code and get instant feedback on which unit tests your change makes pass or fail. Ideal for TDD. It makes you faster and increases your productivity.

To enable Live Unit Testing in your solution go to Test -> Live Unit Testing -> Start

03-Live Unit Testing
Start Live Unit Testing
06-Live Unit test Example
Example of Live Unit Testing
Important note: if you are using a .NET Core project, you are out of luck. .NET Core does not currently support Live Unit Testing. You will get the following error in the output window:
"Live Unit Testing does not yet support .NET Core"

Improvements in .NET Core Tooling

Back in the VS 2015 days, .NET Core tooling was still in preview. If you were an early adopter of .NET Core, you know what a pain it was. With VS 2017 the tooling has come out of preview and moved to 1.0. In addition, you get MSBuild support.

When you open an existing .NET Core project written in Visual Studio 2015, you are presented with a “One-way upgrade” dialog as shown below. On clicking OK, it migrates your existing VS project to the newer format automatically.

02-Project Upgrade1.PNG
.NET Core One-way upgrade

Why this upgrade? Because VS 2017 no longer supports project.json and xproj; they have been replaced by csproj. The csproj file itself is no longer as complicated as it used to be. You can edit the csproj file and add/remove references without unloading the project, and the file even supports IntelliSense.

05-Edit CSProj.PNG
Simplified csproj file with intellisense 

 

Docker Support

VS 2017 supports containers out of the box. While creating a new project, you get an option to enable Docker support.

04-Enable Docker
Enable Docker Support
Important note: before you enable Docker support in your project, make sure you have Docker installed on your machine.

Otherwise, your build will fail with the error: "Microsoft.DotNet.Docker.CommandLineClientException: Unable to run 'docker-compose'. Verify that Docker for Windows is installed and running locally."

If you do not have the tools, you can enable Docker support later as well.

There are many other features which I have not listed here. For the complete list, please refer to the VS 2017 release notes.

Happy Coding!!!

async await best practices

async await is probably one of the most important features of C#. It has made the life of developers easier, helping them write clean code without callbacks, which are messy and difficult to understand.

However, if used incorrectly, async await can cause havoc. It can lead to performance issues and deadlocks which are hard to debug. I have burnt my hands due to incorrect use of async await in the past, and based on my limited experience I can tell you these issues will make your life hell; you will start questioning your very existence on this earth, or why you chose to be a developer 🙂

Below, I have listed common pitfalls in using async await. These are some of my learnings from working on problems that arose from its incorrect use. Much of this is inspired by Stephen Cleary’s blogs and Lucian Wischik’s “Six Essential Tips for Async” Channel 9 videos.

Here are the tips:

  • AVOID using Task.Result or Task.Wait(). They make the calls synchronous and block async code.
  • Make your calls async all the way down.
  • USE Task.Delay instead of Thread.Sleep.
  • Understand the difference between CPU-bound and IO-bound operations before using the Task Parallel Library (TPL).
  • USE Task.Run or Parallel.ForEach for CPU-bound operations.
  • USE await for IO-bound operations.
  • USE ConfigureAwait(false) in web APIs or library code. In a WPF application, do not use ConfigureAwait(false) in top-level methods.
  • AVOID using Task.Factory.StartNew. Use Task.Run instead.
  • DO NOT expose a synchronous method as asynchronous or vice versa. In other words, your library methods should expose the true nature of the method.
  • DO NOT use async void other than for top-level event handlers. ALWAYS return async Task.
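To make a few of these tips concrete, here is a small self-contained sketch (the method names are my own) showing async all the way, Task.Delay instead of Thread.Sleep, Task.Run for CPU-bound work, and plain await for IO-bound work:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

public class AsyncTips
{
    // IO-bound work: just await it; no Task.Run needed.
    // Task.Delay stands in for a real IO call (HTTP, file, database).
    public static async Task<string> LoadDataAsync()
    {
        await Task.Delay(100); // never Thread.Sleep in async code
        return "data";
    }

    // CPU-bound work: offload to the thread pool with Task.Run.
    public static Task<long> SumSquaresAsync(int n) =>
        Task.Run(() => Enumerable.Range(1, n).Select(i => (long)i * i).Sum());

    static void Main()
    {
        // The console entry point is the one place blocking is unavoidable (pre-C# 7.1).
        RunAsync().GetAwaiter().GetResult();
    }

    // "Async all the way": everything below awaits, never calls .Result or .Wait().
    private static async Task RunAsync()
    {
        var data = await LoadDataAsync();
        var sum = await SumSquaresAsync(1000);
        Console.WriteLine($"{data}, sum of squares = {sum}");
    }
}
```

Note that in a UI or ASP.NET context the Main/RunAsync split disappears: your event handler or action method is already async, so nothing ever blocks.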

I hope these tips help a few of you avoid common mistakes. Please suggest any other tips in the comments section.

Clean Visual Studio Solution

Today, every project we work on, big or small, simple or complex, small team or large, is probably on source control, be it Git, VSTS, SVN, etc.

Still, there are times when you need to share your code as a zip in an email or via a shared link. It could be because your customer, colleague or partner does not have access to your source control, or simply because you have not added your code to source control at all.

Now, if you just zip the solution folder and email it or share the link, you include folders like bin, obj and packages, and files like .suo and .user. These files are not required to build the solution, and they increase your zip size significantly. The solution is simple: delete all the files which are not required. However, what if you have over 50 projects in the solution? And what if you have to do this multiple times? It is too much manual effort.

I had a similar issue in one of my engagements recently. However, instead of spending hours on this manual work, I decided to automate the process with a small console app. The app deletes all the unwanted folders and files recursively from the solution. Per my requirements, I included the following folders and files in the deletion list:

Folders:  bin, obj, TestResults, packages
Files:  "*.vssscc", "*.ncrunchproject", "*.user", "*.suo"
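For reference, the core of such a clean-up can be sketched in a few lines. This is my own minimal version, not the exact code from the GitHub repo:

```csharp
using System;
using System.IO;

public class CleanSolution
{
    static readonly string[] FoldersToDelete = { "bin", "obj", "TestResults", "packages" };
    static readonly string[] FilePatternsToDelete = { "*.vssscc", "*.ncrunchproject", "*.user", "*.suo" };

    public static void Clean(string root)
    {
        // Delete matching folders anywhere under the solution root
        foreach (var folder in FoldersToDelete)
        {
            foreach (var dir in Directory.GetDirectories(root, folder, SearchOption.AllDirectories))
            {
                // A folder may already be gone if its parent was deleted earlier
                if (Directory.Exists(dir))
                {
                    Directory.Delete(dir, recursive: true);
                }
            }
        }

        // Delete matching files anywhere under the solution root
        foreach (var pattern in FilePatternsToDelete)
        {
            foreach (var file in Directory.GetFiles(root, pattern, SearchOption.AllDirectories))
            {
                File.Delete(file);
            }
        }
    }

    static void Main(string[] args) => Clean(args[0]);
}
```

Run it with the solution folder as the only argument before zipping; the real utility adds error handling and a configurable deletion list on top of this.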

The source code has been shared on GitHub here.

Alternatively, you can also download the executable directly from here.

I hope it helps some of you save time and be more productive. Please do provide your comments and feedback.

Dispose HttpClient or have a static instance?

Recently, I came across this blog post from ASP.NET Monsters, which talks about the correct usage of HttpClient.

The post talks about the issues related to disposing of the HttpClient object for each request. As per the post, creating and disposing HttpClient per call, like below, can lead to issues.

using (var httpClient = new HttpClient())
{
    await httpClient.GetAsync(new Uri("http://bing.net"));
}

I had been using the HttpClient object like this in almost all of my projects, so this post was an eye-opener for me.

Also, as per the patterns and practices documentation:

In a web application this technique is not scalable. Each user request results in the creation of a new HttpClient object. Under a heavy load, the web server can exhaust the number of sockets available resulting in SocketException errors.

From the above two articles I could conclude that these are the major issues with disposing of the HttpClient object for each request:

  • The execution time of an HttpClient request is higher. This is obvious, since we create and dispose of the object for every new request.
  • Disposing of the HttpClient object every time could potentially lead to SocketException. This is because disposing of the HttpClient object does not really close the TCP connection. Quoting from the ASP.NET Monsters post:

..the application has exited and yet there are still a bunch of these connections open to the Azure machine which hosts the ASP.NET Monsters website. They are in the TIME_WAIT state which means that the connection has been closed on one side (ours) but we’re still waiting to see if any additional packets come in on it because they might have been delayed on the network somewhere

I wanted to test the performance improvement when we create a static instance of HttpClient. The aim of my test was ONLY to see the difference in execution time between the two approaches when we open multiple connections. To test this, I wrote the following code:


namespace HttpClientTest
{
    using System;
    using System.Net.Http;

    class Program
    {
        private static readonly int _connections = 1000;
        private static readonly HttpClient _httpClient = new HttpClient();

        private static void Main()
        {
            TestHttpClientWithStaticInstance();
            TestHttpClientWithUsing();
        }

        private static void TestHttpClientWithUsing()
        {
            try
            {
                for (var i = 0; i < _connections; i++)
                {
                    // A new client is created and disposed for every request
                    using (var httpClient = new HttpClient())
                    {
                        var result = httpClient.GetAsync(new Uri("http://bing.com")).Result;
                    }
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception);
            }
        }

        private static void TestHttpClientWithStaticInstance()
        {
            try
            {
                for (var i = 0; i < _connections; i++)
                {
                    // The single static client is reused for every request
                    var result = _httpClient.GetAsync(new Uri("http://bing.com")).Result;
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception);
            }
        }
    }
}

 

For testing:

  • I ran the code with 10, 100, 1,000 and 10,000 connections.
  • Ran each test 3 times and took the average.
  • Executed ONLY one method at a time

My machine configuration was:

machineconfiguration
System Configuration

Below are the results from the Visual Studio Instrumentation Profiling:

Method                             Connections   Time (s)   Difference (s)   Improvement (%)
TestHttpClientWithUsing                     10        2.6
TestHttpClientWithStaticInstance            10        1.8                1                44
TestHttpClientWithUsing                    100        408
TestHttpClientWithStaticInstance           100        240              168                70
TestHttpClientWithUsing                   1000        241
TestHttpClientWithStaticInstance          1000        160               81                51
TestHttpClientWithUsing                  10000       2456
TestHttpClientWithStaticInstance         10000       1630              826                51

As you can see, the execution time with the static instance is far less than with a disposable object.

Does it mean we should use a static client object all the time? It depends.

One of the issues people have found with a static HttpClient instance is that it does not pick up DNS changes; refer to this article. For a .NET Framework application, there is a workaround: you can set ConnectionLeaseTimeout via the ServicePoint object, as mentioned in the post.
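For the .NET Framework workaround, the sketch below shows the idea; the endpoint URI and the 60-second lease are illustrative values, not recommendations from the linked post:

```csharp
using System;
using System.Net;
using System.Net.Http;

class Program
{
    // One shared client for the lifetime of the application
    private static readonly HttpClient _httpClient = new HttpClient();

    static void Main()
    {
        var endpoint = new Uri("http://bing.com"); // the endpoint your app talks to

        // Close idle connections to this endpoint after 60 seconds, so a fresh
        // connection (and therefore a fresh DNS lookup) happens periodically.
        var servicePoint = ServicePointManager.FindServicePoint(endpoint);
        servicePoint.ConnectionLeaseTimeout = 60 * 1000; // value in milliseconds

        var response = _httpClient.GetAsync(endpoint).GetAwaiter().GetResult();
        Console.WriteLine(response.StatusCode);
    }
}
```

This keeps the socket-reuse benefit of the static client while bounding how stale a cached connection can get.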

However, for ASP.NET Core you may be out of luck, as per this GitHub issue, since a similar property does not seem to exist.

I hope this post helps you take an informed decision in your projects. Please share your thoughts in the comments section.

Edit csproj project files programmatically

In my current engagement, we have more than 80 projects in a solution (don’t ask me why :)). Recently, as per quality guidelines, we needed to make a few changes to each project.
For example: treat warnings as errors, enable code analysis for each project, sign the assembly, etc.

I realized doing it manually could take me an entire day, so I spent a few minutes creating a small script in C# to save time. Here is the code snippet:

using System.Collections.Generic;
using System.Linq;
using Microsoft.Build.Evaluation;

class Program
{
    static void Main(string[] args)
    {
        var projectList = new List<string>()
        {
            // Your project file paths
        };

        foreach (var project in projectList)
        {
            var projectCollection = new ProjectCollection();
            var proj = projectCollection.LoadProject(project);

            // Select the Debug configuration
            var debugPropertyGroup = proj.Xml.PropertyGroups.First(
                e => e.Condition == " '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ");
            debugPropertyGroup.SetProperty("TreatWarningsAsErrors", "true");
            debugPropertyGroup.SetProperty("RunCodeAnalysis", "true");

            // Select the Release configuration
            var releasePropertyGroup = proj.Xml.PropertyGroups.First(
                e => e.Condition == " '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ");
            releasePropertyGroup.SetProperty("TreatWarningsAsErrors", "true");
            releasePropertyGroup.SetProperty("RunCodeAnalysis", "true");

            // Sign the assembly with a strong name key
            proj.SetProperty("SignAssembly", "true");
            proj.SetProperty("AssemblyOriginatorKeyFile", "test.pfx");

            // Save the changes back to the csproj file
            proj.Save();
        }
    }
}

Hope it helps save some time for some of you 🙂

Update: I have created a command-line utility for this and added it to GitHub. The utility accepts different sets of arguments based on the operations to be performed. Please refer to the ReadMe.md on GitHub for more details.

Azure Load Balancer on Virtual Machines

Recently, I needed to scale out my web app hosted on a virtual machine. After a few hiccups and learnings, I was finally able to load balance my web app across multiple virtual machines. I have documented the steps in this blog.

To scale out, I used the following configuration:

  • Two Azure virtual machines running Windows Server 2012 R2, hosting the web app on IIS
  • Azure Load Balancer (published by Microsoft)

1. Create Resource Group

We will start by creating a resource group. The VMs and load balancer will be created in the same resource group, which helps keep things together.

On the Azure portal, go to Resource groups -> click Add -> Provide Resource group name, select Subscription and Resource Group location -> Click Create

Create_Resource_Group
Create Resource group

2. Create Azure VM1 (First Virtual Machine)

Select New -> Virtual Machines -> Select VM Windows Server 2012 R2 -> Select deployment model to Resource Manager -> Click Create.

You will be taken to Create Virtual Machine wizard.

a. Basics – Configure basic settings

Provide the name of Virtual Machine, Server User name, password -> Select the Resource Group created in the previous step -> click OK

Create-Azure-VM1.png
Create Virtual Machine

b. Size – Choose virtual machine size

Select the size of Virtual Machine and click Ok

c. Settings – Configure optional features

Under Settings, Create new Virtual network.

Create-VM-Virtual Network
Create new Virtual network

Next, create new Availability set

Create-new-AvailablitySet
Create New availability set

d. Summary

Under Summary, validate the details and Click Ok to create the Virtual machine.

3. Create Azure VM 2 (Second Virtual Machine)

Create the second virtual machine in the same way as the first. Make sure to select the same Virtual Network and Availability Set.

Create-Azure-VM2
Create Second Virtual Machine with same Virtual network and Availability set

4. Publish Web App to Azure VM

The next step is to publish the web app to the Azure VMs. You can follow the steps explained in my previous blog post. For this demo, I have deployed a simple web app which displays the machine name of the server.

Virtual Machine1: test-vm1

test-vm1.PNG
Web App hosted on Virtual Machine 1

Virtual Machine2: test-vm2

test-vm2.PNG
Web App hosted on Virtual Machine 2

5. Configure Load Balancer

a. Create Load Balancer

From the Azure portal, Click New -> Search Load Balancer -> Select Load Balancer with publisher as Microsoft -> Click Create

Create-LoadBalancer.PNG
Create new Load balancer – 1

Next, in the Create load balancer wizard, Provide the Name of Load Balancer  -> Create new IP address -> Provide the Resource group same as created in earlier step -> Click Create

Create new Load balancer – 2

b. Add Probe

Once the load balancer has been created, select it -> Click Settings -> Select Probes -> Click Add -> Provide the name of the probe, keep the port number as 80 -> Click OK.

Add probe

c. Add backend pool

Select the load balancer -> Click Settings -> Select Backend pools -> Click Add -> Provide the name of the backend pool -> Select the Availability set created while creating the Virtual Machines -> Choose both Virtual Machines -> Click Select and OK

Add backend pool

d. Add Load balancing rule

Select the load balancer -> Click Settings -> Select Load balancing rules -> Click Add -> Provide the name of the load balancing rule -> Select the backend pool created in the previous step -> Click OK

Add Load balancing rule

e. Configure DNS Name for Load balancer

Select the Public IP Address -> Click Settings -> Click Configuration -> Provide the DNS name label -> Click Save

Configure DNS name for Load balancer

That’s it! We are done. Navigate to the DNS address you provided for the load balancer and you will be directed to one of the Azure VMs. To verify the load balancing, shut down one of the machines and watch all requests get redirected to the second Azure Virtual Machine.

Load Balancer URL
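The load balancer configuration above (public IP with DNS label, backend pool, probe, and rule) can likewise be sketched with the Azure CLI. Again, all resource names are placeholders carried over from the earlier sketch; note that the VM network interfaces still need to be associated with the backend pool (via az network nic ip-config address-pool add) for traffic to actually reach them.

```shell
# Public IP with a DNS label for the load balancer frontend
az network public-ip create --resource-group demo-rg --name demo-ip \
    --dns-name demo-lb-dns

# Load balancer with a frontend IP configuration and a backend pool
az network lb create --resource-group demo-rg --name demo-lb \
    --public-ip-address demo-ip \
    --frontend-ip-name demo-fe --backend-pool-name demo-be

# Health probe on port 80, so unhealthy VMs are taken out of rotation
az network lb probe create --resource-group demo-rg --lb-name demo-lb \
    --name demo-probe --protocol tcp --port 80

# Rule forwarding HTTP (port 80) to the backend pool, tied to the probe
az network lb rule create --resource-group demo-rg --lb-name demo-lb \
    --name demo-rule --protocol tcp \
    --frontend-port 80 --backend-port 80 \
    --frontend-ip-name demo-fe --backend-pool-name demo-be \
    --probe-name demo-probe
```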

Gulp with Visual Studio

Recently, I worked on an ASP.NET 4.6 MVC 5 project that didn’t have anything MVC about it. 🙂

It was a Single Page Application built on TypeScript, Knockout JS, and CSS. Since we didn’t have any server-side code, we decided to give Gulp a try to concatenate and minify the JS and CSS files. Below, I have explained the steps to configure gulp in an ASP.NET 4.6 application with Visual Studio 2015. I created a sample application to explain the steps.

Disclaimer: This is my first attempt at using gulp in any of my projects. I do not claim to follow all the best practices. If you see anything I could have done differently, please feel free to comment and share your ideas 🙂

 

Install and set up Node.js

  • Download and Install node.js v4.4.4 from here
  • Once Node.js is installed, open Node.js Command Prompt
  • Execute the command npm install --global gulp-cli. This will install gulp globally.
  • Optionally, execute the command npm install -g npm3. This will install npm3 alongside npm. The reason I installed npm3 alongside npm is that npm3 installs dependencies in a flat structure, while npm v2 installs them in a nested, hierarchical structure. On Windows machines this can be an issue, as the full file path could exceed 255 characters.

Configure Gulp in Visual Studio

  • Right click your Visual Studio project and click New Item. Search for the NPM template and select NPM Configuration File. This will add a package.json file to your Visual Studio project.
  • Similarly, add a Gulp Configuration File from the installed templates. This will add a file named gulpfile.js to the Visual Studio project. You will add your gulp tasks to concatenate and minify files in this file.
  • Now, add a JavaScript file named gulp.config.js. This is a configuration file that will later be used by our gulpfile. It contains configuration settings like the html source, the js/css files that need to be minified, the name of the minified js file, etc. A sample gulp.config.js file is shown below:
    module.exports = function () {
        var config = {
            htmlSource: [
                "index.html" /* The HTML file */
            ],
            js: [
                "./lib/js/Javascript1.js",
                "./lib/js/Javascript2.js" /* List of js files in the order they appear in index.html */
            ],
            minJs: "js-min.js", /* Minified JS file name */
            minJsDestination: "./lib/js/", /* Minified JS file destination */
            css: [
                "./lib/css/*.css" /* List of css files that need to be minified */
            ],
            minCss: "css-min.css", /* Minified CSS file name */
            minCssDestination: "./lib/css/" /* Minified CSS file destination */
        };
        return config;
    };
    
  • Now, open the html file where the minified js and css files need to be injected. Remove the css file references from the html file and add the code below in their place.
    <!-- inject:css -->
    <!-- endinject -->
    

    Similarly, remove the js file references and add the code below to your html.

    <!-- inject:js -->
    <!-- endinject -->
    
  • Next, we need to install the gulp packages to concatenate and minify the js/css files and then inject the minified files into the html. To achieve this, the following packages need to be installed through NPM:
    • gulp – The streaming build system
    • gulp-csso – Minifies CSS
    • gulp-uglify – Minifies JS
    • gulp-inject – Injects file references into html
    • gulp-concat – Concatenates files
  • Open the Node.js Command Prompt, go to your project folder, and execute the command below.
    npm3 install --save-dev gulp gulp-csso gulp-uglify gulp-inject gulp-concat

    As you can see, we used npm3 to install the gulp packages locally so that the dependencies are installed in a flat structure. And instead of installing each package one by one, we used a single command to install all the packages together.

  • Once the packages are installed, go to package.json and you will see the installed packages under devDependencies. You will also notice a folder node_modules created under your project where all the packages are installed.
  • Next, we will start writing gulp tasks in gulpfile.js. Go to gulpfile.js and add the required packages you need for your tasks.
    var gulp = require("gulp");
    var concat = require("gulp-concat");
    var uglify = require("gulp-uglify");
    var minify = require("gulp-csso");
    var inject = require("gulp-inject");
    var config = require("./gulp.config")();
    
  • Now, add the gulp tasks to minify js and css files.
    // Task to minify JS
    gulp.task("min-all-js", function () {
        return gulp
            .src(config.js)
            .pipe(concat(config.minJs))
            .pipe(uglify())
            .pipe(gulp.dest(config.minJsDestination));
    });
    // Task to minify CSS
    gulp.task("min-all-css", function () {
        return gulp
            .src(config.css)
            .pipe(concat(config.minCss))
            .pipe(minify())
            .pipe(gulp.dest(config.minCssDestination));
    });
    

    To verify the above tasks, go to View -> Other Windows -> Task Runner Explorer. In the Task Runner Explorer window you will see the two tasks you created. Run these tasks from the window and verify that they create the minified files in the destination folders.

  • Now, add the gulp tasks to inject the minified js and css files into the html source.
    // Task to inject minified JS
    gulp.task("inject-min-js", function () {
        return gulp
            .src(config.htmlSource)
            .pipe(inject(gulp.src(config.minJsDestination + config.minJs)))
            .pipe(gulp.dest("."));
    });
    // Task to inject minified CSS
    gulp.task("inject-min-css", function () {
        return gulp
            .src(config.htmlSource)
            .pipe(inject(gulp.src(config.minCssDestination + config.minCss)))
            .pipe(gulp.dest("."));
    });
    
  • With this, we have created all the required gulp tasks. The next step is to run these tasks at build time. Go to Task Runner Explorer, right click the task min-all-css, and select Bindings -> Before Build. This tells Visual Studio to run this task before the build starts. Similarly, bind the tasks min-all-js, inject-min-css, and inject-min-js. Make sure these tasks are added in the correct order.

    Gulp task binding
  • That’s it. Now, just build the application and you will see the gulp tasks run before the build starts. If you go to your html source file, you will see the minified css and js files injected into it.
    <!DOCTYPE html>
    <html>
    <head>
    <title>Gulp Test</title>
    <meta charset="utf-8" />
    <!-- inject:css -->
    <link rel="stylesheet" href="/lib/css/css-min.css">
    <!-- endinject -->
    </head>
    <body>
    	Gulp Test
    </body>
    </html>
    <!-- inject:js -->
    /lib/js/js-min.js
    <!-- endinject -->
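As a convenience, the four tasks above could also be chained into a single task so that one command runs the whole pipeline. This is a sketch, not part of the original walkthrough, using the gulp 3.x dependency-array syntax that matches the gulpfile style above (gulp.start is gulp 3 specific and was removed in gulp 4, where gulp.series/gulp.parallel replace it); the task names are the ones defined earlier.

```javascript
var gulp = require("gulp");

// "build" runs the two minify tasks first (gulp 3 runs the dependency
// array in parallel), then starts the inject tasks once the minified
// files exist on disk.
gulp.task("build", ["min-all-js", "min-all-css"], function () {
    gulp.start("inject-min-js", "inject-min-css");
});
```

Binding just this one task to Before Build in Task Runner Explorer would then replace the four individual bindings, and the ordering concern goes away.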
    