Motherhood is a special feeling, coupled with divine emotions and the strongest bond you share with your child. But being a mother is not an easy job. It is probably one of the toughest jobs, and it comes with new responsibilities. Unlike a corporate job, a mother's job is not limited to weekdays. As a mother you need to be ready 24x7, no matter what condition you may be in.
A year back we were blessed with a baby boy. The feeling when you hold your baby for the first time cannot be described in words. From the day your child is born, the transformation from a woman to a super mom begins.
Life changes completely when you have a baby. You have to be fearless and confident while at the same time you are nervous and scared.
I’m back to writing blogs after a small break. Things have changed for me personally since I made a career move recently. I hope to be more regular going forward.
In my new team, we follow a Kanban flow. Our application is customer facing and, due to the nature of our project, we need to release frequently. This means we need to deliver features faster, and for an external application the code quality and UI need to be of the highest quality.
While the quality of the product has been the centrepiece of all my projects, I have lately been discussing our approach towards testing with my manager. The project complexity is growing each day and we are adding new features to the application at a very fast pace, but the team size remains constant. Hence, there has been an increased need to change the way we look at our development and especially our QA process.
In modern app development it has become increasingly important to write ready-to-ship code. While no one can claim to write bug-free code, ensuring the quality of the product is no longer just the responsibility of QA. The responsibility of writing reliable code lies equally with the development team. In the past, dev and QA used to be different departments in an organization. But this trend is changing. Dev and QA are now part of one project and one team. The line between a developer and a QA is diminishing. As a developer you should be prepared to test your own and your peers’ code.
In this blog, I’m not trying to explain the meaning of each level of testing; I believe every developer knows it already. I’m trying to explain their significance in app development and how they can help you develop features faster, prevent bugs and improve quality. I also try to give examples from my own experiences.
Unit testing is the first and probably the most important pillar of resilient code. Many times it may seem that unit testing is imposed by the organization rather than the developer understanding its true value. The result is bad unit tests. I feel a bad unit test case is even worse than no unit test case. If as a developer you are writing tests just to get the unit test coverage, or because someone else has asked you to do so, you need to think again. Writing a unit test case just to achieve code coverage gives the team the false assurance that each line of code has been unit tested. Code coverage is a necessary but not a sufficient condition. The quality of a unit test case is more important than code coverage. Additionally, the name of a unit test case should reflect the intention of the test case. An example of a bad unit test case name is PopTest; a good unit test case name could be PopStackWithNoItemShouldFail.
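To make this concrete, here is a minimal sketch of that good test name in action. The framework choice (xUnit) and the Stack&lt;int&gt; under test are my assumptions for illustration, not from the post:

```csharp
using System;
using System.Collections.Generic;
using Xunit;

public class StackTests
{
    // Bad: a name like "PopTest" says nothing about the scenario or the expectation.
    // Good: the name states the operation, the scenario and the expected outcome.
    [Fact]
    public void PopStackWithNoItemShouldFail()
    {
        var stack = new Stack<int>();

        // Popping an empty stack should throw; the test name documents that intention.
        Assert.Throws<InvalidOperationException>(() => stack.Pop());
    }
}
```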
While I do not have any strong preference for Test Driven Development (TDD), I do feel it is one of the better ways to unit test your application. Even if you do not follow TDD, your unit tests should be written after you have developed a part of your logic/feature rather than after writing the entire functionality/feature.
Consider this scenario: you are working on a web application with a significantly complex flow. Now, you need to make a small change, but one which has a huge impact on your entire application. If you test these changes directly from the web app, you would end up spending a significant amount of time validating all the basic test cases like null checks, input validation etc. And even then, you cannot be sure you have covered all the test cases. That’s where unit test cases come in handy. You just test your unit of work. You can test all the flows/conditions in your code much faster. Once you are satisfied with the changes, you may proceed to test your app at a higher level. Remember, even if it is time consuming to write unit test cases, it is much cheaper to make changes to the code at this stage.
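As a sketch of what “testing just your unit of work” can look like, here is a small piece of validation logic tested directly, with no browser involved. The validator and its rules are hypothetical, and xUnit is an assumed framework:

```csharp
using Xunit;

// A hypothetical unit of work: input validation pulled out of the web flow.
public static class OrderValidator
{
    public static bool IsValid(string customerId, int quantity) =>
        !string.IsNullOrWhiteSpace(customerId) && quantity > 0;
}

public class OrderValidatorTests
{
    // Null checks and input validation covered in milliseconds,
    // instead of clicking through the whole web application.
    [Theory]
    [InlineData(null, 1, false)]
    [InlineData("", 1, false)]
    [InlineData("C42", 0, false)]
    [InlineData("C42", 3, true)]
    public void IsValidCoversTheBasicCases(string id, int qty, bool expected) =>
        Assert.Equal(expected, OrderValidator.IsValid(id, qty));
}
```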
Integration testing is the second line of defence for your code. Consider the same example as before: your small change has an indirect impact on some other module of your web application, and you may not even be aware of it. Since you have tested your part, you happily deliver the code to your tester. But what do you get? Regression bugs. The cost of a regression bug is very high in the software development lifecycle. Integration testing helps you prevent those regression bugs. Many times, developers do not understand the importance of integration tests. It might look like a waste of time to write and maintain them. But it is not. Integration tests give the team confidence that the code changes they have made do not break other functionality.
You can also bind the business requirements to your code (BDD) through integration tests. SpecFlow is one of the widely used frameworks in .NET to define business behaviour in your code. It bridges the gap between business and technology. The granularity of test cases can vary from project to project. In one of my previous projects, we used to have a test case for every acceptance criterion (AC) of a User Story. This helped us validate and confirm the requirements well before the code reached testing. If your integration test cases are good enough and complete, you can significantly reduce the chances of functional bugs at the time of QA.
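For illustration, a SpecFlow binding might look like the sketch below. The scenario and step definitions are hypothetical, but the [Binding]/[Given]/[When]/[Then] attributes are the standard SpecFlow way of tying a Gherkin scenario to C# code:

```csharp
using TechTalk.SpecFlow;
using Xunit;

// Gherkin scenario (in a .feature file) that these steps bind to:
//   Scenario: Withdrawing more than the balance is rejected
//     Given an account with balance 100
//     When I withdraw 150
//     Then the withdrawal should be rejected

[Binding]
public class WithdrawalSteps
{
    private decimal _balance;
    private bool _rejected;

    [Given(@"an account with balance (\d+)")]
    public void GivenAnAccountWithBalance(decimal balance) => _balance = balance;

    [When(@"I withdraw (\d+)")]
    public void WhenIWithdraw(decimal amount) => _rejected = amount > _balance;

    [Then(@"the withdrawal should be rejected")]
    public void ThenTheWithdrawalShouldBeRejected() => Assert.True(_rejected);
}
```

Because the feature file reads like plain English, business stakeholders can review the acceptance criteria directly against the executable tests.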
UI Automation Testing
Automation testing, I believe, has never got enough love from developers. It is not considered part of development, and many times it is left to QA. The QAs too keep it restricted to Build Verification Tests/Smoke Tests. Team dynamics have changed rapidly as organizations move towards Agile. The number of testers per developer is lower than in the traditional development lifecycle. Yet the complexity of code has increased significantly, and you are supposed to deliver high quality, ready-to-ship code. UI automation plays a very important role in this. I believe the responsibility of writing the UI automation tests for their features should lie with developers. This may seem overkill initially, and it may look like it reduces team velocity. But again, once your initial set up is done it will be much faster to write the automation tests. Selenium and Coded UI are two UI automation frameworks in .NET. In our team, while we already have very good UI automation, we are undergoing a shift in our approach to automation testing. We intend to automate complex workflows and scenarios, making them data-driven using an external data source like Excel. Additionally, we plan to run the UI test cases as part of the Pull Request build (we use Git as source control). This essentially means the code cannot be merged to master if the UI test cases are failing. This also means our master branch is always ready to ship. Well… almost!! 🙂
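As a sketch of what a developer-written UI automation test can look like with Selenium in C#, consider the snippet below. The URL and element IDs are made up for illustration, and it assumes ChromeDriver is available on the machine:

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginSmokeTest
{
    static void Main()
    {
        using (IWebDriver driver = new ChromeDriver())
        {
            // Drive the real UI, exactly as a user would.
            driver.Navigate().GoToUrl("https://example.com/login");
            driver.FindElement(By.Id("username")).SendKeys("testuser");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("signin")).Click();

            // Fail the build if we did not land where we expected.
            if (!driver.Title.Contains("Dashboard"))
                throw new Exception("Login smoke test failed");
        }
    }
}
```

Hooking a handful of such tests into the Pull Request build is what keeps a broken UI from ever reaching master.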
The last pillar of testing, before the code is ready to ship or moves to UAT, is manual testing. If you have followed all the previous steps thoroughly, then manual testing becomes more of a validation. If a tester is able to find the most basic or obvious bugs in the application, then there is something wrong with the process you are following as a team.
Personally, I prefer the QA to be part of the development team and not a separate department. When QA comes from a different department within an organization, their end goals can be different. Instead of developers and QAs debating the priority or severity of a bug, or whether it is a regression or an existing bug, the conversation needs to move to what it takes to provide a stable, quality build.
Please share your experiences of testing in your projects. 🙂
Warning: The content of this post is highly opinionated. Please exercise caution. :)
A bit of Background
Back in 2012-13, the term DevOps was unheard of in my team. We were still living in the dark ages, where a developer would develop the code, then write unit test cases more to make the code coverage look good than to actually “test” the code. The code would then go through a review and be queued for check-in. Some magic, which the developer never really cared about, would then tell us if the code was successfully checked in or had failed. That magic was the TFS XAML build. The build was managed by a dedicated team, and we never felt it was part of the development process. At the end of a sprint iteration, a huge code base would go to the test team, who would start testing the code based on their test cases and log bugs to the dev team. More often than not, a developer would talk to a tester only during that phase. This resulted in hundreds of bugs in a big team. Some of these bugs would be invalid due to a lack of understanding of the requirements; others would be basic bugs which should have been handled by the developer in the first place. All the requirements, tasks, bugs, issues etc. were logged in TFS. But there was no dashboard, so every developer and dev lead had to be an Excel expert to track the items effectively. It is said ignorance is bliss, and the same was true for us. We were happy in our shell, delivering code in this fashion, and never felt the need for change.
Around the same time, I got an opportunity to work at a customer location. This is when I got a rude shock. The customer used tools which I had not heard of before. They used Confluence to collaborate, JIRA to track work items, TeamCity for builds, and a source control that was not TFS :). Suddenly, I realized that development was more than just writing code. I fell in love with these tools immediately. I realized there is more in the world than TFS.
However, there was still one big pain point with all these tools and applications. We were using just too many tools and plugins – TortoiseSVN, JIRA, TeamCity, some other tool for deployment etc. Each one of these tools looked different and worked differently.
Visual Studio Team Services
Microsoft was late to join the party, and TFS was surely lacking the features that Agile projects and modern development practices demand. In 2013, Microsoft introduced Visual Studio Online (VSO). At the time of launch, VSO appeared to be nothing more than TFS on the cloud. However, Microsoft started adding more and more features to VSO. Many of these features were “inspired” by competing tools. Over a period of time, VSO was aptly renamed Visual Studio Team Services (VSTS). Today, VSTS has become a one-stop shop for all our development.
VSTS has solved one big problem: fragmentation. Its features today are on par with, if not ahead of, JIRA, TeamCity and Octopus. With VSTS, as a user you do not need multiple accounts; a single account gives you access to everything you need for software development. Additionally, VSTS now offers full support for more and more non-Microsoft services. That means you are a first-class citizen irrespective of your editor, source control (TFVC, Git) and technology. If you do not want to use VSTS for everything, you still have the choice to pick what you want. For example, you can choose Octopus over VSTS Release Management and push packages from a VSTS build to Octopus directly. It works seamlessly.
To know more about features of VSTS go here. What tools do you use for your development?
When you open VS, the first thing you notice is the Start Page. In VS 2015, the Start Page provided a useful way to open recent projects and catch up on tech news. But that is where it stopped.
VS 2017 has totally revamped the Start Page experience. It is visually more appealing and offers more options to improve developer’s productivity.
Visual Design and Layout
VS 2017 has an improved visual design and layout, as you can see below.
The first thing you immediately notice is that the news section no longer takes up more than three-quarters of the page. It is in fact a toggle panel on the right-hand side. This makes a lot of sense to me: as a developer, when I open Visual Studio, most of the time I just want to start my development. I do not open VS just for news. 🙂
The recent section now offers more options to the developer. Users now have the option to pin their favorite projects. The projects are arranged neatly in chronological order, making the list far more intuitive and easy to use.
One important thing to note here is that your Visual Studio settings go with you wherever you sign in. For example, let us say you created a project and committed it to source control on one machine. If you then sign in and open VS 2017 on some other machine, that project will be available in your recent list. You can simply click the project, and VS will give you the option to set up the source control on your new machine.
The Open section now lets you check out your project from source control directly from the Start Page. This section will show not only Visual Studio Team Services but also other third-party source control providers like GitHub.
In addition, you can open a Project/Solution, Folder or Web Site directly from here.
The New Project section saves you a few clicks by showing your recent project templates to help you quickly start development. Again, your recent templates move with you to any machine where you sign in.
In addition to this you can search project templates directly from here. You can search by template name, type or language.
As mentioned earlier, the developer news section now comes as a toggle on the right of the Start Page. This section will be visible or hidden based on your preference. That is, you can keep it either open or collapsed, as per your choice.
If the news section is collapsed, you do not need to worry about missing out on news. You will be notified by a badge on the top right corner of the toggle icon.
Hope this helps you improve your productivity. Please do share your comments. Happy coding!!!
Visual Studio 2017 was launched with much fanfare yesterday (March 7, 2017). I started exploring Visual Studio 2017 from the RC, and I must say, after using VS 2017 I felt I had been living in the stone age. It is so much better.
In short, Visual Studio 2017 is equivalent to the following:
VS 2017 = VS 2015 + loads of 3rd party plugins (like NCrunch, a few ReSharper features etc.) + improved tooling, performance, experience, productivity etc.
Below, I have tried to highlight the major features in Visual Studio 2017. This is not an exhaustive list, but only a few features which have helped me improve my productivity significantly.
Faster Installation – Choose your workload
The first thing you will notice while installing Visual Studio is that you get to choose what you want. Are you just a web developer? No worries, you can install just the web workload. In fact, you can even choose which individual components you want to install within that workload. That means less space and a faster install time.
Faster Load Time – Increase Productivity
One of the major pain points with previous versions of VS was that it used to take an eternity to load a solution with a lot of projects. You could actually launch VS, go for coffee, come back, and it would still be loading. But VS 2017 loads these projects very fast. So you no longer need to go for coffee; just open the project, start coding and leave home early 🙂
From my own experience: my solution contained around 92 projects. Opening it in Visual Studio 2015 could take anywhere between 2 to 3 minutes or even more. Sometimes it would hang and I would need to start all over again. Worse still, with ReSharper installed like me, I could go for lunch along with coffee and come back before it loaded.
With VS 2017, the same solution opens in less than 30 seconds. No more coffee breaks!
C# 7.0 Support
Visual Studio 2017 comes with C# 7.0. C# 7.0 has introduced a lot of new features like tuples, switch-case improvements, pattern matching, local functions, etc.
// Example of the tuple feature of C# 7.0
public (int sum, int difference) GetSumAndDifference(int a, int b)
{
    return (a + b, a - b);
}
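Similarly, here is a quick sketch of the pattern matching and local function features (my own example, not from the release notes):

```csharp
using System;

class PatternMatchingDemo
{
    static string Describe(object o)
    {
        // Local function (new in C# 7.0): visible only inside Describe.
        string Parity(int n) => n % 2 == 0 ? "even" : "odd";

        // Switch with type patterns and a when clause (also new in C# 7.0).
        switch (o)
        {
            case int i when i > 0: return $"positive {Parity(i)} int";
            case string s: return $"string of length {s.Length}";
            case null: return "null";
            default: return "something else";
        }
    }

    static void Main()
    {
        Console.WriteLine(Describe(4));       // positive even int
        Console.WriteLine(Describe("hello")); // string of length 5
    }
}
```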
Live Unit Testing
If you have used NCrunch, you probably know what I’m talking about. Visual Studio 2017 brings in support for Live Unit Testing. VS 2017 runs unit test cases in the background as you write your code. This means you simply write code and get instant feedback on which unit test cases are failing or passing because of your change. Ideal for TDD. It makes you smarter and increases your productivity.
To enable Live Unit Testing in your solution go to Test -> Live Unit Testing -> Start
Important Note: If you are using a .NET Core project, then you are out of luck. .NET Core does not support Live Unit Testing currently.
You will get following error in output window:
"Live Unit Testing does not yet support .NET Core"
Improvements in .NET Core Tooling
Back in the VS 2015 days, .NET Core tooling was still in preview. If you were an early adopter of .NET Core, you would know what a pain it was. With VS 2017 the tooling has come out of preview and moved to 1.0. In addition, you now have MSBuild support.
Once you open an existing .NET Core project written in Visual Studio 2015, you are shown a “One-way upgrade” dialog, as seen below. On clicking OK, it will migrate your existing VS project to the newer version automatically.
Why this upgrade? Because VS 2017 no longer supports project.json and xproj; they have been replaced by csproj. The csproj file itself is no longer as complicated as before. You can edit the csproj file and add/remove references without unloading the project. The csproj file also supports IntelliSense.
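For a sense of how much simpler it is, a minimal new-style csproj can be as small as the sketch below (an illustrative example, not from any specific project; the package name and versions are placeholders):

```xml
<Project Sdk="Microsoft.NET.Sdk">

  <PropertyGroup>
    <TargetFramework>netcoreapp1.1</TargetFramework>
  </PropertyGroup>

  <ItemGroup>
    <PackageReference Include="Newtonsoft.Json" Version="9.0.1" />
  </ItemGroup>

</Project>
```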
Docker Support
VS 2017 supports containers out of the box. While creating a new project, you get an option to enable Docker support.
Important Note: Before you enable Docker support in your project, make sure you have Docker installed on your machine.
Else, your build will fail with error: "Microsoft.DotNet.Docker.CommandLineClientException: Unable to run 'docker-compose'. Verify that Docker for Windows is installed and running locally."
If you do not have the tools, you can enable Docker support later as well.
There are many other features which I have not listed here. For the complete list, please refer to the VS 2017 release notes.
async await is probably one of the most important features of C#. It has made the life of developers easier. It helps developers write clean code without callbacks, which are messy and difficult to understand.
However, if used incorrectly, async await can cause havoc. It can lead to performance issues and deadlocks which are hard to debug. I have burnt my hands with incorrect use of async await in the past, and based on my little experience I can tell you these issues will make your life hell; you will start questioning your very existence on this earth, or why you chose to be a developer 🙂
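A classic example of that havoc is blocking on an async call with .Result. The sketch below is my own illustration of the risky pattern and a safer alternative:

```csharp
using System;
using System.Threading.Tasks;

class AsyncAwaitPitfall
{
    static async Task<int> GetValueAsync()
    {
        await Task.Delay(100);
        return 42;
    }

    static void Main()
    {
        // Risky: .Result blocks the calling thread. In environments with a
        // synchronization context (classic ASP.NET, WinForms, WPF) the awaited
        // continuation needs that same thread back, and you deadlock.
        // var value = GetValueAsync().Result;

        // Safer: stay async all the way up and bridge only at the entry point.
        // (A console app has no synchronization context, so this is safe here.)
        var value = MainAsync().GetAwaiter().GetResult();
        Console.WriteLine(value); // 42
    }

    static async Task<int> MainAsync() => await GetValueAsync();
}
```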
Today, every project we work on – big or small, easy or complex, small team or large team – is probably on source control. The source control, of course, can be Git, VSTS, SVN etc.
Still, there are times when you need to share your code as a zip in an email or a shared link. It could be because your customer, colleague or partner does not have access to your source control, or simply because you have not added your code to source control at all.
Now, if you just zip the solution folder and email it or share the link, you would include folders like bin, obj and packages, or files like .suo, .user etc. These files are not required to build the solution, and they increase your zip file size significantly. The solution is simple: delete all the files which are not required. However, what if you have over 50 projects in the solution? And what if you have to do this activity multiple times? It is too much manual effort.
I had a similar issue in one of my engagements recently. However, instead of spending hours on this manual work, I decided to automate the process by creating a small console app. The app deletes all the unwanted folders and files recursively from the solution. I have included the following folders and files, as per my requirements, in the deletion list:
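A minimal sketch of such a console app is below. The folder and extension lists are illustrative; adjust them to match your own deletion list:

```csharp
using System;
using System.IO;

class SolutionCleaner
{
    // Illustrative deletion lists; extend as per your needs.
    static readonly string[] FoldersToDelete = { "bin", "obj", "packages", ".vs" };
    static readonly string[] ExtensionsToDelete = { ".suo", ".user" };

    static void Main(string[] args)
    {
        // Solution root: first argument, or the current directory.
        var root = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();

        foreach (var name in FoldersToDelete)
            foreach (var dir in Directory.GetDirectories(root, name, SearchOption.AllDirectories))
                if (Directory.Exists(dir)) // may already be gone with a deleted parent
                    Directory.Delete(dir, recursive: true);

        foreach (var ext in ExtensionsToDelete)
            foreach (var file in Directory.GetFiles(root, "*" + ext, SearchOption.AllDirectories))
                File.Delete(file);

        Console.WriteLine("Cleaned: " + root);
    }
}
```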
In a web application this technique is not scalable: each user request results in the creation of a new HttpClient object. Under heavy load, the web server can exhaust the number of available sockets, resulting in SocketException errors.
From the above two articles I could conclude that these are the major issues with disposing the HttpClient object for each request:
The execution time of the HttpClient request is higher. This is obvious, since we create and dispose the object every time for a new request.
Disposing the HttpClient object every time could potentially lead to SocketException. This is because disposing the HttpClient object does not really close the TCP connection. Quoting from the ASP.NET Monsters post:
..the application has exited and yet there are still a bunch of these connections open to the Azure machine which hosts the ASP.NET Monsters website. They are in the TIME_WAIT state which means that the connection has been closed on one side (ours) but we’re still waiting to see if any additional packets come in on it because they might have been delayed on the network somewhere
I wanted to test the performance improvement when we create a static instance of HttpClient. The aim of my test was ONLY to see the difference in execution time between the two approaches when we open multiple connections. To test this, I wrote the following code:
private static readonly int _connections = 1000;
private static readonly HttpClient _httpClient = new HttpClient();

private static void Main()
{
    // Run ONLY one of these at a time when profiling.
    TestHttpClientWithUsing();
    TestHttpClientWithStaticInstance();
}

private static void TestHttpClientWithUsing()
{
    for (var i = 0; i < _connections; i++)
    {
        try
        {
            using (var httpClient = new HttpClient())
            {
                var result = httpClient.GetAsync(new Uri("http://bing.com")).Result;
            }
        }
        catch (Exception exception) { Console.WriteLine(exception.Message); }
    }
}

private static void TestHttpClientWithStaticInstance()
{
    for (var i = 0; i < _connections; i++)
    {
        try
        {
            var result = _httpClient.GetAsync(new Uri("http://bing.com")).Result;
        }
        catch (Exception exception) { Console.WriteLine(exception.Message); }
    }
}
I ran the code with 10, 100 and 1000 connections.
Ran each test 3 times to find the average.
Executed ONLY one method at a time.
My machine configuration was:
Below are the results from the Visual Studio Instrumentation Profiling:
(Results table: No. of Connections | Time in Seconds | Difference in Seconds | Performance Improvement in %)
As you can see, the execution time for the static instance is far lower than for the disposable object.
Does it mean we should use a static client object all the time? It depends.
One of the issues people have found with a static HttpClient instance is that it does not respect DNS changes. Refer to this article. For a .NET application, there is a workaround available where you can set ConnectionLeaseTimeout using the ServicePoint object, as mentioned in the post.
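For the full .NET Framework, that workaround looks roughly like this (the endpoint URL is illustrative; ConnectionLeaseTimeout is in milliseconds):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        var endpoint = new Uri("http://example.com"); // illustrative endpoint

        // Force connections to this endpoint to be re-established periodically,
        // so DNS changes are eventually picked up despite the static HttpClient.
        var servicePoint = ServicePointManager.FindServicePoint(endpoint);
        servicePoint.ConnectionLeaseTimeout = 60 * 1000; // 60 seconds
    }
}
```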
However, for ASP.NET Core you may be out of luck, as per this issue on GitHub, since a similar property does not seem to exist.
Hope this post helps you take an informed decision in your projects. Please share your thoughts in the comments section.
In my current engagement, we have more than 80 projects in a solution (don’t ask me why :)). Recently, as per quality guidelines, we needed to make a few changes to each project.
For example: Treat warnings as errors, enable code analysis for each project, sign assembly etc.
I realized doing it manually could take me an entire day, so I spent a few minutes creating a small script in C# to save my time. Here is the code snippet:
static void Main(string[] args)
{
    var projectList = new List<string>()
    {
        // Your project file paths
    };

    foreach (var project in projectList)
    {
        var projectCollection = new ProjectCollection();
        var proj = projectCollection.LoadProject(project);

        // Select Debug configuration
        var debugPropertyGroup = proj.Xml.PropertyGroups.FirstOrDefault(
            e => e.Condition == " '$(Configuration)|$(Platform)' == 'Debug|AnyCPU' ");

        // Select Release configuration
        var releasePropertyGroup = proj.Xml.PropertyGroups.FirstOrDefault(
            e => e.Condition == " '$(Configuration)|$(Platform)' == 'Release|AnyCPU' ");

        // Treat warnings as errors in both configurations
        debugPropertyGroup?.SetProperty("TreatWarningsAsErrors", "true");
        releasePropertyGroup?.SetProperty("TreatWarningsAsErrors", "true");

        // Sign assembly with a strong name key
        proj.SetProperty("SignAssembly", "true");

        proj.Save();
    }
}
Hope it helps save some time for some of you 🙂
Update: I have created a command-line utility for the solution and added it to GitHub. The utility accepts different sets of arguments based on the operations to be performed. Please refer to the ReadMe.md on GitHub for more details.
Recently, I needed to scale out my web app hosted on virtual machines. After a few hiccups and learnings, I was finally able to load balance my web app hosted over multiple virtual machines. I have tried to document the steps in this blog.
To scale out I used following configuration:
Two Azure virtual machines running Windows Server 2012 R2, hosting the web app on IIS
Azure Load Balancer (by Microsoft)
1. Create Resource Group
We will start by creating a Resource Group. The VMs and Load Balancer will be created in the same Resource Group, which helps us keep things together.
On the Azure portal, go to Resource groups -> click Add -> Provide Resource group name, select Subscription and Resource Group location -> Click Create
2. Create Azure VM1 (First Virtual Machine)
Select New -> Virtual Machines -> Select Windows Server 2012 R2 -> Set the deployment model to Resource Manager -> Click Create.
You will be taken to Create Virtual Machine wizard.
a. Basics – Configure basic settings
Provide the name of the Virtual Machine, the server user name and password -> Select the Resource Group created in the previous step -> Click OK
b. Size – Choose virtual machine size
Select the size of the Virtual Machine and click OK
c. Settings – Configure optional features
Under Settings, Create new Virtual network.
Next, create new Availability set
Under Summary, validate the details and click OK to create the Virtual Machine.
3. Create Azure VM 2 (Second Virtual Machine)
Create the second virtual machine in the same way as the first. Make sure to select the same Virtual Network and Availability Set.
4. Publish Web App to Azure VM
The next step is to publish the web app to the Azure VMs. You can follow the steps explained in my previous blog post. For this demo, I have deployed a simple web app which displays the machine name of the server.
Virtual Machine1: test-vm1
Virtual Machine2: test-vm2
5. Configure Load Balancer
a. Create Load Balancer
From the Azure portal, click New -> Search for Load Balancer -> Select the Load Balancer with publisher Microsoft -> Click Create
Next, in the Create load balancer wizard, provide the name of the Load Balancer -> Create a new IP address -> Provide the same Resource group as created in the earlier step -> Click Create
b. Add Probe
Once the load balancer has been created, select it -> Click Settings -> Select Probes -> Click Add -> Provide the name of the probe, keep the port number as 80 -> Click OK.
c. Add backend pool
Select the load balancer -> Click Settings -> Select Backend pools -> Click Add -> Provide the name of the backend pool -> Select the Availability set created while creating the Virtual Machines -> Choose both Virtual Machines -> Click Select and OK
d. Add Load balancing rule
Select the load balancer -> Click Settings -> Select Load balancing rules -> Click Add -> Provide the name of the load balancing rule -> Select the backend pool created in the previous step -> Click OK
e. Configure DNS Name for Load balancer
Select the Public IP Address -> Click Settings -> Click Configuration -> Provide the DNS name label -> Click Save
That’s it! We are done. Navigate to the DNS address you provided for the load balancer and you will be directed to one of the Azure VMs. To verify the load balancing, shut down one of the machines and see all the requests being redirected to the second Azure Virtual Machine.