Castle Windsor: Change Lifestyle

In my current project, we use Castle Windsor for Dependency Injection. I must admit that before this project I had never used or even heard of it. I have had a love-hate relationship with Castle Windsor, with more hate than love initially. However, over a period of time, I realized that Castle Windsor is probably among the best IoC containers out there. It is extremely flexible and powerful.

Recently, I got stuck with an issue where the Integration Test Cases of our application started failing. It took me some time to find the root cause of the issue.

Why were Integration Test Cases failing?

We use ASP.NET Web API Self-Host for our integration test cases and had shared code to register components with the WindsorContainer. A few of these components had the PerWebRequest lifestyle. Web API Self-Host did not like the PerWebRequest lifestyle and started throwing Internal Server Error (500).

Of course, the easiest solution was to use separate code to register the components for the integration test cases and register the PerWebRequest components as Singleton there. However, that would mean we would need two identical copies of the same code, which would be a maintenance headache. While searching for a solution I came across this article, which talked about IContributeComponentModelConstruction.

IContributeComponentModelConstruction is the easiest way of extending and modifying the Windsor container. You can implement this interface to override a component's lifestyle.

Here is the usage:
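A minimal sketch of such a contributor (the class name, and the choice to rewrite every PerWebRequest component to Singleton for the self-hosted tests, are illustrative assumptions based on the scenario above):

```csharp
using Castle.Core;
using Castle.MicroKernel;
using Castle.MicroKernel.ModelBuilder;

public class SingletonLifestyleContributor : IContributeComponentModelConstruction
{
    public void ProcessModel(IKernel kernel, ComponentModel model)
    {
        // The integration tests self-host outside IIS, so components that
        // were registered as PerWebRequest are switched to Singleton.
        if (model.LifestyleType == LifestyleType.PerWebRequest)
        {
            model.LifestyleType = LifestyleType.Singleton;
        }
    }
}
```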

Now, you can plug the above code into your container by adding the line of code below while registering your Windsor container:
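A hedged sketch of that wiring (SingletonLifestyleContributor stands in for whatever IContributeComponentModelConstruction implementation you wrote):

```csharp
using Castle.Windsor;

// Add the contributor before registering components so the lifestyle
// override applies to everything added to the container afterwards.
var container = new WindsorContainer();
container.Kernel.ComponentModelBuilder.AddContributor(new SingletonLifestyleContributor());
```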

Hope this helps you save some debugging effort 🙂


Microsoft Bot Framework – Part 3: Add Channels to Bots

This is part 3 (and possibly the final part) of the Microsoft Bot Framework series. If you have jumped right here, you may want to read part 1 and part 2 of my blog first. In this post, I will explain how to add different channels to your Microsoft Bot.

Microsoft Bot Framework provides the option to add a variety of channels like Web, Bing, Teams, Facebook Messenger, Slack, Telegram, Twilio, etc. I will focus on the Web, Skype and Facebook Messenger channels here.

You can add these channels directly from Azure Portal or through Bot Framework Portal. I personally found the experience of managing Bots from the Bot Framework Portal more convenient.

  • Go to Bot Framework Portal -> My Bots and select the Bot you created in the last step.
  • Under the “Channels” link you will find that the Skype and Web Chat channels are added to the Bot by default.
Default Channels

Connect to Skype Channel

  • To test the Skype Bot, select the Skype link and you will be redirected to the Add Skype Bot to Contacts screen. Click on the button to add the Bot to your Skype account.
Add to Contacts
  • Once you have added the Bot, you can chat with it the same way as shown in the last post.
  • To configure the Skype channel, click on the Edit button. This gives you an option to embed Skype directly into your website. Additionally, you can also update other settings like messaging, calling, and groups.
  • Once you have configured your Bot, you can go ahead and publish the Skype Bot to distribute it to an unlimited number of users. Your request will first be reviewed, and if you adhere to the review guidelines your Bot will be published.
  • After your Bot has been published, any Skype user can add it to their contacts to connect to you or your organization.
Add Bot to Skype Contact

Connect to Web Chat

  • Like Skype, Web Chat is another channel that is added by default when you create your Bot.
  • Configuring Web Chat is very easy. Click on the Edit button and you will be presented with the HTML code that needs to be embedded into your website. Adding the web chat is as easy as adding an <iframe> to your website.
Embed bot as webchat to your website

WARNING: Embedding the web chat control in your website using the secret is NOT SECURE as your secret key gets exposed with the HTML. Please exercise caution before using this option. Read more about connecting to Web Chat channel here.

Connect to Facebook Messenger

  • As a pre-requisite to adding the Facebook Messenger channel to your Bot, you will need a Facebook Page and a Facebook App.
  • Select the Facebook Messenger option to add the Facebook Messenger channel to your Bot.
Add Facebook Messenger Channel
  • Next, you will need to provide the Facebook Page Id, Facebook App Id, Facebook App Secret and Page Access Token. Please follow this link to understand how to do so in depth.
Your Facebook Messenger Credentials
  • You will now need to provide the Callback URL and Verify Token to Facebook. The above link also explains this in detail, so I will skip this step.
  • Once you have filled in all the required information, you will need to publish your app for review. Please go through the submission guidelines and submit your app for review.

That’s it. You now have the FAQs of your organization available as chat on your website, Skype and Facebook Messenger in a few simple steps.

What’s next? The possibilities with Microsoft Bot Framework are enormous. What I have shown in these 3 posts is just a small preview of what you can achieve in a matter of hours. For more details on Bot Framework, please go through their docs.

Hope you liked the post. Please keep the feedback coming 🙂

Microsoft Bot Framework – Part 2: Publish Bot Service to Azure

This blog post is Part 2 of how to create a chat bot with Microsoft Bot Framework that can answer FAQs on your website. This is in continuation of my previous post, where I explained how to create a QnA service using Microsoft QnA Maker. You can read Part 1 of my post here.

In this post, I will demonstrate how to deploy the service we previously created to Azure. The only pre-requisite is to have an Azure account. You can sign up for free.

  • Log in to your Microsoft Azure account and search for “Microsoft Bot Framework”; you should get one result, “Bot Service”. Select the Bot Service to proceed and click Create. At the time of writing, Bot Service is still in preview.
Azure-BotService
  • Next, provide the name of your app, select a hosting plan and click Create. The app service will be created within a few minutes.
  • Next, select the service created in the last step and you will get a screen similar to the one below.


  • Follow the steps to register your bot with a Microsoft App Id.
  • In the next step, choose the language you are comfortable developing your bot in. Currently, C# and Node.js are supported.

Note: Your initial code will be auto-generated when you create a bot, so you do not need to jump into the code right away.

  • Next, you get an option to choose a template. Select the Question and Answer template and click Create.


  • In the next step, you can integrate the QnA service created in Part 1. Sign in with your QnA Maker account credentials, select your existing knowledge base from the drop-down and click OK.



  • This will provision your bot and deploy your Bot Service. It can take a few minutes to complete this step.
  • Next, you will be asked how you want to work with your code. You can choose to edit in the online editor, download the source code as a ZIP, or set up continuous deployment from your source control.
  • That’s it, you can now test your Bot by clicking the “Test” button on the top right.



In the next post, I will explain how to connect to different channels like Skype, Web Chat, Facebook Messenger etc. from your Bot Framework.

Stay Tuned! 🙂

One year of Motherhood

Deepika Vijay Blog

"A woman is a soul who carries a soul within her"


Motherhood, a special feeling coupled with divine emotions, sharing the strongest bond with your child. But being a mother is not an easy job. It is probably one of the toughest jobs, and it comes with new responsibilities. Unlike any corporate job, the job of a mother is not limited to weekdays. As a mother you need to be ready 24x7, no matter what condition you may be in.

A year back we were blessed with a baby boy. The feeling when you hold your baby for the first time cannot be described in words. From the day your child is born, the transformation begins from a woman to a super mom.

Life changes completely when you have a baby. You have to be fearless and confident while at the same time you are nervous and scared.



Testing in Modern App Development

I’m back to writing blogs after a small break. Things have changed for me personally, since I made a career move recently. I hope to be more regular going forward.

In my new team, we follow the Kanban flow. Our application is customer facing, and due to the nature of our project we need to release frequently. This means we need to deliver features faster, and since it is an external-facing application, the code quality and UI need to be of the highest quality.

While the quality of the product has been the centrepiece of all my projects, I have been in discussions with my manager lately about our approach towards testing. The project complexity is growing each day and we are adding new features to the application at a very fast pace, but the team size remains constant. Hence, there has been an increasing need to change the way we look at our development and especially our QA process.

In modern app development it has become increasingly important to write ready-to-ship code. While no one can claim to write bug-free code, ensuring the quality of the product is no longer just the responsibility of QA. The responsibility of writing reliable code lies equally with the developer team. In the past, Dev and QA used to be different departments in an organization. But this trend is changing. Dev and QA are now part of one project, one team. The line between a developer and a QA is diminishing. As a developer you should be prepared to test your own and your peer’s code.

In this blog, I’m not trying to explain the meaning of each level of testing; I believe every developer knows that already. I’m trying to explain their significance in app development and how they can help you develop features faster, prevent bugs and improve quality. I also try to give examples from my own experience.

Unit Testing

Unit testing is the first and probably the most important pillar of resilient code. Many times it may seem that unit testing is imposed by the organization rather than the developer understanding its true value. The result is bad unit tests. I feel a bad unit test case is even worse than no unit test case. If as a developer you are writing unit tests just to get the code coverage up or because someone else has asked you to do so, you need to think again. Writing a unit test case just to achieve code coverage gives the team false assurance that each line of code has been unit tested. Code coverage is a necessary but not a sufficient condition. The quality of a unit test case is more important than code coverage. Additionally, the name of a unit test case should reflect the intention of the test case. An example of a bad unit test case name is PopTest; a good unit test case name could be PopStackWithNoItemShouldFail.
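To make the naming point concrete, here is a hedged sketch (assuming NUnit and the built-in Stack<T>; any test framework would do):

```csharp
using System;
using System.Collections.Generic;
using NUnit.Framework;

[TestFixture]
public class StackTests
{
    // Bad name: says nothing about the scenario or the expected outcome.
    [Test]
    public void PopTest()
    {
        // ...
    }

    // Good name: states the scenario (empty stack) and the expectation (fail).
    [Test]
    public void PopStackWithNoItemShouldFail()
    {
        var stack = new Stack<int>();

        // Popping an empty Stack<T> throws InvalidOperationException.
        Assert.Throws<InvalidOperationException>(() => stack.Pop());
    }
}
```

A failing PopStackWithNoItemShouldFail tells you exactly what broke without even opening the test code; a failing PopTest tells you nothing.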

While I do not have any strong preference for Test Driven Development (TDD), I do feel it is one of the better ways to unit test your application. Even if you do not follow TDD, your unit tests should be written after you have developed a part of your logic or feature rather than after writing the entire functionality.

Consider this scenario: you are working on a web application with a significantly complex flow. Now, you need to make a small change, but one which has a huge impact on your entire application. If you test these changes directly from the web app, you will end up spending a significant amount of time validating all the basic test cases like null checks, input validation, etc. And even then, you cannot be sure you have covered all the test cases. That’s where unit test cases come in handy. You just test your unit of work. You can test all the flows and conditions in your code far faster. Once you are satisfied with the changes, you may proceed to test your app at a higher level. Remember, even if it is time-consuming to write unit test cases, it is far cheaper to make changes to the code at this stage.

Integration Testing

Integration testing is the second level of defence for your code. Consider the same example as before: your small change has an indirect impact on some other module of your web application and you may not even be aware of it. Since you have tested your part, you happily deliver the code to test. But what do you get? Regression bugs. The cost of a regression bug is very high in the software development lifecycle. Integration testing helps you prevent those regression bugs. Many times, developers do not understand the importance of integration tests. It might look like a waste of time to write and maintain them. But it is not. Integration tests give the team confidence that the code changes they have made do not break other functionality.

You can also bind the business requirements to your code (BDD) through integration tests. SpecFlow is one of the widely used frameworks in .NET to define business behaviour in your code. It bridges the gap between business and technology. The granularity of test cases can vary from project to project. In one of my previous projects, we used to have a test case for every acceptance criterion (AC) of a User Story. This helped us validate and confirm the requirements well before they reached testing. If your integration test cases are good enough and complete, then you can significantly reduce the chances of functional bugs at the time of QA.
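As a rough illustration of how SpecFlow binds a business requirement to code, here is a hedged sketch (the feature, step names and the stand-in login check are all invented for this example):

```csharp
// Login.feature (Gherkin, readable by the business side):
//   Scenario: Login fails with a wrong password
//     Given a registered user "alice"
//     When she logs in with password "wrong"
//     Then she should see a login error

using NUnit.Framework;
using TechTalk.SpecFlow;

[Binding]
public class LoginSteps
{
    private string _user;
    private bool _loginSucceeded;

    [Given(@"a registered user ""(.*)""")]
    public void GivenARegisteredUser(string user) => _user = user;

    [When(@"she logs in with password ""(.*)""")]
    public void WhenSheLogsInWithPassword(string password) =>
        // Stand-in for the real call into the application under test.
        _loginSucceeded = password == "correct";

    [Then(@"she should see a login error")]
    public void ThenSheShouldSeeALoginError() => Assert.IsFalse(_loginSucceeded);
}
```

The feature file stays readable for the product owner, while the bindings execute against the real application code.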

UI Automation Testing

Automation testing, I believe, has never got enough love from developers. It is not considered part of development and many times it is left to the QA. The QAs too keep it restricted to Build Verification Tests/Smoke Tests. The dynamics of teams have changed rapidly as organizations move towards Agile. The number of testers per developer is lower than in the traditional development lifecycle. Yet the complexity of code has increased significantly and you are supposed to deliver high-quality, ready-to-ship code. UI automation plays a very important role in this. I believe the responsibility of writing the UI automation tests for their features should lie with developers. This may seem overkill initially and it may look like it reduces team velocity. But again, once your initial set-up is done it will be much faster to write the automation tests. Selenium and Coded UI are two UI automation frameworks in .NET. In our team, while we have very good UI automation already, we are undergoing a shift in our approach to automation testing. We intend to automate complex workflows and scenarios, making them data-driven using external data sources like Excel. Additionally, we plan to run the UI test cases as part of the Pull Request build (we use Git as source control). This essentially means the code cannot be merged to master if the UI test cases are failing. This also means our master branch is always ready to ship. Well… almost!! 🙂
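For a flavour of what a Selenium UI test looks like in C#, here is a hedged sketch (assumes the Selenium.WebDriver and ChromeDriver NuGet packages; the URL, element ids and expected page title are placeholders):

```csharp
using System;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

class LoginSmokeTest
{
    static void Main()
    {
        // ChromeDriver drives a real browser instance.
        using (IWebDriver driver = new ChromeDriver())
        {
            driver.Navigate().GoToUrl("https://example.com/login");

            // Fill in the form and submit, just like a user would.
            driver.FindElement(By.Id("username")).SendKeys("testuser");
            driver.FindElement(By.Id("password")).SendKeys("secret");
            driver.FindElement(By.Id("login")).Click();

            // Fail the smoke test if the post-login page did not load.
            if (!driver.Title.Contains("Dashboard"))
                throw new Exception("Login smoke test failed");
        }
    }
}
```

Wired into a Pull Request build, a handful of such tests is enough to block a merge that breaks a critical workflow.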

Manual Testing

The last pillar of testing before the code is ready to ship or move to UAT is manual testing. If you have followed all the previous steps thoroughly, then manual testing becomes more of a validation. If a tester is able to find the most basic or obvious bugs in the application, then there is something wrong with the process you are following as a team.

Personally, I prefer the QA to be part of the development team and not a separate department. When the QA comes from a different department within an organization, their end goals can be different. Instead of developers and QAs having discussions on the priority or severity of a bug, or whether it is a regression or an existing bug, the conversation needs to move to what it takes to provide a stable, quality build.

Please share your experiences of testing in your projects. ūüôā


Visual Studio Team Services = Git/TFS + JIRA + Team City + Octopus

Warning: The content of this post is highly opinionated. Please exercise caution. :)

A bit of Background

Back in 2012-13, the term DevOps was unheard of in my team. We were still living in the dark ages, where a developer would develop the code, then write unit test cases more to make the code coverage look good than to actually “test” the code. The code would then go through a review and be queued for check-in. Some magic, which the developer never really cared about, would then tell us if the code was successfully checked in or had failed. That magic was the TFS XAML build. The build was managed by a dedicated team and we never felt it was part of the development process. At the end of a sprint iteration, a huge code base would go to the test team and they would start testing the code based on their test cases and log bugs to the dev team. More often than not, a developer would talk to a tester only during that phase. This resulted in hundreds of bugs in a big team. A few of these bugs would be invalid due to a lack of understanding of the requirements; a few others would be basic bugs which should have been handled by the developer in the first place. All the requirements, tasks, bugs, issues, etc. were logged in TFS. But there was no dashboard, so every developer and dev lead had to be an expert in Excel to track the items effectively. It is said that ignorance is bliss, and the same was true for us. We were happy in our shell, delivering code in this fashion, and never felt the need for change.

Around the same time-frame, I got an opportunity to work at a customer location. This is when I got a rude shock. The customer used tools which I had not heard of before. They used Confluence to collaborate, JIRA to track work items, Team City for builds, and a source control that was not TFS :). Suddenly, I realized that development was more than just writing code. I fell in love with these tools immediately. I realized there was more in the world than TFS.

However, there was still one big pain point with all these tools and applications. We were using just too many tools and plugins – TortoiseSVN, JIRA, Team City, some other tool for deployment, etc. Each one of these tools looked different and worked differently.

Visual Studio Team Services

Microsoft was late to join the party, and TFS was surely lacking the features that Agile projects and modern development practices demand. In 2013, Microsoft introduced Visual Studio Online (VSO). At the time of launch, VSO appeared to be nothing more than TFS on the cloud. However, Microsoft started adding more and more features to VSO. Many of these features were “inspired” by competing tools. Over a period of time, VSO was aptly renamed Visual Studio Team Services (VSTS). Today, VSTS has become a one-stop shop for all our development.

VSTS has solved one big problem: fragmentation. Its features today are on par with, if not ahead of, JIRA, Team City and Octopus. With VSTS, as a user you do not need multiple accounts; your single account gives you access to everything you need for software development. Additionally, VSTS now offers full support for more and more non-Microsoft services. That means you are a first-class citizen irrespective of your editor, source control (TFVC, Git) and technology. If you do not want to use VSTS for everything, you can still choose just the pieces you want. For example, you can choose Octopus over VSTS Release Management and push packages from the VSTS build to Octopus directly. It works seamlessly.

To know more about the features of VSTS, go here. What tools do you use for your development?

Visual Studio 2017 – The best IDE ever

Visual Studio 2017 was launched with much fanfare yesterday (March 7, 2017). I started exploring Visual Studio 2017 from the RC and I must say, after using VS 2017 I felt I had been living in the Stone Age earlier. It is so much better.

In short, Visual Studio 2017 is equivalent to the following:

VS 2017 = VS 2015 + loads of 3rd-party plugins (like NCrunch, a few ReSharper features, etc.) + improved tooling, performance, experience, productivity, etc.

Below, I have tried to highlight the major features of Visual Studio 2017. This is not an exhaustive list, but only a few features which have helped me improve my productivity significantly.

Faster Installation – Choose your workload

The first thing that you will notice while installing Visual Studio is that you get to choose what you want. Are you just a web developer? No worries, you can install just the web workload. In fact, you can even choose which individual components you want to install within that workload. That means less disk space and a faster install time.

01-VS Installer
Visual Studio 2017 installer

Faster Load Time – Increase Productivity

One of the major pain points with previous versions of VS was that they used to take an eternity to load a solution with a lot of projects. You could actually launch VS, go for a coffee, come back, and it would still be loading. But VS 2017 actually loads these projects very fast. So, you no longer need to go for coffee; just open the project, start coding and leave home early 🙂

From my own experience: my solution contained around 92 projects. Opening it in Visual Studio 2015 could take anywhere between 2 to 3 minutes or even more. Sometimes it would hang and I would need to start all over again. Worse still, with ReSharper installed like me, I could go for lunch along with coffee and come back before it loaded.


VS 2015 - Preparing solution1
VS 2015 – Preparing Solution (You can go for a coffee)

With VS 2017, the same solution opens in less than 30 seconds. No more coffee breaks!

C# 7.0 Support

Visual Studio 2017 comes with C# 7.0. C# 7.0 introduces a lot of new features like tuples, switch-case improvements, pattern matching, local functions, etc.

// Example of the tuple feature of C# 7.0
public (int sum, int difference) GetSumAndDifference(int a, int b)
{
    return (a + b, a - b);
}

You can get more details on C# 7.0 here.
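For a taste of two of the other features mentioned above, here is a hedged sketch combining pattern matching and a local function (the method and its inputs are invented for illustration):

```csharp
using System;

class Program
{
    static void Describe(object o)
    {
        // Local function: a C# 7.0 feature, scoped to this method only.
        string Parity(int n) => n % 2 == 0 ? "even" : "odd";

        // Pattern matching in switch: match on type, capture a variable,
        // and optionally add a guard clause with `when`.
        switch (o)
        {
            case int i:
                Console.WriteLine($"int ({Parity(i)}): {i}");
                break;
            case string s when s.Length > 0:
                Console.WriteLine($"non-empty string: {s}");
                break;
            default:
                Console.WriteLine("something else");
                break;
        }
    }

    static void Main()
    {
        Describe(42);      // int (even): 42
        Describe("hello"); // non-empty string: hello
    }
}
```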

Live Unit Testing

If you have used NCrunch, you probably know what I’m talking about. Visual Studio 2017 brings in support for live unit testing. VS 2017 runs unit test cases in the background as you write your code. It means you simply write code and get instant feedback on which unit test cases are failing or passing due to your change. Ideal for TDD. It makes you smarter and increases your productivity.

To enable Live Unit Testing in your solution go to Test -> Live Unit Testing -> Start

03-Live Unit Testing
Start Live Unit Testing
06-Live Unit test Example
Example of Live Unit Testing
Important Note: If you are using a .NET Core project, then you are out of luck. .NET Core does not support Live Unit Testing currently.
You will get the following error in the output window:
"Live Unit Testing does not yet support .NET Core"

Improvements in .NET Core Tooling

Back in the VS 2015 days, .NET Core tooling was still in preview. If you were an early adopter of .NET Core, you know what a pain it was. With VS 2017 the tooling has come out of preview and moved to 1.0. In addition to this, you have MSBuild support.

When you open an existing .NET Core project written in Visual Studio 2015, you will see a “One-way upgrade” dialog as shown below. On clicking OK, it will migrate your existing VS project to the newer version automatically.

02-Project Upgrade1.PNG
.NET Core One-way upgrade

Why this upgrade? This is because VS 2017 no longer supports project.json and xproj; they are replaced by csproj. The csproj itself is no longer as complicated as before. You can edit the csproj file and add/remove references without unloading the project. The csproj file also supports IntelliSense.

05-Edit CSProj.PNG
Simplified csproj file with IntelliSense


Docker Support

VS 2017 supports containers out of the box. While creating a new project, you get an option to enable Docker support.

04-Enable Docker
Enable Docker Support
Important Note: Before you enable Docker support in your project, make sure you have Docker installed on your machine.

Else, your build will fail with error: "Microsoft.DotNet.Docker.CommandLineClientException: Unable to run 'docker-compose'. Verify that Docker for Windows is installed and running locally."

If you do not have the tools, you can enable Docker support later as well.

There are many other features which I have not listed down here. For the complete list please refer to VS 2017 release notes.

Happy Coding!!!

async await best practices

async await is probably one of the most important features of C#. It has made the life of developers easy. It helps developers write clean code without callbacks, which are messy and difficult to understand.

However, if used incorrectly, async await can cause havoc. It can lead to performance issues and deadlocks which are hard to debug. I have burnt my hands due to incorrect use of async await in the past, and based on my little experience I can tell you these issues will make your life hell; you will start questioning your very existence on this earth, or why you chose to be a developer 🙂

I have tried to list the common pitfalls while using async await below. These are some of my learnings while working on problems that arise due to incorrect use of async await. Much of this is inspired by Stephen Cleary’s blogs and Lucian Wischik’s “Six Essential Tips for Async” Channel 9 videos.

Here are the tips:

  • AVOID using Task.Result or Task.Wait(). They make the calls synchronous and block async code.
  • Make your calls async all the way.
  • USE Task.Delay instead of Thread.Sleep.
  • Understand the difference between CPU-bound and IO-bound operations before using the Task Parallel Library (TPL).
  • USE Task.Run or Parallel.ForEach for CPU-bound operations.
  • USE await for IO-bound operations.
  • USE ConfigureAwait(false) in Web APIs or library code. In a WPF application, do not use ConfigureAwait(false) in the top-level methods.
  • AVOID using Task.Factory.StartNew. Use Task.Run.
  • DO NOT expose a synchronous method as asynchronous or vice versa. In other words, your library method should expose the true nature of the method.
  • DO NOT use async void other than for top-level events. ALWAYS return async Task.
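The first two tips can be sketched in one small class (a hedged illustration; the URL handling is left to the caller):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PageFetcher
{
    private static readonly HttpClient _client = new HttpClient();

    // BAD: .Result blocks the calling thread, and under a synchronization
    // context (WPF, classic ASP.NET) it can deadlock with the continuation.
    public static string GetPageBlocking(Uri url) =>
        _client.GetStringAsync(url).Result;

    // GOOD: async all the way, with ConfigureAwait(false) since this is
    // library-style code that does not need to resume on the original context.
    public static async Task<string> GetPageAsync(Uri url) =>
        await _client.GetStringAsync(url).ConfigureAwait(false);
}
```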

I hope these tips help a few of you and help avoid common mistakes. Please suggest any other tips in the comments section.

Clean Visual Studio Solution

Today, every project we work on, big or small, simple or complex, small team or large, is probably on source control. The source control, of course, can be Git, VSTS, SVN, etc.

Still, there are times when you need to share your code as a zip in an email or a shared link. It could be because your customer, colleague or partner does not have access to your source control, or simply that you have not added your code to source control at all.

Now, if you just zip the solution folder and email it or share the link, you would include folders like bin, obj and packages, or files like .suo and .user. These files are not required to build the solution, and they increase your zip file size significantly. The solution is simple: delete all the files which are not required. However, what if you have over 50 projects in the solution? And what if you have to perform this activity multiple times? It is too much manual effort.

I had a similar issue in one of my engagements recently. However, instead of spending hours on this manual work, I decided to automate the process by creating a small console app. The app deletes all the unwanted folders and files recursively from the solution. I have included the following folders and files in the deletion list, as per my requirements:

Folders: bin, obj, TestResults, packages
Files: "*.vssscc", "*.ncrunchproject", "*.user", "*.suo"
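The core of such a cleanup app can be sketched as below (a minimal, hedged version of the idea; the folder and file lists mirror the ones above, and you should run it against a copy first):

```csharp
using System;
using System.IO;

class Program
{
    static readonly string[] Folders = { "bin", "obj", "TestResults", "packages" };
    static readonly string[] FilePatterns = { "*.vssscc", "*.ncrunchproject", "*.user", "*.suo" };

    static void Main(string[] args)
    {
        // Root of the solution: first argument, or the current directory.
        var root = args.Length > 0 ? args[0] : Directory.GetCurrentDirectory();

        // Delete unwanted files recursively.
        foreach (var pattern in FilePatterns)
            foreach (var file in Directory.GetFiles(root, pattern, SearchOption.AllDirectories))
                File.Delete(file);

        // Delete unwanted folders recursively; a folder may already be gone
        // if it was nested inside one deleted earlier, hence the Exists check.
        foreach (var name in Folders)
            foreach (var dir in Directory.GetDirectories(root, name, SearchOption.AllDirectories))
                if (Directory.Exists(dir))
                    Directory.Delete(dir, recursive: true);

        Console.WriteLine($"Cleaned {root}");
    }
}
```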

The source code has been shared on GitHub here.

Alternatively, you can also download the executable directly from here.

Hope it helps some of you save time and be more productive. Please do provide your comments and feedback.

Dispose HttpClient or have a static instance?

Recently, I came across this blog post from ASP.NET Monsters, which talks about the correct use of HttpClient.

The post talks about issues related to disposing of the HttpClient object for each request. As per the post, using HttpClient like this can lead to issues:

using (var httpClient = new HttpClient())
{
    await httpClient.GetAsync(new Uri(""));
}

I have been using the HttpClient object like this for almost all of my projects. Hence, this post was an eye opener for me.

Also, as per the patterns and practices documentation:

In a web application this technique is not scalable. Each user request results in the creation of a new HttpClient object. Under a heavy load, the web server can exhaust the number of sockets available resulting in SocketException errors.

From the above two articles I could conclude that these are the major issues with disposing of the HttpClient object for each request:

  • The execution time of the HttpClient request is higher. This is obvious, since we create and dispose of the object every time for a new request.
  • Disposing of the HttpClient object every time could potentially lead to a SocketException. This is because disposing of the HttpClient object does not really close the TCP connection. Quoting from the ASP.NET Monsters post:

..the application has exited and yet there are still a bunch of these connections open to the Azure machine which hosts the ASP.NET Monsters website. They are in the TIME_WAIT state which means that the connection has been closed on one side (ours) but we’re still waiting to see if any additional packets come in on it because they might have been delayed on the network somewhere

I wanted to test the performance improvement when we create a static instance of HttpClient. The aim of my test was ONLY to see the difference in execution time between the two approaches when we open multiple connections. To test this, I wrote the following code:

namespace HttpClientTest
{
    using System;
    using System.Net.Http;

    class Program
    {
        private static readonly int _connections = 1000;
        private static readonly HttpClient _httpClient = new HttpClient();

        private static void Main()
        {
            // Only one of the two methods was executed per test run.
            TestHttpClientWithUsing();
            TestHttpClientWithStaticInstance();
        }

        private static void TestHttpClientWithUsing()
        {
            try
            {
                for (var i = 0; i < _connections; i++)
                {
                    using (var httpClient = new HttpClient())
                    {
                        var result = httpClient.GetAsync(new Uri("")).Result;
                    }
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception);
            }
        }

        private static void TestHttpClientWithStaticInstance()
        {
            try
            {
                for (var i = 0; i < _connections; i++)
                {
                    var result = _httpClient.GetAsync(new Uri("")).Result;
                }
            }
            catch (Exception exception)
            {
                Console.WriteLine(exception);
            }
        }
    }
}

For testing:

  • I ran the code with 10, 100, 1000 and 10000 connections.
  • Ran each test 3 times to find the average.
  • Executed ONLY one method at a time.

My machine configuration was:

System Configuration

Below are the results from the Visual Studio Instrumentation Profiling:

Method                              No. of Connections   Time (s)   Difference (s)   Improvement (%)
TestHttpClientWithUsing             10                   2.6
TestHttpClientWithStaticInstance    10                   1.8        1                44
TestHttpClientWithUsing             100                  408
TestHttpClientWithStaticInstance    100                  240        168              70
TestHttpClientWithUsing             1000                 241
TestHttpClientWithStaticInstance    1000                 160        81               51
TestHttpClientWithUsing             10000                2456
TestHttpClientWithStaticInstance    10000                1630       826              51

As you can see, the execution time for the static instance is far lower than for the disposable object.

Does it mean we should use a static client object all the time? It depends.

One of the issues people have found with a static HttpClient instance is that it does not respect DNS changes. Refer to this article. For a .NET Framework application, there is a workaround available where you can set ConnectionLeaseTimeout using the ServicePoint object, as mentioned in the post.
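That workaround can be sketched as below (a hedged example for full .NET Framework; the endpoint and the one-minute lease are placeholder choices):

```csharp
using System;
using System.Net;

class Program
{
    static void Main()
    {
        var uri = new Uri("http://example.com"); // placeholder endpoint

        // Limit how long a connection may be reused, so a DNS change for
        // this endpoint is picked up the next time a connection is opened.
        var servicePoint = ServicePointManager.FindServicePoint(uri);
        servicePoint.ConnectionLeaseTimeout = 60 * 1000; // one minute, in ms
    }
}
```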

However, for ASP.NET Core, you may be out of luck, as per this issue on GitHub, since a similar property does not seem to exist.

Hope this post helps you take an informed decision in your projects. Please share your thoughts in the comments section.