Sunday, July 19, 2015

Blog has moved....

After seeing what I could get from Ghost as a blogging platform, I decided to move my blog there - blog.many-monkeys.com - I hope you like it.

Friday, April 3, 2015

Using GMock with Visual Studio CppUnitTestFramework

One of the things that has disappointed me during the development of OpenCover is the lack of unit testing around the C++ code that makes up the profiler. I did toy with GTest and got some decent tests around the instrumentation engine, but I was never able to test the profiler callbacks themselves, and I found the lack of GTest integration with Visual Studio quite irritating; I know I have been spoilt by ReSharper. Recently, however, while adding Fakes support to OpenCover, I had an opportunity to work out how to load the profiler using registry-free loading and realised that such testing might be within my reach. What I was missing was a mocking library, and one that I could use with the Visual Studio tooling.

Frankly, GMock was the only candidate (the commercial alternatives were out as this was for an OSS project), but the instructions all seemed to involve building a number of libraries (32/64-bit, Debug/Release) that I would have to statically link against and then maintain should the source or build options change. I decided to try a different tack that wouldn't involve building libraries at all; it has worked out reasonably well, so I thought it would be worth describing here.

Step 1 

Get the latest GMock (1.7.0) library as a zip file and uncompress it somewhere within your repository; the paths below assume it went into $(SolutionDir)lib.

Step 2

From within Visual Studio, update the project's Additional Include Directories (under C/C++ > General) to include the following paths:

$(SolutionDir)lib\gmock-1.7.0
$(SolutionDir)lib\gmock-1.7.0\include
$(SolutionDir)lib\gmock-1.7.0\gtest
$(SolutionDir)lib\gmock-1.7.0\gtest\include
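
If you prefer to edit the .vcxproj directly rather than use the project properties dialog, the equivalent entry looks something like this (trimmed to the relevant element; adjust the paths to match where you unpacked GMock):

<ItemDefinitionGroup>
  <ClCompile>
    <AdditionalIncludeDirectories>$(SolutionDir)lib\gmock-1.7.0;$(SolutionDir)lib\gmock-1.7.0\include;$(SolutionDir)lib\gmock-1.7.0\gtest;$(SolutionDir)lib\gmock-1.7.0\gtest\include;%(AdditionalIncludeDirectories)</AdditionalIncludeDirectories>
  </ClCompile>
</ItemDefinitionGroup>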

Step 3

Add the following to your "stdafx.h"

#include "gmock/gmock.h"
#include "gtest/gtest.h"

Step 4

Add the following to your "stdafx.cpp"

// The following lines pull in the real gmock *.cc files.
#include "src/gmock-cardinalities.cc"
#include "src/gmock-internal-utils.cc"
#include "src/gmock-matchers.cc"
#include "src/gmock-spec-builders.cc"
#include "src/gmock.cc"

// The following lines pull in the real gtest *.cc files.
#include "src/gtest.cc"
#include "src/gtest-death-test.cc"
#include "src/gtest-filepath.cc"
#include "src/gtest-port.cc"
#include "src/gtest-printers.cc"
#include "src/gtest-test-part.cc"
#include "src/gtest-typed-test.cc"

Step 5

Now all you need to do is initialise GMock and you are ready; as I am using the CppUnitTestFramework I do the following.

TEST_MODULE_INITIALIZE(ModuleInitialize)
{
    // enable google mock
    ::testing::GTEST_FLAG(throw_on_failure) = true;
    int argc = 0;
    TCHAR **argv = NULL;
    ::testing::InitGoogleMock(&argc, argv);
}

From here just follow the GMock documentation and add some expectations etc.; as I discovered, you can even mock COM objects and set expectations on them, e.g.

EXPECT_CALL(*profilerInfo, SetEventMask(EVENT_MASK_WHEN_FAKES))
                .Times(1)
                .WillRepeatedly(Return(S_OK));

Bonus Round

There were a few little niggles, however. The first is that if an expectation fails, the Visual Studio test runner takes a little too long to close down (I suspect this may be something on my machine related to Dr Watson).

The second was that if an expectation did fail I could initially only see the result using DebugView - ugh. However, I found a solution at http://www.durwella.com/post/96457792632/extending-microsoft-cppunittestframework which involves some extra macros; I added these to my "stdafx.h" and voila, the results are now available in Visual Studio.

Finally, I found the mocks were not very lightweight; if I left them hooked in they caused performance issues, but replacing them with an admittedly less useful stub avoided this when necessary.

Sunday, February 22, 2015

Happy Birthday OpenCover

Today OpenCover is 4 (four) years old; where has the time gone? In that time it has had over 60,000 NuGet downloads, been adopted by the SharpDevelop community as the coverage tool for their IDE and, as I found out the other day, is also being used by the corefx team to supply coverage information for their tests.

Four years ago I started on OpenCover (first commit - not very interesting but a stake in the ground) in order to create a code coverage tool for the .NET platform that could be used by anyone, but especially so that those of us in the open source community could have a tool to help enhance our testing feedback; in the past we have seen some tools go commercial, some just vanish and others simply become abandoned. I also wanted to share some of the knowledge I had picked up in this area but no longer used in my day-to-day activities, and to ensure it remains within the community by making it maintainable and available without restriction.

It took nearly 6 months to get the first beta release, and since that time we have added sequence and branch coverage, support for .NET 2 and .NET 4+, 32- and 64-bit support, and even Silverlight. Later came features such as coverage by test, hooking into services, and IIS support; not everything works as seamlessly as I would like, but the community has either lived with it or improved it - which was the outcome I was seeking. Just recently we even added support for Microsoft.Fakes, because some people wanted to use OpenCover for coverage with tests that used Fakes rather than the coverage tool they already had available; that was an interesting learning exercise, helped along by some very fortuitous googling.

There even seems to be some movement towards a Mono version of OpenCover, which was not something I saw coming but is quite exciting, especially as Visual Studio now has support for Android and iPhone development; we knew about Xamarin/Mono, but actual Visual Studio integration? Who, 4 years ago, would have seen that one coming...?

Highlights


One of the highlights of the past few years was starting at my current place of work (MYOB) and overhearing a conversation in the devops/build team, who were discussing the coverage results of this free coverage tool they had found on GitHub; imagine my delight when I realised it was OpenCover they were discussing, and in mostly favourable terms. This was the first place I had seen OpenCover being used where it wasn't introduced by me. I even implemented a feature in response to their comments.

Another highlight is seeing that at least two Visual Studio integrations involving OpenCover are currently in play; both were started independently, and though I am currently partly involved with one of them, it will be interesting to see how they both progress.

I'd like to thank everyone who has contributed to OpenCover, whether through direct contributions, suggestions, free stuff (more please) or just using it. Here's to another 4+ interesting years; I wonder what will happen to OpenCover in that time - suggestions?

Wednesday, October 29, 2014

Microservices... Where to Start?

Microservices are becoming a "thing" now and are probably the de facto choice when someone begins a new project with cloud hosting in mind, but where do you start when you have a brownfield project? I don't have any hot answers or amazing insights here; all I can do is describe what my first "microservice" was and how it came into being.

Over time the application was getting more use and the number of servers involved started to increase; we were using auto-scaling, and the number of servers increased in line with usage, wavering between 8 and 24 instances. This quite rightly caused some consternation, so we tinkered with the number of cores for each instance and the thresholds that trigger scaling up and down, but nothing seemed to alter the total number of cores being used. We have a hefty amount of logging and can control the output through logging levels, so we decided to change the logging to try to get more diagnostic information, and this is when things got interesting. As this is a production system, getting hold of the log information was initially problematic and slow, so we had already started forwarding all the messages to SplunkStorm using its API; all had been well for over a year, and we were very impressed with how we could use that information for ad-hoc queries. However, when we changed the logging levels the servers started scaling and we started to get database errors - unusual ones involving SQL connection issues rather than SQL query errors. We quickly reverted the changes and decided to try to replicate the problem in our CI/SIT environments.

What we realised was that our own logging was causing our performance issues and, even more awkwardly, was also responsible for the SQL connection issues: the logging to SplunkStorm via its API was using up the available TCP/IP connections, and this was even more pronounced when we changed the logging level. What we needed to do was refactor our logging so that we could get all our data into SplunkStorm (and Splunk, its big brother, which we were in the process of migrating to) with minimal impact on the actual production systems. Thankfully our logging framework was NLog, which we had wrapped in another entity for mocking purposes, so we decided to write a new NLog target that would instead log to a queue (service bus) and then have another service read messages from that queue and forward them to Splunk and SplunkStorm. Thus our first microservice was born.

The new NLog target took the log messages and batch-pushed them to the queue; a microservice was then written that monitors the queue, pulls messages off in batches, and pushes them to Splunk and SplunkStorm, also in batches. The initial feasibility spike took half a day, with the final implementation ready and pushed into production the following week. Because we were using .NET we could also take advantage of multiple threads, so we used thread pools to limit the number of active Splunk/SplunkStorm messages being sent in parallel. What we found after deployment was that we could scale back our main application servers to 4 instances, with only a pair of single-core services dealing with the logging; we also noticed that the auto-scaling never reaches its old thresholds, and the instance count has been stable ever since. Another advantage is that the queue can now be used by other services to push messages to Splunk; they can even use the same NLog target in their projects to deal with all the complexities.
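
To give a flavour of the target side, here is a minimal sketch of what such a queue-backed NLog target could look like; IMessageQueue is a hypothetical stand-in for our service-bus wrapper, the target is registered programmatically, and the real implementation also had to deal with batch size limits, serialisation and retries.

using System;
using System.Collections.Generic;
using System.Linq;
using NLog;
using NLog.Common;
using NLog.Targets;

// Hypothetical abstraction over the service-bus client.
public interface IMessageQueue
{
    void Send(IEnumerable<string> messages);
}

[Target("Queue")]
public class QueueTarget : TargetWithLayout
{
    private readonly IMessageQueue _queue;

    public QueueTarget(IMessageQueue queue)
    {
        _queue = queue;
    }

    // Single log events are forwarded as a batch of one.
    protected override void Write(LogEventInfo logEvent)
    {
        _queue.Send(new[] { Layout.Render(logEvent) });
    }

    // NLog hands over an array when events arrive faster than they drain,
    // letting us push a whole batch to the queue in one round trip.
    protected override void Write(AsyncLogEventInfo[] logEvents)
    {
        var batch = logEvents.Select(e => Layout.Render(e.LogEvent)).ToArray();
        try
        {
            _queue.Send(batch);
            foreach (var e in logEvents)
                e.Continuation(null);
        }
        catch (Exception ex)
        {
            foreach (var e in logEvents)
                e.Continuation(ex);
        }
    }
}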

I hope the above shows that your first microservice does not have to be something elaborate; it can instead deal with a mundane but quite essential task, and the benefits can be quite astounding.

Monday, October 13, 2014

Excluding code from coverage...

This may (no guarantees) turn into a series of posts on how to refactor your code for testing using simple examples.

This particular example came from a request to add an "Exclude Lines from Coverage" feature to OpenCover. Now there are many ways this could be achieved, none of which I had any appetite for, as they were either too clunky and/or could make OpenCover very slow. I am also not a big fan of excluding anything from code coverage; though OpenCover has several exclude options, I just thought this was one step too far in pursuit of that 100% coverage value, as it could too easily be abused. And even if I did think the feature was useful, it still might not get implemented by myself for several days, weeks or months.

But sometimes there are other ways to cover your code without the big refactoring and mocking exercise that can act as a deterrent to doing the right thing.

In this case the user was using EntityFramework and wanted to exclude the code in the catch handlers because they couldn't force EntityFramework to fail on demand - quite a common problem in my experience. The user also knew that one approach was to push all the EntityFramework interactions out to another class so that they could test their exception handling via mocks, but they didn't have the time/appetite to go down that path and thus wanted to exclude that code.

I imagined that the user has code that looked something like this:

public void SaveCustomers(ILogger logger)
{
  CustomersEntities ctx = CustomersEntities.Context;
  try
  {
    // awesome stuff with EntityFramework
    ctx.SaveChanges();
  }
  catch(Exception ex)
  {
    // do some awesome logging
    logger.Write(ex);
    throw;
  }
}

and I could see why it would be hard (but not impossible) to test the exception handling. Instead of extracting all the interactions with EntityFramework so that an exception can be thrown during testing, I suggested the following refactoring:

internal void CallWrapper(Action doSomething, ILogger logger)
{
  try
  {
    doSomething();
  }
  catch(Exception ex)
  {
    // do some awesome logging
    logger.Write(ex);
    throw;
  }
}

which I would then use like this:

public void SaveCustomers(ILogger logger)
{
  CustomersEntities ctx = CustomersEntities.Context;
  CallWrapper(() => {
    // awesome stuff with EntityFramework
    ctx.SaveChanges();
  }, logger);
}


My original tests should continue to pass as before, and I now have a new method that I can test independently.
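
For example, a test for the failure path could look something like this - a sketch using NUnit and a hand-rolled ILogger fake; CustomerRepository is an illustrative name for the class hosting CallWrapper, and since CallWrapper is internal the test assembly would need access via InternalsVisibleTo.

// Minimal test double that records the last exception written to it.
public class FakeLogger : ILogger
{
    public Exception LastException { get; private set; }

    public void Write(Exception ex)
    {
        LastException = ex;
    }
}

[Test]
public void CallWrapper_Logs_The_Exception_And_Rethrows()
{
    var logger = new FakeLogger();
    var repository = new CustomerRepository(); // illustrative host of CallWrapper

    // the wrapped action throws; CallWrapper should log it and rethrow
    var thrown = Assert.Throws<InvalidOperationException>(
        () => repository.CallWrapper(() => { throw new InvalidOperationException(); }, logger));

    Assert.AreSame(thrown, logger.LastException);
}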

I know this isn't the only way to tackle this sort of problem and I'd love to hear about other approaches.

Monday, October 6, 2014

A simple TDD example

I recently posted a response on StackOverflow regarding TDD and coverage, and I thought it would be worth re-posting the response here. The example is simple but hopefully shows how writing the right tests using TDD gives you a better suite of tests for your code than you would probably write if you added the tests after the code (which may have been refactored as you developed).

"As the [original] accepted answer has pointed out your actual scenario reduces to collection.Sum() however you will not be able to get away with this every time.

If we use TDD to develop this (overkill I agree but easy to explain) we would [possibly] do the following (I am also using NUnit in this example out of preference).

[Test]
public void Sum_Is_Zero_When_No_Entries()
{
    var bomManager = new BomManager();
    Assert.AreEqual(0, bomManager.MethodToTest(new Collection<int>()));
}

and then write the following code (note: we write the minimum to meet the current set of tests)

public int MethodToTest(Collection<int> collection)
{
    var sum = 0;
    return sum;
}

We would then write a new test e.g.

[Test]
[TestCase(new[] { 0 }, 0)]
public void Sum_Is_Calculated_Correctly_When_Entries_Supplied(int[] data, int expected)
{
    var bomManager = new BomManager();
    Assert.AreEqual(expected, bomManager.MethodToTest(new Collection<int>(data)));
}

If we ran our tests they would all pass (green), so we need new test cases, e.g.

[TestCase(new[] { 1 }, 1)]
[TestCase(new[] { 1, 2, 3 }, 6)]

In order to satisfy those tests I would need to modify my code e.g.

public int MethodToTest(Collection<int> collection)
{
    var sum = 0;
    foreach (var value in collection)
    {
        sum += value;
    }
    return sum;
}

Now all my tests pass, and if I run them through OpenCover I get 100% sequence and branch coverage - Hurrah!... And I did so without using coverage as my control, but by writing the right tests to support my code.

BUT there is a 'possible' defect... what if I pass in null? Time for a new test to investigate

[Test]
public void Sum_Is_Zero_When_Null_Collection()
{
    var bomManager = new BomManager();
    Assert.AreEqual(0, bomManager.MethodToTest(null));
}

The test fails so we need to update our code e.g.

public int MethodToTest(Collection<int> collection)
{
    var sum = 0;
    if (collection != null)
    {
        foreach (var value in collection)
        {
            sum += value;
        }
    }
    return sum;
}

Now we have tests that support our code rather than tests that test our code, i.e. our tests do not care about how we went about writing our code.

Now we have a good set of tests so we can now safely refactor our code e.g.

public int MethodToTest(IEnumerable<int> collection)
{
    return (collection ?? new int[0]).Sum();
}

And I did so without affecting any of the existing tests."

Thursday, April 3, 2014

Customising New Relic installation during Azure deployments

For about a year we've been running New Relic to monitor our WebRoles running on the Azure platform. Installing it has been quite simple: follow the instructions initially found on the New Relic site and now available via NuGet; however, two things about this process have been irking me.

First, I wanted to be able to distinguish the CI and Production deployments in the New Relic portal by giving them different names, but the name as it appears in the New Relic portal is controlled through a setting in the web.config and cannot be controlled through the Azure portal.

Second, I wanted to be able to control the licence key we used for the CI (free licence, limited functionality) and Production (expensive licence, full functionality) deployments; however, the key is embedded in newrelic.cmd and is applied when the New Relic agent is installed, so it is not easy to change during/post deployment.

The initial solution to both these problems involved producing two packages, one for the CI environment(s) and one for the Production environment. Instead of the normal Debug and Release build outputs, a third target, Production, was used, and the web.config was modified during the build process using a transform that changed the name to what was wanted. The licence key issue was resolved by having two newrelic.cmd items in the project and packaging the required one with the appropriate build. This worked, in a fashion, but was not ideal, and the ProdOps guys were keen on having control over the name and licence key used in production.
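
For the curious, the transform itself was nothing special; a Web.Production.config along these lines (values illustrative) was enough to rename the application for the Production build, NewRelic.AppName being the appSetting the agent reads the name from:

<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <appSettings>
    <!-- give the Production deployment its own name in the New Relic portal -->
    <add key="NewRelic.AppName" value="MyApplication (Production)"
         xdt:Transform="SetAttributes" xdt:Locator="Match(key)" />
  </appSettings>
</configuration>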

Changing the Application name

New Relic gets the application name from a setting in the web.config, so what is needed is to read a setting from the Azure configuration and update the web.config accordingly. There are many ways to resolve this issue, but the approach we took was based on the solution to an identical issue raised on GitHub.

For completeness I will reiterate the steps below:

  1. In the ServiceDefinition.csdef file add a setting to the <ConfigurationSettings/> section

     <ConfigurationSettings>
       <Setting name="NewRelicApplicationName" />
     </ConfigurationSettings>

  2. In the ServiceConfiguration file for your environment add a setting that will be used to set the application name in New Relic

     <ConfigurationSettings>
       <Setting name="NewRelicApplicationName" value="MyApplication" />
     </ConfigurationSettings>

  3. In the WebRole.cs file for your application amend your code with the following

     public class WebRole : RoleEntryPoint
     {
         public override bool OnStart()
         {
             ConfigureNewRelic();

             return base.OnStart();
         }

         private static void ConfigureNewRelic()
         {
             if (RoleEnvironment.IsAvailable && !RoleEnvironment.IsEmulated)
             {
                 string appName;
                 try
                 {
                     appName = RoleEnvironment.GetConfigurationSettingValue("NewRelicApplicationName");
                 }
                 catch (RoleEnvironmentException)
                 {
                     /*nothing we can do so just return*/
                     return;
                 }

                 if (string.IsNullOrWhiteSpace(appName))
                     return;

                 using (var server = new ServerManager())
                 {
                     // get the site's web configuration
                     const string siteNameFromServiceModel = "Web";
                     var siteName = string.Format("{0}_{1}", RoleEnvironment.CurrentRoleInstance.Id, siteNameFromServiceModel);
                     var siteConfig = server.Sites[siteName].GetWebConfiguration();

                     // get the appSettings section
                     var appSettings = siteConfig.GetSection("appSettings").GetCollection();
                     AddConfigElement(appSettings, "NewRelic.AppName", appName);
                     server.CommitChanges();
                 }
             }
         }

         private static void AddConfigElement(ConfigurationElementCollection appSettings, string key, string value)
         {
             if (appSettings.Any(t => t.GetAttributeValue("key").ToString() == key))
             {
                 appSettings.Remove(appSettings.First(t => t.GetAttributeValue("key").ToString() == key));
             }

             ConfigurationElement addElement = appSettings.CreateElement("add");
             addElement["key"] = key;
             addElement["value"] = value;
             appSettings.Add(addElement);
         }
     }

And that should be it.

Changing the New Relic licence key

The New Relic licence key is applied when the New Relic agent is installed on the host, so what is needed is to read the Azure configuration when newrelic.cmd is executed as part of the startup tasks (defined in the ServiceDefinition.csdef) and apply the key when the agent is installed. There does not appear to be a way of changing the licence key once the agents have been installed, other than reducing the number of instances to 0 and then scaling back up (I suggest you use the staging slot for this).

  1. In the ServiceDefinition.csdef file add a setting to the <ConfigurationSettings/> section

     <ConfigurationSettings>
       <Setting name="NewRelicLicenceKey" />
     </ConfigurationSettings>

     and add a new Environment variable to the newrelic.cmd startup task that will be populated from the new configuration setting

     <Task commandLine="newrelic.cmd" executionContext="elevated" taskType="simple">
       <Environment>
         <Variable name="EMULATED">
           <RoleInstanceValue xpath="/RoleEnvironment/Deployment/@emulated" />
         </Variable>
         <Variable name="NewRelicLicence">
           <!-- http://msdn.microsoft.com/en-us/library/windowsazure/hh404006.aspx -->
           <RoleInstanceValue xpath="/RoleEnvironment/CurrentInstance/ConfigurationSettings/ConfigurationSetting[@name='NewRelicLicenceKey']/@value" />
         </Variable>
         <Variable name="IsWorkerRole" value="false" />
       </Environment>
     </Task>

  2. In the ServiceConfiguration file for your environment add a setting that will hold the licence key

     <ConfigurationSettings>
       <Setting name="NewRelicLicenceKey" value="<ADD YOUR KEY HERE>" />
     </ConfigurationSettings>

  3. Edit your newrelic.cmd to use the Environment variable

     :: take the licence key from the startup task environment
     SET LICENSE_KEY=%NewRelicLicence%

Now you should be able to control the New Relic licence key during your deployment.