Wednesday, August 9, 2017

Test setup readability in C# 7

How do you set up the state and the data that you need in tests? Say that you are writing a view class for a domain object. In addition, you need to set up a mock environment to assert against, in order to make sure that the view is making the correct calls to the environment (which might be a graphics card, a web service or a UI frontend):



  class DomainObject
  {
    ...
  }

  class View
  {    
    ...
  }

  class MockEnvironment : IEnvironment
  {    
    ...
  }



With NUnit, you can decorate setup methods with the [SetUp] or [OneTimeSetUp] attribute. These methods are run before each test or once before all tests in the class, allowing you to create data and store it in fields in the test class. A test class might look like this:




  public class Test
  {
    private DomainObject _domainObject;
    private View _view;
    private MockEnvironment _environment;

    [SetUp]
    public void CreateStuff()
    {
      // Create and initialize _domainObject, _view and _environment  
    }

    [Test]
    public void SomeMethod_Always_EverythingWorks()
    {
      // Test stuff and assert, using the class fields
    }
  }



However, this will cause issues if the tests are run in parallel, because the tests share mutable fields. In fact, some test frameworks, such as xUnit, have no support for setup methods for exactly this reason.
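For comparison, here is a minimal xUnit-style sketch (reusing the class names from the example above). xUnit creates a fresh instance of the test class for every test method, so per-test setup lives in the constructor and the fields are never shared between parallel tests:

```csharp
public class ViewTests
{
    private readonly DomainObject _domainObject;
    private readonly MockEnvironment _environment;
    private readonly View _view;

    public ViewTests()
    {
        // Runs before every test, because xUnit instantiates
        // the test class once per test method.
        _environment = new MockEnvironment();
        _domainObject = new DomainObject();
        _view = new View(_domainObject, _environment);
    }

    [Fact]
    public void SomeMethod_Always_EverythingWorks()
    {
        // Each test sees its own fields; safe under parallel execution.
    }
}
```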

A better approach is to create the data with maker methods: small helper methods that each test calls explicitly:




    [Test]
    public void SomeMethod_Always_EverythingWorks()
    {
      var environment = MakeEnvironment(/* some parameters */);

      var domainObject = MakeDomainObject();
      var view = MakeView(domainObject, environment);

      // Do some more initialization

      // Implement test
    }

    private MockEnvironment MakeEnvironment(/* some parameters */)
    {
      ...
    }

    private DomainObject MakeDomainObject()
    {
      ...
    }

    private View MakeView(DomainObject domainObject, IEnvironment environment)
    {
      ...
    }


This quickly becomes messy if multiple objects are created, and even messier if they depend on each other.

C# 7 has a new feature that is very handy for this scenario: tuple return types! With return tuples, you can create and initialize everything in one reusable method. Everything is still type safe and very readable:



    [Test]
    public void SomeMethod_Always_EverythingWorks()
    {
      var (view, domainObject, environment) = MakeViewAndStuff();

      // Implement test
    }

    private (View view, DomainObject domainObject, MockEnvironment environment) MakeViewAndStuff()
    {
      var environment = new MockEnvironment();
      var domainObject = new DomainObject();
      var view = new View(domainObject, environment);

      // Do more initialization

      return (view, domainObject, environment);
    }



Various tests in the test class may or may not use all of the objects that are returned. For readability, simply deconstruct the values you don't need into underscore discards:



    [Test]
    public void SomeOtherMethod_Always_EverythingWorks()
    {
      var (view, _, _) = MakeViewAndStuff();

      // Implement test. No need to use the domain object and environment in this test
    }


Clever, huh?



Friday, August 4, 2017

Twitter

Just a small update from me... I am now on Twitter!

Well, I have had a Twitter account for years, but I never really used it. From now on, I'll post updates on Twitter whenever I write a new blogpost or stumble upon something which is related to test-driven development. Or other aspects related to software development. Or maybe even about boats!

https://twitter.com/jahnotto

@jahnotto

Sunday, February 14, 2016

A different kind of test metric - test depth

A better metric for test quality

How do you measure the quality of your tests? Test coverage is one of the more widely used metrics. This shows the percentage of code lines that are executed by unit tests.

This metric tells you nothing about the quality of the tests, though. It serves as an alarm bell if coverage is low, but high coverage does not guarantee that the tests are good.

One problem: consider a scenario where we only have high-level tests. Let's say that we have an application which calculates the weight of a 3D printed model. The application has three layers:



Say that we have one single test which is testing the "calculate" button in the User interface layer:

[Test]
public void CalculateButton_Clicked_ShowsCalculationProgress()
{
  ... verify some GUI state here
}

Even if we intend to test only a GUI feature, the rest of the layers may be covered by this test. We are not even verifying the calculation results, yet the Calculation layer may have 100% code coverage. Obviously, test coverage is not a good metric here.

I have some thoughts on a better metric.

Test depth

In the example above, the Calculation layer is tested only through the User interface layer. In other words, the Calculation layer is multiple levels away from the test code in the call stack. We will call this distance between the test and the tested code the "test depth". A high test depth (for this particular test) means that the test is not actually testing that code very well.

This suggests an improved metric: by weighting the test coverage with the test depth, we get a depth-weighted coverage. In traditional test coverage, each line is either covered or not covered:

C(line) = 0% or 100%, depending on whether the line is executed by a test or not

Whereas the coverage for an entire class is the average

C(class) = Coverage_sum / line_count.

A depth-weighted coverage would instead define the coverage per line (if it is covered) as

C(line) = 100% / lowest_depth_of_test, where lowest_depth_of_test is the shortest distance in the call stack between a test and the line (1 if the line is executed directly by a test).

If a line is covered directly by a test, C(line) will be 100%. If it is one level deeper, it will be 50%. Two levels deeper will yield a C(line) of 33%.
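As a sketch (no such tool exists yet, and the input format is purely illustrative), the depth-weighted coverage of a class could be computed like this, assuming we somehow know the lowest test depth for each line:

```csharp
// Hypothetical sketch: compute depth-weighted coverage for a class.
// lowestDepthPerLine[i] is the lowest call-stack distance from any test
// to line i (1 = executed directly by a test), or null if uncovered.
static double DepthWeightedCoverage(int?[] lowestDepthPerLine)
{
    double sum = 0.0;
    foreach (var depth in lowestDepthPerLine)
    {
        if (depth.HasValue)
            sum += 1.0 / depth.Value;   // 100%, 50%, 33%, ...
    }
    return 100.0 * sum / lowestDepthPerLine.Length;
}

// Example: lines covered at depths 1, 2 and 3 plus one uncovered line
// give (100% + 50% + 33% + 0%) / 4, roughly 46%.
```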

Is this the perfect metric?

Is the depth-weighted metric a perfect test metric? No, it still doesn't verify that you are asserting on the right things. It is not a silver bullet, but it does prevent the deep-test scenario described above.

How do I calculate this?

I haven't found any tools that actually compute this kind of metric. Perhaps I need to make one myself... or are there any volunteers out there? ;-)


Saturday, December 19, 2015

Event testing with the constraint model

In my previous post about the NUnit constraint model, I briefly touched on the new constraint model, which is used to write more elegant asserts. As I mentioned, the constraint model is more than just syntactic sugar. I have had a lot of fun extending the constraint model to allow for easy event testing, and I'd like to share it with you. This is my Christmas gift to you!

Testing events - traditional method

In order to test that an event has been fired from the tested object, one would typically do something like this:

// Arrange
var testee = new Testee();
bool eventWasFired = false;
testee.MyEvent += (s,e) => eventWasFired = true;

// Act
testee.SomeMethod();

// Assert
Assert.That(eventWasFired, Is.True);

In order to test a PropertyChanged event, the syntax becomes a bit more complex and harder to read:

// Arrange
var testee = new Testee();
bool eventWasFired = false;
testee.PropertyChanged += (s, e) =>
                            {
                              if (e.PropertyName == "SomeProperty")
                              {
                                eventWasFired = true;
                              }
                            };

// Act
testee.SomeMethod();

// Assert
Assert.That(eventWasFired, Is.True);

This can be tweaked as needed to assert that an event was NOT fired or to assert that the event was fired a given number of times. However, it is not as obvious and readable as one would like. Keep in mind the axe murderer who is going to read your tests!
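For example, counting the number of firings with the traditional method could look like this (using the same hypothetical Testee as above):

```csharp
// Arrange
var testee = new Testee();
int fireCount = 0;
testee.MyEvent += (s, e) => fireCount++;

// Act
testee.SomeMethod();

// Assert: fired exactly once; Is.EqualTo(0) would assert it was NOT fired
Assert.That(fireCount, Is.EqualTo(1));
```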

Having so much more fun with the constraint model

One of the advantages of the NUnit constraint model is the extensibility. My event asserts allows you to test for events like this:

// Arrange
var testee = new Testee();
Listen.To(testee);

// Act
testee.SomeMethod();

// Assert
Assert.That(testee, Did.Fire.TheEvent("SomeEvent"));

I dare say that this is slightly more readable and less complex than the traditional method! Specifying the event with a string is not typesafe, which is a bit sad: you will not get a compile error if the event does not exist. However, the assert will throw an ArgumentException if the event does not exist, so you will not get false positives or negatives because the event was misspelled.

The difference is even more obvious when testing for a PropertyChanged event (compare with the second traditional example):

// Arrange
var testee = new Testee();
Listen.To(testee);

// Act
testee.SomeMethod();

// Assert
Assert.That(testee, Did.Fire.ThePropertyEvent("SomeProperty"));

The other variants can be tested with

Assert.That(testee, Did.Not.Fire.TheEvent("SomeEvent"));
Assert.That(testee, Did.Fire.TheEvent("SomeEvent").Once);
Assert.That(testee, Did.Fire.ThePropertyEvent("SomeProperty").Times(3));

I have uploaded the source code with the required constraints here. Simply add them to your test project or a shared test helper project, and you are good to go. Code samples in the form of tests (of course) are included in the DidTest class. The constraint class does a lot of safeguarding to prevent the developer from using the wrong syntax.

Have fun!

Tuesday, November 24, 2015

NUnit assertions - constraint model and classic model


If you are using NUnit for your unit tests, there are two models for writing assertions. The "old skool" way of writing assertions is like

Assert.IsTrue(a == 5);
Assert.AreEqual(expectedValue, result);

This way of writing assertions is well known and is used in most unit testing frameworks. Some years ago, NUnit started introducing a new assertion model.

Constraint model

The equivalent assertions in the new constraint model are

Assert.That(a, Is.EqualTo(5));
Assert.That(result, Is.EqualTo(expectedResult));

When the constraint model was introduced, it seemed like nothing more than syntactical sugar. I didn't see any real benefit to it and continued using the classic model.

The constraint model has however been refined and extended over the past years. New and clever ways of writing assertions have been introduced to the constraint model, while the classic model has not evolved.

Rather than writing complex asserts, there is now a number of useful asserts on collections like

Assert.That( iarray, Is.All.GreaterThan(0) );
Assert.That( myArray, Is.SubsetOf( largerArray ) );
Assert.That( sarray, Is.Ordered.Descending );
Assert.That( sarray, Has.No.Member("x") );
Assert.That( iarray, Has.Some.GreaterThan(2) );

The meaning of these asserts is pretty self-explanatory because of the new syntax.

The constraints typically have additional modifiers. For example, when comparing two doubles, one would often provide a tolerance in the classic model in order to cope with floating-point round-off errors:

Assert.AreEqual(expectedValue, result, 0.000001);

The tolerance was often somewhat arbitrary and dependent on the magnitude of the expected value.

With the new constraint model, this can be made much more robust by specifying it like

Assert.That(result, Is.EqualTo(expectedResult).Within(0.01).Percent);
Assert.That(result, Is.EqualTo(expectedResult).Within(2).Ulps);

The "Ulps" tolerance is quite interesting. "Ulp" means "Units in the Last Place", and it is more robust against the numerical inaccuracies that are introduced than a fixed tolerance is.

The list of constraints is too long to replicate here. The full set of constraints is listed in the NUnit documentation here: https://github.com/nunit/nunit/wiki/Constraints.

Custom constraints

If you happen to write your own classes (!), you may need to compare them using a less strict method than Equals(). For example, you may need to compare mathematical vectors within a tolerance.

Rather than using three asserts for a 3-dimensional vector, you can write your own constraints with a given tolerance.
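A minimal sketch of such a custom constraint, assuming NUnit 3's Constraint base class and a hypothetical Vector3 type with X, Y and Z properties (not part of NUnit):

```csharp
public class IsCloseToVector : Constraint
{
    private readonly Vector3 _expected;
    private readonly double _tolerance;

    public IsCloseToVector(Vector3 expected, double tolerance)
    {
        _expected = expected;
        _tolerance = tolerance;
        // Used by NUnit when building the failure message.
        Description = $"a vector within {tolerance} of {expected}";
    }

    public override ConstraintResult ApplyTo<TActual>(TActual actual)
    {
        var v = (Vector3)(object)actual;
        bool isMatch = Math.Abs(v.X - _expected.X) <= _tolerance
                    && Math.Abs(v.Y - _expected.Y) <= _tolerance
                    && Math.Abs(v.Z - _expected.Z) <= _tolerance;
        return new ConstraintResult(this, actual, isMatch);
    }
}

// Usage:
// Assert.That(actualVector, new IsCloseToVector(expectedVector, 1e-6));
```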

The classic model is disappearing

Until now, NUnit has been fully backwards compatible: the developer has had the choice to use the classic model if desired. Starting with version 3.0, however, some of the classic assertions have disappeared. For example, Assert.IsNullOrEmpty() has been removed.

I don't know if the classic assertion model will eventually be completely retired. Who cares anyway, when the constraint model is so much better? ;-)

Wednesday, August 26, 2015

NDepend - an impressive tool for code analysis

If you are a fan of test-driven development AND code analysis, it might be worth having a look at a tool called NDepend: http://www.ndepend.com.


NDepend analyses your codebase and creates all kinds of metrics and charts to assess the readability, maintainability and robustness of your code. You can either read the metrics as-is or create custom Linq-like queries to find the data of interest.

For architects, this tool seems to be very useful for monitoring how the codebase evolves. It might also be useful as a one-shot analysis in e.g. a due diligence process where the overall code quality in a project needs to be assessed.

I am more interested in the test-driven aspects of it, though. NDepend can gather code coverage data and combine it with the other NDepend metrics to get a picture of how well the complex parts of the code are covered by tests.

Is this useful for a TDD developer? Personally, I am not using code analysis tools very often and find the amount of information a bit overwhelming. For architects or others that like to view their code from a more analytical point of view, however, NDepend is a gold mine of information.

Friday, May 15, 2015

Should we test private methods?


One question which very often pops up when it comes to test-driven development is: "should we test private methods?" There are some (controversial) ways to test private methods in C#. For example, you can make the test assembly a friend assembly of the tested assembly (using the InternalsVisibleTo attribute). That doesn't smell good, though.

So should we test private methods?

This blog post will not try to answer that... because the question is wrong!

It's a sign

If you have read my previous blog post Help, my class is turning into an untestable monster, you may recall that untestable code is a sign of a poor design. It's time to do some refactoring.



Testing private methods is a related topic; if you feel the need to test private methods in order to test the behavior of a class, then it's a sign that you should do some refactoring. Your class is probably trying to do too much. It is violating the Single responsibility principle.

Example

Let's have a look at an example. Consider a class Logger which updates a log in a database when certain events occur. The Logger class has a method LogEvent(event), which generates a message with a timestamp based on the event and adds it to the database. It might look like this:


public class Logger
{
  public void LogEvent(AppEvent appEvent)
  {
    string message = CreateMessage(appEvent);
    AddMessageToDatabase(message);
  }

  private string CreateMessage(AppEvent appEvent)
  {
    string message = "";

    foreach (var affectedObject in appEvent.AffectedObjects())
    {
      ...do stuff here, add to string etc etc
    }

    ...add nicely formatted timestamp here

    return message;
  }

  private void AddMessageToDatabase(string message)
  {
    ...do all kinds of database magic here, add to correct table, avoid duplicates, etc
  }
}

Although this is a fairly trivial example, the Logger.LogEvent() method is trying to do two things: create a nicely formatted message AND add it to the database. Both of those tasks can have a number of different scenarios and failure conditions, so surely we feel the urge to test both the private CreateMessage() and AddMessageToDatabase() methods!

Refactor!

Having realized that this is a sign, we decide to refactor it:


public class MessageGenerator
{
  public string CreateMessage(AppEvent appEvent)
  {
    ...create nicely formatted string

    return message;
  }
}

public class DatabaseLogger
{
  public void AddMessageToDatabase(string message)
  {
    ...do all kinds of database magic here, add to correct table, avoid duplicates, etc
  }
}

Now we have two smaller classes with a single responsibility each. The CreateMessage and AddMessageToDatabase methods are public because they are part of the public contract and the expressed responsibility of these classes.
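With the refactoring in place, each responsibility can be tested through its public contract. A sketch of such a test (the event contents, the expected message fragment and NUnit 3's Does.Contain are illustrative assumptions):

```csharp
[Test]
public void CreateMessage_EventWithAffectedObjects_MentionsTheObjects()
{
  var generator = new MessageGenerator();
  var appEvent = ...; // build a small AppEvent here

  string message = generator.CreateMessage(appEvent);

  Assert.That(message, Does.Contain("expected fragment"));
}
```

DatabaseLogger can be tested separately in the same way, without any message-formatting concerns getting in the way.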

Did we answer the question about whether or not to test the private methods of the Logger class? No, because the question was wrong!