Monday, May 19, 2014

Test-driven development of plug-ins

How can we do test-driven development in a plug-in environment where our production code lives inside of an application? This may be the case if we are developing plug-ins for applications like Microsoft Office, Adobe Photoshop or Schlumberger Petrel. There are two challenges with this:
  1. NUnit (or any other test runner of choice) may not be able to start the host application in order to execute the plug-ins
  2. Starting the host application may take a while. If it takes 30 seconds to start the application, it's hard to get into the efficient TDD cycle that I describe in the blog post "An efficient TDD workflow".

Abstraction and isolation

Abstraction, inversion of control and isolation are common strategies when we develop code that depends on its environment. The idea is to create abstractions over the environment so that it can be replaced by test doubles when the tests execute. That's not always possible, though: sometimes our plug-in interacts heavily with the environment and depends on its behavior.
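To make the idea concrete, here is a minimal sketch of such an abstraction. All the names here (IWellSource, WellNameFormatter, FakeWellSource) are invented for illustration; they are not part of any real host API:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// The plug-in code depends on this interface instead of on the host API directly.
public interface IWellSource
{
    IEnumerable<string> GetWellNames();
}

// Production code under test: knows nothing about the host application.
public class WellNameFormatter
{
    private readonly IWellSource _source;

    public WellNameFormatter(IWellSource source)
    {
        _source = source;
    }

    public string FormatNames()
    {
        return string.Join(", ", _source.GetWellNames());
    }
}

// In the real plug-in, an adapter implements IWellSource on top of the host
// API. In unit tests, a fake stands in, so no host application is needed:
public class FakeWellSource : IWellSource
{
    public IEnumerable<string> GetWellNames()
    {
        return new[] { "Well A", "Well B" };
    }
}
```

An ordinary NUnit test can now exercise WellNameFormatter against the fake, e.g. `Assert.AreEqual("Well A, Well B", new WellNameFormatter(new FakeWellSource()).FormatNames())`, without ever starting the host.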

So what do we do then? If we can't isolate them, join them!

Unit test runner as a plug-in

Both challenges above can be solved by creating your own test runner as a plug-in inside the host application! Instead of letting NUnit start the host application (slow, and perhaps not even possible), the host application runs the tests itself.

So instead of doing a slow TDD cycle like this:


We want to move the host application startup out of the cycle like this:


So how can we do this? Create a test runner inside the host application... as a plug-in!

Let's have a look at one specific case: a test runner as a plug-in in Petrel. Petrel faces challenge #2 above: it can run in a "unit testing mode" where NUnit tests can start Petrel and exercise the Petrel-dependent production code, but startup takes 30-60 seconds.

If you are using NUnit, this is quite simple. NUnit provides several layers of test runners, depending on how much of its behavior you want to customize. A very rudimentary implementation that lets the user select a test assembly, list its tests and execute them could look like this:

// Using directives assumed for this sample (NUnit 2.x core assemblies):
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Globalization;
using System.IO;
using System.Linq;
using System.Reflection;
using System.Windows.Forms;
using NUnit.Core;
using NUnit.Core.Filters;
using NUnit.Framework;

public partial class TestRunnerControl : UserControl
  {
    private readonly Dictionary<string, List<string>> _assembliesAndTests = new Dictionary<string, List<string>>();

    private string _pluginFolder;

    public TestRunnerControl()
    {
      InitializeComponent();

      this.runButton.Image = PlayerImages.PlayForward;

      FindAndListTestAssemblies();
    }

    private void FindAndListTestAssemblies()
    {
      var pluginPath = Assembly.GetExecutingAssembly().Location;
      _pluginFolder = Path.GetDirectoryName(pluginPath);

      // Scan for test assemblies in a temporary AppDomain that can be unloaded when the scan is done
      AppDomain tempDomain = AppDomain.CreateDomain("tmpDomain", null, new AppDomainSetup { ApplicationBase = _pluginFolder });
      tempDomain.DoCallBack(LoadAssemblies);
      AppDomain.Unload(tempDomain);

      foreach (var testDll in _assembliesAndTests.Keys)
      {
        this.testAssemblyComboBox.Items.Add(testDll);
      }
    }

    private void LoadAssemblies()
    {
      foreach (var dllPath in Directory.GetFiles(_pluginFolder)
        .Where(f => f.EndsWith(".dll", true, CultureInfo.InvariantCulture) && f.Contains("PetrelTest")))
      {
        try
        {
          Assembly assembly = Assembly.LoadFrom(dllPath);

          var dllFilename = Path.GetFileName(dllPath);

          try
          {
            var typesInAssembly = assembly.GetTypes();

            foreach (var type in typesInAssembly)
            {
              var attributes = type.GetCustomAttributes(true);

              if (attributes.Any(a => a is TestFixtureAttribute))
              {
                if (!_assembliesAndTests.ContainsKey(dllFilename))
                {
                  _assembliesAndTests[dllFilename] = new List<string>();
                }

                _assembliesAndTests[dllFilename].Add(type.FullName);
              }
            }

            Ms.MessageLog("*** Found types in " + assembly.FullName);
          }
          catch (Exception)
          {
            Ms.MessageLog("--- Could not find types in " + assembly.FullName);
          }
        }
        catch (Exception)
        {
          Ms.MessageLog("--- Could not load " + dllPath);
        }
      }
    }  

    private void TestAssemblySelected(object sender, EventArgs e)
    {
      this.testClassComboBox.Items.Clear();

      var testAssembly = this.testAssemblyComboBox.SelectedItem as string;
      if (!string.IsNullOrEmpty(testAssembly))
      {
        foreach (var testClass in _assembliesAndTests[testAssembly])
        {
          this.testClassComboBox.Items.Add(testClass);
        }
      }
    }

    private void runButton_Click(object sender, EventArgs e)
    {     
      if (!CoreExtensions.Host.Initialized)
      {
        CoreExtensions.Host.InitializeService();
      }

      var results = RunTests();

      ReportResults(results);
    }

    private void ReportResults(TestResult results)
    {
      // Flatten the nested result tree into a list of leaf (test case) results
      var resultsToBeUnrolled = new List<TestResult> { results };

      var resultList = new List<TestResult>();
      while (resultsToBeUnrolled.Any())
      {
        var unrollableResult = resultsToBeUnrolled.First();
        resultsToBeUnrolled.Remove(unrollableResult);

        if (unrollableResult.Results == null)
        {
          resultList.Add(unrollableResult);
        }
        else
        {
          foreach (TestResult childResult in unrollableResult.Results)
          {
            resultsToBeUnrolled.Add(childResult);
          }
        }
      }

      int successCount = resultList.Count(r => r.IsSuccess);
      int failureCount = resultList.Count(r => r.IsFailure);
      int errorCount = resultList.Count(r => r.IsError);

      string successString = string.Format("{0} tests passed. ", successCount);
      string failureString = string.Format("{0} tests failed. ", failureCount);
      string errorString = string.Format("{0} tests had errors. ", errorCount);

      string summary = successString + failureString + errorString;

      this.resultSummaryTextBox.Text = summary;

      this.resultSummaryTextBox.Select(0, summary.Length);
      this.resultSummaryTextBox.SelectionColor = Color.FromArgb(80, 80, 80);

      if (successCount > 0)
      {
        this.resultSummaryTextBox.Select(0, successString.Length);
        this.resultSummaryTextBox.SelectionColor = Color.DarkGreen;
      }

      if (failureCount > 0)
      {
        this.resultSummaryTextBox.Select(successString.Length, failureString.Length);
        this.resultSummaryTextBox.SelectionColor = Color.Red;
      }

      if (errorCount > 0)
      {
        this.resultSummaryTextBox.Select(successString.Length + failureString.Length, errorString.Length);
        this.resultSummaryTextBox.SelectionColor = Color.Red;
      }

      this.resultSummaryTextBox.Select(0, summary.Length);
      this.resultSummaryTextBox.SelectionAlignment = HorizontalAlignment.Center;

      this.resultSummaryTextBox.Select(0, 0);

      // Set grid results
      this.resultsGridView.Rows.Clear();

      int firstErrorIdx = -1;

      foreach (var result in resultList)
      {
        var testName = result.Name;
        var image = result.IsSuccess ? GeneralActionImages.Ok : StatusImages.Error;

        int idx = this.resultsGridView.Rows.Add(testName, image);

        if (firstErrorIdx == -1 && !result.IsSuccess)
        {
          firstErrorIdx = idx;
        }
      }

      if (firstErrorIdx != -1)
      {
        this.resultsGridView.FirstDisplayedScrollingRowIndex = firstErrorIdx;
      }
    }

    private TestResult RunTests()
    {
      var testAssembly = this.testAssemblyComboBox.SelectedItem as string;
      var testClass = this.testClassComboBox.SelectedItem as string;

      // Build a suite from the selected assembly, then run only the selected fixture
      TestPackage testPackage = new TestPackage(Path.Combine(_pluginFolder, testAssembly));
      TestExecutionContext.CurrentContext.TestPackage = testPackage;

      TestSuiteBuilder builder = new TestSuiteBuilder();
      TestSuite suite = builder.Build(testPackage);

      var testFixtures = FindTestFixtures(suite);
      var desiredTest = testFixtures.First(f => f.TestName.FullName == testClass);
      var testFilter = new NameFilter(desiredTest.TestName);
      TestResult result = suite.Run(new NullListener(), testFilter);

      return result;
    }

    private IEnumerable<TestFixture> FindTestFixtures(Test test)
    {
      var testFixtures = new List<TestFixture>();

      foreach (Test child in test.Tests)
      {
        if (child is TestFixture)
        {
          testFixtures.Add(child as TestFixture);
        }
        else
        {
          testFixtures.AddRange(FindTestFixtures(child));
        }
      }

      return testFixtures;
    }
  }

Note that this code sample is not complete, but it shows how to find and run the tests. Use the test runner control in a plug-in inside the host application, and it will look like this:


Whenever a test fails with the debugger attached, Visual Studio will break at the offending NUnit Assert statement:


So how does this allow for an efficient code-test cycle? By using Visual Studio's Edit & Continue! Whenever you want to edit the production code or the test code, press pause in Visual Studio, edit as needed, and run the tests again. Hence, you can write code and (re)run tests at will without restarting the host application.

It's not as efficient as writing proper unit tests that execute in milliseconds, but it's far better than waiting for the host application on every test execution.
