Van Vugt's dynamiXs

All around NAV dev and test

How we started using the Test Toolkit

With our upgrade from NAV 2009 R2 to NAV 2016, many new things became available to us at The Learning Network, formerly known as Van Dijk Educatie. Technical improvements like the performance of the service tier, which was one of the major reasons for wanting to leave NAV 2009 behind. And of course, next to a vast number of functional features, the availability of PowerShell cmdlets, upgrade codeunits and test automation. Those who have been following me one way or another know that the latter subject has my special attention: testing and test automation.

In today's post I would like to start sharing our approach and findings on how we began using the Test Toolkit, as released by MS on each NAV product DVD since NAV 2016. For that I have set up a kind of framework below that allows me to refer back to some of its parts when elaborating on them more in posts to come.

So here we go, fasten your seatbelts and stay tuned. And … do not be afraid to ask.

Goal

Our primary goal for this exercise was to set up test collateral, based on the standard MS application tests, to be used as a regression test suite, by

  • Getting as many standard application tests as possible running on our "solution"
  • Running these successful tests against the latest version of our stable code, on a daily basis
  • Noting whether any of the formerly successful tests have failed (or not, preferably ;-)
  • Analyzing failures and fixing them, in either app or test code
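Conceptually, that daily check boils down to diffing today's results against the baseline of formerly successful tests. A minimal sketch in Python (not our actual NAV/PowerShell tooling; all test names here are made up for illustration):

```python
# Illustrative sketch: detect regressions by comparing today's test run
# against the baseline of tests that were passing before.
# (Our real setup uses the NAV Test Toolkit; these names are invented.)

def find_regressions(baseline_passing, todays_results):
    """Return the tests that passed before but failed (or vanished) today.

    baseline_passing: set of test names that were green in the last run
    todays_results:   dict mapping test name -> True (pass) / False (fail)
    """
    return sorted(
        test for test in baseline_passing
        if not todays_results.get(test, False)  # missing counts as failed
    )

baseline = {"Cod137400.SalesOrderTest", "Cod137295.PurchInvoiceTest"}
today = {"Cod137400.SalesOrderTest": True, "Cod137295.PurchInvoiceTest": False}
print(find_regressions(baseline, today))  # → ['Cod137295.PurchInvoiceTest']
```

Anything this check reports is then analyzed and fixed, in either app or test code.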

Basic Plan

The basic plan to achieve this was to:

  1. Run standard tests against our stable code (read more)
  2. Select all successful tests
  3. Export these as one suite
  4. Import as a new, separate suite
  5. Run this suite against our stable code
  6. Get more standard tests running successfully to get more coverage
  7. Repeat steps 1 to 6, until all standard tests, or a reasonable number of them, run successfully
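Steps 2 to 4 of the plan amount to filtering a run down to its passing tests and collecting those into a new, separate suite. A hedged sketch in Python (in NAV this happens through the CAL Test Tool and suite export/import; names below are invented):

```python
# Illustrative sketch of steps 2-4: select the successful tests from a run
# and gather them as a new, separate suite. The suite name and test names
# are made up; this is not the actual Test Toolkit API.

def build_suite(results, suite_name="STABLE-REGRESSION"):
    """From a run's results, keep only the passing tests as a new suite."""
    passing = [test for test, ok in sorted(results.items()) if ok]
    return {"name": suite_name, "tests": passing}

run = {
    "Cod134000.PaymentToleranceTest": True,
    "Cod137295.PurchInvoiceTest": False,
    "Cod137400.SalesOrderTest": True,
}
suite = build_suite(run)
print(suite["tests"])
# → ['Cod134000.PaymentToleranceTest', 'Cod137400.SalesOrderTest']
```

This new suite is what gets run against our stable code in step 5, and steps 1 to 6 then repeat as coverage grows.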

Assumptions

With any endeavor there are always a number of assumptions. Or should I say loads?

Well, basically we had these two, being that all MS tests …

  1. … run successfully in CRONUS
    Proved to be nearly true. Approx. 1 ‰ failed; well, automated tests are software too ;-)
  2. … are independent, so any test function in any test codeunit can be run independently from the other tests in the same codeunit
    Proved to be true for most of the test functions, but unfortunately not for all

Basic Environment

This is the world we live in at The Learning Network:

  • App: NAV 2016 NL - RTM (technically CU11)
  • Database: CRONUS Nederland BV
  • Tests: NAV 2016 NL - RTM, 16,128 tests
  • Customization: ca. 400 standard objects modified / ca. 630 own objects

To be continued ...