With our upgrade from NAV 2009 R2 to NAV 2016, many new things became available to us at The Learning Network, formerly known as Van Dijk Educatie. Technical improvements like the performance of the service tier, which was one of the major reasons for wanting to leave NAV 2009 behind. And of course, next to a vast number of functional features, the availability of PowerShell cmdlets, upgrade codeunits and test automation. Those who have been following me one way or another know that the latter subject has my special attention: testing and test automation.
In today's post I would like to make a start in sharing our approach and findings on how we started to use the Test Toolkit as released by MS on each NAV product DVD since NAV 2016. For that I have set up a kind of framework below that allows me to refer back to some of its parts when elaborating on them more in posts to come.
So here we go, fasten your seatbelts and stay tuned. And … do not be afraid to ask.
Our primary goal of this exercise was to set up test collateral, based on the standard MS application tests, to be used as a regression test suite, by:
The basic plan to achieve this was to:
With any endeavor there are always a number of assumptions. Or should I say loads?
Well, basically we had these two, being that all MS tests …
This is the world we live in at The Learning Network:
To be continued ...
Looking at this beautiful Spanish scenery, just before a nice early-morning dive into the swimming pool: a perfect moment to reply to your follow-up post on automated testing, Jan.
Thanks for taking the time to answer my questions, Luc! I’ve definitely learned a thing or two, and I’m glad that we apparently agree about most of this stuff. Just a few last responses below…
Good to hear, and thanx too, as it allowed me to look more consciously at this topic.
About #1. “what to test”
Like I said, my first few automated tests were unit tests testing the results of validations. That seemed like a good idea at the time, because it allowed the tests to a. be very limited in scope, and b. be very isolated from each other.
Would you consider making separate test functions for each (relevant, i.e., sufficiently complex) field validation, like you probably would for each (relevant) function, or is that the wrong scale as far as you are concerned?
My first thought is that at this very moment it is too far off from what we (still) need to achieve in our organisation, and, IMHO, also in the NAV world in general: to get automated tests in place that enable us to get a quick understanding of the current state of our code with respect to existing functionality (i.e. have regression testing in place). With this I keep thinking, but maybe I am too stubborn in that, that this can best and most easily be achieved by keeping close to what we are used to in the NAV world: perform/create functional/integration tests. This is mainly what the MS Test Automation Suite entails and as such is a fairly easy start.
On second thought I agree with your approach. And this is what, in the end, with respect to automated testing, developers should do: write unit tests for their app code. Having said that, I realize I will still push my team to get the integration tests in place first.
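To make this a bit more tangible, here is a minimal sketch in C/AL of such a validation unit test, assuming a hypothetical customization that rejects negative credit limits on the Customer table; the Assert (130000) and Library - Sales (130509) codeunits come with the MS Test Toolkit:

    OBJECT Codeunit 50100 UT Customer Validations
    {
      PROPERTIES
      {
        Subtype=Test;
      }
      CODE
      {
        VAR
          Assert@1000 : Codeunit 130000;
          LibrarySales@1001 : Codeunit 130509;

        [Test]
        PROCEDURE NegativeCreditLimitIsRejected@1();
        VAR
          Customer@1100 : Record 18;
        BEGIN
          // [SCENARIO] A negative credit limit cannot be entered (hypothetical customization)
          // [GIVEN] A customer
          LibrarySales.CreateCustomer(Customer);
          // [WHEN] "Credit Limit (LCY)" is validated with a negative amount
          ASSERTERROR Customer.VALIDATE("Credit Limit (LCY)",-100);
          // [THEN] The expected error is thrown (message text is hypothetical)
          Assert.ExpectedError('must be positive');
        END;

        BEGIN
        END.
      }
    }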
About #2. “developing for testability”
One of the things I’m trying to do in my development work is to use expressions instead of statements as much as possible. I want my functions to be as pure as they can practically be, i.e., fully deterministic and without observable side-effects, in order to optimise their testability.
Fully agree, even though this is not an easy thing to achieve in NAV due to the habits we have grown into in the NAV world. But yes, this makes code easier to test.
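A small illustration of my own (not taken from the NAV code base): compare a typical NAV-style function that takes a side road into the database with a pure variant whose output depends only on its inputs; the setup field used is hypothetical:

    // Hard to test in isolation: hidden dependency on the database
    PROCEDURE LineAmountFromSetup@1(Qty : Decimal;UnitPrice : Decimal) : Decimal;
    VAR
      SalesSetup@1000 : Record 311;
    BEGIN
      SalesSetup.GET;  // no side effect, but not deterministic: result depends on db state
      EXIT(Qty * UnitPrice * (1 - SalesSetup."Line Discount %" / 100));  // hypothetical field
    END;

    // Pure: fully deterministic, no observable side effects
    PROCEDURE LineAmount@2(Qty : Decimal;UnitPrice : Decimal;DiscountPct : Decimal) : Decimal;
    BEGIN
      EXIT(ROUND(Qty * UnitPrice * (1 - DiscountPct / 100)));
    END;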
About #3. “predefined test data”
I’m not sure if I fully understand when you say that your test data baseline should be 100% stable and known, but you are using CRONUS data? We have no real way of knowing what changes Microsoft makes to the demo data between releases, do we? Wouldn’t you be better off generating all of your own data in a new NAV company? Or is it just a trade-off between effort and security?
You're fully right in all aspects. But practically, in our current situation, our daily test run has a 100% identical data baseline as long as we haven't moved to another version of CRONUS. And ... even when we move to a new version ... as long as our tests still prove to be successful, I will call it stable. If the contrary happens I will start considering "generating ... data in a new NAV company".
The rest of your replies make perfect sense to me.
Well, actually, Jan, everything you always wanted to know about automated testing in NAV and did ask me about. ;-)
Thanx for starting this creative little experiment, allowing me to elaborate somewhat more on some aspects of automated testing. Maybe needless to mention that my answers reflect my knowledge and experience and are in no way the one-and-only truth. And BTW: my Dutch Dynamics session had a very restricted scope, being the way we started using the NAV Test Automation Suite MS is providing us with every release.
So let's go ... I will state your questions first and then give my answers.
1. First and foremost – how do you decide what (and how) to test?
When I first started writing automated tests, I found myself testing things that were so obvious that they probably never posed any risk to the stability of the application in the first place. Can you give some rules of thumb for what to focus our test effort on?
Wow, this already is a comprehensive question, as it partly depends on the context. Are you wanting to write tests while you are developing a new feature, or for an already existing one?
Let's approach them separately (even though they still have a lot in common).
To be honest I do not have a standard list on this. But my approach would be (without going into too many details):
If time does not suffice to get this done for each new feature, focus on the business-critical ones. Those that:
Now having written this: this actually would also be my approach with ... Existing Features. In the NAV world, we probably all share the same feeling of questioning where to start in the humongous sea of features; this heritage of code we have built up ever since we started with it. Start writing the tests that will help the most to improve and guard the quality of the code.
2. How does the need for automated testing affect development work?
You mentioned that testing NAV (ERP?) is different from testing most other systems, since practically everything goes through the database and there’s no easily available way to mock (simulate) this database interaction. Do the developers in your team have testability in mind when they are writing new features?
Well ... I would like to say yes. But no, that's unfortunately not the case, and this all has to do with where we come from in the NAV world. Developing without:
And yes, as in the NAV world in general, we are improving on this, and testability is part of that, but somewhat further up the road. Now that we have started to use the standard Test Automation Suite, testability is slowly becoming part of our vocabulary as we run into parts of our code that we have a hard time getting tested easily.
3. Using demo data as the basis for your test data
You mentioned that tests should ideally create (and clean up) their own data, returning the database to its pristine state after all the tests have run. In our experience, being overly strict about that costs time twice – once during test development, and once during each test run. How do you feel about isolating some of the data creation in a demo data creation tool, and running your tests in a database that already has that generated data on board?
Maybe I should mention that the basis for this is that each test should be run from the same baseline to make it reproducible. The baseline in our case is CRONUS, as provided by MS on the product DVD, and which MS also uses as their baseline. Adding your own data creates another baseline, so why not? Just make sure that in between each test codeunit the state is reset to this baseline, as sketched below.
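For those wondering how to enforce that technically: the TestIsolation property on a test runner codeunit, introduced with NAV 2016, takes care of rolling back the database changes. A minimal sketch (object number arbitrary):

    OBJECT Codeunit 50102 My Test Runner
    {
      PROPERTIES
      {
        Subtype=TestRunner;
        // Roll back all database changes when a test codeunit completes,
        // so every test codeunit starts from the same data baseline
        TestIsolation=Codeunit;
      }
      CODE
      {
        BEGIN
        END.
      }
    }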
BTW: an additional reason why we are using CRONUS is that running the Test Automation Suite on a copy of our production database (approx. 600 GB) never ended. It seemed to get stuck on the big number of item ledger entries. However, I never investigated deeply to find the exact reason, as the CRONUS baseline suffices very well.
4. Have you considered running chunks of tests in parallel?
I guess that could significantly reduce the execution time, right? And that becomes even more relevant, e.g. when you want to do some form of gated check-in, where tests must pass before a changeset is accepted into your code repository. Also, running in parallel forces you to make your tests fully independent of each other – as they should be.
Nope. As we're still in the phase of getting all the standard tests working, this hasn't been our focus. But sure we will in due time, like we will also improve the way we are creating our test data now.
5. How do you design new tests?
In my experience, designing your tests in a code editor leads to the worst results. I think it’s best to formalise your (existing, manual) tests, i.e. listing the steps and verifications, in a text editor, in plain English before converting them to code. Would you agree?
Fully agree. Airtight, as we say in Dutch: geen speld tussen te krijgen ;-).
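To illustrate (my wording, not an official template): the plain-English steps can literally become the comment skeleton of the test function before any real code is written:

    [Test]
    PROCEDURE PostSalesOrderForForeignCustomer@1();
    BEGIN
      // [SCENARIO] Posting a sales order for a foreign customer converts the amounts
      // [GIVEN] A customer with a foreign currency
      // [GIVEN] A sales order for that customer with one item line
      // [WHEN] The sales order is posted
      // [THEN] The posted amounts are converted using the currency exchange rate
    END;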
6. Most of our tests were (consciously) implemented as UI tests.
Only having access to fields that are visible from the GUI can be quite limiting – there is no straightforward way to get e.g. the Line No. from a Sales Line. Any advice on that (apart from using unit tests instead)?
A field like Line No., which, as a design pattern, should not be placed in the GUI, is actually not the only issue. Any field that has Visible=FALSE poses a similar issue. For the latter I have asked MS to allow us to change its visibility from code. I think something alike should be asked regarding fields like Line No.: to be allowed to access fields not available in the GUI, like you would use the About This Page feature.
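In the meantime, one possible workaround, sketched below, is to pair the test page with its source record and read the hidden field from the record; the order number used is of course hypothetical:

    [Test]
    PROCEDURE ReadLineNoBehindTheUI@1();
    VAR
      SalesLine@1000 : Record 37;
      SalesOrder@1001 : TestPage 42;
    BEGIN
      // Drive the UI as usual on an existing sales order
      SalesOrder.OPENEDIT;
      SalesOrder.FILTER.SETFILTER("No.",'1001');  // hypothetical order no.
      // Read "Line No." from the source record instead of from the page
      SalesLine.SETRANGE("Document Type",SalesLine."Document Type"::Order);
      SalesLine.SETRANGE("Document No.",SalesOrder."No.".VALUE);
      SalesLine.FINDLAST;
      // SalesLine."Line No." now holds what the GUI does not expose
    END;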
7. You mentioned the other day some strange differences between running the test suite from the Windows client, and running it ‘headlessly’ from PowerShell.
Can you elaborate a little on that? Did you manage to solve that issue?
The standard Test Automation Suite has a number of tests (like those running reports) that need a client session. Running it 'headlessly' from PowerShell yields the error:
Microsoft Dynamics NAV Management Server attempted to issue a client callback ….. Client callbacks are not supported on Microsoft Dynamics NAV Management Server.
When asking MS, they confirmed this and also told us they have some special tooling to enable 'headless' test runs. The tooling is, however, not yet shippable.
Well ... hope this makes sense.
As not only you, Jan, but many others probably have noticed, I am doing my best to share as much as possible regarding test automation in NAV. Currently this focuses on the Test Automation Suite, and you can find a number of NAV Skills webinars I have been performing:
Another one has been scheduled: Testability Framework Deep Dive. Date & Time : July 6th. 10am CET & 4pm EST. (You can find the recording here.)
And during the next NAV TechDays I will lead two pre-conference workshops and co-lead a conference session on automated testing.
So get yourself informed ...!
One of those everyday things that makes work so easy; you just take it for granted. Getting the 2.2.0 update mail from Christian Clausen, I realized that's what Statical Prism has become for me. No fuss over setup whatsoever, except for having my code in .txt objects in a directory. Using a source repository, that's what's on my system by default anyway. But hey, let's not repeat what I wrote before.
If you haven't used Statical Prism before, go and try it. If you did and stopped because of some missing features, you might want to pick it up again as the 2.2.0 release has some major features added to it:
Just to mention two.
The first was something really missing; I mean, we very often would like to know where the OnInsert trigger is being called from, don't we?
The second is a very nice-to-have one. Typically not on my list, but now looking at it ... it's neat. In many cases Find Usages on fields yields a long result list. Often overwhelming. Now it's possible to filter this list.
Say you want to find the usages of the primary key (Code) on the Currency table (4). This would be the result:
451 hits, maybe too many just to browse through. Say you're only interested in the usage related to SETRANGE. Use the Text filter to show only hits relating to SETRANGE, which yields the following:
Once again: great guys!
As you know, my daily work highly depends on Visual Studio Team Foundation Services, so when VS 2017 was released last week, I immediately installed it and continued my work as before. It's one of those great things about VS: high up- and downward compatibility. It makes me smile.
Nevertheless there are always a couple of things that I need to reinstall to have my setup the way I want it. Among them:
For this I lately used Steve Fenton's post Add Visual Studio Command Prompt To Visual Studio 2015. Straightforward to implement. However, for 2017 it needs some tweaking with respect to the argument:
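For what it's worth, these External Tools settings should do the trick, assuming a default VS 2017 Community install (adjust the edition folder, e.g. Professional or Enterprise, to your setup):

    Title:             VS Command Prompt
    Command:           C:\Windows\System32\cmd.exe
    Arguments:         /k "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\Tools\VsDevCmd.bat"
    Initial directory: $(SolutionDir)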
The VS Command Prompt will now be available in: Tools > VS Command Prompt