Yes, I have been a lazy blogger lately. With so many people blogging about NAV nowadays, it’s really difficult to come up with original content all the time.
So, I decided to test the performance of NAV under various configurations, including on-prem, Azure VM, and SQL Azure, and share my findings with you. And while I am writing these lines, there are four machines sweating it out under some performance test code I recently wrote.
As soon as I collect the results, I’ll share them with you in a series of three posts: one for on-prem, one for Azure VM, and one for SQL Azure.
Now, some general notes about these tests and their results.
Performance vs. Concurrency
First of all, let’s clear this up. I am not testing concurrency, which is how many users you can unleash at the same time in the same tenant, or on the same NST, or whatnot. I don’t care (yet) about that. I might or might not get around to testing concurrency at a later time, but for now I am focusing on performance.
Performance is the raw output that a specific setup can produce for a single user at a time.
Think of it like this: performance is how fast your car can drive. Concurrency is how many cars (of the same or different types) a road can sustain at the same time without resulting in crashes or jams.
Why do I care about performance?
For a very simple reason. I wanted to measure what kind of hardware (or virtual) configuration you need to achieve a certain performance goal. Not everyone needs hundreds of users. Measuring a single user under a specific configuration gives you a good picture of how much money you’ll need to shell out for hardware, or how big a virtual machine you’ll need to go for.
Any specifics about my goals?
For on-prem, I wanted to find out what mattered most in terms of hardware. Is it a fast processor, fast disks, SSD vs. HDD, and so on?
For Azure, I wanted to find out how different machine sizes affect performance, and whether you should go for A tier, D tier, or maybe G tier, and then how big inside the tier.
For SQL Azure, I wanted to find out which size to go for, and also to see where it behaves well and where it doesn’t.
How did I measure?
All of the tests run in iterations, and for some tests there are tens of thousands of iterations. Also, all tests log their results into the database. So, measuring the elapsed time from when a test began to when it completed wouldn’t be fair, because it would include overhead that I don’t care about.
Therefore, I used the .NET System.Diagnostics.Stopwatch class, which allows very precise time measurement, with the possibility of stopping and resuming measurement in the middle of an iteration. This way I was able to precisely measure a specific operation, regardless of what else happens during an iteration or during test preparation and completion stages.
Many tests include data rollbacks as part of each iteration, to allow every iteration to work on exactly the same dataset. When rollbacks are included, the rollback time is not measured.
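To make the measurement approach concrete, here is a minimal Python sketch of the same stop/resume pattern (in NAV itself this would be a DotNet variable wrapping System.Diagnostics.Stopwatch; the class and function names below are my own illustration, not code from the actual tests):

```python
import time

class Stopwatch:
    """Cumulative timer that can be stopped and resumed mid-iteration,
    mimicking .NET System.Diagnostics.Stopwatch Start/Stop semantics."""

    def __init__(self):
        self.elapsed = 0.0        # total measured seconds across all resumes
        self._started_at = None   # None while the stopwatch is stopped

    def start(self):
        if self._started_at is None:
            self._started_at = time.perf_counter()

    def stop(self):
        if self._started_at is not None:
            self.elapsed += time.perf_counter() - self._started_at
            self._started_at = None

def run_iteration(sw, measured_work, rollback):
    # Only the operation under test is timed; the rollback that resets
    # the dataset for the next iteration is excluded from the total.
    sw.start()
    measured_work()
    sw.stop()
    rollback()
```

This is why per-iteration rollbacks don’t skew the results: the stopwatch is stopped before the rollback runs, so only the measured operation accumulates into the total.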
All tests ran on a CRONUS database from NAV 2016 CU5.
I restored the database, imported the objects, ran all tests once to warm up the configuration, then deleted the results and then ran all tests five times in a row. Then I collected the raw data and calculated the averages.
I have designed a total of 14 different tests to exercise different aspects of both NST and SQL performance, and some of them measure how well the two work together.
So, here’s the list of the tests I ran.
So, that’s it. My tests have now run on a range of different tiers, and it’s time to go and compile the results.
Stay tuned, as over the next few days there will be a series of three posts (and perhaps more) letting you know how different on-prem, Azure VM, and SQL Azure tiers performed under the duress I described above.
The post NAV performance in various configurations appeared first on Vjeko.com.