I am sorry, but I will keep you waiting for a week or so, as I will be on holiday enjoying the southern French atmosphere. I hope the weather will not keep me from doing that. Au revoir.
Yesterday, Jill Frank, Senior Programming Writer for Microsoft Dynamics NAV, posted a blog on MSDN: How We Use MSDN Feedback to Improve Help. I really want to encourage you to read it and pick up the gauntlet she is throwing at us. As a member of the MS Dynamics GDL team I was there when the NAV UA group started to redefine their goals and processes, and I know they were really into improving their work. One of the outcomes has been the release on MSDN of the Microsoft Dynamics NAV 2009 Developer and IT Pro Documentation. I have been happy with that from the moment I knew about it. You might have noticed.
The UA group is really quick to give feedback on what they will/can do with your input.
BTW: I would like to extend Jill's appeal to the NAV Online Help as well. Ever noticed the documentation feedback link at the bottom of every topic? Use it for every anomaly you observe in a topic!
Yes, the purple color betrays that I am using it, really.
Quality is something that has been in my veins for ages. Although many around me might expect so, I cannot claim I was born with it. I can clearly recall my father rebuking me for my sloppiness. Maybe that was just an adolescent matter, who knows; long hair, careless attitude. Or maybe my father had never learned to simply admit I was doing alright. Memory is selective and often poor in chronology. Nevertheless, I dare say I am good at quality, or at least that I am into quality.
Why? Because I don't want to redo a job; economical; ecological... Because I want to be able to trust the outcome of any endeavor. And not least: I want others to trust me. And indeed, I don't like to be distrusted. It feels to me like, … like, … like failing my father's expectations, probably.
Meanwhile you might wonder: where is he heading? Well, in our business quality is typically something we all want to deliver, or at least are expected to deliver. And frankly speaking: at the end of the day, that's what our customers pay us for.
Last week I picked up the book I had started reading a year ago, when it had just been released: How We Test Software at Microsoft. Not sure where I had stopped, I browsed through the table of contents, as I often do, just to see which 'statement' would trigger me. I halted at chapter 16, Building the Future; or more precisely, at the section called The Need for Forward Thinking. Right in the bull's eye! Quality 'all over the place'.
"Most software used today is too complex, too big, and too expensive to improve quality from testing alone"
"If you want to get a high quality product out of test, you have to put a high quality product into test" - quoting Watts Humphrey
Or in other words:
"If quality is not at the forefront of engineering processes, it is impossible to reach acceptable levels of quality in the end"
So how to deliver quality? How to improve quality? By driving quality upstream.
As often: once I start reading I don't stop, and I jump from one book to the other. So, one more for the road, from Steve McConnell's much-praised Code Complete.
"If you start the process with designs for a Pontiac Aztek, you can test it all you want to, and it will never turn into a Rolls-Royce. You might build the best possible Aztek, but if you want a Roll-Royce, you have to plan from the beginning to build one."
Upstream! And what's 'to be found' upstream? Right: requirements specifications, or specs. In the next couple of blog posts I want to shed some light on this topic. On How I Reviewed Specs @ Microsoft. You're right, I liked that title, How We Test Software at Microsoft; especially the acronym made of it: HWTSAM.
So look out for HIRS@M part 1.
... and test. As you might have noticed: the subtitle of my blog.
Last Sunday's great 'pat on the back' from Vjekoslav Babic (once again: thanx!) explicitly mentioned the test part of my blog and, to be honest, next to making me somewhat shy, it also reminded me that I had committed myself to writing about testing as well. I know I did in some posts, but I have also been struggling with this theme and as such haven't written a lot about it explicitly, IMHO; at least not as much as I initially wanted to. However, one testing theme has been bugging me several times lately: sunshine vs. rainy. So today, motivated by Vjekoslav, I decided to pick this up. As I abhor redundancy, I first did some goo..., uhhh, binging on this and also consulted my testing mentor. But I didn't get a lot from that. Probably I am not a very good/persistent binger. I hope you still believe I was quite a good tester.
The most obvious thing we do when testing a piece of software is ask ourselves: what is this supposed to do? And when we have found the answer, whether from well-documented requirements, in-code comments, or coffee-corner talks, we test whether it does what it is supposed to do. Straightforward, like when the sun is shining. This is what we call sunshine scenario testing. In a nutshell you could say: this is what it's all about. Build and test software that does what it should do. Nothing more, nothing less.
Unfortunately - depending on your perspective - it's always more complex than this, as:
There is always someone or something to blame. And although the sun was shining when we started, clouds get in the way and even produce rain. It isn't what it was supposed/assumed to be anymore. Therefore we also need to test the rainy scenario(s); i.e. feed our piece of software invalid input, try to overload it, approach it from a different perspective, etc. In many ways the role of a tester is that of a demolisher, be it a constructive one. And this doesn't start only when code is available. It should start as soon as requirements are being conceived. In every phase of a software project there are things (i.e. deliverables and processes) to be tested.
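To make the contrast concrete, here is a minimal sketch of the two kinds of scenarios; in Python rather than C/AL, and purely illustrative: the validate_quantity routine and its rules are hypothetical, not taken from NAV.

```python
# A hypothetical validation routine and the two kinds of tests we can
# throw at it. Not NAV/C/AL code; purely illustrative.

def validate_quantity(qty):
    """Accept a positive integer quantity; reject anything else."""
    if isinstance(qty, bool) or not isinstance(qty, int):
        raise ValueError("quantity must be an integer")
    if qty <= 0:
        raise ValueError("quantity must be positive")
    return qty

def test_sunshine():
    # The sun is shining: valid input, documented behavior.
    assert validate_quantity(5) == 5

def test_rainy():
    # Clouds and rain: invalid input must be rejected, never accepted.
    for bad in (0, -3, "5", None, 2.5, True):
        try:
            validate_quantity(bad)
        except ValueError:
            pass  # rejected, as a rainy scenario demands
        else:
            raise AssertionError(f"invalid input {bad!r} was accepted")

if __name__ == "__main__":
    test_sunshine()
    test_rainy()
    print("sunshine and rainy scenarios both pass")
```

Note how the rainy test is the longer of the two: listing the ways things can go wrong is where the real work of the constructive demolisher begins.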
[So far haven't found a lot to mention here, but over time I imagine the list will grow.]
As I wrote in my previous blog post, I "... was greatly impressed by [the Test Manager's] ability to support manual testing and even let you record and code them ... " [... Read More ...]
Yesterday and today I spent some time downloading and installing the Virtual Machines for VS 2010 RC with Sample Data and Hands-on-Labs. Some time? Sorry for the understatement: it took me at least 4 hours to download, another 4 hours to get it running and, to balance it out, another 4 hours to play with it so I could use the Test Manager on NAV. And guess what? It was worth every minute of it. Oh yes, I got it crashing once; only once. But I got it running, recording, and replaying my manual test on the NAV 2009 RTC. It was real fun! Although you have to learn to be precise in your actions, as every keyboard and mouse action is recorded.
NAV 2009 RTC? So not on Classic? Indeed, not on Classic, as Test Manager does not recognize most of its controls. But I can live with that, moving forward and eventually leaving the Classic client behind.