Friday, May 27, 2011

Application of Model Based Testing to a Binary Search Tree - Part II

Okay, today I want to wrap up the model based testing of the binary search tree implementation I did last time. Remember how we uncovered that the model did not cover all of the code? Drawing on our experience from the Application of Code Coverage to Model Based Testing post, we understand that our model does not closely reflect the actual implementation, which leaves us with a test hole and therefore a risk.

Understanding the problem
Before we jump into adding additional tests, let's try to understand what the problem really is. Remember, I hinted that it has to do with our choice of container in the model. So let's dig into this a bit more by building some trees from the model:

Notice that even though these three trees have very different shapes, the internal set representation of the model is {0, 1, 2} in every case.
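To make the mismatch concrete, here is a minimal sketch in plain C# (using a standard insert routine, not the implementation under test from Part I) that builds three trees from different insertion orders and tracks the set-based model state alongside them:

using System;
using System.Collections.Generic;

class Node
{
    public int Value;
    public Node Left, Right;
    public Node(int value) { Value = value; }
}

class SetModelDemo
{
    // Standard (unbalanced) BST insertion - a sketch, not the implementation under test.
    static Node Insert(Node root, int value)
    {
        if (root == null) return new Node(value);
        if (value < root.Value) root.Left = Insert(root.Left, value);
        else root.Right = Insert(root.Right, value);
        return root;
    }

    static void Main()
    {
        // Three insertion orders that produce three differently shaped trees.
        int[][] orders = { new[] { 1, 0, 2 }, new[] { 0, 1, 2 }, new[] { 2, 1, 0 } };

        foreach (var order in orders)
        {
            Node root = null;
            var modelState = new HashSet<int>();   // the set used as the model's state
            foreach (int v in order)
            {
                root = Insert(root, v);
                modelState.Add(v);
            }

            var sorted = new List<int>(modelState);
            sorted.Sort();
            // Different tree shapes, identical model state {0, 1, 2}.
            Console.WriteLine("Order [{0}]: root = {1}, model state = {{{2}}}",
                string.Join(", ", order), root.Value, string.Join(", ", sorted));
        }
    }
}

The tree built from 1, 0, 2 is balanced, while the other two degenerate into chains, yet the set cannot tell them apart.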

Friday, May 20, 2011

Application of Model Based Testing to a Binary Search Tree - Part I

I wanted to post some real modeling examples for a change, showing how to use model based testing to explore test case combinatorics. The obvious choice is, of course, the more than sufficiently modeled calculator. So I decided not to choose a calculator, but something a bit different. I thought to myself: why not a simple Binary Search Tree? Hmm, but does it have any potential?

BSTs are really nice in that you can write invariants for them: 
For all nodes n in T: max(left-subtree(n)) < value(n) < min(right-subtree(n))

However, in a normal functional testing paradigm this is not entirely sufficient to validate the tree. The problem is that any given sub-tree of the BST will pass the integrity check – so if I were to introduce a bug that removed a whole sub-tree when deleting a node, the resulting tree would still be a valid BST, but it would not be the expected one! Normally we would need to check the node count and that the expected values are actually present in the tree; in a model based testing paradigm, however, this is no longer required, as we will see later on.
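To illustrate, a recursive check of the invariant could look like the sketch below (my own illustration, not the code from this series; the Node type is assumed to expose Value, Left and Right). Note that it verifies ordering only, so a buggy delete that silently drops an entire sub-tree still produces a tree that passes the check:

class Node
{
    public int Value;
    public Node Left, Right;
}

static class BstInvariant
{
    // Recursive check of the ordering invariant. Every node must lie strictly
    // between the bounds inherited from its ancestors.
    public static bool IsValidBst(Node n, int? min = null, int? max = null)
    {
        if (n == null) return true;   // an empty sub-tree is trivially valid

        if ((min.HasValue && n.Value <= min.Value) || (max.HasValue && n.Value >= max.Value))
            return false;

        return IsValidBst(n.Left, min, n.Value)     // left sub-tree stays below n.Value
            && IsValidBst(n.Right, n.Value, max);   // right sub-tree stays above n.Value
    }
}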

Monday, May 16, 2011

Flexibility of model based testing in practice

We all hear people arguing that model based testing is much better than traditional testing. "Why?" you ask. "Well, it's much more flexible," comes the answer. And you sit back and think, "Hmm, that didn't really answer my question."

So let me try to answer your question - why are models more flexible? Let me give you a real-life example of a case where model based testing proved to be flexible. I was working on a model for a new feature of the system under test. We started out designing and implementing model based testing against the feature as it was implemented at the time. The feature contains a list of items (we call it a journal; when you confirm your entries, you post the journal), and each item has an associated set of attributes (we call these dimensions; they are generic and used for analysis later on). We had developed a model for testing the posting functionality of this journal, and the model generated roughly 300 test cases. Now, it so happens that the dimension attributes can be set to a blocked state that prevents posting the journal - which we had modeled.

Now somebody comes along and says: "You know what? This is only used for forecasting; there is no need to prevent posting of blocked dimensions." At exactly this moment you take a deep breath and brace yourself for an argument about how the requirements were set up, before you realize: well... this is not a problem at all, we used model based testing. We went back to the model and literally changed one line of model code, changing the expected result of the PostJournal action to always be true. Voilà, we had fixed 300 test cases.
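To give a feel for how small the change was, here is a hypothetical sketch; the names Journal, HasBlockedDimension and ExpectedPostResult are illustrative, since the actual model code is not shown in this post:

class Journal
{
    public bool HasBlockedDimension;   // illustrative stand-in for the dimension state
}

static class PostingModel
{
    // Expected outcome of the PostJournal action in the model.
    public static bool ExpectedPostResult(Journal journal)
    {
        // Before the requirement change:
        //   return !journal.HasBlockedDimension;

        // After the change - the single edited line:
        return true;   // blocked dimensions no longer prevent posting
    }
}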

In conclusion, model based testing allows for design changes at a later stage of development, as the changes only need to be introduced at the model level. This is a huge benefit, especially because during the end game the pressure rises to finish before the deadline, and it is exactly then that you do not have time to change 300 test cases because a requirement changed. Oh, and by the way, do requirements change late in the game? Of course they do - that's the whole reason we invented Scrum instead of the waterfall model (but that's for another post).

Wednesday, May 11, 2011

Application of Code Coverage to Model Based Testing

There's something I want to get out of my system before I move on to the meatier posts, and that is applying code coverage to model based testing.

In general, code coverage must be the most abused quality metric in software testing. Take, for example, the often-heard statement that once you hit 100% code coverage your system is completely tested. From a coding perspective, yes – you have hit all the code (and nothing exploded). But what does that mean from a testing perspective? You can have 100% code coverage, and it still does not guarantee that the code is doing what you expected. Without verification on every line you cannot guarantee there are no bugs. Code coverage metrics often establish a false sense of security.
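A tiny, made-up example of the problem: the test below executes every line of Add, so a coverage tool reports 100%, yet it asserts nothing, and the obvious bug slips through:

using System;

static class Calculator
{
    // Buggy implementation: subtracts instead of adding.
    public static int Add(int x, int y)
    {
        return x - y;
    }
}

static class CalculatorTests
{
    public static void Main()
    {
        // Executes every line of Add, so coverage is 100%...
        Calculator.Add(2, 2);

        // ...but nothing is verified, so the bug is never caught.
        Console.WriteLine("All tests passed");
    }
}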

So is code coverage useless? No - actually it's very useful when used correctly. Tying back to my original post on testing as risk minimization, code coverage hints at where the risks are. If you have a class with zero code coverage, you have no tests exercising that class and thus cannot say anything about its quality level; that is a gaping hole in your testing and should be considered a risk, so priority should be put on writing tests that target this class. Used in this fashion, code coverage directs your effort to the riskiest portions of the code.

The same holds true for model based testing. Obtaining high code coverage does not guarantee that your model is good – using the same argument, you do not know if the covered code is actually verified. However, if you obtain very low code coverage of the targeted part of the system under test, it is an indication that the model is disconnected from the actual system under test.

However, in my experience you should not expect to gain more than 40-60% coverage from a good model. The point is that models are simplifications of the system under test, and to obtain a higher coverage percentage you need to exercise corner cases of the system – usually this requires more specific test cases (e.g. boundary testing), and including these cases in the model is often more work than writing them stand-alone.

From a code coverage perspective, model based testing has the downside that the code coverage density of the tests is very low. By that I mean that each test contributes very little additional coverage. The reason is that model based testing performs combinatorial testing on the inputs, which results in many tests exercising the same paths with only slight divergence between them; the tests therefore share a lot of code coverage. However, since you will no longer use code coverage as a performance metric for your tests, this should not be a problem.

Thursday, May 5, 2011

Model Based Testing conference

If you happen to be in Europe and are interested in the latest news in model based testing, check out the

ETSI Model-Based Testing User Conference

When: Oct 18, 2011 - Oct 20, 2011
Where: Berlin
Submission Deadline: May 22, 2011
Notification Due: Jun 30, 2011
Final Version Due: Sep 18, 2011

http://www.model-based-testing.de/mbtuc11/index.html

Wednesday, May 4, 2011

Finite vs. infinite state space

Okay, the first "real" post. So what is a finite state space model?
Any model that you can explore completely is by definition finite. That means you can try out every single combination in finite time (although in some cases this can be a very long time).

Conversely, an infinite model is one where the number of input combinations grows without bound.

Theoretically we cannot have truly infinite models in a computer, because the state space is determined by the variables in the model, which all have finite ranges - but if you combine two 32-bit integers, the state space explodes to 2^64 = 18,446,744,073,709,551,616 states, which is practically infinite.

In general, model exploration is bounded by a maximum number of states and steps, so we cannot tell whether a model is infinite when it has more states than the state bound. A classic example of a (practically) infinite model is a counter with the internal state variable int x = 0 and the action Increment() { x++; }. This model generates a state for every value x >= 0. Exploring the model gives the following result:


So basically, anytime you have numbers in your model that can take arbitrary values, you have an infinite state space. Bummer. Okay, but there are of course ways to work around this. Enter Equivalence Class Partitioning: ECP is all about determining which values you consider to be different. A great example is the addition function of your calculator, say add(x, y). Consider testing that add(2, 2) = 4; if this holds true, would you expect add(2, 3) <> 5? Of course not - if addition works for 2, it should work for 3 as well. Thus we consider all y > 0 equivalent. Likewise all y < 0 should be equivalent, and y = 0 is a boundary case. (Really, for all practical purposes, we would assume all values of y equivalent.)
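A small sketch of what this buys you in practice (my own illustration, assuming a hypothetical add implementation): instead of testing every value of y, we test one representative from each equivalence class.

using System;

static class AdditionEcp
{
    // Hypothetical function under test.
    static int Add(int x, int y) { return x + y; }

    static void Main()
    {
        // One representative per equivalence class of y: y < 0, y = 0 (boundary), y > 0.
        // Each entry holds { y, expected result of Add(2, y) }.
        int[][] cases =
        {
            new[] { -3, -1 },
            new[] {  0,  2 },
            new[] {  3,  5 }
        };

        foreach (var c in cases)
        {
            bool pass = Add(2, c[0]) == c[1];
            Console.WriteLine("Add(2, {0}) == {1}: {2}", c[0], c[1], pass);
        }
    }
}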

We can apply ECP the same way to our model state variables. Instead of modeling the counter fully, we could assume that all x > 0 are equivalent, so we adjust the model slightly to Increment() { if (x < 1) x++; }. Notice that the Increment action is still valid from any state, but calling Increment with x = 1 no longer changes the internal state representation. The model is now limited to two states, x = 0 and x = 1, and is therefore finite under our equivalence assumption. Model exploration gives:


Closing exercise: Extend the counter model to support negative numbers using a Decrement() action and apply ECP to make the state space finite.
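As a starting point for the exercise, here is a minimal, self-contained sketch of the two counter variants discussed above, written in plain C# rather than in a modeling tool; the Explore helper is only there to make the difference in state space visible and is not a real exploration engine:

using System;
using System.Collections.Generic;

static class CounterModels
{
    // Unbounded counter: every Increment produces a new state,
    // so the state space is (practically) infinite.
    static int IncrementUnbounded(int x) { return x + 1; }

    // ECP-bounded counter: all states with x > 0 are treated as equivalent,
    // so the model collapses to the two states x = 0 and x = 1.
    static int IncrementBounded(int x) { return x < 1 ? x + 1 : x; }

    // Naive exploration: keep applying the action to newly found states
    // until no new states appear or the state bound is hit.
    static HashSet<int> Explore(Func<int, int> increment, int stateBound)
    {
        var states = new HashSet<int> { 0 };
        var frontier = new Queue<int>();
        frontier.Enqueue(0);

        while (frontier.Count > 0 && states.Count < stateBound)
        {
            int next = increment(frontier.Dequeue());
            if (states.Add(next)) frontier.Enqueue(next);
        }
        return states;
    }

    static void Main()
    {
        // The unbounded model keeps generating states until the bound stops it.
        Console.WriteLine("Unbounded counter: {0} states (stopped by the bound)",
            Explore(IncrementUnbounded, 100).Count);

        // The ECP-bounded model settles on exactly two states: x = 0 and x = 1.
        Console.WriteLine("Bounded counter:   {0} states", Explore(IncrementBounded, 100).Count);
    }
}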


Why model based testing?

So, why did I decide to start blogging about model based testing? Good question. I will try to answer this in my first post.

The obvious answer is that I believe model based testing is a strong practice to apply when performing software testing - one that every software tester should be aware of.

The more subtle answer lies in how I perceive software testing in general. I believe that the art of software testing is all about minimizing risk. What I often hear instead is that software testing is all about creativity. I disagree with this view, because creativity is often far from rigorous. I believe we need a mixture: creativity when determining risks, but when it comes to automating scenarios, these should be selected by rigorous criteria rather than by human creativity - for example, through the use of model based testing.

Model based testing can in essence be seen as a driving mechanism for generating test inputs for your system under test (okay - I know this is a simplification, but bear with me). Once your model is completed (and assuming its state space is finite), exploring it will test all possible combinations of the input. This is clearly a more rigorous way of testing your system than coming up with all the scenarios yourself.

This was a very short introduction to why I blog about model based testing.