Friday, May 20, 2011

Application of Model Based Testing to a Binary Search Tree - Part I

I wanted to post some real modeling examples for a change, where I show how to use model based testing to explore test case combinatorics. The obvious choice is of course the more than sufficiently modeled calculator. So I decided not to choose a calculator, but something a bit different. I thought to myself, why not a simple Binary Search Tree? Hmm, but does it have any potential?

BSTs are really nice in that you can write invariants for them:
For every node n in T: all values in left-subtree(n) < value(n) < all values in right-subtree(n)
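
Here is a minimal sketch of what such an integrity check could look like. The Node type and its fields are assumptions made purely for illustration; the post never shows the internals of the BST class:

    // Sketch only: hypothetical internal node type (not shown in the original BST class).
    private class Node
    {
        public int Value;
        public Node Left, Right;
    }

    private Node root;

    // Walks the tree and verifies that every node lies within the open interval
    // implied by its ancestors; throws if the BST invariant is violated.
    public void CheckIntegrity()
    {
        CheckIntegrity(root, null, null);
    }

    private static void CheckIntegrity(Node node, int? min, int? max)
    {
        if (node == null)
            return;

        if ((min.HasValue && node.Value <= min.Value) ||
            (max.HasValue && node.Value >= max.Value))
            throw new System.InvalidOperationException("BST invariant violated at node " + node.Value);

        CheckIntegrity(node.Left, min, node.Value);   // everything to the left must be smaller
        CheckIntegrity(node.Right, node.Value, max);  // everything to the right must be larger
    }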

However, in a normal functional testing paradigm this is not entirely sufficient to validate the tree. The problem is that any sub-tree of a valid BST also passes the integrity check – so if I introduced a bug that removed a whole sub-tree when deleting a single node, the resulting tree would still be a valid BST, just not the expected one! Normally we would also need to check the node count and that the expected values are actually present in the tree; in a model based testing paradigm, however, this is no longer required, as we will see later on.

Okay, so I started implementing a BST class just for the fun of it.
The signature looks like:

    public class BST
    {
        public bool Add(int i)
        public bool Remove(int i)
        public void CheckIntegrity()
    }

The two simple methods Add and Remove each take the integer value to store in or remove from the tree; the last method checks that the integrity of the tree is preserved. Cool. To get started I wrote a static wrapper class around the BST that operates on a single instance of the BST class:

    public class BSTTest
    {
        private static BST tree = new BST();

        public static void Cleanup()
        {
            tree = new BST();
        }

        public static bool Add(int i)
        {
            bool retval = tree.Add(i);
            tree.CheckIntegrity();
            return retval;
        }

        public static bool Remove(int i)
        {
            bool retval = tree.Remove(i);
            tree.CheckIntegrity();
            return retval;
        }
    }

This class will be the interface for my auto-generated tests.

Now to the fun part: Modeling!
First, you can pretty much build any relevant test scenario from a three node tree (except one case which requires four nodes, but including it would make the article less concise… if interested, post a comment). Obviously the absolute integer values you insert are of no relevance; it is the relative values compared to what is already in the tree that matter (i.e. a tree with the nodes 1,2,3 is equivalent to one with 2,3,4 or even 10,57,123). With this in mind we can scope the model down to working with the values 0,1,2 – that’s it, everything else is equivalent.

The behavior of Add and Remove depends on which values are in the tree: you cannot remove a value that is not in the tree, and you cannot add a value that is already there. Thus the model must keep track of what is in the tree – to do this we use a SetContainer. Another option is a SequenceContainer, which differs in that the order in which you add values matters. You might think the difference is negligible, but as we will see it actually makes for a profound difference in the end result. The model code looks like:

    using Microsoft.Modeling;   // Spec Explorer modeling types: SetContainer<T>, Rule attribute

    static class BSTModelProgram
    {
        static SetContainer<int> values = new SetContainer<int>();

        /// <summary>
        /// A rule that models the action of adding a value to the BST.
        /// A value can only be added once.
        /// </summary>
        /// <param name="x">The value to add to the tree.</param>
        [Rule(Action = "Add(x)/result")]
        static bool AddRule(int x)
        {
            if (values.Contains(x))
                return false;

            values.Add(x);
            return true;
        }

        /// <summary>
        /// A rule that models the action of removing a value from the BST.
        /// A value can only be removed if it was already added.
        /// </summary>
        /// <param name="x">The value to remove from the tree.</param>
        [Rule(Action = "Remove(x)/result")]
        static bool RemoveRule(int x)
        {
            if (!values.Contains(x))
                return false;

            values.Remove(x);
            return true;
        }
    }

Parameter values for x are limited to [0;2] by the machine configuration file. Notice though that the way I designed the model, I allow Spec Explorer to execute RemoveRule with a value x even when x is not in the set - this lets Spec Explorer generate negative test cases, where the test also checks that it is not possible to remove a value that does not exist. Another approach would be to use a Condition to block RemoveRule from being explored when the value is not contained in the tree, which would then only generate sunshine (happy-path) scenarios.
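
For comparison, a guarded version of the rule could look roughly like the sketch below. This is only an illustration of the alternative I did not use; it relies on the Condition.IsTrue requirement from the Microsoft.Modeling library:

    /// <summary>
    /// Alternative, guarded variant of the remove rule (not used in the model above):
    /// the rule is only enabled for exploration when the value is already in the tree,
    /// so no negative Remove test cases are generated.
    /// </summary>
    [Rule(Action = "Remove(x)/result")]
    static bool GuardedRemoveRule(int x)
    {
        Condition.IsTrue(values.Contains(x));   // blocks exploration when x is absent

        values.Remove(x);
        return true;
    }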

Let’s explore the model
At first the exploration graph looks somewhat confusing, but there is order in the chaos. Try following the path Add(0), Add(1), Add(2). Notice how all the Remove actions seem to trace backwards in the graph – this is to be expected.

Remember how I told you that in model based testing there is a new paradigm, and that we did not actually need to check the content and size of the tree to validate it? Let me explain why. Let’s start by considering a functional testing approach:

Add(0) -> Add(1) -> Add(2) -> Remove(1) -> Validate that the tree contains 0, 2 -> End

Let’s look at the equivalent model driven test:

Add(0) -> Add(1) -> Add(2) -> Remove(1) -> ?

So, what happens now? According to the exploration graph we are in state S28, but there are more actions to apply from here! Spec Explorer will simply continue testing. For example it would try

-> Remove(2)/True

In the case of the bug that deletes the sub-tree instead of just the node, the value 2 would be nowhere to be found in the tree, and thus the test would fail. Likewise, Spec Explorer automatically tests that all the expected nodes are in the tree. Why? Because in S28, for each value y already in the tree there is a self-looping action Add(y)/False, stating that the model expects adding y to fail – and because we are doing both positive and negative testing it will actually try this action.
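
To make this concrete: after Add(0) -> Add(1) -> Add(2) -> Remove(1) the model state is the set {0, 2}, so the rules above still enable all of the following actions from S28:

    Add(1)/True      Add(0)/False     Add(2)/False
    Remove(0)/True   Remove(2)/True   Remove(1)/False

Against the sub-tree-deleting bug, for instance, Remove(2) would return False instead of the expected True, and Add(2) would succeed instead of failing – either way a test fails.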

To conclude on this, the thoroughness of the model exploration actually greatly enhances the validation of the tree – and you get this for free!

Auto generated test pass
The model generates a set of 17 test cases to run against the SUT. On a fully compliant implementation of the BST all tests pass. To analyze the value of this model based approach I introduced a series of common bugs and recorded how the test suite fares against each one:

Bug                                                                   Test result
Forgetting to remove the root node if it is deleted                   6/17 passed
Deleting the sub-tree from a node instead of only the node itself     6/17 passed
Adding nodes to the wrong side of the tree                            3/17 passed
Failing to delete leaf nodes                                          9/17 passed
Removing a node with only left children deletes the whole sub-tree    7/17 passed
Removing a node with only right children deletes the whole sub-tree   11/17 passed

So the performance of the test suite is pretty solid in terms of catching bugs. However, after running the tests I did a code coverage analysis of the SUT, and it turned up an interesting piece of information: I am missing coverage on 4 blocks. That means I could introduce bugs in those 4 blocks and my model based tests would not be able to find them. If my model were perfect, shouldn't I have 100% code coverage of the SUT? So what am I missing in my model?

The offending method is named FindReplacement. Its purpose is to locate the node that takes the place of a deleted node. But why am I not getting all of its outcomes covered? As it turns out, this has to do with my choice of a SetContainer to store the inserted values in my model – but more on that next week in Part II…
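
For context, here is a generic sketch of how such a replacement lookup is commonly written – take the in-order successor (the leftmost node of the right sub-tree) when a right sub-tree exists, otherwise fall back to the left child. This illustrates the general technique, not necessarily the implementation tested above:

    // Sketch only: Node is the hypothetical internal node type from earlier.
    // Returns the node that should take the place of 'deleted', or null for a leaf.
    private static Node FindReplacement(Node deleted)
    {
        if (deleted.Right != null)
        {
            // In-order successor: the leftmost node in the right sub-tree.
            Node replacement = deleted.Right;
            while (replacement.Left != null)
                replacement = replacement.Left;
            return replacement;
        }

        // No right sub-tree: the left child (possibly null) takes the node's place.
        return deleted.Left;
    }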
