The other day some colleagues and I got into a discussion on model-based testing. The idea of testing a system based on a graphical representation of it has been around for a long time and has taken many forms over the years. The typical vision of model-based testing is having a graphical model of the system under test (SUT) and being able to push a button and, like magic, tests are created, run against the live system, and a log of discovered defects appears before your eyes. I started thinking how it was unfortunate that the concept never really became viable in the industry and how it had not made its way into the Rational quality management portfolio… and then I suddenly realized… it did! It has become a very powerful component in our portfolio and I hadn’t even noticed it! Before I discuss that very powerful component, let me explain how its presence had taken me off guard.
I think the reason why I missed the obvious is that the model is just a component of our portfolio, not the keystone. Our various attempts at model-based testing over the years here at Rational were really not just model-based but model-driven approaches. The difference is subtle. Let me show my age by using a former Rational product to illustrate my point.
One of the earliest of Rational’s organic developments in the area of model-based testing was a product we put out around the year 2000 called Rational Test Architect. It actually was a pretty slick tool based on Rational Rose UML (Unified Modeling Language) models. The idea was that the System Architects would develop Rose UML models to define the system being built. The Designers would then take that model and elaborate it with all the details required to generate code from the model. The code generated was really more of a framework. Programmers had to fill in the real implementation of the methods and deploy the system. Once that was done, the Testers could elaborate the UML model a bit more and link all the test data they had created independently to the model. Once all that was in place – BAM! – you could click a button and the tests would run against the deployed system and give you a log of errors. It was just that simple.🙂
The first thing you probably recognize as a problem is the waterfall nature of the process. Testing was left to the end, of course. But there are a few more reasons Test Architect isn’t exactly a household word today.
If you Google “model-based testing”, the Wikipedia article on the topic will be near the top of the list. And in that article, you will find a workflow diagram.
IXIT in this diagram stands for “implementation extra information” – the information needed to convert an abstract test suite into an executable one. That is often easier said than done. It’s pretty easy to draw out a block diagram of the SUT from which you can envision some abstract tests, but there can be a lot of “extra information” required to generate a real executable test from the model. This was a problem for Rational Test Architect and is generally a problem for any model-based testing tool. We used to say “your model is your test,” but the model required many complex and technical details such as addresses, port numbers, authentication parameters and data structures in order to create a real test. In order to generate a test, the model had to be complete. It had to be fully elaborated. If the model wasn’t perfect, you couldn’t move forward.
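To make the abstract/executable gap concrete, here is a minimal Python sketch. It is not how any Rational tool actually worked internally; all names, hosts and credentials are hypothetical, chosen only to show how much “extra information” an abstract test is missing.

```python
# A sketch of the gap between an abstract test and an executable one.
# The abstract test knows only model-level facts: an operation name,
# an input, and an expected result.
abstract_test = {
    "operation": "getAccountBalance",
    "input": {"account_id": "12345"},
    "expected": {"status": "OK"},
}

# The IXIT ("implementation extra information") supplies everything the
# model omits: addresses, ports, credentials, wire formats. Every value
# here is hypothetical.
ixit = {
    "host": "sut.example.com",
    "port": 8443,
    "auth": ("tester", "secret"),
    "content_type": "application/json",
}

def make_executable(test, extra):
    """Combine the abstract test with the IXIT to produce a concrete
    request description that a test runner could actually send."""
    return {
        "url": f"https://{extra['host']}:{extra['port']}/{test['operation']}",
        "auth": extra["auth"],
        "headers": {"Content-Type": extra["content_type"]},
        "body": test["input"],
        "expected": test["expected"],
    }

concrete = make_executable(abstract_test, ixit)
print(concrete["url"])  # https://sut.example.com:8443/getAccountBalance
```

Notice that the IXIT dictionary is as large as the test itself – and in a real system it would be far larger, which is exactly why a model had to be fully elaborated before a single test could run.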
In his paper “Model-Driven Testing with UML 2.0”, Zhen Ru Dai uses a diagram to illustrate the process of using models to create test code.
Notice the number of transformations and refinements that need to happen to get from a Platform Independent Model (PIM) to Test Code. That loosely equates to the “extra information” from the Wikipedia article. Clearly this is very specific to the technology being used to implement the system. The more structured and specified the platform technology is, the less that must be supplied by the Tester in order to transform the PIM to executable tests.
In order to contain this vast complexity, Rational Test Architect limited the technologies to which it could be applied… to one technology… COM (Microsoft Component Object Model). No, not COM+ or even DCOM, just COM. (Don’t laugh, we could have chosen CORBA). In effect, we were starting with a Platform Specific Model in the above diagram, avoiding some of the general stuff. That made model-based testing much more realistic for the COM world, but not so much for anything else. Limiting ourselves to COM was rather, well, limiting. Yet another reason Rational Test Architect isn’t a household word.
But you also have to keep in mind that back in 2000, component-based development was a fairly new concept. You didn’t find that many industry accepted standards for message interchange and services. Architecture was only for the really big DoD systems. Most desktop systems were based on seat-of-the-pants development from the ground up. This isn’t the case today, which brings me back to the model-based testing component in the Rational portfolio today.
I’m talking about what is known as the Architecture School perspective of Rational Integration Tester (RIT), formerly Green Hat GHTester. It is used to graphically model aspects of the SUT such as endpoints, operations and relationships. So why do I think this has any more chance of long-term success than past attempts at model-based testing? Let me give you a few thoughts.
Vast Standard Technology Coverage
RIT is constrained to working within the boundaries of the technologies which it understands. There is no getting around that. However, today’s IT systems are built around technologies such as HTTP(S), JMS, TIBCO, Software AG, Oracle, SAP, IBM MQ and others. There are many, but they are based on industry-accepted standards. RIT supports over 70 industry standard protocols and message formats.
In addition to those 70+ standards, RIT provides mechanisms to extend your coverage by enabling you to define custom message formats and schemas which will help you support not only your proprietary legacy systems but new bleeding edge specifications.
Model-based, not Model-driven
The model assists in the process of developing tests, it isn’t driving the creation of the tests. I can begin with a model as simple as a single HTTP connection and start creating tests. I don’t need to have a complete, elaborate model to get started.
Build by Specification
The model can be derived by analyzing interface specifications such as WSDLs, XSDs, DFDLs, Copybooks, IDocs and many others. These specifications act as interface contracts between the system components. They are often created well before the components are coded. RIT can derive portions of the model from these specification documents and stay in sync with them as they evolve. Tests are not rendered obsolete each time the specification changes.
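As a rough illustration of deriving a model skeleton from a specification – not RIT’s actual mechanism, and using a deliberately tiny, hypothetical WSDL fragment – the operation names declared in a WSDL portType can be pulled out with a few lines of Python:

```python
import xml.etree.ElementTree as ET

# A tiny, hypothetical WSDL fragment. A real document would also define
# messages, bindings, and service endpoints.
WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/"
              name="AccountService">
  <portType name="AccountPort">
    <operation name="getBalance"/>
    <operation name="transferFunds"/>
  </portType>
</definitions>"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def operations_from_wsdl(wsdl_text):
    """Extract the operation names declared in a WSDL portType --
    the kind of skeleton a modeling tool can grow a logical model from."""
    root = ET.fromstring(wsdl_text)
    return [op.get("name")
            for op in root.findall(".//wsdl:portType/wsdl:operation", NS)]

print(operations_from_wsdl(WSDL))  # ['getBalance', 'transferFunds']
```

Because the operations come straight from the contract, re-running this kind of extraction whenever the WSDL changes is what lets a model stay in sync with an evolving specification.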
Build by Recording
If portions of the system already exist – even in very preliminary forms – RIT can record the interactions and derive the model from the recordings. This includes identifying operations, dependencies and data formats. One of the tenets of Agile development is building an executable system at the end of each iteration. This means there will be working interfaces to record very early in a project, leading to very early integration tests.
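The core idea of deriving a model from recordings can be sketched in a few lines of Python. This is only an illustration under simplified assumptions – the recordings here are hypothetical captured HTTP requests, and real tooling would also infer data formats and dependencies:

```python
# Hypothetical recorded interactions captured from a running system.
recordings = [
    {"method": "POST", "path": "/accounts/open",    "status": 201},
    {"method": "GET",  "path": "/accounts/balance", "status": 200},
    {"method": "GET",  "path": "/accounts/balance", "status": 200},
    {"method": "POST", "path": "/payments/send",    "status": 202},
]

def derive_operations(recs):
    """Collapse raw recordings into a de-duplicated, sorted list of
    operations -- the starting point for a model of the system."""
    return sorted({(r["method"], r["path"]) for r in recs})

for method, path in derive_operations(recordings):
    print(method, path)
```

Even this trivial grouping shows the appeal: the model’s operations fall out of observed traffic rather than having to be hand-drawn up front.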
Separation of Logical and Physical Views
That “extra information” required to realize an executable system from a logical model is captured in physical resources. The physical resources are bound to the logical model components through environments. The beauty of this architecture is that tests can be based on the generic logical model and only bound to the physical details at execution time, depending on the environment in which the test will be run. Testers can switch from testing against developer desktop environments to test lab environments to user acceptance to pre-production without redefining countless pieces of “extra information” in the model.
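The binding idea can be sketched in a few lines of Python. This is not RIT’s implementation, and all hostnames and ports are hypothetical; it only shows how a test that references a logical name stays unchanged while the environment supplies the physical details:

```python
# A sketch of separating logical and physical views. A test references
# only a logical endpoint name; an "environment" maps that name to the
# physical details at execution time. All hosts and ports are made up.
ENVIRONMENTS = {
    "dev":  {"PaymentService": {"host": "dev.example.com",  "port": 8080}},
    "uat":  {"PaymentService": {"host": "uat.example.com",  "port": 8443}},
    "prod": {"PaymentService": {"host": "prod.example.com", "port": 443}},
}

def bind(logical_name, environment):
    """Resolve a logical endpoint to its physical address for the chosen
    environment; the test itself never changes."""
    phys = ENVIRONMENTS[environment][logical_name]
    return f"{phys['host']}:{phys['port']}"

# The same logical test runs anywhere just by switching environments.
print(bind("PaymentService", "dev"))   # dev.example.com:8080
print(bind("PaymentService", "uat"))   # uat.example.com:8443
```

Switching from a developer desktop to a test lab becomes a one-word change to the environment name rather than a re-elaboration of the model.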
So my old friend model-based testing was standing right there in front of me and I hadn’t even recognized him. He certainly dresses differently and acts differently in Rational Integration Tester than when I last saw him in Rational Test Architect. He’s learned to be more open minded – to apply himself to more tasks. He’s matured in his approach. He’s flexible. Maybe we will stay in contact now after losing touch for the last twelve or so years.