The Rational Integration Tester Model

In my last post, I talked a bit about how Rational Integration Tester uses visual models to assist you in building tests and service virtualizations – otherwise known as stubs. I would like to go a bit deeper into that topic in this post.

You use a perspective in Rational Integration Tester called the Architecture School to build your models. The name implies that you will be teaching RIT about your system by creating models. That is partially true, but in some ways, RIT will be teaching you! Let’s take a closer look at the Architecture School and how it is used.

The first thing you may notice about the Architecture School is that it is divided into multiple sections through tabs. I will focus on the Logical View and the Physical View in this post. The Logical View is typically where you will start entering your model if you are modeling manually (more about automated modeling in a future post). The Logical View holds the logical representation of your system. Don’t think of this as a deployment topology. It is not. You will not be specifying details like redundant servers, app server clusters, etc. Think logical here. For example, if you will be testing an HTTP web service, you may start with a single HTTP connection in the Logical View.

HTTP connections, databases, JMS domains and such are considered Infrastructure components in the Logical View. You will normally compartmentalize your architecture into Service components. In addition to Infrastructure components, Service components usually contain the Operations exposed by the service. Relationships between Operations and Infrastructure components are represented by Dependencies. In the following diagram, JKE Services is a Service component, JKE Legacy Services is an HTTP Connection Infrastructure component, and user and account are Operations that depend on the JKE Legacy Services connection. Test and stub definitions are defined using only logical components; tests and stubs are not built directly upon physical resources.

Logical View of the Architecture School Perspective
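To make the relationships concrete, the Logical View concepts above could be sketched as a small data model. This is purely illustrative; the class and field names are my own, not part of RIT or its API.

```python
# Illustrative sketch of Logical View concepts; names are my own, not RIT's.

class InfrastructureComponent:
    """A logical connection such as HTTP, a database, or a JMS domain."""
    def __init__(self, name, kind):
        self.name = name
        self.kind = kind

class Operation:
    """An operation exposed by a service; its Dependency points at infrastructure."""
    def __init__(self, name, depends_on):
        self.name = name
        self.depends_on = depends_on  # an InfrastructureComponent

class ServiceComponent:
    """A compartment of the architecture grouping operations and infrastructure."""
    def __init__(self, name):
        self.name = name
        self.operations = []
        self.infrastructure = []

# The example from the diagram: JKE Services contains an HTTP connection
# and two operations that depend on it.
jke = ServiceComponent("JKE Services")
legacy = InfrastructureComponent("JKE Legacy Services", "HTTP Connection")
jke.infrastructure.append(legacy)
jke.operations.append(Operation("user", legacy))
jke.operations.append(Operation("account", legacy))
```

Note there is nothing physical here: no hostnames, no ports. That is the whole point of the Logical View.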

Details such as IP addresses, port numbers, authentication methods and the like are associated to Physical resources in the Physical View. An Infrastructure component from the Logical view can represent any number of Physical resources. For example, you may be building a web service that runs on a logical HTTP connection. You do your development testing on a simple desktop instance of the logical architecture where the application is hosted on a small Tomcat server. As the system matures, you deploy to a QA instance running on a WebSphere Application Server in a test lab. As you move to pre-production, your system instance will look more like your redundant WebSphere cluster. So that one JKE Legacy Services connection could be realized through three or more physical servers.

Physical View of the Architecture School Perspective

So how does Rational Integration Tester know which physical server you want to use when you create or run a test? The piece that pulls the logical view together with the physical resources is known as an Environment. The environment is the secret sauce that makes this separation of logical and physical actually work. The environment “binds” the logical infrastructure component to the physical resource for a particular instance of the system you will test against. Think of an environment as a map.

Environment Editor

So to run tests against the development environment in our example, you simply select Mike’s Dev environment. To run the exact same tests against the QA lab environment, you simply switch to the QA Lab environment. The tests are not changed in any way. Rational Integration Tester takes care of the details of directing the requests to the right hostname, queue or domain.

Switch Environments
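The binding idea can be sketched in a few lines of Python. The environment names, hostnames and the mapping structure here are made up for illustration; this is not RIT’s API, just the shape of the concept.

```python
# Sketch of logical-to-physical binding via environments.
# All names and addresses below are invented for illustration.

ENVIRONMENTS = {
    "Mike's Dev": {"JKE Legacy Services": "http://localhost:8080"},
    "QA Lab":     {"JKE Legacy Services": "http://qa-was01.example.com:9080"},
}

def resolve(environment, logical_name):
    """Bind a logical infrastructure component to a physical endpoint."""
    return ENVIRONMENTS[environment][logical_name]

def run_test(environment):
    # The test only ever names the logical component; the environment
    # supplies the physical address at execution time.
    endpoint = resolve(environment, "JKE Legacy Services")
    return f"GET {endpoint}/user"

print(run_test("Mike's Dev"))  # same test, developer desktop target
print(run_test("QA Lab"))      # same test, QA lab target
```

Switching environments changes only which map is consulted; the test definition itself never changes.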

As I mentioned in my last post, two downfalls of many well-intentioned model-driven testing approaches have been the rigid dependence of the test on the model and the need to fully elaborate the model with physical details before testing can begin.  Rational Integration Tester uses multiple views, tied together by environments, to keep the visual model useful without being too restrictive.

Model-based Testing… We meet again, my old friend

The other day some colleagues and I got into a discussion on model-based testing.  The idea of testing a system based on a graphical representation of a system has been around for a long time and it has taken many forms over the years.  The typical vision of model-based testing is having a graphical model of the system under test (SUT) and being able to push a button and like magic, tests are created, run against the live system and a log of discovered defects appears before your eyes.  I started thinking how it was unfortunate that the concept really never became viable in the industry and how it had not made its way into the Rational quality management portfolio… and then I suddenly realized… it did!  Model-based testing has become a very powerful component in our portfolio and I hadn’t even noticed it!  Before I discuss that very powerful component, let me justify how its presence had taken me off guard.

I think the reason why I missed the obvious is that the model is just a component of our portfolio, not the keystone.  Our various attempts at model-based testing over the years here at Rational were really not just model-based but model-driven approaches. The difference is subtle.  Let me show my age by using a former Rational product to illustrate my point.

One of the earliest of Rational’s organic developments in the area of model-based testing was a product we put out around the year 2000 called Rational Test Architect.  It actually was a pretty slick tool based on Rational Rose UML (Unified Modeling Language) models.  The idea was that the System Architects would develop Rose UML models to define the system being built.  The Designers would then take that model and elaborate it with all the details required to generate code from the model.  The code generated was really more of a framework.  Programmers had to fill in the real implementation of the methods and deploy the system.  Once that was done, the Testers could elaborate the UML model a bit more and link all the test data they had created independently to the model.  Once all that was in place – BAM! – you could click a button and the tests would run against the deployed system and give you a log of errors.  It was just that simple. 🙂

The first thing you probably recognize as a problem is the waterfall nature of the process.  Testing was left to the end, of course. But there are a few more reasons Test Architect isn’t exactly a household word today.

If you Google “model-based testing”, this Wikipedia article will be near the top of the list.  And in that article, you will find this workflow diagram.

Model-based testing workflow

IXIT in this diagram stands for implementation extra information: the information needed to convert an abstract test suite into an executable one. That is often easier said than done.  It’s pretty easy to draw out a block diagram of the SUT from which you can envision some abstract tests, but there can be a lot of “extra information” required to generate a real executable test from the model.  This was a problem for Rational Test Architect and is generally a problem for any model-based testing tool.  We used to say “your model is your test”, but the model required many complex and technical details such as addresses, port numbers, authentication parameters and data structures in order to create a real test.  In order to generate a test, the model had to be complete.  It had to be fully elaborated.  If the model wasn’t perfect, you couldn’t move forward.

In his paper “Model-Driven Testing with UML 2.0”, Zhen Ru Dai uses this diagram to illustrate the process of using models to create test code.

Test Design Models

Notice the number of transformations and refinements that need to happen to get from a Platform Independent Model (PIM) to Test Code.  That loosely equates to the “extra information” from the Wikipedia article.  Clearly this is very specific to the technology being used to implement the system.  The more structured and specified the platform technology is, the less that must be supplied by the Tester in order to transform the PIM to executable tests.

In order to contain this vast complexity, Rational Test Architect limited the technologies to which it could be applied… to one technology… COM (Microsoft Component Object Model).  No, not COM+ or even DCOM, just COM.  (Don’t laugh, we could have chosen CORBA).  In effect, we were starting with a Platform Specific Model in the above diagram, avoiding some of the general stuff.  That made model-based testing much more realistic for the COM world, but not so much for anything else. Limiting ourselves to COM was rather, well, limiting.  Yet another reason Rational Test Architect isn’t a household word.

But you also have to keep in mind that back in 2000, component-based development was a fairly new concept.  You didn’t find that many industry accepted standards for message interchange and services.  Architecture was only for the really big DoD systems.  Most desktop systems were based on seat-of-the-pants development from the ground up.  This isn’t the case today, which brings me back to the model-based testing component in the Rational portfolio today.

I’m talking about what is known as the Architecture School perspective of Rational Integration Tester (RIT, formerly Green Hat GHTester).  It is used to graphically model aspects of the SUT such as endpoints, operations and relationships.  So why do I think this has any more chance of long-term success than past attempts at model-based testing?  Let me give you a few thoughts.

Vast Standard Technology Coverage

RIT is constrained to working within the boundaries of the technologies it understands.  There is no getting around that.  However, today’s IT systems are built around standards such as HTTP(S), JMS, TIBCO, Software AG, Oracle, SAP, IBM MQ and others.  There are many, but they are based on industry-accepted standards.  RIT supports over 70 industry-standard protocols and message formats.

Sample of Supported Technologies

Extensibility

In addition to those 70+ standards, RIT provides mechanisms to extend your coverage by enabling you to define custom message formats and schemas, which will help you support not only your proprietary legacy systems but also new bleeding-edge specifications.

Custom Record Layout

Model-based, not Model-driven

The model assists in the process of developing tests; it doesn’t drive their creation.  I can begin with a model as simple as a single HTTP connection and start creating tests.  I don’t need a complete, elaborate model to get started.

Simple HTTP Connection

Build by Specification

The model can be derived by analyzing interface specifications such as WSDL, XSD, DFDL, copybooks, IDocs and many others.  These specifications act as interface contracts between the system components.  They are often created well before the components are coded.  RIT can derive portions of the model from these specification documents and stay in sync with them as they evolve.  Tests are not rendered obsolete each time the specification changes.

Build by Specification
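To illustrate the idea of deriving a model from a specification, here is a minimal sketch (not RIT’s implementation) that pulls operation names out of a WSDL’s portType. The WSDL document below is a tiny, made-up example.

```python
# Sketch: derive operation names from a WSDL portType.
# The WSDL below is a minimal, invented example for illustration.
import xml.etree.ElementTree as ET

WSDL = """<definitions xmlns="http://schemas.xmlsoap.org/wsdl/" name="JKE">
  <portType name="JKEServices">
    <operation name="user"/>
    <operation name="account"/>
  </portType>
</definitions>"""

NS = {"wsdl": "http://schemas.xmlsoap.org/wsdl/"}

def operations_from_wsdl(wsdl_text):
    """Return the operation names declared in the WSDL's portType elements."""
    root = ET.fromstring(wsdl_text)
    return [op.get("name")
            for op in root.findall(".//wsdl:portType/wsdl:operation", NS)]

print(operations_from_wsdl(WSDL))  # ['user', 'account']
```

Because the operations come straight from the contract document, re-running this kind of analysis after the WSDL changes keeps the model aligned with the specification.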

Build by Recording

If portions of the system already exist – even in very preliminary forms – RIT can record the interactions and derive the model from the recordings.  This includes identifying operations, dependencies and data formats.  One of the tenets of Agile development is building an executable system at the end of each iteration.  This means there will be working interfaces to record very early in a project, which leads to very early integration tests.

Recording Studio on the toolbar

Separation of Logical and Physical Views

That “extra information” required to realize an executable system from a logical model is captured in physical resources.  The physical resources are bound to the logical model components through environments.  The beauty of this architecture is that tests can be based on the generic logical model and bound to the physical details only at execution time, depending on the environment in which the test will be run.  Testers can switch from testing against developer desktop environments to test lab environments to user acceptance to pre-production without redefining countless “extra information” in the model.

Environment Bindings

So my old friend model-based testing was standing right there in front of me and I hadn’t even recognized him.  He certainly dresses differently and acts differently in Rational Integration Tester than when I last saw him in Rational Test Architect.  He’s learned to be more open minded – to apply himself to more tasks.  He’s matured in his approach.  He’s flexible.  Maybe we will stay in contact now after losing touch for the last twelve or so years.