From the documentation for the Ruby gem Contracts:

Contracts let you clearly – even beautifully – express how your code behaves, and free you from writing tons of boilerplate, defensive code. You can think of contracts as assert on steroids.

I wrote here about getting better by reviewing the literature from time to time. The literature I specifically mentioned was one of the programming language cookbooks.

Well, recently I reviewed the Ruby Cookbook (while my car was being serviced) and saw Recipe 10.16, about Contracts. The recipe had code for a Contracts module; I was delighted to find later that since the book’s publication date [2005 — but a new edition is in the works!], the Contracts idea has been made into a Ruby gem: Contracts.
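
Here’s a taste, a minimal sketch in the style of the gem’s documentation (the Calculator class is my own toy example):

require 'contracts'

class Calculator
  include Contracts::Core
  include Contracts::Builtin

  # The contract: two numbers in, a number out.
  Contract Num, Num => Num
  def add(a, b)
    a + b
  end
end

Calculator.new.add(1, 2)      # => 3
Calculator.new.add(1, 'two')  # raises ContractError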

You’re gonna love this!


And a Very Useful Warning, Indeed

I wrote yesterday about the warnings I’ve been getting from RubyMine, and my eliminating them.

RubyMine warns when it cannot find the definition of a method. I got many of these warnings for methods that are defined dynamically, at runtime, instead of statically.

At first I thought I might leave the code as-is, because if a method really is not defined, there will be a runtime error when it’s time to call it. But then I realized that there are two important consequences: RubyMine will not be able to perform either code completion or code navigation for an ‘unfound’ method.

This is important to me, because I want someone writing test code to get as much help as possible from the tools. If RubyMine can’t do code completion for a method, the programmer will have to already know the exact name (and spelling) of the method he wants.

That will not do. So I’ve changed my code so that RubyMine now finds the definitions, and can do both code completion and code navigation.
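
For example (a made-up sketch of the kind of change I mean): methods defined in a loop are invisible to the inspector, while plain def definitions are found.

# Before: defined dynamically; RubyMine cannot find these definitions.
class Fields
  %w[name address].each do |attr|
    define_method(attr) { @values[attr] }
  end
end

# After: defined statically; code completion and navigation both work.
class Fields
  def name
    @values['name']
  end

  def address
    @values['address']
  end
end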

Thanks, RubyMine!

I Was Warned!

After 90 days in my new project, it seems certain that we’re going to continue testing with Ruby, so I’ve bought an IDE, RubyMine. That gives me a debugger, which is often convenient and sometimes critical.

It also gives me static code analysis (RubyMine calls it code inspection), which is the equivalent of compiler warnings. Steve McConnell is big on compiler warnings, as am I. In a few hours, I’ve reduced the number of warnings to zero, which is where I want it to remain. If there are a lot of unexamined warnings, something important may be hiding therein. (Actually, there was a warning about a case statement that had no else clause. I’m usually scrupulous about that, because its absence can cause downstream symptoms that are difficult to diagnose later on.)
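
For example (my own illustration, not code from the project): an else clause that fails fast turns a mysterious downstream nil into an immediate, diagnosable error.

def color_for(outcome)
  case outcome
  when 'passed' then 'green'
  when 'missed' then 'yellow'
  when 'failed' then 'red'
  else
    # Without this clause, an unexpected value silently yields nil,
    # and the failure surfaces somewhere far from its cause.
    raise ArgumentError, "Unexpected outcome: #{outcome.inspect}"
  end
end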

Now my code is warning-free!

History Report

Okay, my recent post was about my Changes Report. In this post I’m writing about my History Report, which is a spreadsheet.

(If your history report is a spreadsheet, too, you may want to skip the first three paragraphs below, and resume reading at “Each verdict cell.”)


The left headers are in the first few columns at the left; their job is to identify each row as belonging to a single test verdict. I’m using the Ruby gem MiniTest::Unit, so the identifying information is: suite name, test name, method name, verdict identifier.

The top headers are in the first few rows at the top; their job is to identify the build and summarize the results. Each column’s headers include the date and time, the build identifier, and the count of each possible outcome: passed, missed, failed. The leftmost of these build columns is for the most recent build. Older builds are represented in columns to the right.


Each column (other than those I’ve just mentioned) shows the verdicts for a single test run; the most recent run is just to the right of the identifying information, and older runs are farther to the right.

Each verdict cell shows the outcome for the verdict in that row: passed, missed, or failed. These outcome cells are colored according to their values. (See my post on colors.)
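
To make the layout concrete, here’s a tiny hypothetical example (two builds, two verdicts; all names and values invented):

                                    2014-06-10 09:00   2014-06-09 09:00
                                    build_0102         build_0101
                                    passed: 1          passed: 2
                                    missed: 0          missed: 0
                                    failed: 1          failed: 0
suite   test      method   verdict
login   t_login   submit   v01      passed             passed
login   t_login   submit   v02      failed             passed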

Beyond that, there’s one other important bit of data involved: if the verdict or its underlying data changed since the previous test run, the verdict is rendered in bold and italic, and is in fact a link. The link takes me to the very verdict in the Changes Report, and there I find the full information about the verdict: its expected and actual values for the current and previous test runs.

The bold italic link is present only when there was a change in the verdict. That means that for an old (unchanged) verdict, I can look to the right to find the most recent bold italic link, and that tells me when the most recent change occurred.

The remaining item I’ll be adding (soon) is a column for defects. Each cell will be a link to the defect item (in Rally), if there is one.

Oh, and did I say? Both my Changes Report and my History Report are generated automatically from the test logs (the only exception being the defect information, which must be updated manually).

Changes Report

My automated tests produce two reports:

  • History report.
  • Changes report.

In my test logs, each verdict is one of: passed, failed, missed (the verification point was not reached).

Now what the managers want to know is: How many of each there were. That’s what’s in the history report: today’s results, along with historical results. I’ll write about the history report in my next post.

What I want to know is: What’s different from the previous test run. That’s what’s in the changes report: all the differences between the current test run and the previous one.

The changes report groups verdicts as follows:

  • New failed.
  • New missed.
  • New passed.
  • Changed failed.
  • Changed passed.
  • Old failed.
  • Old missed.
  • Old passed.

The last three — old failed, old missed, and old passed — are of no immediate interest to me. The current result is exactly the same as the previous result. There’s no action I need to take, because all these were dealt with after some previous test run: defect reports opened, closed, updated, etc.

The first three — new failed, new missed, and new passed — obviously need my attention. Defect reports will need to be opened, closed, updated, etc.

The middle two — changed failed and changed passed — also need my attention:

  • Changed failed: A changed failed verdict is one that failed in the previous test run, then failed in the current test run, but in a different way. This occurs when the actual value changes from one wrong value to another. Investigation is required.
  • Changed passed: A changed passed verdict is one that passed in the previous test run, then passed in the current test run, but in a different way. This occurs when both the expected value in the test and the actual value delivered by the application have changed, and agree with each other. Usually this is because the developer gave advance notice of a change, which the tester accommodated by pre-adapting the test. (The grouping logic is sketched just below.)
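
For the curious, here’s a minimal sketch of how the grouping can be computed. This is my own sketch, not the report’s actual code, and it assumes each logged verdict is a hash with :outcome, :expected, and :actual:

# previous and current each map verdict id => {outcome:, expected:, actual:}.
def group(previous, current, id)
  curr = current[id]
  prev = previous[id]
  # No previous verdict, or a different outcome: the verdict is new.
  return "new #{curr[:outcome]}" if prev.nil? || prev[:outcome] != curr[:outcome]
  # Same outcome and same data: old. Same outcome, different data: changed.
  same_data = prev[:expected] == curr[:expected] && prev[:actual] == curr[:actual]
  same_data ? "old #{curr[:outcome]}" : "changed #{curr[:outcome]}"
end

(A missed verdict has no actual value to compare, so in practice it comes out new or old; that’s why the list above has no changed missed group.)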

So what of the changes report itself? Well, it has nine sections: a summary, plus a section for each of the eight groups listed above.

The summary lists the other sections, linking to each, and showing me the count of verdicts in each. The links allow me to navigate quickly to whichever section I want.

Each of the other sections begins with a list of the verdict ids for the verdicts it contains; each verdict id in that list links to the data for the verdict. Again, the links facilitate navigation.

At the links, each verdict’s data is presented in a small table that gives the verdict id, along with the expected and actual values for both the previous test run and the current one. The table is “over-and-under,” showing the corresponding values one above the other; this makes it easy for me to spot differences, even between similar values. The values in the table are displayed in a monospaced font, which also makes spotting differences easier.
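
Here’s a hypothetical example of one such table (all values invented):

verdict id: login/t_login/submit/v02
            expected           actual
previous:   Welcome, Alice!    Welcome, Alice!
current:    Welcome, Alice!    Welcome, A1ice!

With the previous and current values stacked, the lone differing character is easy to spot.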

And of course, my reports are kinder and gentler than some others.

Report Verifications, Not ‘Tests’

In many testing shops, counts of passed/failed ‘tests’ are the main part of summary test reporting. But a ‘test’ result is just a collection of verification results, and its reporting is just a way to obscure the actual situation.

Suppose, just for a very simple example, that we have 100 tests, each with 10 verification points. Suppose further that the report says that 5 tests failed.

What, exactly, does that mean? Well, actually, it doesn’t mean anything very exact.

At one extreme, it could mean that in the 5 tests, all 50 verifications failed. At the other it could mean that just one verification failed in each of the 5 tests — 5 failures in all.

So we know that for failures we have somewhere in the range 5% (50/1000) down to 0.5% (5/1000). Pretty fuzzy, no?

That’s why I don’t report passed/failed ‘tests’; I report passed/failed verifications.

[Actually, I report passed/missed/failed verifications, where ‘missed’ means that the verification point in the test was not reached.]
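
To be concrete, here’s a minimal sketch of what a verification helper might look like (entirely my own illustration; log_verdict is a hypothetical logging call, not part of MiniTest):

def verdict_assert_equal(id, expected, actual)
  outcome = (expected == actual) ? 'passed' : 'failed'
  log_verdict(id, outcome, expected, actual)  # hypothetical: writes one verdict to the test log
  outcome == 'passed'
end

Any verdict id that never appears in the log was never reached, and so is reported as missed.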

A Kinder, Gentler Report

I really hate a report (whether Excel, HTML, or Word) that uses full-strength colors as visual aids. They’re too jarring!


Instead, I’m now using the colors I cribbed from MS Excel’s Home tab, in Styles.


[Here in WordPress I did not find out how to make narrow cells. Sorry.]

The new colors will look more true (and better) if you scroll down so that the old colors are no longer on your screen.


My History Report is an Excel spreadsheet. For that report, I’ve set the whole spreadsheet to conditional formatting. Here’s how:

  1. Select the entire worksheet (by pressing Ctrl-A).
  2. Passed:
    1. Go to Home => Conditional Formatting => Highlight Cell Rules => Equal To....
    2. Type passed.
    3. Select Green Fill with Dark Green Text.
    4. Click OK.
  3. Missed:
    1. Go to Home => Conditional Formatting => Highlight Cell Rules => Equal To....
    2. Type missed.
    3. Select Yellow Fill with Dark Yellow Text.
    4. Click OK.
  4. Failed:
    1. Go to Home => Conditional Formatting => Highlight Cell Rules => Equal To....
    2. Type failed.
    3. Select Light Red Fill with Dark Red Text.
    4. Click OK.

Then, any cell whose text is passed, missed, or failed is automatically formatted accordingly.


My Changes Report is an HTML page. For that report, I’ve used the MS Excel colors as above, by adding this to the style section in the head section:

.good    { color: rgb(0,97,0);    background-color: rgb(198,239,206); }
.neutral { color: rgb(156,101,0); background-color: rgb(255,236,156); }
.bad     { color: rgb(156,0,6);   background-color: rgb(255,199,206); }

Then a cell can be, for example, defined in the HTML as:

<td class="good">passed</td>

The cell is then rendered with the same font color and background color as in the Excel spreadsheet above.

MS Word

For a Word document, …

Well, actually I don’t generate reports in Word documents.


The long silence here on the blog is because I had:

  1. A job search.
  2. A hand-over on the old job (Aquent/HP).
  3. A ramp-up on the new job (R1Soft).

This is my fifteenth gig.

I’m going to leave off all things CUIT for now. In my new work I’ll be testing web services. I’m looking at tools: Savon, SoapUI, possibly others.