Tag Archives: software test automation

Watchwords

Here are some principles I try to keep in mind.

  • Readability: Make code easy to read; code is read more often than it is written.
  • DRYness (Don’t Repeat Yourself): Avoid redundant code and data.
  • YAGNIty (You Ain’t Gonna Need It): Don’t write code before it’s needed.
  • Sloth: Maximize use of existing packages and libraries. The line of code you don’t write is the line of code you never have to debug. — Steve Jobs
  • Explicitness: Explicate everything, even when “unnecessary.” If it goes without saying, it would go better by saying it. — Talleyrand
  • Cleanliness: Keep everything clean and consistent: run static code analysis, and resolve all issues before committing.
  • Failed Verdict Diagnostics: Log data sufficient to diagnose a failed verdict.
  • Error Diagnostics: Detect test errors early, fail early, and log useful information.
  • Monitoring: Trust, but verify. Monitor documentation for changes (programmatically, of course).

I Was Warned!

After 90 days in my new project, it seems certain that we’re going to continue testing with Ruby, so I’ve bought an IDE, RubyMine. That gives me a debugger, which is often convenient and sometimes critical.

It also gives me static code analysis (RubyMine calls it code inspection), which is the equivalent of compiler warnings. In a few hours, I’ve reduced the number of warnings to zero, which is where I want it to remain. If there are a lot of unexamined warnings, something important may be hiding among them. (Actually, there was a warning about a case statement that had no else clause. I’m usually scrupulous about that, because its absence can cause downstream symptoms that are difficult to diagnose later.)
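The same point holds in C#, the language I use for my CUIT work: a switch with no default can let an unexpected value slip through quietly and surface as a confusing symptom far from its cause. A tiny illustration (the enum, class, and method names here are made up):

    using System;

    public enum UserStatus { Active, Disabled }

    public static class Describe
    {
        public static String StatusText(UserStatus status)
        {
            switch (status)
            {
                case UserStatus.Active:   return "Active";
                case UserStatus.Disabled: return "Disabled";
                default:
                    // The default clause catches any value not explicitly handled.
                    // Without it, the method would need a fallback return (null, say),
                    // and a bad value would surface later, somewhere else.
                    throw new ArgumentOutOfRangeException("status", "Unhandled status: " + status);
            }
        }
    }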

Now my code is warning-free!

Tiered Testing [Socratic Dialog]

Socrates: Let’s begin at the beginning. Now, tell me, Tester, what is the purpose of the build verification test?

Tester: Its purpose, Socrates, is to determine whether full regression testing should proceed.

Socrates: I see. And what is the alternative, if the full regression testing should not proceed?

Tester: The alternative is that the build is considered failed, and repairs must be made before the build is tried again.

Socrates: What sorts of test failures, bugs, would fail the build?

Tester: Well, bugs that block important parts of the testing, certainly.

Socrates: The full regression testing should not proceed unless the tests can actually do their work?

Tester: That’s correct, I think.

Socrates: Are there other failures that would fail the build?

Tester: Yes. I think so: failures of important functionality.

Socrates: Regression testing should not proceed unless the major functionality works?

Tester: That’s right.

Socrates: Any others besides blocking failures and major functionality failures?

Tester: No, Socrates, I think that’s it.

Socrates: Very well. Then let’s think about just those two types of failures.

Tester: As you say.

Socrates: Of the two, is each type of failure sufficient, by itself, to fail the build?

Tester: Yes, Socrates, certainly.

Socrates: All right, then. Suppose that there are failures in major functionality, but there are not any blocking bugs. In that case, the regression testing should not proceed?

Tester: I think that’s right.

Socrates: That must mean, then, that the information gathered by the regression testing would not help in diagnosing the failures in major functionality, and therefore is not needed.

Tester: Well, the information might be helpful. Let me think. Yes, it would be helpful. Very much so, now that I think about it.

Socrates: So a failure in major functionality should not, by itself, be sufficient to fail the build. The regression testing should begin, and would gather helpful information.

Tester: Yes, I do now think that’s so.

Socrates: And a blocking bug alone would be sufficient, regardless of whether there are major functionality failures.

Tester: Yes, it would be sufficient.

Socrates: I see. Therefore the major functionality testing on the one hand does increase the duration of the build verification test, but on the other hand does not contribute to determining whether to fail the build.

Tester: Again, true.

Socrates: Why, then, is major functionality testing included in the build verification?

Tester: I’m not sure, Socrates. Perhaps it should be included because we need to identify important failures sooner rather than later.

Socrates: Indeed, that is important.

Tester: Well, Socrates, at least I get some agreement from you today.

Socrates: I’m glad for that. But, according to what we’ve said, would it not be better to separate the testing into three tiers: build verification, major functionality, and full regression testing? That way, the build verification can complete sooner; ideally, the major functionality testing would be started at the same time, but if not, then immediately after the build verification test.

Tester: Yes, Socrates, you’re right.

Socrates: Thanks for that.

Tester: Therefore I see, finally, that it would be good to have three-tiered testing:

  1. Build verification test: Find disqualifying bugs first.
  2. Major functionality test: Find important bugs fast.
  3. Full regression test: Find as many bugs as possible.

Socrates: As you say, Tester.

Tester: And if possible, all three should begin at the same time, to get the results soonest. In case the build is failed, diagnosis and repair can begin immediately.


Socrates: Again, true.

Tester: Thanks, Socrates. I’ll begin working on this.

Socrates: You’re very welcome, Tester.

[Ed: Modern thinking is that the BVT should fail the build for a single failed verification. Note, however, that a single verification may be, under the hood, compound and complex. For example, if there are two ways to register a user on a website, the verification might be that at least one of those ways succeeds. The verification would fail only if there’s no effective way to register a user, because that would block testing.]

Clean and DRY Verifiers

In a Coded UI Test (CUIT), a test method is a method that has attribute TestMethod. A test method is what many might call a test script. It’s the outermost method in the test, and directs the test steps.
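For concreteness, here is roughly what that looks like (a minimal sketch; the page-object class and its methods are hypothetical, not actual code from my project):

    using Microsoft.VisualStudio.TestTools.UITesting;
    using Microsoft.VisualStudio.TestTools.UnitTesting;

    [CodedUITest]
    public class UserTests
    {
        [TestMethod]
        public void AddUser_AppearsOnUsersPage()
        {
            // The test method directs the steps; page objects do the work.
            var usersPage = new UsersPage();        // hypothetical page object
            usersPage.AddUser("jdoe");              // hypothetical action method
            usersPage.VerifyUserExists("jdoe");     // verification, requested by the test method
        }
    }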

Some say that only the test method itself should perform verifications: a method in a page object (or other supporting object) should not perform verifications automatically.

The usual reason given is that such automatic verifications always add to the test’s running time, and may not even be wanted or needed in a particular test context.

I agree, but with one addition: a method can appropriately perform verification at the request (direct or indirect) of the test method. So the request for verification should originate in the test method.

No matter where the actual verification is performed, the verifier method must log the expected value, the actual value, the verdict (pass or fail), and a descriptive message.

Question: Where is the best place to perform the actual verification?

Answer: Wherever it will be clean and DRY (Don’t Repeat Yourself).

And that will be where the verification method has the fewest and simplest parameters passed to it: in a page object!

A page object encapsulates its entire page, so it already has access to the HTML control that holds the actual value for the verification. That means the caller need not pass the actual value to a verification method in the page object. And that means, in turn, that a call to the verification method has fewer parameters: at most, just the expected value and a message string. That’s pretty DRY.

But wait, there’s more!

When the expected value is a constant (a table column header, for example), that value can also be stored in the page object. So in that case, the verification method would have no parameters at all. That’s really DRY.
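Here’s a minimal sketch of both cases, condensed into one hypothetical page object (the accessor and logger names are my inventions; the shape, and the logging of expected value, actual value, verdict, and message, is the point):

    public class UsersPage   // a page object
    {
        // The page object already has access to the control, so the caller
        // passes only the expected value and a message.
        public Boolean VerifyUserName(String expectedValue, String message = "User name")
        {
            String actualValue = GetUserNameText();               // hypothetical accessor
            Boolean passed = (actualValue == expectedValue);
            Logger.LogVerdict(expectedValue, actualValue, passed, message);  // hypothetical logger
            return passed;
        }

        // The expected headers are constants known to the page object,
        // so this verifier needs no parameters at all.
        public Boolean VerifyColumnHeaders()
        {
            String expected = "Name, Email, Role";                          // hypothetical headers
            String actual = String.Join(", ", GetColumnHeaderTexts());      // hypothetical accessor
            Boolean passed = (actual == expected);
            Logger.LogVerdict(expected, actual, passed, "Column headers");
            return passed;
        }
    }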

Examples:

  • Home page verifies logged-in user’s name:
    public Boolean VerifyUserName(String expectedValue, String message = "User name")
  • User page verifies user data:
    public Boolean VerifyUser(User expectedValue, String message = "User")
  • Users page verifies that user does not exist:
    public Boolean VerifyUserNotExist(User expectedValue, String message = "User does not exist")
  • Page object knows its own column headers: public Boolean VerifyColumnHeaders()
  • Page object knows its own URL: public Boolean VerifyUrl()

Finally, I have a special-purpose verifier:

  • Verify that the locators in a page object correspond to actual controls in the UI: public Boolean VerifyLocators()

So performing verification in a page object, under the supervision of the test method, is easy. And doing so improves both cleanliness and DRYness.

Object not Found? Log the Context!

“Object not found.”

That’s what a GUI test tool is likely to log when an object is, well, not found. And many times no useful additional information — context — is available.

But there is a situation where context is available yet usually goes unreported: when some sort of selection fails.

Examples:

  • Menu item not found.
  • Tree view or cascaded menu item not found.
  • Radio button not found.
  • Select list option not found.

In these situations, it’s very useful for the test log to report what was found:

  • Menu: items found in the menu and the item not found.
  • Tree view or cascaded menu: nesting-level of the failure, items found at that level, the item not found, and the items successfully found farther up the tree.
  • Radio button: buttons found in the set and the item not found.
  • Select list option: options found in the list and the option not found.

This can really matter.

Suppose, for example, that the spelling (or even the casing) of an item is changed. You might have to breakpoint the test and run it for minutes, just to see what’s going on. But if the context of the failure — the items that were found — is logged, you’d immediately see what’s wrong.

So how do you do this? In the GUI encapsulator, discussed in the post “Encapsulate the GUI Tool?”

For example, suppose in the GUI encapsulator you have a method whose job it is to select a given option from a given select list:

  • Create a new method that logs all the options in a given select list.
  • There will already be code to search for the relevant control. Around that code, place a try block.
  • In the catch block, call the new logger method, then re-raise the exception.

Now when a desired select option is not found, the log will contain all the items that were in the list, which you can now examine without re-running the test. Time-saver!
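For example, the encapsulator’s select method might look something like this (the finder and logger helpers are hypothetical names; the try/catch shape is what matters):

    // In the GUI encapsulator: select an option from a select list, and if the
    // option (or the list) can't be found, log what the list actually contained.
    public void SelectOption(Locator selectListLocator, String optionText)
    {
        try
        {
            // Existing code: find the select list and pick the option.
            var selectList = FindSelectList(selectListLocator);   // hypothetical finder
            SelectByText(selectList, optionText);                 // hypothetical helper
        }
        catch (Exception)
        {
            // New code: log all the options found in the list, then re-raise
            // so the failure is still reported as before.
            LogSelectOptions(selectListLocator, optionText);      // hypothetical logger method
            throw;
        }
    }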

Try it! You’ll like it!

Location, Location, Location

In my Coded UI Test (CUIT) page objects, I encapsulate locator data into a Locator object. A locator specifies the search for a specific HTML control in the target web application.

The Locator object has the following (a code sketch follows the list):

  • A locator name.
  • One or more name/value pairs, each of which indicates an attribute name and value to search for.
  • A search criterion: Contains or EqualTo.
  • A status:
    • Required: The control is expected to be on the page now; it is an error if it is not present.
    • Forbidden: The control is expected to not be on the page now; it is an error if it is present.
    • Allowed: The control is allowed (but not required) to be on the page now; it is not an error whether or not it is present.
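A minimal sketch of such a Locator (the types and member names here are my guesses, not my actual code):

    using System;
    using System.Collections.Generic;

    public enum SearchCriterion { Contains, EqualTo }
    public enum LocatorStatus { Required, Forbidden, Allowed }

    public class Locator
    {
        public String Name { get; set; }

        // Attribute name/value pairs to search for.
        public Dictionary<String, String> SearchProperties { get; private set; }

        public SearchCriterion Criterion { get; set; }
        public LocatorStatus Status { get; set; }

        public Locator(String name, LocatorStatus status)
        {
            Name = name;
            Status = status;
            Criterion = SearchCriterion.EqualTo;
            SearchProperties = new Dictionary<String, String>();
        }
    }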

Locator Creation

A page object creates locators when it is instantiated. Many of the locators will be Required. Some may be Forbidden or Allowed.

Locator Maintenance

The page object is responsible for maintaining the status of its locators. For example, if a locator is initially Forbidden, but some JavaScript later creates the corresponding control, the page object must change the status to Required.

More specifically, I have a user page with a delete button. If the test presses that button, the application puts up a display with buttons for confirming or cancelling the deletion. The locators for these two buttons, which have been Forbidden, must now be changed to Required.

Conversely, when one of those two buttons is clicked, their containing display is removed, and the two locators must be returned to Forbidden.

The reason for this strict locator maintenance is that the page object may be called upon at any time to verify its locators’ controls: to confirm that each Required locator’s control is present, and that each Forbidden locator’s control is absent.
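In code, that maintenance might look something like this (the method and field names are hypothetical; the status flips are the point):

    // In the user page object: pressing Delete makes the confirmation display
    // appear, so its two button locators become Required.
    public void ClickDelete()
    {
        Gui.ClickButton(DeleteButtonLocator);            // hypothetical lower-level click
        ConfirmButtonLocator.Status = LocatorStatus.Required;
        CancelButtonLocator.Status = LocatorStatus.Required;
    }

    // Confirming (or cancelling) removes the display, so the locators
    // go back to Forbidden.
    public void ClickConfirmDelete()
    {
        Gui.ClickButton(ConfirmButtonLocator);
        ConfirmButtonLocator.Status = LocatorStatus.Forbidden;
        CancelButtonLocator.Status = LocatorStatus.Forbidden;
    }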

In the Page Object: Locators, not Controls

A page object operates on an HTML element by passing a locator to a lower-level library method. That method finds the control and performs the operation. For example, a page object can call a button-click method, passing a locator; the button-click method uses the locator to find the button, then clicks it.

The page object does not retrieve a control, nor, I think, should it. It’s not clear to me when or why such an object might go “stale” (no longer reflect the state of the UI), so I prefer to get a fresh object for each operation. And, I think, doing so is just good data-hiding.
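Sketched, that lower-level method might look like this (the finder is a hypothetical name; Mouse.Click is CUIT’s, as I understand it):

    // In the lower-level library: the page object passes a locator; a fresh
    // control is found, used for the one operation, and then discarded.
    public void ClickButton(Locator buttonLocator)
    {
        UITestControl button = FindControl(buttonLocator);   // hypothetical finder
        Mouse.Click(button);
    }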

A Thorn in My Side

A very few of the HTML elements I’m interested in do not have sufficiently unique attributes to support unambiguous location. For example, there may be several h2 elements on the page, each with no attributes at all.

Perhaps the most fragile way to get at one of these is to search for all h2 elements, then take the one with the appropriate index. A change in the number or positions of the elements can break the test.

A way that’s only a little better, and one that I’ve felt obliged to use occasionally, is to find a nearby element (one that can easily be located), then “walk” the DOM (accessing parent and child elements) to get to the desired element.

I hate having this DOM-walking code in my page object, even though it’s factored into a method. Doing things this way means that there are two completely different ways the page object can operate on a control:

  • The right way: Call for the operation to be done, passing a locator object.
  • The wrong way: Use a locator to retrieve a control, walk the DOM to get to the desired control, then call for the operation to be done, passing a control.

Note that this means that for the same operation on different HTML elements, I need two method overloads: one that accepts a locator object, and another that accepts a control object.

A Partial Solution

A partial solution I’m considering is enhancing class Locator so that it stores one of the following:

  • Name/value pairs, representing the usual attributes in the element to be searched for. The method that’s called finds the control and performs the operation.
  • A reference to a method that returns a control. The method that’s called calls the function, gets the control, and performs the operation.

Doing this would at least mean that a page object always works with a locator, and never with a method that returns a control object. And that in turn would mean that if (when?) the HTML is improved, the locator could be adjusted without requiring other changes.
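Sketched in code, extending the Locator sketch above (again, my guesses at names; the delegate is the new part):

    public class Locator
    {
        // Usual case: attribute name/value pairs to search for.
        public Dictionary<String, String> SearchProperties { get; private set; }

        // Fallback case: a delegate that walks the DOM and returns the control.
        // When this is set, the lower-level method calls it instead of searching.
        public Func<UITestControl> ControlProvider { get; set; }
    }

    // In the lower-level library, the same method handles both cases:
    public void ClickButton(Locator buttonLocator)
    {
        UITestControl button = (buttonLocator.ControlProvider != null)
            ? buttonLocator.ControlProvider()
            : FindControl(buttonLocator);                    // hypothetical finder
        Mouse.Click(button);
    }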

Thoughts, anyone?

Initial Post

This blog will mostly be about testing software, but I’ll likely go off topic from time to time.

At first, I’ll mostly be blogging about the work I’m doing right now, which is building test automation. My language for this work is C#, and my test framework is Visual Studio (Premium or Ultimate only) Coded UI Test, which I’ll call by the unattractive but shorter name, CUIT. My test target is a web application. CUIT also supports testing Windows applications, which I may also be doing soon.

I am building what some people have been calling “hand coded” CUITs.  That is, I am not using record/playback (or even record), and I am not using CUIT’s UI maps.  (Now that’s hand-coded!)

Soon I’ll be writing about exactly what I’m doing, as well as how and why.

Burdette Lamar