Tag Archives: Coded UI Test

Test Pages, Not Workflows: Part the Second

A few weeks ago, I posted Test Pages, Not Workflows. Since then, I’ve been doing just that, with much success.

My page, I can say without revealing anything proprietary, is the application’s Users page, for which I’m adding eight page-specific tests:

  1. Evaluate page initial state:
    • Page has correct menu bar and footer elements.
    • Expected buttons are present and have correct enabled/disabled state.
    • User list (table) is present.
    • User list has correct column headers.
  2. Add users in UI.
  3. Add users from CSV file.
  4. Add users from spreadsheet.
  5. Send email to selected users.
  6. Search users.
  7. Sort users by column.
  8. Remove selected users.

These tests perform several hundred verifications, which cover the page and its operation completely.

Can you spell D-R-Y? With these tests in hand, no other test will ever have to verify anything on this page.

Now on to the next page!

Pass the Object, Please

This sign-in method looks reasonable:

public HomePage SignInPage.SignIn(String userName, String password)

It isn’t reasonable.

Why? Because it violates encapsulation: the caller knows (or thinks it knows) what’s needed for signing in.

What if a scanned-in card id is added to the sign-in procedure? Well, that would require the method to be changed.

But actually, that’s none of the caller’s business in the first place.

This is reasonable:

public HomePage SignInPage.SignIn(User user)

Moral: don’t pass multiple data items from an object. Pass the object!
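
The contrast can be sketched in plain C#. Everything here is illustrative: User, SignInPage, and HomePage are stand-ins, not the real application’s classes.

```csharp
using System;

// Hypothetical user record; the field set is an assumption for illustration.
public class User
{
    public String UserName { get; set; }
    public String Password { get; set; }
    public String CardId { get; set; }   // Added later? Callers of SignIn never notice.
}

public class HomePage { }

public class SignInPage
{
    // The page object, not the caller, decides which User fields sign-in needs.
    public HomePage SignIn(User user)
    {
        // ... enter user.UserName and user.Password on the page,
        // and scan user.CardId if the procedure now requires it ...
        return new HomePage();
    }
}
```

When the sign-in procedure grows a new credential, only User and SignIn change; every call site keeps compiling.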

Architectural Notes

I’ve been meaning to post some notes about the architecture I’m implementing for CUITs.

Here goes:

  • Page objects: Each page, popover, and page tab is fully encapsulated using the page object pattern, hiding the HTML on the page, and providing services to the test scripts. It defines its own page-specific locators, and derives other locators via page compositors.
  • Base page: Each page, popover, and page tab derives from a base page that houses the current context and provides locator management.
  • Page compositors: Each page includes common elements, such as menu bar and footer, via compositor classes. Each page tab includes common elements, such as page title and tab navigation, via page compositor classes. This composition simplifies the page objects by avoiding redundant code.
  • GUI tool encapsulator: The GUI tool interface is fully encapsulated by a single class that performs all searches for controls, and all access to controls. This greatly reduces redundant code elsewhere. It also facilitates instrumentation and diagnostics, such as logging the context when an error occurs. Example: when a menu item is not found, the menu items that did appear are logged.
  • Locators: Each control on a page is identified by a locator that lists the attributes needed to find the control. A page object accesses a control by calling a method in the GUI encapsulator, passing it a locator. Example: a page object clicks a button by calling method ClickButton(locator).
  • Log: The test log is an XML file consisting of nested sections. These sections correspond to nested code-blocks in the test, and allow a test to “tell its story” in an organized way: steps, substeps, data, verifications. Each section and subsection has a title and, by default, a timestamp and a duration.

    Each logged verification includes the expected value, the actual value, the verdict (passed/failed), and a message. A summary at the top of the log gives the counts of passed and failed verifications, and notes whether an exception was thrown. Any thrown exception is captured and logged, including its type, message, and stack trace.
  • Verifications: Each verification is performed in a page object, but only at the request of a test script. This eliminates the need for the test script to fetch the actual value, which the page object fetches transparently. The page object also logs the verification.
  • Context: A context object is passed to each page constructor, giving access to the test log, the CUIT TestContext object, and other context information.
  • Test runner: A test runner object manages the test, including opening and closing the browser and test log, catching and logging exceptions, and logging version and other environmental information.
  • Documentation: Documentation is via C# XML documentation, which adds the documentation to the IDE’s IntelliSense. The documentation is also compiled into HTML pages using Doxygen.

    Additional documentation covers such topics as test architecture, project best practices, how to “spy” on the application, and various conventions for the test project. This is also captured in the Doxygen.
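
As a rough sketch of how the locator, GUI-encapsulator, and page-object pieces fit together (all class and method names here are illustrative, not the project’s actual API):

```csharp
using System;
using System.Collections.Generic;

// A locator lists the attributes needed to find one control.
public class Locator
{
    public String ControlType { get; }
    public IDictionary<String, String> Attributes { get; }
    public Locator(String controlType, IDictionary<String, String> attributes)
    {
        ControlType = controlType;
        Attributes = attributes;
    }
}

// Single choke point for all control searches and all control access.
public class GuiEncapsulator
{
    public void ClickButton(Locator locator)
    {
        // Search for the control via the GUI tool, wait for readiness, click;
        // on failure, log the context before rethrowing.
    }
}

// A page object owns its page-specific locators and calls the encapsulator.
public class UsersPage
{
    private static readonly Locator AddButton =
        new Locator("Button", new Dictionary<String, String> { { "id", "add-user" } });

    private readonly GuiEncapsulator _gui;
    public UsersPage(GuiEncapsulator gui) { _gui = gui; }

    public void ClickAdd() { _gui.ClickButton(AddButton); }
}
```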

Good Things Come to Him Who Waits (Instead of Sleeping)

I avoid putting sleeps into my test code. It’s never certain how much time is needed for the hoped-for state to be reached. And the sleep time has to be the longest time believed necessary, even if a much shorter time often would have served.

Waits are better, but I’ve had trouble with some CUIT wait methods, which sometimes seem not to have the desired effect: a search succeeds, but accessing the found control causes an exception.

Recently, I’ve added to the search code these wait method calls:

// Attribute name and states to wait against.
String state = UITestControl.PropertyNames.State;
ControlStates nvail = ControlStates.Unavailable;
ControlStates nvisi = ControlStates.Invisible;
// Wait until the control exists, is ready for interaction,
// and is neither unavailable nor invisible.
foundControl.WaitForControlExist();
foundControl.WaitForControlReady();
foundControl.WaitForControlPropertyNotEqual(state, nvail);
foundControl.WaitForControlPropertyNotEqual(state, nvisi);

So far, they seem to be effective; I have not had a false search failure recently.

Now Playing: CUITs Without VS!

I’ve spent time this week setting up a virtual machine. It now has MSTest, the Selenium binaries, and the Test Agent and Test Controller.

Together, they can run my Coded UI Tests without Visual Studio. This is essential if we’re going to have multiple test machines, because we don’t want to have to have a VS license for every test machine.

Wasn’t easy, though. I’ve captured all the steps in a doc. I need to write a PowerShell script to do this work!

And Thereby Hangs a Tale

Your test log should tell a story — the story of what the test did.

When a verification fails, you should be able to learn a great deal from the log itself, without having to go back to the test method’s code.

Most test loggers are flat, with all entries at the same structural level. This makes it difficult for anyone reading the log to understand the test’s structure and intent.

To make the story’s flow obvious, organize your test log into sections and subsections, each with a title. These sections need to be evident both in the test method and in the test log.

The key concept here is that in the test code, a statement starts a section that will end when something goes out of scope. The statement itself begins the section, and the going-out-of-scope ends it.

How to implement? That depends in part on the language. Examples:

  • Ruby: a block.
  • Python: a with statement.
  • C#: a using statement with IDisposable.

I’ve written test loggers in all three of these languages (plus Perl), and will share more details with interested parties.
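
For C#, here is a minimal sketch of the idea, assuming a hypothetical TestLog class: Section returns an IDisposable, so a using statement begins a section, and Dispose ends it when the block goes out of scope.

```csharp
using System;
using System.Collections.Generic;

// Minimal sketch: a logger whose sections nest via using/IDisposable.
public class TestLog
{
    private int _depth = 0;
    public List<String> Lines { get; } = new List<String>();

    // Begins a section; the returned IDisposable ends it.
    public IDisposable Section(String title)
    {
        Write("BEGIN " + title);
        _depth++;
        return new SectionScope(this, title);
    }

    private void Write(String text) =>
        Lines.Add(new String(' ', _depth * 2) + text);

    private class SectionScope : IDisposable
    {
        private readonly TestLog _log;
        private readonly String _title;
        public SectionScope(TestLog log, String title) { _log = log; _title = title; }

        // Going out of scope ends the section.
        public void Dispose()
        {
            _log._depth--;
            _log.Write("END " + _title);
        }
    }
}
```

In a test method, nested using blocks then produce correspondingly nested log sections automatically.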

What’s in a Name?

Notes on naming:

  • It’s easy to cause confusion between objects and name strings. In the statement below, are the objects strings? We don’t know.

    previousUser = user;

    For a user object, I use an unadorned word user:

    User user = new User();

    For a user name, I adorn:

    String userName = "George Washington";

    That way it’s clear throughout the code when I’m talking about an object, and when a name.

  • It’s also easy to cause confusion about what exactly the keys and values of a dictionary (hash) are. In hash users, are the keys user names? User IDs? We don’t know.

    I embed the word By in my hash name, to make it obvious what’s indexed by what:

    Dictionary<String, User> usersByUserName = new Dictionary<String, User>();

    All is perfectly clear here in the declaration, of course, but elsewhere in the code, it won’t be. Give your code maintainer (usually yourself, six months later) a break!
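
A small sketch of the convention in use (the User class and the index-building method are hypothetical):

```csharp
using System;
using System.Collections.Generic;

// Hypothetical user record for illustration.
public class User
{
    public String UserName { get; set; }
    public String UserId { get; set; }
}

public static class Example
{
    public static Dictionary<String, User> IndexByUserName(IEnumerable<User> users)
    {
        // The name says it: keys are user names, values are User objects.
        var usersByUserName = new Dictionary<String, User>();
        foreach (var user in users)
            usersByUserName[user.UserName] = user;
        return usersByUserName;
    }
}
```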

The key principle in creating a name is that the code will be read far more often than it will be edited. Make it easy to understand your code.

Hey, what are your pet peeves and great ideas for naming?

Clean and DRY Verifiers

In a Coded UI Test (CUIT), a test method is a method that has attribute TestMethod. A test method is what many might call a test script. It’s the outermost method in the test, and directs the test steps.

Some say that only the test method itself should perform verifications: that a method in a page object (or other supporting object) should not perform verifications automatically.

The usual reason given is that such automatically performed verifications always cost execution time, and may not even be wanted or needed in a particular test context.

I agree, but with one addition: a method can appropriately perform verification at the request (direct or indirect) of the test method. So the request for verification should originate in the test method.

No matter where the actual verification is performed, the verifier method must log the expected value, the actual value, the verdict (pass or fail), and a descriptive message.

Question: Where is the best place to perform the actual verification?

Answer: Wherever it will be clean and DRY (Don’t Repeat Yourself).

And that will be where the verification method has the fewest and simplest parameters passed to it: in a page object!

A page object encapsulates its entire page, so it already has access to the HTML control that has the actual value for the verification. That means that a verification method in the page object need not pass the actual value. And that means, in turn, that a call to the verification method has fewer parameters: at most, just the expected value and a message string. That’s pretty DRY.

But wait, there’s more!

When the expected value is a constant (a table column header, for example), that value can also be stored in the page object. So in that case, the verification method would have no parameters at all. That’s really DRY.

Examples:

  • Home page verifies logged-in user’s name:
    public Boolean VerifyUserName(String expectedValue, String message = "User name")
  • User page verifies user data:
    public Boolean VerifyUser(User expectedValue, String message = "User")
  • Users page verifies that user does not exist:
    public Boolean VerifyUserNotExist(User expectedValue, String message = "User does not exist")
  • Page object knows its own column headers: public Boolean VerifyColumnHeaders()
  • Page object knows its own URL: public Boolean VerifyUrl()

Finally, I have a special-purpose verifier:

  • Verify that the locators in a page object correspond to actual controls in the UI: public Boolean VerifyLocators()

So performing verification in a page object, under the supervision of the test method, is easy. And doing so improves both cleanliness and DRYness.
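
A sketch of what such a page-object verifier might look like; TestLog and the control access are stubbed for illustration and are not the real project’s API:

```csharp
using System;

public class TestLog
{
    public int PassCount { get; private set; }
    public int FailCount { get; private set; }

    // Every verification logs expected, actual, verdict, and a message.
    public void LogVerification(String expected, String actual, Boolean passed, String message)
    {
        if (passed) PassCount++; else FailCount++;
        Console.WriteLine(
            $"{message}: expected=<{expected}> actual=<{actual}> verdict={(passed ? "passed" : "failed")}");
    }
}

public class HomePage
{
    private readonly TestLog _log;
    public HomePage(TestLog log) { _log = log; }

    // Stub: in the real page object this reads the control on the page.
    private String FetchDisplayedUserName() => "George Washington";

    public Boolean VerifyUserName(String expectedValue, String message = "User name")
    {
        String actualValue = FetchDisplayedUserName();  // caller never fetches this
        Boolean passed = expectedValue == actualValue;
        _log.LogVerification(expectedValue, actualValue, passed, message);
        return passed;
    }
}
```

The test method supplies only the expected value (and optionally a message); the page object fetches the actual value and logs the result.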

Object not Found? Log the Context!

“Object not found.”

That’s what a GUI test tool is likely to log when an object is, well, not found. And many times no useful additional information — context — is available.

But there are situations where context is available yet usually goes unreported: namely, when some sort of selection fails.

Examples:

  • Menu item not found.
  • Tree view or cascaded menu item not found.
  • Radio button not found.
  • Select list option not found.

In these situations, it’s very useful for the test log to report what was found:

  • Menu: items found in the menu and the item not found.
  • Tree view or cascaded menu: nesting-level of the failure, items found at that level, the item not found, and the items successfully found farther up the tree.
  • Radio button: buttons found in the set and the item not found.
  • Select list option: options found in the list and the option not found.

This can really matter.

Suppose, for example, that the spelling (or even the casing) of an item is changed. You might have to breakpoint the test and re-run it, waiting minutes, just to see what’s going on. But if the context of the failure (the items that were found) is logged, you’d immediately see what’s wrong.

So how to do this? In the GUI encapsulator, discussed in the post “Encapsulate the GUI Tool?”

For example, suppose in the GUI encapsulator you have a method whose job it is to select a given option from a given select list:

  • Create a new method that logs all the options in a given select list.
  • There will already be code to search for the relevant control. Around that code, place a try block.
  • In the catch block, call the new logger method, then re-raise the exception.

Now when a desired select option is not found, the log will contain all the items that were in the list, which you can now examine without re-running the test. Time-saver!
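
The steps above can be sketched in C#; GetOptions stands in for the real GUI-tool lookup, and all names here are illustrative:

```csharp
using System;
using System.Collections.Generic;

public class GuiEncapsulator
{
    private readonly List<String> _log = new List<String>();
    public IReadOnlyList<String> Log => _log;

    // Stub standing in for the GUI tool's select-list search.
    private List<String> GetOptions(String listName) =>
        new List<String> { "Alice", "Bob", "Carol" };

    public void SelectOption(String listName, String option)
    {
        try
        {
            var options = GetOptions(listName);
            if (!options.Contains(option))
                throw new InvalidOperationException($"Option not found: {option}");
            // ... select the option via the GUI tool ...
        }
        catch (InvalidOperationException)
        {
            LogOptions(listName);  // log the context ...
            throw;                 // ... then re-raise for the normal failure path
        }
    }

    // The new logger method: records every option the list did contain.
    private void LogOptions(String listName)
    {
        foreach (var found in GetOptions(listName))
            _log.Add($"{listName}: found option '{found}'");
    }
}
```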

Try it! You’ll like it!