Monday, August 29, 2011

One bit of information

Sometimes one bit of information makes all the difference in how people respond to something.

My middle son is profoundly autistic.  He doesn't speak, but he laughs and hums and grunts.  He loves to jump and run and spin.  Sometimes I bring him along to my oldest son's Boy Scout troop meetings.

I can ALWAYS tell who knows about his autism and who doesn't.  As he is humming or giggling and spinning and jumping, the people who know look at him warmly and affectionately.  The ones who don't glare and scowl.

Is there one bit of information about your testing that could change the way people respond to it?

Wednesday, August 10, 2011

Had a great time at CAST 2011

I'm feeling inspired!

Harry Robinson gave a great talk on how the Bing team uses test automation.

Listening to him, and thinking about some complaints I've heard from the functional testers that are my customers, I realized I have totally neglected an important context--the Individual Tester Context.

As someone who thinks of himself as a tester, this is seriously embarrassing.

Wednesday, January 21, 2009

When Contexts Collide, Part II

In my last post I described the events leading up to a meeting we had to talk about how to incorporate some new code that a couple of our functional testers had written. It enabled some new types of testing within our test automation framework.

I kicked off the meeting by reiterating the differences between Project Context and Product-Line Context automation. Project Context automation is focused on delivering the automation and results that are needed to complete a development project. Product-Line Context automation focuses on building automation that can be run completely hands-off for a long period of time across multiple projects/releases.

Then, myopically, I started to talk about what needed to be done to finish their new code for the Product-Line Context. Once again, I was trapped in my own context and not stopping to think about the needs of the Project Context. Fortunately, one of my colleagues stopped me almost immediately and suggested that we should define completion criteria for both the Project Context and the Product-Line context.

This led to a very good discussion. We eventually reached consensus on two different sets of "Done Criteria." Below are the detailed criteria, from the post-meeting email that I sent out.

Project Context: when something is “done” for the Project Context, it is ready to be checked into the ITE source tree and shared among other ITE users, but it is not ready to migrate into the lab to be run fully hands-off by the Automation team.


We agree to the following as “Done Criteria” for the Project Context:
  • Must be backwards compatible with the ITE and existing tests—if other people start using the new code, it isn’t suddenly going to cause a bunch of false test results or other problems.
  • It must have a good suite of unit tests; both the new unit tests and the existing unit tests should run and pass (see the sketch after this list).
  • It must be able to submit pass/fail/abort test results to the ITE results database.
  • It must be checked into the ITE code tree (in some appropriate location, might not be on the main branch).
  • There must be sufficient documentation that other people can set it up and use it.
  • Other ITE users must be able to build/deploy it to their ITE installations.
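
To make the unit-test and results-reporting bullets concrete, here is a minimal sketch in Python (the language we use with pyUnit elsewhere). Everything in it is illustrative: parse_config is a toy stand-in for the new code, and submit_result is a made-up placeholder for the real ITE results-database client, whose actual API isn't shown here.

    import unittest

    def parse_config(text):
        """Toy stand-in for the new code being checked in."""
        return dict(line.split("=", 1) for line in text.splitlines() if line)

    class NewModuleTests(unittest.TestCase):
        """The suite of unit tests that ships with the new code."""

        def test_parses_a_single_pair(self):
            self.assertEqual(parse_config("key=value"), {"key": "value"})

        def test_skips_blank_lines(self):
            self.assertEqual(parse_config("a=1\n\nb=2"), {"a": "1", "b": "2"})

    def submit_result(test_name, outcome):
        """Hypothetical placeholder for the ITE results-database client."""
        print("ITE result: %s -> %s" % (test_name, outcome))

    if __name__ == "__main__":
        # Run the suite, then report outcomes the way a harness would
        # submit pass/fail results to the ITE results database.
        suite = unittest.defaultTestLoader.loadTestsFromTestCase(NewModuleTests)
        result = unittest.TextTestRunner().run(suite)
        for test, _ in result.failures + result.errors:
            submit_result(str(test), "fail")
        if result.wasSuccessful():
            submit_result("NewModuleTests", "pass")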

Product-Line Context: when something is “done” for the Product Line Context, it is ready for the Automation team to start running it in the lab. No hands-on modifications or monitoring will be needed.


We agree to the following as “Done Criteria” for the Product-Line Context:
  • It can be set up using f5iteconfig (f5iteconfig is the tool we use to set up a new automation environment).
  • Our subjob health-checks can monitor all the hardware in the test harness and generate alerts if one of them fails (health-checks are done periodically during a test run to verify that all the systems in the test harness are functioning).
  • The controller can identify the harness and use appropriate meta-data to figure out which tests can run on it (our tests are tagged with meta-data denoting the hardware/software releases for which the test is valid; see the sketch after this list).
  • We can retrieve data (log files/core files/whatever) from all the hardware in the harness.
  • We can configure/provision/license all the DUTs (devices under test) in the harness.
  • We can allocate resources/configure services on all the hardware in the harness.
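
To illustrate the meta-data bullet above, here is a rough sketch of how tests might be tagged and filtered. The requires decorator and the runnable_tests function are hypothetical stand-ins for what the controller does, not our actual code.

    def requires(hardware=(), releases=()):
        """Tag a test with the hardware and software releases it is valid for."""
        def tag(test_func):
            test_func.hardware = set(hardware)
            test_func.releases = set(releases)
            return test_func
        return tag

    @requires(hardware=["dut-model-a"], releases=["9.4", "10.0"])
    def test_failover(harness):
        pass  # test body omitted

    def runnable_tests(tests, harness_hardware, harness_release):
        """What the controller does: pick the tests valid for this harness."""
        return [t for t in tests
                if t.hardware <= set(harness_hardware)
                and harness_release in t.releases]

    # A harness with a model-a DUT plus a switch, running release 9.4:
    print([t.__name__ for t in runnable_tests(
        [test_failover], ["dut-model-a", "switch-b"], "9.4")])
    # -> ['test_failover']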

Wednesday, January 14, 2009

When Contexts Collide...

I got a first-hand reminder recently of the value of the idea of Automation Contexts. A conflict that I was having a lot of trouble framing productively suddenly came into focus for me when I looked at it through the lens of contexts.

Recently a couple of testers in our functional test team rewrote one of the modules of our test automation system and started using their changes on their own test harnesses. The changes that they made enabled them to add some additional gear to the test rig and do some testing that wasn't previously possible.

Everybody was really excited about this--it's always good to have the capabilities of the system expand, and this showed that the capability of the functional test team was also expanding. A year ago there was no one working on the functional team who had the knowledge of the automation system and the technical skills required to do something like this.

We decided that we wanted to take their code and integrate it into the ITE (our automation system). Then the fun started.

There was a chain of emails that, when you boil it down and strip away the polite phrasing, went something like this:
Automation person: "This code is really good, but it isn't done yet, there are several other things that need to written before we can use this:

Functional test person: "This code is really useful. We're already using it."

Automation person: "Yes, like I said, this is good code, but it doesn't solve the whole problem."

Functional test person: "This is really useful code. Look at this list of bugs that we've found using it."

Automation person: "We often find a lot of bugs while we are developing a new piece of automation functionality. But there is still work that needs to be done before we can use this code."

Functional test person: "Why do you keep saying it isn't done yet?"

This all went back and forth over a couple of days while I was out of the office. When I was back in the office the next day, my manager came and asked me what I thought was happening and how I wanted to handle it. I thought about it for a little while and noted that the people involved seemed to be talking past each other.

I went and talked to my principal engineer for a while. As we were talking, it dawned on me that this was an example of Project Context needs being different from Product-Line Context needs. The functional testers had written something that addressed their needs for their project.

Because it addressed their needs for their project, the functional test manager was happy and didn't immediately see why the automation team thought additional work was needed. Looking at it from the Project Context, it really was done, and there was no need to do additional work.

But when my team looked at it from the Product-Line Context, as is our habit, there was more work to do. The changes required some manual configuration, which would be a problem in the lab. The changes added additional hardware to the harness that the system health check didn't monitor, which would also be a problem in the lab. Until that work was done, we couldn't set up and run the tests in "hands-off" mode in the lab.

I summarized all this in an email, and set up a meeting to discuss it. The meeting, which I will describe in a future post, helped me figure out ways to balance the organizational need of the business for both Project Context and Product-Line Context automation.

Monday, November 17, 2008

The Individual Developer Context

When I use the phrase "Individual Developer Context", what does that mean?

My current definition is "a set of tools and practices used by a developer to test the code that she writes."

DISCLAIMER: I have not worked as a developer. On a good day I can code my way out of a paper bag. My thoughts on the Individual Developer Context are based more on observation than personal experience.

The main traits of the Individual Developer Context as I have seen it practiced are that the tests are written by developers, the results are looked at by the developers, and the primary decision that is being influenced by the tests is code check-in. If the tests pass, the code can be checked into the main branch. If they don't pass, you have to fix them before checking them in.
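
Here is a minimal sketch of what that check-in gate can look like in practice: a small script, suitable for use as a pre-commit hook, that runs the unit-test suite and returns a nonzero exit code on failure (which is what blocks the check-in in most version-control hook systems). The tests/ directory layout is an assumption for illustration.

    import subprocess
    import sys

    def main():
        # Run the whole unit-test suite; 'discover' picks up test_*.py
        # files under the tests/ directory.
        rc = subprocess.call(
            [sys.executable, "-m", "unittest", "discover", "-s", "tests"])
        if rc != 0:
            print("Unit tests failed -- fix them before checking in.")
        return rc

    if __name__ == "__main__":
        sys.exit(main())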

Is there a difference between Individual Developer Context automation and automated unit testing? I'm not sure that there is. Wikipedia defines unit testing as "a method of testing that verifies the individual units of the source code are working properly." There may be tests written by an individual developer for his own code that aren't unit tests. I can imagine a developer who works mostly by himself writing those kinds of tests, but I've never encountered them (if any of you have, I'd love to hear about them).

So what kinds of tools are needed for effective Individual Developer Context automation? The critical piece is a tool for quickly creating and running low-level, granular tests. The xUnit family of frameworks is widely proven for writing and running unit tests; for my own coding projects (some very crude stuff that I'm writing as a learning exercise) I use pyUnit. The other tool that I think is really useful in the Individual Developer Context is a code-coverage analyzer. Since these tests are written by the people writing the product code, those people are best positioned to design new tests that cover the uncovered lines.
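
For anyone who hasn't seen pyUnit (Python's unittest module), here is about the smallest useful example; the clamp function is made up for illustration.

    import unittest

    def clamp(value, low, high):
        """Toy function under test: pin value to the range [low, high]."""
        return max(low, min(value, high))

    class ClampTests(unittest.TestCase):
        def test_value_in_range_is_unchanged(self):
            self.assertEqual(clamp(5, 0, 10), 5)

        def test_value_below_range_comes_back_as_low(self):
            self.assertEqual(clamp(-3, 0, 10), 0)

        def test_value_above_range_comes_back_as_high(self):
            self.assertEqual(clamp(42, 0, 10), 10)

    if __name__ == "__main__":
        unittest.main()

Running that file under coverage.py ("coverage run" followed by "coverage report") shows which lines of clamp the tests actually exercise, which is exactly the feedback a code-coverage analyzer gives you when you're deciding what test to write next.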

Saturday, November 1, 2008

My talk at GTAC 2008

Last week I spoke at the 2008 Google Test Automation Conference.
 
Creative Commons License
Context Driven Automation is licensed under a Creative Commons Attribution-Share Alike 3.0 United States License.