snuffleupabug – a bug that is ignored because you think the reporter (whom you consider less sophisticated than you) is making it up.
I finished the test, and fixed the script. My overall impression is that CubicTest is fairly straightforward, and I have not yet used it to its strengths. I still want to play with subtests, test suites, and custom test steps.
Now for the pictures:
The IP Validator looks like this:
When you enter a possible IP address and click the button, the server-side script returns a nifty table with the result:
Unfortunately, there was a bug in the script. After the third address checked, the display looked like this:
That bug is not the interesting bug I wanted to keep. Here’s the quick test I came up with, using the CubicTest plugin:
During test development, I added id attributes to the result table elements, which made the common contexts much easier to specify. I also had a few go-rounds with Eclipse editor weirdness that kept switching the sense of the last check from “text on the page” to “text not on the page”.
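The original server-side script isn’t shown here, but the kind of validation under test might look roughly like this (a hypothetical Python sketch; the function name and rules are my assumption, not the actual script):

```python
def is_valid_ipv4(address):
    """Return True if address looks like a dotted-quad IPv4 address.
    Hypothetical sketch of the validator under test, not the real script."""
    parts = address.split(".")
    if len(parts) != 4:
        return False
    for part in parts:
        # Each octet must be purely digits (rules out '-1', '1e2', empty strings)
        if not part.isdigit():
            return False
        # ...and must fit in the 0-255 range.
        if int(part) > 255:
            return False
    return True
```

A CubicTest page flow would then exercise inputs like `192.168.0.1` and `256.1.1.1` and assert on the text in the result table.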
Other attendees included Dale Emery, Chris Sims, Jeffrey Frederick, Ken Pier, and Kevin Lawrence.
We chatted a bit about test automation philosophy, told war stories, and played with the tools.
I watched Elisabeth set up a RobotFramework test for some ATDD work she is doing with a new website. When she showed the “fixture” code and the interface capabilities of the tool, I was hooked. I’m going to write up some tests for the big-app-at-work next week.
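Robot Framework’s “fixture” code is ordinarily just a Python class whose public methods become keywords. A minimal sketch of the idea (the class, keyword names, and fake user store are all mine, not Elisabeth’s actual code):

```python
class LoginKeywords:
    """Hypothetical Robot Framework keyword library: each public
    method becomes a keyword, e.g. attempt_login -> `Attempt Login`."""

    def __init__(self):
        self._users = {"alice": "secret"}  # stand-in for the real website
        self.last_result = None

    def attempt_login(self, username, password):
        """Record whether the credentials would be accepted."""
        self.last_result = self._users.get(username) == password
        return self.last_result

    def login_should_have_succeeded(self):
        """Fail the test if the last login attempt was rejected."""
        if not self.last_result:
            raise AssertionError("Expected login to succeed")
```

In a `.robot` test file you would then write `Attempt Login    alice    secret` followed by `Login Should Have Succeeded`.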
Last night, I managed to get Selenium RC up and running on my MacBook, then this morning I got CubicTest installed and running in Eclipse. I think I’ll use it to write tests so I can refactor my IP address validator (the one with the interesting bug I’m keeping in, but the annoying bugs I want to fix).
Finally, Elisabeth shared a pointer to UISpec4J as a good unit test library for Swing.
Lots of other tool goodness happened, too.
Thanks again, Elisabeth, for calling and hosting the event.
A quick hint about managing test data for “enterprise” applications.
Does the application under test rely on a large central database?
If so, is that large central database backed up on a regular basis?
If so, can you get access to the backup files?
For some testing tasks, it might be easier to just load a new database instance from those backups than to create and populate a database from scratch.
In one case, we dropped, truncated, and pruned a number of tables from the backup, then created a new backup file. A much smaller backup file. One that loaded in one hour instead of many hours.
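With SQLite the same idea fits in a few calls (the table names below are invented; a real enterprise database would use its own dump/restore tooling, but the drop-then-shrink shape is the same):

```python
import sqlite3

def prune_backup(src_path, dst_path, tables_to_drop):
    """Copy a database, drop the named tables, and reclaim the space,
    yielding a smaller backup that restores faster. Illustrative sketch."""
    src = sqlite3.connect(src_path)
    # isolation_level=None -> autocommit, which VACUUM requires
    dst = sqlite3.connect(dst_path, isolation_level=None)
    src.backup(dst)                           # full copy first
    for table in tables_to_drop:
        dst.execute(f'DROP TABLE IF EXISTS "{table}"')
    dst.execute("VACUUM")                     # actually shrink the file
    src.close()
    dst.close()
```

Dropping a bulky audit or history table and vacuuming is often where most of the size (and load time) savings come from.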
From Brian Marick, on the agile-testing list:
Simon Baker’s written a nice short summary of testing in his Agile project. It pretty much captures my biases. (That’s what “nice” means.)
To me, software testing is what happens when someone asks me to tell them something about some software system. They might ask me to find problems or bugs. They might ask me if I think the system is any good. They might ask me if I think it is fit for use in a particular situation. They might ask me if the system meets a set of requirements.
I really like it when they make my job challenging. It’s no fun to find lots of bugs or lots of missed requirements very quickly and very easily, or to discover that adding the third client session locks up the server. Instead, I’d much rather have to work harder and smarter to reveal useful, previously unknown information to the person who is asking.
I can use software tools to help investigate software systems, and I can help others use software tools, like automated tests and executable specifications, to help them do their jobs better.
But I don’t know how to help someone answer the question “Tell me something about this system that I should know, but didn’t know that I should ask” by using a tool. Answering that kind of question needs a tester.
Answering that kind of question needs me.
Well, that takes care of one counter-example. What other exceptions can you think of?
One of the software development teams I work with has been using xp-style iteration planning and velocity measuring, with cards and points and walls and such.
I was standing in front of the wall, talking with one of the coaches, when the following dawned on me:
I can tell a product owner that by investing x points into making legacy code more maintainable, the team could subtract 1 point from the estimate on each of the cards for stories in that code area.
So if there are, or will be, more than x cards of that type, then it might be worth it to invest the time to improve the code.
There are no a priori best practices
There are only the practices you are using now.
And practices that are better than the ones you are using now.
I’m always impressed by the simple revelations.
Testing. What do you mean by testing, or tests, or testers…
With a hat tip to George Dinwiddie and a post of his on the agile-testing mailing list, I have a few more context-establishing questions:
- Am I testing in the context of building something, or in the context of evaluating something that has been built?
- Am I testing something I am building or have built, or something you are building or have built?
- Am I testing something we are building or have built, or something they are building or have built?