Archive for the ‘mozilla’ Category

puppies!

Wednesday, December 20th, 2006

[photo: HPIM0272]

More insightful observations from Elisabeth Hendrickson

Monday, November 13th, 2006

I’ve pointed to her postings in the past, and on her new blog site Elisabeth Hendrickson continues to demonstrate one of her many skills: distilling wisdom from her practical experiences. Start with her 3-Nov post and keep reading forward. It will be time well spent.

Cool test stuff from sayrer

Monday, October 23rd, 2006

Take a look over at bug 357523 to see what Robert Sayre is doing with the MochiKit testing framework. He’s extended it to log results to stdout or a file, and to quit the app when the tests are done, both of which are implicit really-nice-to-haves for running in an automated manner.
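Roughly, those two pieces look something like this in chrome JavaScript (my own sketch, not the actual code from bug 357523):

    // Hedged sketch only -- not the patch from bug 357523.
    // dump() writes to stdout once the browser.dom.window.dump.enabled pref is set.
    function reportResult(testName, passed) {
      dump((passed ? "PASS" : "FAIL") + " | " + testName + "\n");
    }

    // Quit the application once the test run is complete.
    function quitApp() {
      var appStartup = Components.classes["@mozilla.org/toolkit/app-startup;1"]
                                 .getService(Components.interfaces.nsIAppStartup);
      appStartup.quit(Components.interfaces.nsIAppStartup.eForceQuit);
    }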

Cool stuff! Thanks, Rob!

The tests themselves are very similar in structure to the jsunit-based tests I wrote a while ago as proof-of-concept efforts, which is also cool in case we have to pick one harness over another. But having both options available right now is a win, in my opinion.

On an unrelated note, can anyone recommend a web forum package that supports moderated posts where the posts can be edited and re-ordered by the moderator?

Pacific Northwest Software Quality Conference – Trip Report

Thursday, October 12th, 2006

I’m sitting in the Portland, Oregon airport, laptop out, free wi-fi network joined, coffee at my side, waiting for my flight, and thinking about the great time I had attending PNSQC. This was their 24th year, and having attended once before (22nd?), I’m looking forward to attending the 25th anniversary session next year.

Monday night, I attended the kick-off dinner at Old Town Pizza, where I chatted a good long while with David Anderson and Jon Bach about lean, missions, stories, and kids. And I went on the tour of the “Shanghai Tunnels”.

Tuesday, I enjoyed Andy Hunt’s keynote/overview of learning styles, and one way to embark on professional development. And it made me wonder: how many tech folks who think about learning are married to health-care professionals?

I listened to the first half of Jean Tabaka’s talk on Lean, then headed on down to hear John Lambert from Microsoft talk about stuff testers can do when they are not writing or running test cases. It’s nice to hear someone else’s take on the stuff I’ve been doing for years, which is also stuff that the Mozilla testing community does almost naturally.

The Wacom guys I spoke with at lunch were very interested in how we at Mozilla handle configuration testing, which led nicely into my paper (slides to follow shortly) about how the project uses the feedback from the community. My presentation focused on three lessons I thought might apply to other contexts:

  • Make it easy to send in feedback
  • Make it easy to track what happens as a result of that feedback
  • Make it easy for new folks to find useful feedback to send in

After my talk, I caught the second half of Michael Bolton’s talk on first-to-market and systems thinking. Then I ended up talking with Cem Kaner and Kathy Iberle for the remainder of the afternoon about open certification, testing large systems, and extremism.

That evening, while meandering through Lloyd Center to grab a bite to eat, I watched folks curling at the indoor rink.

Wednesday morning’s keynote by Karl Wiegers on software quality cosmic truths was predictable yet entertaining. I ended up chatting with Karen Johnson during the first block of talks, then caught Jon Bach’s talk on exploratory testing as a competitive sport.

Jon is a passionate, compelling speaker on exploratory testing, and I’m glad he’s folding the ideas of stories and storytelling into his presentations. I’ve long believed that testers need to be good storytellers, and Jon pointed me at some promising references I will be following up on in the next few weeks.

The panel discussion at lunch was a bit of a disappointment, mostly because I could not see the faces of the rest of the folks in the room. The points made were ones I agree with:

  • Most commercial certification of software testers is not worth much, except to the people running the certifications and charging for review courses
  • Many software testing techniques lauded as “current best practices” are 30 years old, designed to effectively test the kind of software that was developed 30 years ago, and actually hinder effective testing of the kind of software developed today

I chatted with a small group of folks after lunch, then went back to the hotel before going out to dinner with a MoCo QA member who lives in Portland.

All in all, I consider this a worthwhile trip.

Updates on the testing front

Wednesday, September 27th, 2006

Axel Hecht posted in mozilla.dev.quality some good ideas about extension-based test harnesses. I especially like the logging to the console and registering a console listener idea. I hope someone, maybe even me, can work on these ideas soon.
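To make the console-listener idea concrete, here’s a rough sketch of what registering one looks like from chrome JavaScript. This is my own illustration, not Axel’s proposal verbatim.

    // Hedged sketch: capture console messages (JS errors, warnings, etc.)
    // so a harness can fold them into its test log.
    var Cc = Components.classes, Ci = Components.interfaces;

    var testConsoleListener = {
      observe: function(aMessage) {
        dump("console: " + aMessage.message + "\n");
      },
      QueryInterface: function(aIID) {
        if (aIID.equals(Ci.nsIConsoleListener) || aIID.equals(Ci.nsISupports))
          return this;
        throw Components.results.NS_NOINTERFACE;
      }
    };

    Cc["@mozilla.org/consoleservice;1"]
      .getService(Ci.nsIConsoleService)
      .registerListener(testConsoleListener);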

dietrich and rsayre are working on porting places test cases to the xpcshell-simple test harness. See bug 354401. I heard they were discussing the philosophy of naming test functions, too.
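For anyone who hasn’t seen one, an xpcshell-simple test is just a .js file that defines run_test(); something like this minimal sketch (not one of the actual places tests):

    // Minimal sketch of an xpcshell-simple test file -- not an actual places test.
    // The harness loads this file and calls run_test().
    function run_test() {
      var str = "bookmarks";
      do_check_eq(str.length, 9);     // assert equality
      do_check_true(str.length > 0);  // assert a boolean condition
    }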

waldo took the simple http server functionality that was embedded in the necko tests, fortified it, and packaged it to run as a stand-alone extension/component. He’s in the process of landing the changes in bug 342877. He packaged a spike of it for me to try with jsunit. It works fine, and I’m looking forward to seeing other cool uses of this server in testing.
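As a taste of what driving that server from a test might look like, here’s a hedged sketch based on my guess at the API shaping up in bug 342877; the names could easily change before it lands.

    // Hedged sketch: start the stand-alone HTTP server, serve one path, then stop.
    // Assumes an nsHttpServer-style API roughly like the one in bug 342877.
    var server = new nsHttpServer();
    server.registerPathHandler("/hello", function(request, response) {
      response.setHeader("Content-Type", "text/plain", false);
      response.write("hello from the test server");
    });
    server.start(4444);   // listen on localhost:4444 for the duration of the test
    // ... run tests that fetch http://localhost:4444/hello ...
    server.stop();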

rhelmer checked in a bunch of enhancements to the update checker script (see bug 346013). But he’s not done; more changes are coming to add additional automated verifications to the build process. This is really cool, since problems detected during that process are problems QA won’t trip over later (and so won’t force a respin and a restart of testing).

Thanks to some cribbing from config files created by bhearsum at Seneca, I got buildbot running as a master on my MacBook Pro, and a buildslave running on WinXP (on Parallels, on my MBP) building Firefox on trunk. AND running “make check” with log-file scanning to detect test failures. I’m planning to get jsunit, dbaron’s reftest, and some of the extension-based test harness stuff running as well.

Why buildbot? Well, I wanted to learn more about it. And I wanted to get something running without waiting on tinderbox machine config or tinderbox client code changes. And all I really want is to turn a tree orange if a test fails (which buildbot can do). And it’s shiny.

Beginnings of a simple xul-based test harness

Tuesday, September 19th, 2006

A few weeks ago, I wrote a set of scripts to check the search engine plug-ins packaged with localised Firefox 2 builds. Last week, I got around to cleaning up the docs.

I used a number of lessons learned (recently documented here), including profile creation, cross-platform quit, and the super-secret unattended Mac installation mechanism.
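Something along these lines, run inside each localized build and compared against the locale’s expected list, captures the core idea (a simplified sketch, not the actual script):

    // Hedged sketch: list the search engines packaged with this build so an
    // outer script can compare the names against the locale's expected list.
    var searchService = Components.classes["@mozilla.org/browser/search-service;1"]
                                  .getService(Components.interfaces.nsIBrowserSearchService);
    var engines = searchService.getEngines({});
    for (var i = 0; i < engines.length; i++)
      dump("engine: " + engines[i].name + "\n");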

Feel free to try it out, or even better, look over the to-do list and work on one of the issues.

The Tests Are Running! (Or Not)

Saturday, September 9th, 2006

Lessons learned:

  • even tests need system testing in a close-to-production environment
  • our build system grew up without continuously running automated tests, so it’s small wonder that turning on such tests might be complicated.

This was it! The week we (the build team and I) would turn on “make check” in trunk Tinderboxes!

And we did! We enabled the “make check” test on one tinderbox.

And we saw that only one test ran, and it had an error. Not a failure, but a problem actually running the test.

So we disabled it. In parallel with investigating the test error, we enabled the tests on another tinderbox.

And we looked very closely at the run log for that second tinderbox. And discovered:

“--disable-tests” is specified in the mozconfig for most tinderboxes

sigh.

Then we discovered, from investigating the test error on the first tinderbox, that:

most tinderboxes build static builds, and xpcshell (something our tests use a lot) does not work when built statically.

Sigh.

If I had run the tests on my build machines using the same configuration as is on the tinderboxes, I probably would have discovered this earlier. I don’t feel too bad about this, because we used incremental roll-out as a proxy for system testing, and the tinderbox config system is just so gosh darn complex . . . but I’ll still do better next time.

The bit about “--disable-tests” is bigger than it may appear. Removing this option (and enabling tests) has at least one very undesirable side-effect; namely, lots of files get installed inside dist/ that we do not want in the final build packaging.

SIGH.

Here’s the plan. Short term, I’ll get some new tinderboxes set up to run these tests. They will build non-static builds and not specify “--disable-tests”, and they will not copy their builds anywhere. I’m not sure which tinderbox tree group will receive the results, but we’ll figure it out.

Medium term, I’ll work with the build team and the developers most familiar with the build system so that we can turn off “--disable-tests” without side-effects. For the known side-effect I listed, I think we’ll move the files from inside dist/ to maybe dist/../test/, or at least dist/test/.

“make check” is not the only set of tests I want to enable. I’m also looking at jsunit-based tests, jssh-based tests, and tests based on dbaron’s new layout reftest tool. Look for these types of tests soon!

How to test the platform

Tuesday, September 5th, 2006

I need to add “investigate this” to my to-do list, because another item on my to-do list is to run the automated tests of other projects against our nightly builds to see if we break those tests (things like the jsunit self-tests).

Great fun at parties

Tuesday, September 5th, 2006

One skill a tester needs is the ability to analyze a requirement in order to identify possible ambiguities and possibilities for misunderstanding.

For example, a colleague received the following request via email:

I’m wondering if you happen to have any suggestions on a good book on QA that I could recommend to a novice?

This colleague forwarded the request to me, which inspired this blog entry. I see the request as a veritable gold-mine of underspecification :-)

What do you mean by “QA”? Is that software testing, hardware testing, fabrication testing, process analysis and auditing, requirements analysis and auditing, or something else?

What do you mean by “novice”? Someone who has no experience or training in “QA”, someone who has some training but no practical experience, someone who has some experience but is new to a particular job, someone who does not have a job but wants to get one, someone who wants to pass a certification test, someone who wants to discover what certifications he or she should pursue, or something else?

Another skill a tester needs is to know how to pace these questions so as not to overwhelm (read: put off) others, possibly by offering a potential answer and observing how it is received.

Go get a copy of Lessons Learned in Software Testing, by Kaner, Bach, and Pettichord.

What if I . . .

Thursday, August 10th, 2006

. . . changed my name to Lorem Ipsum? I suspect nobody would be able to find me using search engines . . .