Dear Lazyweb: Does this software exist?

I’ve been wondering if the following kind of testing management software exists (preferably free software, of course).

It would allow one to specify a number of test cases. For each, one should be able to describe preconditions, testing instructions and expected outcome. Also, file attachments should be supported in case a test case needs a particular data set.

It would publish a web site describing each test case.

A tester (who in the free software world could be anyone) would take a test case, follow the instructions given and observe whatever outcome occurs. The tester would then file a test report with this software, either a terse success report or a more verbose failure report.

The software should maintain testing statistics so that testers could easily choose test cases that have a dearth of reports.

As a bonus, it would be nice if the software could submit a failure report as a bug report.

(Note that this would be useful for handling the sort of tests that cannot be automated. There are many good ways already to run automated test suites.)
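The wished-for data model above (test cases with preconditions, instructions, expected outcome and attachments; success/failure reports; statistics pointing testers at under-tested cases) could be sketched roughly as follows. This is a minimal illustration in Python, not any existing tool; all names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """A manual test case as described in the post."""
    case_id: str
    preconditions: str
    instructions: str
    expected_outcome: str
    attachments: list = field(default_factory=list)  # e.g. paths to data sets

@dataclass
class TestReport:
    """Filed by a tester: terse on success, verbose on failure."""
    case_id: str
    success: bool
    details: str = ""

def cases_needing_attention(cases, reports):
    """Order test cases by how few reports they have, so testers can
    easily pick the ones with a dearth of reports."""
    counts = {c.case_id: 0 for c in cases}
    for r in reports:
        counts[r.case_id] = counts.get(r.case_id, 0) + 1
    return sorted(cases, key=lambda c: counts[c.case_id])
```

A tester's workflow would then be: pick the first case returned by `cases_needing_attention`, follow its instructions, and file a `TestReport`; a failure report could additionally be forwarded to a bug tracker.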

17 thoughts on “Dear Lazyweb: Does this software exist?”

  1. Thank you. Converting a one-off site into another one-off site would probably be a possibility if it was needed badly enough.

    Anybody know of a general solution that does not require stabbing the code?

  2. The Ubuntu ISO testing tracker is sort of what I am looking for but it’s a bit too specialized (and thus requires stabbing the code). I note that the Ubuntu QA team uses a Wiki to store its application test cases (and appear not to track success reports for them).

    But still, thank you :)

  3. Yep, the wiki is used to store instructions for the test case. And the test results for a particular test are available from the qa-tracker itself, e.g. http://iso.qa.ubuntu.com/qatracker/result/4427/53

    There are also some scripts for processing “historic” data, because during foundation/QA meetings on IRC and in the minutes there are ASCII statistics of how things are progressing.

  4. Yes, the ISO system stores reports, but there appears not to be any support for storing success reports for the *application* test cases (as opposed to ISO installation tests).

  5. I read about something like that while browsing through what’s going on in the Ruby community. They have something like a markup language to define goals or test cases, and I thought it was quite awesome. Now, the only thing I have to do is remember the name… Ha!

    It’s cucumber:

    http://cukes.info/

    I hope it helps; and if not, it’s interesting anyway :)

  6. @m: We looked at Cucumber for our business-language test framework at my ISV, but ended up using TestDriven and SpecFlow to convert the plain-English test terms and their parameters (via a regex) to the internal testing interfaces of the software suite. (I don’t know of a GPL equivalent, but would be interested in helping create one.)

    In terms of having a series of testing steps for people to follow, Borland have SilkCentral, which is a horrific abomination of a web interface (or an unforgivable Java desktop client) to a flaky MySQL DB of tests – in fairness, it can reliably run automated tests on a schedule for you. You specify projects, components and test runs, and can put in manual tests as ‘test content’ and ‘expected outcome’ steps. At this point, I’ve got to say that writing your own on any platform, and without using SQL, will probably be more helpful and reliable than Silk. (Again, I don’t know of a GPL or DFSG equivalent, but would be interested in making myself a better replacement and an atonement for SilkCentral.)

    It has to be said that, even with a good central place to store tests, different people parse and understand different things from even the most plainly written tests. Our QA department has spent hours in meetings arguing about the right way to phrase the tests so that anyone can run them. Also, there will always be a certain amount of technical knowledge that people will need before stepping through the tests. But, on the whole, making it easy for volunteers to regression-test free software, and making their time and effort valuable, verifiable and repeatable, would be an awesome step forward for any free software project.

  7. I’ll just throw the “Salome TMF” name around. Never used it myself, but I’ve heard of it from people who are into test management.
