From: David Chart (linux@dchart.demon.co.uk)
Date: Sun Apr 28 2002 - 06:03:00 EDT
On Sat, 2002-04-27 at 22:15, Larry Kollar wrote:
>
> A QA "sheet"? As in singular?
>
> [Jesper already knows this stuff, to be sure, but just in
> case some others don't... you might want to know what
> you're getting into in case you want to volunteer. :-)]
>
> My day job isn't verification, but I work with our
> verification team from time to time. They have dozens (if
> not hundreds) of one- or two-page procedures that they go
> through for each release
> (I helped them develop a template & wrote one or two sheets
> for them, and some of our documentation also goes through
> a verification process.) It's basically a checklist with
> a pass/fail result depending on what happens.
>
> Failed test cases are not always showstoppers; each one is
> filed as a bug & evaluated individually. But quantity as
> well as quality counts; a certain percentage of failed test
> cases (regardless of severity) will hold up a release too.
>
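(Aside: the percentage gate Larry describes is trivial to mechanise once
failures are counted. A sketch in plain sh, integer arithmetic only; the
counts in the example are made up:)

```shell
# gate FAILED TOTAL MAXPCT
# Succeed (exit 0) when the failure rate is at most MAXPCT percent.
# Avoids division: failed/total <= maxpct/100 is rewritten as
# failed*100 <= total*maxpct.
gate () {
    failed=$1 total=$2 maxpct=$3
    [ $((failed * 100)) -le $((total * maxpct)) ]
}

# e.g. 3 failures out of 120 cases against a 5% ceiling:
gate 3 120 5 && echo "release can proceed" || echo "release held"
```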
If a failed test case isn't a showstopper, then it doesn't belong in the
procedure I was suggesting. I have some idea of how much effort full QA
on a release of Abi would take ("lots", as in dozens of man-hours at
least -- everything needs to be tested on every platform and
architecture), and I don't think we need to start with that.
I think we do need to start by making sure that the release makes dist
on all our platforms. Several recent releases (including 1.0) haven't.
Making sure that the dist installs and starts would be good, as well.
That doesn't take too long, and for most of that time the program is
compiling, so you can do something else.
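That dist/install/start loop is mechanical enough to script. A sketch,
assuming the usual automake "make dist" and "make install" targets and a
hypothetical install path; it only defines the harness unless invoked
with "run":

```shell
#!/bin/sh
# check NAME CMD...: run CMD quietly, report PASS or FAIL for NAME.
check () {
    name=$1; shift
    if "$@" >/dev/null 2>&1; then
        echo "PASS: $name"
    else
        echo "FAIL: $name"
    fi
}

# The targets below and the abiword binary path are assumptions
# about the build tree; run as "sh smoke.sh run" in a source tree.
if [ "${1-}" = "run" ]; then
    check "make dist"      make dist
    check "dist installs"  make install DESTDIR=/tmp/abismoke
    check "abiword starts" /tmp/abismoke/usr/bin/abiword --version
fi
```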
Full QA is a fine target to work towards, but I think we should do it
the Open Source way -- start small, and build up.
One observation -- if we run tests with scripting enabled, scripting
must be enabled in the distribution builds. I can't remember if it is.
-- David Chart
This archive was generated by hypermail 2.1.4 : Sun Apr 28 2002 - 06:05:28 EDT