In this post, I’ll show how implementing these Agile practices enhances a DO-178C project by pulling testing inside a development mini-cycle while keeping it compliant with the standard.
First, a bit of theory.
The test-driven development practice can be summarized as follows: write a test, run it and make sure it fails, write the code, then run the test again and make sure it passes (correcting the code if it does not). The idea is that you first define the desired functionality with a test and then develop code that passes this test. TDD may seem inapplicable to a DO-178C project setup at first glance. Two small adjustments, however, turn things around: write a requirements-based test, and assign code development and test development to two different people. With these adjustments, the spirit of TDD fits organically into a heavyweight development methodology.
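To make the adjusted practice concrete, here is a minimal sketch of a requirements-based test written before (or alongside) the code it verifies. Everything in it is a hypothetical illustration: the requirement ID HLR-042, the `clamp_percentage` function and its expected behaviour are assumptions, not material from a real project. In a real setup the test and the code would also come from two different engineers.

```python
import unittest

# Hypothetical unit under test. In the TDD spirit it does not exist yet
# when the test below is first run; a *different* engineer then writes it
# until the requirements-based test passes.
def clamp_percentage(value):
    """Limit a value to the 0..100 range (illustrative only)."""
    return max(0, min(100, value))

class TestHLR042(unittest.TestCase):
    """Requirements-based tests for the hypothetical requirement HLR-042:
    'The output shall be limited to the range 0..100.'"""

    def test_value_below_range_is_clamped_to_zero(self):
        self.assertEqual(clamp_percentage(-5), 0)

    def test_value_above_range_is_clamped_to_hundred(self):
        self.assertEqual(clamp_percentage(250), 100)

    def test_value_in_range_is_unchanged(self):
        self.assertEqual(clamp_percentage(42), 42)

if __name__ == "__main__":
    unittest.main()
```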
Continuous integration was originally the practice of frequently merging working copies of the software into a mainline, with the main goal of dealing with integration problems. Nowadays the technique is understood more broadly: CI is usually supported by an automated build process and complemented with various quality-oriented activities. Coupled with TDD, continuous integration forms an outstanding foundation for project-health monitoring and tangible quality gains. My post “Is continuous integration worth the price? Yes, I’m sure” describes the CI practice in more detail and sheds light on its use in DO-178C projects.
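As an illustration of the automation side, the backbone of such a pipeline can be a script that chains checkout, build and test stages and stops at the first failure. The stage commands below (`make all`, a `unittest` discovery run) are assumptions about the toolchain, not a prescription:

```python
import subprocess
import sys

# Hypothetical pipeline stages; the real commands depend on your
# toolchain, version-control system and project layout.
STAGES = [
    ("checkout",   ["git", "pull", "--ff-only"]),
    ("build",      ["make", "all"]),
    ("unit-tests", ["python", "-m", "unittest", "discover", "-s", "tests"]),
]

def run_pipeline():
    """Run each stage in order and stop the pipeline at the first failure."""
    for name, command in STAGES:
        print(f"=== {name} ===")
        if subprocess.run(command).returncode != 0:
            print(f"Stage '{name}' failed; stopping the pipeline.")
            sys.exit(1)
    print("All stages passed.")

if __name__ == "__main__":
    run_pipeline()
```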
Now let’s move from theory to practice.
Typical DO-178C projects consist of several phases, called “releases”, “builds”, “versions”, “loads”, “labels” and so on. A phase usually lasts from a couple of months to a dozen or more. In turn, the work in each phase is split into a number of change requests. When development for all change requests is finished, the build is released and handed over to the verification group for testing. Work then starts on the next build, in parallel with testing of the previous one.
Such a project setup often lets errors propagate deep into the life cycle, causing turbulence in the project flow and producing scrap or redundant work. Ultimately, the project runs a significant risk of last-minute surprises, additional unplanned bug-fixing phases, schedule slips, missed deadlines and a whole bunch of known but uncorrected errors in the delivered software.
The goal of the life-cycle enhancement is to drastically reduce error propagation by uncovering and fixing errors at the very time the corresponding portion of code is developed.
You need to make both technological and process adjustments for this enhancement to work. From the technological perspective, you need two things in place:
- a means for automated creation and deployment of the builds
- automated test procedures
Having a good framework for test execution and results visualization is strongly preferable, but not mandatory. These two points are essential to make TDD and CI work: no process change will do any good if your build or test sequence is a cumbersome manual affair. Align your technology setup before proceeding to process tuning!
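On the second point, an automated test procedure ideally produces not only a console verdict but also a machine-readable summary that a results-visualization framework (if you have one) can pick up. A minimal sketch, assuming the tests live under a `tests/` directory:

```python
import json
import unittest

# A minimal sketch: run all automated test procedures found under the
# (assumed) 'tests/' directory and dump a machine-readable summary for
# later visualization. Paths and file names are assumptions.
def run_tests_and_report(test_dir="tests", report_path="test_report.json"):
    suite = unittest.TestLoader().discover(test_dir)
    result = unittest.TextTestRunner(verbosity=1).run(suite)

    summary = {
        "tests_run": result.testsRun,
        "failures": len(result.failures),
        "errors": len(result.errors),
        "passed": result.wasSuccessful(),
    }
    with open(report_path, "w") as fh:
        json.dump(summary, fh, indent=2)
    return summary

if __name__ == "__main__":
    print(run_tests_and_report())
```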
From a process perspective, the big picture stays the same: you may still have “releases” and split the work into individual change requests. The main tweak applies to how each change request is implemented, plus a few additional actions needed for consistency and compliance purposes.
First, implement change requests by following these seven steps for each individual change request:
- Develop requirements.
- Conduct a requirements review.
- Develop code and test cases concurrently, according to requirements.
- Add newly developed code and corresponding tests into the CI pipeline.
- Obtain test results.
- Correct code bugs and adjust tests (if needed). Repeat steps 3 – 6 until requirements coverage and code coverage are achieved and test cases pass.
- Conduct code reviews and test reviews. Repeat steps 3 – 6 if any deficiencies are found during the reviews.
This seven-step recipe explains the title of this blog post: you need to switch from a big, awkward V-shaped cycle to a series of short, focused mini V-shaped cycles.
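To illustrate the exit criterion of step 6, the toy check below verifies that every requirement scheduled for the change request is traced to at least one passing test case. The requirement IDs are hypothetical and the trace table is hard-coded; in a real project this data would be extracted from test-case metadata, and the check itself is not a qualified tool.

```python
# Requirements scheduled for the (hypothetical) change request.
REQUIREMENTS_IN_CHANGE_REQUEST = {"HLR-042", "HLR-043"}

# Passing tests and the requirements they trace to. Hard-coded here for
# illustration; in practice this comes from test-case metadata or trace tags.
PASSING_TESTS = {
    "test_value_below_range_is_clamped_to_zero": {"HLR-042"},
    "test_value_in_range_is_unchanged": {"HLR-042"},
}

def uncovered_requirements():
    """Return the requirements not yet exercised by any passing test."""
    covered = set()
    for traced_requirements in PASSING_TESTS.values():
        covered |= traced_requirements
    return REQUIREMENTS_IN_CHANGE_REQUEST - covered

if __name__ == "__main__":
    missing = uncovered_requirements()
    if missing:
        print("Requirements without passing tests:", sorted(missing))
    else:
        print("All requirements in the change request are covered.")
```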
Second, do not treat the test results from step 5 as the formal verification results required by DO-178C. Treat these testing activities as part of a development cycle aimed at producing high-quality output. When all change requests scheduled for the “release” are completed, proceed with the formal release procedure and conduct a formal run (aka “run-for-score”). By the time of the formal run, all the needed requirements, code items and tests will be approved and under configuration control. This is the bridge that assures compliance with the standard. The formal run becomes an easy and predictable process because:
- problems were resolved during the development stage
- established CI allows the formal run to unfold automatically
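Assuming the pipeline is already scripted (as sketched earlier), the formal run can be little more than re-executing that same sequence against a frozen, configuration-controlled baseline. The tag name and script path below are illustrative assumptions:

```python
import subprocess
import sys

# Hypothetical formal run ("run-for-score"): re-execute the same automated
# build-and-test sequence, but against a frozen, tagged baseline rather
# than the development head. Tag and script names are assumptions.
BASELINE_TAG = "release-1.0-frozen"

def formal_run():
    # Check out the approved, configuration-controlled baseline.
    if subprocess.run(["git", "checkout", BASELINE_TAG]).returncode != 0:
        sys.exit("Could not check out the frozen baseline.")
    # Re-run the same pipeline used during development.
    sys.exit(subprocess.run([sys.executable, "run_pipeline.py"]).returncode)

if __name__ == "__main__":
    formal_run()
```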
It might seem that we are doing additional work, or doing the work twice. This is true, up to a point. Measure the overhead of these “additional” tasks and compare it with the total cost of an unplanned clean-up phase, a schedule slip or a bug found after delivery. Take into account the resulting product quality and the impact on customer satisfaction. It is wise to spend a small amount of extra effort at a certain point in the life cycle in order to save much more in the long run. Practical experience shows up to a 30% increase in overall project performance once this Agile flip-flop is made and assimilated.