Mikhail Sudbin
Chief Technology Officer at Advalange

Peer review is a commonly used practice in the software development world. Unarguably, the “two heads are better than one” principle makes your product better. The main question is whether the enhancement is worth the price you pay for it. In a safety-critical domain such as aerospace, where reviews are mandatory, this question is especially pressing. DO-178C forces us to conduct reviews for nearly every single item produced throughout the project lifecycle, which can consume up to 30% of project effort. Spending time on peer reviews in an effective and efficient manner is therefore essential.

However, the intent underlying peer review sometimes becomes blurred, especially in a large and complicated aerospace project. Either the review process is done for the sake of the process, with the focus on producing formal review records, or the process becomes inefficient because people dig too deeply into aspects that do not contribute to end-product quality.

Consider the following easy tips to keep the balance, making your DO-178C review process more valuable while keeping the time spent on review cycles under control:

Tip #1. Foster the right attitude.

Have you ever experienced anything similar to what I call a “checkbox review,” when all questions are marked “passed” without any real check? I bet you have, especially when the deadline is close and management is pushing. Another case of the wrong attitude is “correct it as I say,” when a reviewer has much more authority or experience than the author does.

Always remember the following simple statement to avoid such attitudes: “Review is an independent qualitative opinion regarding the compliance and quality of your outputs, not more, not less.” Every single word in this statement matters.

  • Review should be independent. The reviewer and the author should discuss findings and provide arguments until agreement is reached, rather than one of them enforcing an opinion in a peremptory manner.
  • Review is always qualitative. You can never prove your views with 100% strict mathematical rigor. There is always room for subjectivity.
  • Review is aimed at confirming or disproving that the output under review complies with the standards and procedures and fulfills the requirements against which it was developed. A review is neither a brainstorming meeting, nor a research session, nor a training class.

Tip #2. Create the right checklist.

Peer review is deemed to increase product quality. Quality, however, is a very amorphous word. Everybody feels they understand what it is, but almost nobody can express it in a short, quantitative manner. In the DO-178C world we are lucky to have more or less strict guidance for defining the quality of the product.

  1. First, the product operation shall be safe.
  2. Second and complementary to the first, the product and process shall be compliant with the standards.
  3. Third and complementary to the first two, the product should fulfill intended functionality and shall not do anything unintended.
  4. Fourth, and complementary to all of the above, the product shall be maintainable throughout a 30+ year period.

Always validate any review checklist against these principles. Make sure that your checklist is specific enough: it should contain questions about how exactly these principles are satisfied in the work product under review, rather than whether those principles are superficially satisfied. On the other hand, a checklist should not be excessive. Concentrate only on what is vital. Having a 150-question checklist for a simple unit test is a sign that you are doing something wrong. Never ignore the feedback you receive from review participants. Reconsider the checklist if a reviewer is confused about which question to choose when recording a finding, or if too many findings fall into the “other” category.

Tip #3. Establish the right process.

Be sure that only relevant participants take part in the review and that their involvement is neither excessive nor insufficient. In the unit-test example, having two inspectors, one moderator and one facilitator would be exorbitant. However, such a team may not be enough for reviewing software architecture.

The flow of the review process is very important as well. Provide the team with a well-fitting infrastructure that supports data exchange between participants and review records management, so that reviews become a favorite piece of project culture rather than an annoying formal activity. Usually, configuration management systems are extended with peer review plugins to make the process fast, transparent and straightforward. Investment in a convenient tool setup for reviews is a wise decision. What you do not want is a “manual” process and record flow with review cover sheets looking like a “history book of the project”.


Following these three tips gives you a powerful tool for project analysis in addition to making the review process more adequate. The right checklists provide you with a well-structured data taxonomy. The right attitude and the right participants provide confidence that the data connected to those checklists is accurate and representative. Statistical analysis will then help you figure out weak or poorly defined aspects of your project beyond the review process itself.

DO-178C requires reviews and no one can get by without them. It is your choice to do them solely for compliance or to get real benefits. These three tips will help you make a wise decision and maximize the value you get for the cost of your reviews.

Advalange is pleased to congratulate a valuable member of our team, Mike Coligny, on the significant achievement of Master Instructor Emeritus status.

MIE Mike Coligny, AZ (Jul16)



Richard Michael “Mike” COLIGNY, Master Instructor Emeritus   (Emeritus: 1Jul16)
Prescott  AZ
Mike’s e-mail address: MColigny@cableone.net

Mike Coligny, an 8-time Master and a charter member of SAFE, was recently granted Master Instructor Emeritus (MIE) status through MI LLC’s https://MICEP.FluidReview.com/ in recognition of his many years of commitment to excellence, professional growth, service to the aviation community, and quality aviation education.  An aerospace consultant, Mike is president of CFS Consulting, founded in 1980, and located in Prescott, Arizona with clients worldwide.  He also serves on SAFE‘s government affairs committee, HAI’s flight training committee, and is a FAASTeam lead representative in the Scottsdale FSDO area. (Photo: MCFI-E Mike Coligny of Prescott, AZ)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

Master Instructors LLC takes great pride in announcing a significant aviation accomplishment on the part of Mike Coligny, the president of CFS Consulting and resident of Prescott, Arizona.

Recently, Mike was granted the title of Master Instructor Emeritus (MIE / MCFI-E) by Master Instructors LLC, the international accrediting authority for the Master Instructor designation as well as the FAA-approved “Master Instructor Continuing Education Program™.”  He first earned this national professional accreditation in 2002, has held it continuously since then, and is one of only nine worldwide to earn the credential eight (8) times.

To help put these achievements in their proper perspective, there are approximately 101,000 CFIs in the United States.  Fewer than 800 aviation educators worldwide have achieved one or more of the Master accreditations thus far.  Twenty-two (22) of the last National Flight Instructors of the Year, National FAASTeam Representatives of the Year, or National AMTs of the Year were Masters (see: http://www.GeneralAviationAwards.org/) while Mike is one of only 35 Arizona aviation educators to earn one or more of these prestigious “Master” titles.  Additionally, he is one of 40 worldwide to be granted EMERITUS status.

In the words of former FAA Administrator Marion Blakey, “The Master Instructor accreditation singles out the best that the right seat has to offer.”

Emeritus status is an honorary title that may be conferred upon individual Masters in recognition of their years of dedication and commitment to excellence, professional growth, and service to the aviation community.  Since the inception of the Master Instructor program almost twenty years ago, hundreds of professional aviation educators have earned initial Master accreditation followed by biennial renewals.  Many of those veteran Masters are now cutting back or retiring altogether from active aviation education.

The Master Instructor designation is a national accreditation recognized by the FAA.  Candidates must demonstrate an ongoing commitment to excellence, professional growth, and service to the aviation community, and must pass a rigorous evaluation by a peer Board of Review.  The process parallels the continuing education regimen used by other professionals to enhance their knowledge base while increasing their professionalism.  Designees are recognized as outstanding aviation educators for not only their excellence in teaching, but for their engagement in the continuous process of learning — both their own, and their students’.  The designation must be renewed biennially and significantly surpasses the FAA requirements for renewal of the candidate’s flight instructor certificate.


Reprinted with permission from the email distribution by MasterInstrs@aol.com.

Mikhail Sudbin
Chief Technology Officer at Advalange

In my previous blog post “Why DO-178C Forces Software Development To Be More Agile” I discussed the importance of verification activities for a project in safety-critical areas and the reasons why you need to conduct these activities as soon as possible. Ideally, you should test right after a portion of requirements and the code for it are developed. Such an attitude is common in the Agile world. Moreover, two specific practices support it: Test Driven Development (TDD) and Continuous Integration (CI).

In this post, I’ll show how implementing these Agile practices enhances a DO-178C project by pulling testing inside a development mini-cycle while keeping it compliant with the standard.

First, a bit of theory.

Test-driven development can be summarized as follows: write a test, run it and ensure that it fails, write code, run the test again and ensure that it passes (and correct the code if it does not). The idea is that you define your desired functionality using the test and then develop code that passes this test. TDD may seem inapplicable to a DO-178C project setup at first glance. Two small adjustments, however, turn things around: write a requirements-based test, and segregate code development and test development between two different people. Thus, the spirit of TDD can be tailored organically to a heavyweight development methodology.
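The adjusted loop can be sketched in a few lines. Everything here is hypothetical: REQ-042 and the altitude limits are invented for illustration, and the point is the order of events, with the test written first, against the requirement, by a second person.

```python
# Step 1 (verification engineer): a requirements-based test, written first.
# REQ-042 (hypothetical): "Reported altitude shall be limited to 0..50000 ft."
def test_req_042_altitude_limits():
    assert limit_altitude(-100) == 0        # below range -> clamped to minimum
    assert limit_altitude(25000) == 25000   # in range -> passed through
    assert limit_altitude(60000) == 50000   # above range -> clamped to maximum

# Step 2 (developer, a different person): code written to satisfy REQ-042.
def limit_altitude(alt_ft):
    return max(0, min(alt_ft, 50000))

# Step 3: run the test again and ensure it now passes.
test_req_042_altitude_limits()
```

Running the test before the code exists fails, which is exactly what the first step of the loop demands.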

Continuous integration was originally a practice of frequently merging working software copies into a mainline. The main goal of CI was to deal with integration problems. Nowadays this technique is interpreted more broadly. Usually CI is aided by an automated build process and complemented with various quality-oriented activities. Coupled with TDD, continuous integration forms an outstanding foundation for project-health monitoring and tangible quality increases. My post “Is continuous integration worth the price? Yes, I’m sure” describes the CI practice in more detail and sheds light on its usage in DO-178C projects.

Now let’s move from theory to practice.

DO-178C projects commonly consist of several phases. These phases are called “releases”, “builds”, “versions”, “loads”, “labels” and so on. A phase usually lasts from a couple of months to a dozen or more. In turn, the work in each phase is split into a number of change requests. When development for all change requests is finished, the build is released and transferred to the verification group for testing. The next build is then worked on, along with testing of the previous build.


Such a project setup often results in errors propagating deep into the lifecycle, introducing turbulence into the project flow and the completion of scrapped or redundant work. Ultimately, the project gains a significant chance of encountering last-minute surprises, additional unplanned phases to clean up bugs, schedule slips, missed deadlines and a whole bunch of known but uncorrected errors in the delivered software.

The goal of project life-cycle enhancement is to drastically reduce error propagation by uncovering and fixing errors at the time the corresponding portion of code is developed.

You need both technological and process adjustments to make this enhancement. From the technological perspective, you need two things in place:

  • a means for automated creation and deployment of the builds
  • automated test procedures

Having a good framework for test runs and results visualization is strongly preferable, but not mandatory. These two points are essential to make TDD and CI work. No process change will do any good if your test or build sequence is cumbersome and manual. Align your technology setup before proceeding to process tuning!
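As a minimal sketch of the “automated build and test” prerequisite, a CI driver can be as small as a script that runs each stage and fails fast. The stage commands below are placeholders, not a real build system; substitute your actual build and test invocations.

```python
import subprocess
import sys

def run_stage(name, cmd):
    """Run one pipeline stage and abort the whole pipeline if it fails."""
    result = subprocess.run(cmd)
    if result.returncode != 0:
        raise SystemExit(f"CI stage '{name}' failed (exit {result.returncode})")
    print(f"CI stage '{name}' passed")

# Placeholder stages: substitute your real build and test commands here.
run_stage("build", [sys.executable, "-c", "print('compiling...')"])
run_stage("unit-tests", [sys.executable, "-c", "print('running tests...')"])
```

A nonzero exit from any stage stops the pipeline, which is the property the mini-cycle below relies on: a broken build or a failing test is visible immediately, not at release time.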

From a process perspective, the big picture is left unchanged. You still may have “releases” and split the work into particular change requests. The main tweak is applied to change-request implementation. In addition, a few more actions are needed for consistency and compliance purposes.

First, implement change requests by following these 7 steps for each single change request:

  1. Develop requirements.
  2. Conduct a requirements review.
  3. Develop code and test cases concurrently, according to requirements.
  4. Add newly developed code and corresponding tests into the CI pipeline.
  5. Obtain test results.
  6. Correct code bugs and adjust tests (if needed). Repeat steps 3 – 6 until requirements coverage and code coverage are achieved and test cases pass.
  7. Conduct code reviews and test reviews. Repeat steps 3 – 6 if any deficiencies are found during the reviews.
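The exit criterion of step 6 can be expressed as a simple gate. This is a sketch, not a qualified tool: the requirement IDs and the coverage figure are assumed to come from your own trace and coverage tooling.

```python
def coverage_gate(all_reqs, reqs_with_passing_tests, structural_cov_pct,
                  required_pct=100.0):
    """Return (ok, untested): ok only when every requirement is covered
    by a passing test AND structural coverage meets the target."""
    untested = sorted(set(all_reqs) - set(reqs_with_passing_tests))
    ok = not untested and structural_cov_pct >= required_pct
    return ok, untested

# Example: one requirement still untested and coverage below target,
# so the change request loops back to step 3.
ok, gaps = coverage_gate(["REQ-1", "REQ-2", "REQ-3"],
                         ["REQ-1", "REQ-3"], 92.5)
```

Wiring such a gate into the CI pipeline makes “repeat steps 3-6” an automatic verdict rather than a judgment call.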

This 7-step recipe explains the title of this blog post: you need to switch from a big, awkward V-shaped cycle to a series of short, focused mini-V-shaped cycles.

Second, do not consider the test results from step 5 as the formal verification results required by the DO-178C standard. Treat these testing activities as part of a development cycle aimed at providing high-quality output. When all change requests scheduled for the “release” are completed, proceed with the formal release procedure and conduct a formal run (aka “run-for-score”). At the time of the formal run you will have all needed requirements, code items and tests approved and controlled. This is the bridge that assures compliance with the standard. The formal run becomes an easy and predictable process because:

  • problems were resolved during development stage
  • established CI allows the formal run to unfold automatically

It might seem that we do additional work, or do the work twice. This is true, to a point. Measure the overhead of such “additional” tasks and compare it with the total cost of an unplanned clean-up phase, a schedule slip or a bug found after delivery. Take into account the resulting product quality and the impact on customer satisfaction. It is wise to spend a small additional effort at a certain point in the life cycle and save a lot more in the long run. Practical experience shows up to a 30% increase in overall project performance after this agile flip is made and assimilated.

Mikhail Sudbin
Chief Technology Officer at Advalange

Software development standards in safety-critical areas, such as DO-178C, are usually associated with a classical waterfall or V-model life cycle. This is a common but misleading association. Hints of a more agile process are hidden inside the standard. Let me reveal them.

To start with, DO-178C does not impose any particular life cycle or methodology. It talks about objectives and activities that can be done to satisfy those objectives. Moreover, the objectives are quite general, for example: “High-level requirements are developed.”

It may seem that DO-178C leaves a lot of space for life cycle anarchy with such general definitions of individual goals. However, the complete set of several dozen interrelated goals drastically reduces the room for maneuver in the choice of life cycle. The waterfall or V-model life cycles seem to be the easiest and most straightforward way to fulfill all of the goals at once. Nevertheless, the easiest way is not always the most appropriate.

The goals are arranged into 10 groups represented by tables A-1 through A-10 in the annex of the standard. Each group corresponds to a certain aspect of the life cycle. In addition, the tables define the rigor of the process with respect to software level:


DO-178C objectives per software level

Logically, the higher the level, the more goals must be satisfied. However, look at the distribution of those goals. The number of verification goals outnumbers the number of development goals several times over. Moreover, the development goals are exactly the same for level A, level B, and level C software. So DO-178C deems that the reliability of the software rests on the thoroughness of verification. This conclusion can be stated even more strongly: you cannot say anything about the reliability of your safety-critical airborne software until verification is complete!

Unsurprisingly, waterfall and V-model can become costly if an error spreads through the whole life cycle before it is uncovered in the late verification stages. It is not rare for additional iterations of the whole cycle to be added at the end of a product timeline to correct problems from the beginning of the story. This all results in nasty deadlines, a demoralized and exhausted project team, angry customers, and frenzied top management.

An obvious piece of advice for project managers is to implement as much verification as early as possible. Bingo! This concept is among the outstanding Agile traits.

I do not want to add fuel to the fire of the heavyweight vs. Agile holy war. I do want to emphasize that Agile is not the equivalent of anarchy. Treat requirements, design, and other required outputs as a valuable part of your product rather than annoying or exhaustive documentation. With such an attitude, Agile methods fit perfectly: break your life cycle into smaller iterations and pull verification in.

Of course, you need to be very careful implementing Agile in your safety-critical project. Tailoring the spirit of Agile practices to a DO-178C environment is not an easy task. I will provide a practical example of such tailoring in my next blog post.

Mikhail Sudbin
Chief Technology Officer at Advalange

True leaders know how to delegate work effectively. Avoid pitfalls along the way to successful work delegation with these five easy steps.

How often have you heard, “If you want something done right, do it yourself”? I bet more than once. This mindset, however, is a common obstacle to true leadership. Long-term success depends on your ability to maximize your team’s potential by mastering the skill of delegation.


Art of delegation

Image by Ros Asquith.


Books on delegation, including “The Busy Manager’s Guide to Delegation” by Richard A. Luecke and Perry McIntosh or “The 7 Habits of Highly Effective People” by Stephen R. Covey, are good entry points to the comprehensive study of this topic. It can take years to study the many aspects that affect the success of work delegation. Start with these five easy steps. Failure is certain if you do not follow them.

“Our plans miscarry because they have no aim.
When a man does not know what harbor he is making for,
no wind is the right wind.”


Step 1. Identify the result

Establish quality goals, KPIs, process objectives, whatever you call those items. You need a solid and unambiguous foundation to move forward. A measurable vision of success is the key factor of effectiveness.

“Never delegate methods, only results.”
Dr. Stephen R. Covey

Step 2. Translate the task correctly

Speak in terms of “what will be accomplished,” not in terms of “what to do.” If you tell people what to do, they do not commit to the result and latently put responsibility back on you. Be sure that you and your delegate operate in the same context and have the same understanding of the desired result.

“My job is to not be easy on people. My job is to make them better.”
Steve Jobs

Step 3. Coach your people

The world we live in is not perfect. Accept that it is normal for a person to lack some experience or skills for the task. Talk openly to determine what kind of training and support the person needs. There is no shame in not knowing; the shame is in not learning.

“Let no man imagine that he has no influence.
Whoever he may be, and wherever he may be placed,
the man who thinks becomes a light and a power.”

Henry George

Step 4. Involve your people

Share your whole game plan with your people. A person should know how his piece of work fits into the whole picture and that his work matters. People are not motivated when they aim for some synthetic numbers without understanding their contribution. “You don’t need to know why you are doing it, just do it” is the worst answer imaginable.

“It doesn’t make sense to hire smart people and tell them what to do;
we hire smart people so they can tell us what to do.”

Steve Jobs

Step 5. Trust and respect your people

Accept that there is always room for failure. It is too optimistic to expect perfection on the first attempt. Give feedback, search for root causes and make corrective actions rather than blame the person. Do not value your methods more just because they are yours. If you delegate right, you will soon notice that people suggest solutions that you’ve never thought of. Moreover, these solutions, surprisingly, outperform your own.

Passing The Torch


Following these easy steps increases your chances of successfully delegating work. Consult this list when assigning a task or calling somebody on the carpet. If you’ve missed any point, restart the process from that point. Accepting and learning from mistakes is what differentiates a true leader from a boss.

Mikhail Sudbin
Chief Technology Officer at Advalange

7 Tips That Can Make Your Software Tool Qualification Easier

Qualification (aka certification) of software tools is a common requirement in safety-critical engineering. Often this aspect of the development life cycle raises many questions and issues. Below I have summarized my experience in the aerospace area into seven tips. Though the tips are tied to the RTCA DO-178C and DO-330 standards, other safety-critical domains will benefit as well. Consider these tips to increase the efficiency of your project.

Tip #1. Evaluate tool qualification criteria correctly

Normally you don’t need to qualify every single software tool. Identify tool usage scenarios precisely and evaluate them against the qualification criteria stated in the standards. It’s like a court trial: the standard is the judge and you are the lawyer. You need to justify that your tool does not require a higher level of qualification, or any qualification at all. If your arguments are reasonable and sound, you will be okay with the certification authority.

Tip #2. Consider tool chain usage accurately

Usually several tools are used consecutively to obtain a result. You need to make a wise choice. One option is to qualify every tool in the chain as a development tool. Another option is to add one more tool in the chain to check the result and qualify only this tool as a verification tool. Sometimes it’s more reasonable to add an additional verification tool even if the chain consists of a single development tool.

Tip #3. Qualify only required functionality

Once again, evaluate the tool usage scenario carefully. Often you need to qualify only a portion of the functionality. In that case, think about partitioning this functionality; otherwise you’ll have to qualify the tool completely. A good example is splitting a tool into a GUI application and a command line utility. The command line utility implements the business logic and is qualified; the GUI is developed in a less rigorous way. A review of the command line log serves as a bridge between these two components. Such an approach may save you considerable effort.

Tip #4. Remember the COTS tool qualification package issue

Overreliance on a COTS tool qualification package may produce a number of sleepless nights. A tool shipped with a vendor’s qualification package does not mean that you can proceed without further actions. In most cases, you need to perform additional activities such as running qualification tests and evaluating the results. Such additional tasks may take significant time. Moreover, a tool qualification package may require prerequisites that your target system cannot provide. For example, it may require a file system on your target to store intermediate data and results, or an ability to debug step-by-step. Consider additional qualification efforts and package constraints when planning tool usage.

Tip #5. Balance your cost

Evaluate the total cost of tool qualification. Estimate the savings that this tool will bring to your project. Sometimes it’s more efficient from a time and budget perspective to refuse tool qualification and to conduct full cycle of verification activities for outputs of the tool. This is especially useful in the cases where a unique component is being created and there is no plan of further usage of the tool.

Tip #6. Plan tool qualification activities in a timely manner

Negotiate the tool usage and qualification strategy with the authorities at the start of the project. Conduct tool qualification tasks along with system development. Evaluate a COTS tool qualification package before deciding on tool usage. This advice may seem obvious. Many projects, however, start thinking about tool qualification several weeks prior to the deadline, when there are many other functional and developmental troubles. You will not have the luxury of spending valuable time wrapping up qualification matters at that point.

Tip #7. Combine previous tips

Each of these seven tips is valuable on its own. However, you can achieve better results by combining them creatively. Try different tool chains and evaluate total implementation effort for different variants. Don’t be shy about adding COTS tools into chains of homemade tools. Transfer qualification tasks from development to verification tools by adding additional activities into your product life cycle. It’s like music: you have just seven notes but you can play thousands of melodies.


Recipe on software tool qualification


In conclusion, let me illustrate these seven tips with a practical example of a parameter data item generation tool. The tool uses a portion of requirements presented in formal notation to generate a binary data file. The file is then uploaded into the target.

Tip #1: This tool is definitely a development tool if we use its binary outputs without further review and testing. It should be qualified according to DO-330 at TQL-1 to TQL-3, depending on the system’s design assurance level.
Tip #5: Qualification of a development tool is an expensive task that may overwhelm the budget.
Tips #1, #2 and #5: The cost may be reduced by adding an additional verification tool (DO-330 TQL-5) and some manual verification into the chain.
Tip #3: We can minimize the qualification effort by extracting the data file generation functionality from the GUI, setup and other components of the tool.

As a result, the parameter data item tool is transformed into:

  1. GUI, setup and other components that can be developed with any rapid application development technique. These components produce a kind of database (DB) file for further generation, for example in XML format. No qualification is needed.
  2. A command line utility that converts the DB file into a binary file for the target. In addition, this utility produces a .txt file containing the DB parameters and execution log information in human-readable form. The exact correspondence between the DB file, the binary file and the human-readable .txt file is qualified according to the verification tool criteria.
  3. The process is complemented with an additional review step. A developer adds the DB and text files into the configuration management system along with the binary output. A verification engineer then checks that the information in the text file matches the requirements.
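The qualified command line utility from item 2 can be sketched as follows. The XML schema, the binary layout (little-endian 32-bit integers) and the log format are all invented for illustration; a real parameter data item tool would follow the formats defined in your design data.

```python
import struct
import xml.etree.ElementTree as ET

def generate(db_xml_text):
    """Convert a parameter DB (hypothetical XML schema) into a binary image
    plus a human-readable log, so a reviewer can check values against the
    requirements without decoding the binary."""
    root = ET.fromstring(db_xml_text)
    binary = bytearray()
    log_lines = []
    for param in root.findall("param"):
        name = param.get("name")
        value = int(param.get("value"))
        binary += struct.pack("<i", value)   # little-endian int32 per value
        log_lines.append(f"{name} = {value}")
    return bytes(binary), "\n".join(log_lines)

# Hypothetical DB produced by the unqualified GUI components.
db = '<db><param name="MAX_ALT" value="50000"/><param name="MIN_ALT" value="0"/></db>'
binary_image, log_text = generate(db)
```

The one property that gets qualified is the exact correspondence between the three artifacts; everything upstream of this function stays out of the qualification scope.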

Thus, this example shows that an inventive approach can transform a costly qualification of a development tool into a far easier sequence of verification and utility qualification. However, don’t forget about tip #6 – any creative plan should be approved by the certification authority.

Mikhail Sudbin
Chief Technology Officer at Advalange

5 common traps that can seriously hurt your DO-178C project if recognized late

DO-178C, Software Considerations in Airborne Systems and Equipment Certification by RTCA, regulates the process of software development in the aerospace domain. The document, however, does not provide any clear recipe for getting things done in a given project. Each particular team has its own way of interpreting DO-178C objectives and limitations. Below are five cases that, in my experience, have caused the most arguments and misunderstandings. I call these aspects “traps” because choosing the wrong way of implementing them will have a tangible, negative impact on your project.

#1. Waterfall lifecycle trap

The opinion that DO-178C mandates a heavyweight waterfall-like life cycle is wrong. Any methodology is good, even an agile one, if you follow the transition criteria and fulfill the objectives. Choose what fits your project and organizational culture best and complement it with additional steps to achieve compliance. Do not exclude any activities that bring value to your product just because their conformance to the standard is questionable; simply do not take formal credit for such activities. And vice versa, don’t be scared to bring in extra tasks to meet formal requirements. Often a week spent on dedicated formal tasks saves a month of ineffective and useless work on artificial life cycle stages.

#2. Trap of the undefined robustness

Often people remember robustness only at late stages of verification. It may sound like: “Ouch, we need to add some kind of robustness testing to pass the SOI audit.” Software made this way is nothing more than a colossus with feet of clay. You must consider robustness from the very beginning and transform it into appropriate outputs throughout the project life cycle: identify abnormal situations and containment actions in the requirements, insert corresponding features into your design, and do not let untraceable defensive code ooze into your implementation. Robustness testing should be just another portion of a requirements-based test set if you do it right.

#3. Structural testing trap

DO-178C stands on a requirements-based testing concept. Nevertheless, the phrase “MC/DC tests” often circulates in engineers’ conversations. Forget such phrases. Tests should check the requirements. MC/DC, decision or statement coverage is just a measure that shows the overall mutual consistency and completeness between requirements, code and tests. Having a coverage gap does not necessarily mean that a test is bad. Often the requirements may be inadequate and lack details, or you may have untraceable additional code. The worst mistake you can make is adding a synthetic test case just to exercise certain combinations of software variables.
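To make the “measure, not target” point concrete, here is a sketch of what the MC/DC measure actually asks: for each condition in a decision, some pair of executed tests must differ only in that condition and flip the decision’s outcome. The two-condition decision below is invented for illustration.

```python
from itertools import product

def mcdc_independence_pairs(decision, n_conditions):
    """For each condition, list pairs of input vectors that differ only in
    that condition and produce different decision outcomes (MC/DC)."""
    pairs = {i: [] for i in range(n_conditions)}
    for v in product([False, True], repeat=n_conditions):
        for i in range(n_conditions):
            w = list(v)
            w[i] = not w[i]
            if decision(*v) != decision(*tuple(w)):
                pairs[i].append((v, tuple(w)))
    return pairs

# Hypothetical decision: "open the valve only when armed AND hot".
pairs = mcdc_independence_pairs(lambda armed, hot: armed and hot, 2)
# A requirements-based test set that exercises (T,T), (F,T) and (T,F)
# already achieves MC/DC here; no synthetic "MC/DC test" is needed.
```

If a gap remains after running the requirements-based tests, the question to ask is why the requirements or the code miss that case, not which artificial input combination would close it.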

#4. Overzealous tool qualification trap

DO-178C pays additional attention to tool qualification; the dedicated supplement DO-330 regulates this aspect. However, there is no need to qualify every single software tool. Moreover, tool qualification is an expensive process and you need to choose your qualification strategy wisely to balance your efforts. You need to understand the tool qualification criteria and evaluate the costs for each tool qualification level. Sometimes it is even more efficient to introduce additional activities, such as reviews or analysis, rather than to qualify a tool. Another mistake is over-reliance on the tool qualification support package from the tool vendor. You can rarely take the package as is and provide it as qualification evidence. Tuning it to a project’s environment, qualification test runs and results analysis may consume considerable time. You need to take this effort into account when figuring out your tool qualification strategy.

#5. Reckless use of external components trap

Introducing a third-party or open-source component into your software often seems like an attractive idea. However, every single portion of your code must be DO-178C compliant. This means that you cannot link a library into your project without further action. The argument that “the community has used this library for ages” will not earn formal credit. You need a solid plan for adopting such components into your project. You may use a certification support package or go through a reengineering cycle to capture requirements for the library and conduct all the needed verification. Consider this extra effort when you are thinking about an external component. Sometimes developing your own dedicated library is more efficient than adapting a common third-party component.

The list above is not exhaustive. There are many more technological, design or process traps you can face along the way. Nonetheless, these five mistakes will be costly if revealed in the late stages of your project. My advice is: keep an eye on these aspects from the very beginning and don’t forget to negotiate your approach with your DER.

Mikhail Sudbin
Chief Technology Officer at Advalange

Pair Programming Is Not Just for Software Tasks

“Two heads are better than one” is a lesson well learned from childhood fairy tales. Two pilots in an airplane, two cops in a patrol car, two people needed to open a bank vault all prove the concept. Pair programming works off this idea as well. In pair programming, two engineers share a single workstation to produce code. One is the driver, who does the typing. The other is the observer, who conducts ad-hoc peer review and oversight. They switch roles frequently. Pair programming is widely accepted as a valuable practice in Agile development.

Nevertheless, pair programming does not exclusively belong to software development. It can be tailored to many other aspects of your work. Advalange implements this technique in several ways:

  • Presentations
    Some people have the artistic talent to present from scratch. However, many struggle to prepare a good presentation, especially on a topic that is not 100 percent clear. In such cases, we usually build the skeleton of the presentation in pair programming style. Defining S.M.A.R.T. goals and deciding what to explain and what to show is much easier when you can immediately validate your ideas against your teammate. When you run out of ideas, your partner may have one more in reserve. This keeps the creative tempo up.
  • Analytics
    Sometimes an analytical task is obvious. Other times the number of parameters to analyze is overwhelming. In this situation, the analysis is like searching for a treasure chest in a misty swamp. While you are figuring out the next link in the chain, your mate can quickly verify your hypothesis, make some fast side calculations, or search for more information. This cuts off dead-end directions faster and keeps the overall focus clearer. In addition, we often try to get the result by two different methods concurrently. Matching numbers give us more confidence to proceed to the next stage.
  • Articles
    As in the presentation example, you can validate your thoughts and statements before throwing them out to readers. This helps avoid the context trap: you know (or at least pretend to know) what you are talking about and may skip important details. A reader outside that context may get lost and misunderstand your words. I often ask colleagues to put on a reader’s hat and provide feedback. Wearing the writer and reader hats at the same time is not easy. Some doctors may even label you schizophrenic if you do it regularly. Another benefit of pair writing is ad-hoc style and grammar correction.
  • Planning
    Creating a comprehensive work breakdown structure is always a challenge. Sometimes a silly mistake, such as a missed activity or an inappropriate taxonomy, results in sleepless nights and needless stress. While you focus on decomposing work domains into concrete tasks, your mate keeps an eye on the overall outcome and the interactions between individual tasks.

Obviously, a number of rules will make your pair work more efficient.

Rule #1: Respect your partner.

Your mate has a right to his own opinion. Educate him carefully rather than laughing at his ignorance. Remember that your goal is to achieve better results, not to criticize your fellow.

Rule #2: Trust your partner.

Never say “I don’t believe you.” If you doubt your partner, it is better to say: “I’ve heard something different. Let’s double-check it.” Also, let your partner finish his statement rather than interrupting him.

Rule #3: Be open-minded.

Remember that pair work aims to find the optimal way. Don’t fight for your opinion to the last breath. Instead, dig into your buddy’s suggestions to figure out the best result. Be ready to accept that your initial idea failed.

Rule #4: Speak up.

Always comment on what you are doing and why you are doing it this way. It is important that both of you talk. Do not convert pair work into a one-man speech, and do not drift into idle chat. Plan short breaks every half hour to reflect. Though it seems obvious, this rule is hard to follow. You need to tune into each other for some time to find an optimal way of communicating.

Rule #5: Be focused.

Remember the goal of your pair activity. Don’t slip into brainstorming or a discussion of the overall philosophical questions of your work. We usually start by writing down our goal and our vision of the output. This helps guide us back to the intended direction.

Rule #6: Pair only what should be paired.

Pairing every single activity is a bad idea. In the presentation example, we pair only the semantic portion of the preparation. Formatting it in PowerPoint is always done by one person. Use pairing where one more idea, an independent opinion, a review, or a validation may lead in a different direction.


Pairing helps get things done well. You can’t fall asleep or check Facebook “just for a couple of minutes” when a teammate is staring at you.

We at Advalange believe in pairing. In our experience, certain tasks are done up to five times faster in pair programming style. You may agree or not, but have you ever really tried to consistently pair non-programming tasks?

Mikhail Sudbin
Chief Technology Officer at Advalange

Is continuous integration worth the price? Yes, I’m sure.

Software development teams across industries recognize continuous integration as a valuable practice. Some treat it as a “must-have” piece of an effective life cycle. However, in safety-critical areas, and in the aerospace domain in particular, there is still strong resistance to implementing continuous integration. Some people think that continuous integration cannot formally fit into DO-178B/C. Others think that those agile things bring no practical value to a strict and rigorous development process.

These fears are groundless, and I can show you that a DO-178C project can benefit from continuous integration.

Continuous integration aims to pull testing activities inside the development cycle rather than saving them for last. This goal breaks down to the following tasks:

  • Integrate all software pieces to see how they fit each other;
  • Check for any regression possibly introduced by recent changes;
  • Check the functionality of newly implemented changes (if applicable).

These tasks should be done frequently enough to allow removal of any bugs before moving to the next stage of the life cycle.
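The three tasks above amount to one fail-fast cycle per change. The following Python sketch is purely illustrative (the step names and the `run` callable are invented; in a real setup `run` would invoke your build tool or test framework):

```python
# A minimal sketch of one continuous-integration cycle. `run(step)` executes a
# named step and returns its exit code (0 = success).
def ci_cycle(run, changed_tests=()):
    steps = ["build", "regression"]   # 1. integrate the pieces, 2. check for regressions
    steps += list(changed_tests)      # 3. exercise newly implemented changes, if any
    for step in steps:
        if run(step) != 0:
            return False, step        # fail fast: fix before the next life-cycle stage
    return True, None
```

For example, `ci_cycle(lambda s: 0, ["test_new_mode"])` returns `(True, None)`, while a `run` that fails on `"regression"` stops the cycle right there and names the broken step.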

“Any DO-178C project can benefit
from continuous integration.”

You will need several prerequisites to implement continuous integration:

  • A version control system to store and retrieve project artifacts coupled with a build tool;
  • Automated tests and a test execution framework;
  • Means to analyze test results.

Luckily, the features of DO-178’s heavyweight development approach provide a solid base for all three prerequisites.

DO-178 mandates comprehensive configuration control and change management.
Every single change request and the resulting changes in artifacts should be segregated and tracked. Usually it’s only a matter of setting up a configuration management tool to automate the build process and see how each subsequent change fits into the previous build.

Requirements-based testing is one of the main ideas of DO-178 standard.
Usually, different groups develop code and test cases concurrently. If the test strategy is established and the test tools and environment are set up, nothing prevents the test group from developing test procedures along with test cases. This means that when a code change is deployed, everything is ready for continuous integration and, more importantly, for continuous testing.

DO-178 requires establishing thorough traceability.
Usually every single function is traced to the corresponding requirements. Test cases and procedures are traced to requirements as well. Trace matrices are stored in a well-structured way. Everything is in place to select an appropriate set of tests and run them against a code change. The trace matrix will also aid analysis: you can identify the broken parts almost instantly.

“DO-178 features of heavyweight
development approach provide
a solid base for continuous integration.”

Of course, this approach works well only if testing is automated or at least semi-automated. Modern software testing toolsets from VectorCAST, LDRA, Rational, and others provide a solid base for test automation. Model-based development environments open new horizons for continuous integration through software-in-the-loop and hardware-in-the-loop concepts. Even old-school manual visual tests can be adapted for continuous integration. A way to set inputs automatically almost always exists, and output from a screen can be recorded for further analysis. Thus, all tests, with few exceptions, can fit the strategy: “Run automatically when a change is ready; analyze results when needed.”
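The “record now, analyze when needed” half of that strategy can be sketched as follows. This is an illustrative Python stub, not a real rig: `apply_stimulus` and `capture_screen` are hypothetical placeholders for whatever input-injection and screen-capture hooks your test environment provides.

```python
import json

# Drive each stimulus automatically and store the captured output as a
# persistent artifact; a human (or an analysis tool) reviews it only when needed.
def run_and_record(stimuli, apply_stimulus, capture_screen, artifact="run_log.json"):
    records = []
    for s in stimuli:
        apply_stimulus(s)                 # set the inputs automatically
        records.append({"stimulus": s, "output": capture_screen()})
    with open(artifact, "w") as f:
        json.dump(records, f, indent=2)   # deferred-analysis artifact
    return records
```

Keeping the execution and the analysis decoupled like this is what lets even visual tests run unattended on every change.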

Compliance is one of the most important questions when you develop airborne software. Some people believe that every tool involved in continuous integration should be qualified. This is not true. The DO-178C standard states: “Tools that are used to eliminate, reduce, or automate software life cycle process activities, and whose outputs are not verified, need be qualified.” You may choose different strategies to fulfill the DO-178 requirements, such as:

  • Automation and pass/fail selection functionality may be qualified;
  • Tools’ outputs may be reviewed;
  • A combination of qualification and reviews may be used.

“It is not true that
every tool should be qualified
for continuous integration.”

You may also go a different way and separate continuous integration from certification tasks. You may conduct formal verification (a.k.a. run-for-score) once at the end, without using automation tools. If continuous integration has been applied throughout the project, you can be sure that your code is in good shape and your tests are correct and complete, with minimal risk of sudden, costly surprises.

Since continuous integration is not mandated by certification authorities, some may seek to avoid the additional cost. It’s your choice, but make it wisely. Time and resources spent in the earlier stages of your project to introduce test automation and continuous integration will save money later. You’ll be rewarded with fewer bugs, no integration hell, and a lower cost of changes in the later stages of the project (Beck’s curve). Or defer testing and pay the price closer to the project deadline as you fix all those cascaded bugs; each single change will be a challenge under such conditions (Boehm’s curve). Look at the two curves and decide which one you would rather see in your project.

Beck’s curve

Boehm’s curve

Welcome post from Alexander Svinov, Advalange CFO.

Alexander Svinov, Advalange CFO
I graduated from the computer science school of Moscow State University, but quickly came to realize that my true inspiration at work is everyday business challenges: searching for new clients, inspiring the team to reach for new goals, finding an investor for a new product. For the past 15 years I have honed my skills in finance and investments. Yet I always sought to combine my IT knowledge with my passion for developing new businesses. I was fortunate enough to meet Evgeny Rodin and his colleague Mikhail Sudbin, two experts in software production who were looking for an investor/entrepreneur type of partner to join their new venture at Advalange. I seized this opportunity because I felt there was a strong level of understanding and trust between us as partners, and that together we could create an ambitious team to drive this business forward.

“My true inspiration is everyday business challenges.”

When we founded the firm, we aspired to create a world class software development company, employing the best Western business practices. My past experience working in U.S. companies has given me a rich background of business practices to bring to Advalange.

I am an optimist, and, when looking at today’s world – with all its political and economic instability, volatility in asset prices, and economic uncertainty – I still see a whole lot of opportunities. I believe that our clear vision, commitment to our clients, strong technical background, and our success will serve as a foundation for further growth.
