Quality of Product

In order to sell a product, it should do something useful and be in a working state. The only practical way to check whether the product works is to check it against its specification.

That's why the product specification is important: it defines the goal, describing what the product should be and how to check whether it works. It also helps to measure progress, i.e. how close the current state is to the goal.

The process of measuring how close the product is to the specification, and whether it works, is called testing. Tests can be manual or automatic.

There is a big difference between manual and automatic testing:

  • Manual tests require less time to create, but they cost that time again every time you run them.
  • Automatic tests require more time to create, but only once; all subsequent runs are virtually free (some time is spent on maintenance, but it is small).
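This cost difference can be made concrete with a small break-even calculation. The numbers below (30 minutes per manual run, 240 minutes to automate, 1 minute per automated run) are illustrative assumptions, not measurements:

```python
# Hypothetical cost comparison between a manual test run and an automated
# test. All numbers are illustrative assumptions, not measurements.
MANUAL_COST_PER_RUN = 30    # minutes of a tester's time, paid on every run
AUTOMATED_SETUP_COST = 240  # minutes to write the automated test, paid once
AUTOMATED_COST_PER_RUN = 1  # minutes of machine/maintenance time per run

def automated_total(runs: int) -> int:
    """Total cost in minutes of the automated test after `runs` runs."""
    return AUTOMATED_SETUP_COST + runs * AUTOMATED_COST_PER_RUN

def manual_total(runs: int) -> int:
    """Total cost in minutes of the manual test after `runs` runs."""
    return runs * MANUAL_COST_PER_RUN

# Find the break-even point where automation becomes cheaper.
runs = 1
while automated_total(runs) >= manual_total(runs):
    runs += 1
print(f"automation pays off after {runs} runs")
```

With these particular numbers automation pays off after nine runs; in an agile project that retests after every small step, the break-even point is reached almost immediately.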

That difference dictates when and how to apply manual or automatic testing.

For example, manual testing is incompatible with agile development. The cornerstone of all agile techniques is moving in small, frequent, incremental steps. Manual testing makes this impossible, because after every step you need to retest the product to ensure it isn't broken. That makes the cost of every step very high in both money and time.

Automated tests take more time to create, but once created they are very cheap and fast to run.

Usually a mixed approach is used: the core functionality (at least 20%) is covered with automatic tests, which ensures that at any moment the product has at least its basic features working. Before a release, additional manual testing can be performed.

Still, it's important to remember that tests aren't free. They are a burden that takes time away from the development of the product itself. So it's important to keep a balance: use the right testing strategy and the right amount of tests.


Quality measures how close the product is to the specification. It can be measured by checking the product against a set of specifications, for example by counting the use cases that work correctly.

It's impossible to measure quality exactly; it is always estimated with some probability (or guarantee). For example, you may be sure that 97% of product features work correctly with a probability of, say, 80%. Usually the probability isn't measured explicitly, but it's helpful to understand that it's there.
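One way to make "97% with 80% probability" concrete is a confidence interval on a sampled pass rate. The sketch below uses a standard normal approximation; the 97/100 sample and the 80% confidence level are illustrative, matching the numbers above:

```python
import math

def pass_rate_interval(passed: int, tested: int, z: float = 1.28) -> tuple[float, float]:
    """Normal-approximation confidence interval for the true pass rate.

    z = 1.28 corresponds to roughly 80% two-sided confidence.
    """
    p = passed / tested
    margin = z * math.sqrt(p * (1 - p) / tested)
    return max(0.0, p - margin), min(1.0, p + margin)

# With 97 of 100 sampled use cases passing, the true pass rate lies
# roughly within this interval at ~80% confidence.
low, high = pass_rate_interval(passed=97, tested=100)
print(f"{low:.3f} .. {high:.3f}")
```

The point is not the exact formula but the shape of the claim: testing a sample of use cases gives an estimate with an error margin, not an exact quality number.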

A bigger number of tests (done right) can provide better quality and a higher guarantee, but it costs time and money that could be spent on the product.

The question is: how to provide higher quality and a better guarantee with a smaller number of tests?

Let's consider different types of tests.

Unit Tests

Unit tests exercise a small piece of functionality in isolation, checking both the public interface and the internal details.


Pros:

  • Allow testing tiny internal details.
  • Run fast.
  • Make it easy to locate a problem.


Cons:

  • Can check only a small piece of functionality.
  • Require additional work to isolate the tested piece of the product.
  • You need lots of them (because each one is small).
  • Are usually too small to check high-level functionality.
  • Have no meaning for business users.
  • Need to be updated when the internal details of the product change.
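A minimal sketch of a unit test illustrating these points. The price calculator and its tax-service dependency are hypothetical names; the dependency is replaced with a stub so that only one small piece is exercised in isolation:

```python
import unittest
from unittest.mock import Mock

def final_price(base: float, tax_service) -> float:
    """The small piece under test: applies a tax rate fetched from a dependency."""
    return round(base * (1 + tax_service.rate()), 2)

class FinalPriceTest(unittest.TestCase):
    def test_applies_tax_rate(self):
        # Isolation: the real tax service is replaced with a stub, so the
        # test exercises only this one small piece, and a failure points
        # directly at final_price.
        tax_service = Mock()
        tax_service.rate.return_value = 0.2
        self.assertEqual(final_price(10.0, tax_service), 12.0)

if __name__ == "__main__":
    unittest.TextTestRunner().run(
        unittest.TestLoader().loadTestsFromTestCase(FinalPriceTest))
```

Note how the stub is exactly the "additional work to provide isolation" mentioned above, and how the test would break if the tax-fetching internals were refactored.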

Acceptance Tests

Acceptance tests check the high-level functionality of the product according to the specification. They use only the public interface of the product and know nothing about its internals.


Pros:

  • Cover lots of functionality.
  • Have direct meaning for business users.
  • In some cases can serve as an executable specification.
  • Check the whole product.
  • Don't require updates when the internals of the product change.


Cons:

  • Make it harder to locate problems.
  • Run slower.
  • Can't check low-level internal details.
  • In some cases require special tooling to emulate the user.

"User" here is a relative term: it may be a human, another service, or a component. Special tooling (such as a browser emulator) can be needed for testing a web application.
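A minimal sketch of an acceptance-style test. The `Cart` class is an illustrative stand-in for a real product boundary (an HTTP API, a UI, a CLI); the test is phrased as a business use case and touches only the public interface:

```python
class Cart:
    """Tiny stand-in product; the test below uses only its public interface."""

    def __init__(self):
        self._items = []  # internal detail, never inspected by the test

    def add(self, name: str, price: float) -> None:
        self._items.append((name, price))

    def total(self) -> float:
        return sum(price for _, price in self._items)

def test_customer_can_buy_two_items():
    # Use case from the specification: a customer adds two items
    # and sees the correct total. No internals are inspected, so the
    # test survives any refactoring of how items are stored.
    cart = Cart()
    cart.add("book", 12.50)
    cart.add("pen", 1.50)
    assert cart.total() == 14.00

test_customer_can_buy_two_items()
```

Because the test reads as a business scenario, it can double as an executable specification, which is exactly the pro listed above.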

Integration Tests

Integration tests are something in between acceptance and unit tests: they check how pieces of the product work together.
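A sketch of an integration-style test, with two illustrative stand-in components: unlike a unit test, nothing is stubbed, and unlike an acceptance test, only two pieces are wired together rather than the whole product:

```python
class InMemoryUserRepository:
    """First piece: stores users."""

    def __init__(self):
        self._users = {}

    def save(self, user_id: int, name: str) -> None:
        self._users[user_id] = name

    def find(self, user_id: int) -> str:
        return self._users[user_id]

class GreetingService:
    """Second piece: depends on the repository. Deliberately not stubbed."""

    def __init__(self, repository: InMemoryUserRepository):
        self._repository = repository

    def greet(self, user_id: int) -> str:
        return f"Hello, {self._repository.find(user_id)}!"

def test_service_and_repository_work_together():
    # The test checks the collaboration between the two real pieces.
    repository = InMemoryUserRepository()
    repository.save(1, "Alice")
    assert GreetingService(repository).greet(1) == "Hello, Alice!"

test_service_and_repository_work_together()
```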


BDD vs TDD

Technically, BDD and TDD use the same tools; the difference is in the intention and in how they are applied.

With BDD you design executable specifications that declare how the product should work; every specification is designed with some business case in mind.

BDD also has another role: it is a tool for managing the requirements of the product (that's a different topic, no less important than quality).

Usually the specification for a piece of functionality is designed before its implementation. That doesn't mean all specifications are designed before the product itself; it's an iterative process, but for a given piece of functionality the specification usually comes before the implementation.

With TDD the goal is to check whether something works as expected; it's not always directly related to a business case.

So the difference is mainly in intention: in BDD the goal is to specify behavior and check the product against it; in TDD the focus is on testing.

The code and tools in both cases may look pretty similar, but the meaning of that code can be different.
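The same check written in both registers makes the similarity, and the difference in intention, visible. The discount function and its names are illustrative; real projects might use a framework such as pytest or behave, but plain functions are enough to show the point:

```python
def apply_discount(total: float, code: str) -> float:
    """Illustrative product code: 10% off with a valid discount code."""
    return total * 0.9 if code == "SAVE10" else total

# TDD register: "does this function work as expected?"
def test_apply_discount_returns_reduced_total():
    assert apply_discount(100.0, "SAVE10") == 90.0

# BDD register: an executable specification of a business case.
def spec_customer_with_a_valid_code_pays_ten_percent_less():
    # Given a customer with a 100.00 order
    order_total = 100.0
    # When they apply the discount code SAVE10
    final_total = apply_discount(order_total, "SAVE10")
    # Then they pay 10% less
    assert final_total == 90.0

test_apply_discount_returns_reduced_total()
spec_customer_with_a_valid_code_pays_ten_percent_less()
```

The assertions are identical; what differs is that the BDD version names a business scenario and reads as a Given/When/Then specification, while the TDD version names a technical expectation.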

I personally prefer BDD, because a set of product specifications seems more desirable to me than a set of tests. It also helps to keep the focus on important things and to measure progress clearly.

Optimal Strategy

Back to the question: how to provide higher quality and a better guarantee with a smaller number of tests?

There's no universal answer; it depends on how high the quality and guarantee should be. If they are absolutely critical (finance, aviation, and medical control systems), the optimal strategy is to write as many specs and tests as possible, no matter how huge the time and maintenance costs are. But usually that's not the case; usually development productivity is more important, and a small number of bugs (~1%) is acceptable.

I'd like to consider only one case: the minimal strategy. Suppose we are a startup (or a small, relatively independent division of a company) working on a new product. The most critical things for us are to deliver the product quickly and to adapt to changing requirements, and we require that about 95% of our product's features work.

What we need is to cover as much functionality as possible with a minimal number of specifications.

Simple specifications (acceptance tests) should be defined for 60%-90% of the business cases. Specs should be simple, clear, and small: write only what really matters and ignore the details. High-level specs are chosen because they allow covering a huge amount of functionality quickly and efficiently.

Usually there are some critical and complex places in the product that generate lots of bugs and need special treatment. The good news is that such places are usually small and take less than 20% of the product's code base. You can write unit tests for those components, but those unit tests should be treated as a burden and kept as small as possible. The majority of tests should be high-level specifications.

Automatic testing allows introducing lots of changes and adapting the product while keeping it in a working state, but a small number of bugs will slip through.

Those escaped bugs will be discovered by occasional manual testing and by users (that's why an easy bug-reporting system should be provided: it lets you use your users as testers and collect the bugs they discover).

If you wonder where all those numbers come from: from my experience and from the power law, also known as the Pareto principle, or the 80/20 rule.