Perfect software is a myth.

Published April 17th, 2015 by Rick Grey

Software that has high quality (in its context) is possible. However, quality is not tested into software; it must be designed and engineered into software.

So why test? How does it add value? And where does it fit in at DeveloperTown?

Here are some FAQ-style answers to questions that entrepreneurs are asking about software quality.

What is quality and where does it come from?

Quality is designed and built into software, not tested in.

Things like an attractive appearance, intuitive controls, efficient navigation and usability are the domain of great design.

A well-architected, maintainable, low-defect, performant, and extensible system is a function of great engineering.

The absence of great design and great engineering is something that can be surfaced and reported by testing, but not corrected by testing.

I'm an entrepreneur and/or product owner. What do I need to know about software quality and software testing?

The three most important things to remember about software quality are:

  1. All software has "bugs."
  2. Quality requirements are not the same for all systems or for all end users.
  3. Quality is designed and built into software, not tested in.

To expand upon those briefly, in the context of software testing...

1. All software has "bugs."

Uncovering "bugs" is a major component of testing. "Bugs," simply defined, are any issues that "bug" someone who matters (the entrepreneur, investors, end users).

Software testing is a technical investigation of a system, designed to reveal quality-related information (and "bugs") to stakeholders, allowing them to make informed decisions about that system.

To make informed decisions, of course, an entrepreneur needs to have a clear vision of their targets around scope, schedule, and quality, and how to balance limited time and resources to achieve those targets. There are always tradeoffs.

2. Quality requirements are not the same for all systems or for all end users.

Are you building space shuttles or a free mobile phone game?

Are you building a platform that needs to be extended for decades, or a system that needs to support a one-time event a couple of months from today?

Are your users highly fault-intolerant late-adopters, or are they early-adopters who are willing to accept some issues in exchange for cutting-edge features?

These are some of the factors to consider when trying to set quality targets for the system you're building. There is no one-size-fits-all definition of quality. Understand the quality needs of your specific problem space and plan accordingly.

3. Quality is designed and built into software, not tested in.

Testing can uncover issues and risks, but it doesn't inject quality into a system. It can only provide information about how successfully the software was designed and engineered to achieve its goals.

It's a common belief that running a testing cycle at the end of the development process is a way to "polish rough edges" right before release. This is not a good strategy: it risks discovering issues late, which can lead to schedule slippage and cost overruns. Even when some issues are inevitable, finding them early is better than finding them late. Testing is therefore best performed throughout the development process.

You can think of testing as a risk-mitigation exercise: testing identifies risks that can be addressed, mitigated, or ignored, based on the needs of the project (cost, scope, schedule, and quality targets). The amount of testing needed throughout a project is a function of the quality risk that needs to be removed from the project.

What do we do here at DeveloperTown in terms of testing/quality control and why?

DeveloperTown seeks to design and engineer quality into every system we build, based on the constraints of the project (cost, scope, schedule, and quality targets). Testing (of the manual, functional variety) is only one component of this.

By default, DeveloperTown seeks to include the following in everything we build:

  • Simple, elegant UX
  • Scalable, performant web systems (leveraging scalable cloud solutions)
  • Extensible architecture
  • Unit tests (isolated, code-level tests of individual methods and functions) that are automated and re-run every time new code is added to the project (see the sketch after this list)
  • Code peer review before committing new code to the code base
  • Basic manual functional testing of features as they're delivered
  • Server and application security leveraged from DeveloperTown's stack selections
  • Automated monitoring of application and server performance, security, and code quality
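
To make the unit-testing bullet above concrete, here is a minimal sketch of what an automated, code-level test looks like, written in TypeScript with Jest-style assertions (the function, file names, and values are hypothetical, not taken from an actual DeveloperTown project). A continuous-integration server would typically re-run tests like these on every commit.

```typescript
// calculateOrderTotal.ts — a small, pure function worth unit testing (hypothetical example)
export function calculateOrderTotal(prices: number[], taxRate: number): number {
  const subtotal = prices.reduce((sum, price) => sum + price, 0);
  // Round to two decimal places for currency display
  return Math.round(subtotal * (1 + taxRate) * 100) / 100;
}

// calculateOrderTotal.test.ts — isolated, automated tests of that function.
// A CI server re-runs these every time new code is pushed to the project.
import { calculateOrderTotal } from "./calculateOrderTotal";

describe("calculateOrderTotal", () => {
  it("applies the tax rate to the sum of the line items", () => {
    expect(calculateOrderTotal([10, 20], 0.1)).toBe(33);
  });

  it("returns zero for an empty order", () => {
    expect(calculateOrderTotal([], 0.1)).toBe(0);
  });
});
```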

That said, the specifics of each project (cost, scope, schedule, and quality targets) drive specific choices around increasing or decreasing effort along different aspects of design, development, and testing.

For example, sometimes tight schedules and/or a limited budget drive short-term decision making to favor delivering more features over (say) ensuring an extensible architecture in part of an application. This is an example of technical debt, a metaphor coined by Ward Cunningham. As Martin Fowler, one of the authors of the Agile Manifesto, describes it:

You have a piece of functionality that you need to add to your system. You see two ways to do it, one is quick to do but is messy - you are sure that it will make further changes harder in the future. The other results in a cleaner design, but will take longer to put in place.

Technical Debt is a wonderful metaphor developed by Ward Cunningham to help us think about this problem. In this metaphor, doing things the quick and dirty way sets us up with a technical debt, which is similar to a financial debt. Like a financial debt, the technical debt incurs interest payments, which come in the form of the extra effort that we have to do in future development because of the quick and dirty design choice. We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.

Much like debt in the real world, sometimes it makes sense to take it on. DeveloperTown uses its experience building applications to help clients make good contextual decisions around technical debt, but the default mindset is to try to keep it to a minimum.
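
To make Fowler's tradeoff concrete, here is a small, hypothetical TypeScript sketch (the functions and the pricing rule are invented for illustration): the "quick and messy" path duplicates a rule wherever it's needed, while the cleaner path spends a little more effort up front to keep the rule in one place.

```typescript
// Quick and messy: the VIP discount rule is pasted into each place that needs it.
// It ships today, but every future change to the rule must be hunted down in
// every copy — that ongoing extra effort is the "interest" on the debt.
function checkoutTotal(subtotal: number, isVip: boolean): number {
  return isVip ? subtotal * 0.9 : subtotal;
}
function invoiceTotal(subtotal: number, isVip: boolean): number {
  return isVip ? subtotal * 0.9 : subtotal; // duplicated rule
}

// Cleaner (slower up front): one shared rule that every caller reuses.
// Refactoring the copies above to call this is "paying down the principal."
function applyVipDiscount(subtotal: number, isVip: boolean): number {
  return isVip ? subtotal * 0.9 : subtotal;
}
```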

What is functional testing?

Functional testing is most easily characterized as feature testing: does the feature do what it was built to do?

Under the umbrella of functional testing are the concepts of "confirmatory" and "disconfirmatory" testing.

Confirmatory testing seeks to prove that the feature works as designed.

Disconfirmatory testing seeks to find ways that the feature can fail, for example:

  • Testing with data that doesn't conform to expected use (e.g. characters in a field that is designed for numeric entry)
  • Testing error handling (e.g. submitting a form that has data missing from required fields)
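
As an illustration, here is a minimal Jest-style sketch in TypeScript of both kinds of tests against a hypothetical signup-form validator (the validator itself is invented for the example): one confirmatory check and two disconfirmatory checks matching the bullets above.

```typescript
// A hypothetical validator for a signup form with a required, numeric "age" field.
interface ValidationResult {
  valid: boolean;
  errors: string[];
}

function validateSignup(form: { name?: string; age?: string }): ValidationResult {
  const errors: string[] = [];
  if (!form.name || form.name.trim() === "") errors.push("name is required");
  if (!form.age || form.age.trim() === "") errors.push("age is required");
  else if (!/^\d+$/.test(form.age)) errors.push("age must be numeric");
  return { valid: errors.length === 0, errors };
}

describe("signup validation", () => {
  // Confirmatory: the feature works as designed for expected input.
  it("accepts a complete, well-formed submission", () => {
    expect(validateSignup({ name: "Ada", age: "37" }).valid).toBe(true);
  });

  // Disconfirmatory: data that doesn't conform to expected use (characters in a numeric field).
  it("rejects characters in the numeric age field", () => {
    expect(validateSignup({ name: "Ada", age: "thirty" }).errors).toContain("age must be numeric");
  });

  // Disconfirmatory: error handling when required fields are missing.
  it("rejects a submission with required fields missing", () => {
    expect(validateSignup({}).valid).toBe(false);
  });
});
```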

Functional testing may, but is not specifically designed to, uncover issues around security and performance. Typically, these are both separate testing disciplines. (DeveloperTown builds in monitoring systems around security and performance, which is enough for most scenarios, but special application or industry needs may result in additional measures.)

The counterpart to functional testing is "para-functional" testing (also called "non-functional" testing). It tests the "-ilities" of software. Examples include:

  • Usability
  • Learnability
  • Discoverability
  • Scalability
  • Maintainability
  • Localizability
  • Testability
  • Supportability

Para-functional testing has a large subjective component, and needs to be performed with the target users of the software in mind. You might think of it as a lightweight stand-in for usability testing (a separate discipline), or user-acceptance testing (a separate process or cycle).

What should I be thinking about as it relates to compatibility testing?

Compatibility testing is testing performed to confirm that software looks and behaves consistently and correctly across different devices, operating systems, and browsers.

Compatibility, like any aspect of quality, must be designed and engineered into a system. It cannot be tested in.

The broader the compatibility needs of a piece of software (for example, supporting older browsers or a wider range of mobile devices), the more effort has to go into careful design and engineering. Very often, the effort (and therefore cost) of supporting more and older systems grows non-linearly: adding one more browser or one more older operating system version can increase cost significantly. This is especially true of testing, where the effort required for broad compatibility is often disproportionately large relative to the effort needed for design and development.

Given limited budget and time, it's important to select compatibility targets that are as specific to your target users as possible. Looking ahead, I will be posting a series of write-ups on managing compatibility in ways that minimize risk and cost. These future posts will dig deep into the details of our Platform and Device Guidelines Document, which discusses four elements to consider when trying to establish compatibility targets:

  • Target markets/users
  • A "consumerized" approach
  • "Certified" vs. "supported" environments
  • Usage data

Together, these approaches give you tools to choose compatibility targets while balancing scope/schedule/cost.


I hope this has made the relationship between software quality and software testing a little clearer. Still have questions? Get in touch with us to discuss it further and/or subscribe to our newsletter for more information on software and startups. We love to talk about this stuff.