510(k) medical device submission

One of the most important documents in an FDA 510(k) medical device submission is the anomalies document, which lists the known bugs in the product and the risks they present to patient safety. When the entire goal of your product is to improve patient outcomes, it's important to keep that list as short as possible.

When we develop software for 510(k) medical device submissions, we take a somewhat different approach to testing and quality assurance than we do for software products in other industries. Incorporating an iterative testing approach earlier in the process helps ensure the highest-quality product.

The trouble with 510(k) medical device testing

The biggest difference between 510(k)-regulated software and other software projects is the formality of the testing process. After all of the software development is completed, revision-controlled documents describe the manual testing that will be done to ensure the software meets its requirements. Depending on the size of the application, it may take a small team anywhere from a few days to a few weeks to execute those test scripts and complete testing and validation. If any changes are made to the application, including fixing bugs found during testing, the entire set of test cases for the system needs to be re-executed. This is expensive in both labor and time and can delay submission to the FDA.

An iterative approach that works better

When working under such a rigorous protocol, an iterative approach to testing and feedback can actually save you labor costs, speed your approval process, and get you to market more quickly. Quality should be reinforced throughout the process. This starts with developer and automated testing and continues through other checkpoints such as a tech lead review, formal test case creation, and user validation. We always arrive at the formal verification testing in the end, but by seeding quality earlier in the process we make sure that formal testing doesn't surface issues that need to be documented as anomalies or resolved and re-tested all over again.

Developer testing

Quality begins with the developer understanding the business requirements, user workflows, and edge cases related to the feature they are developing. They then create a branch and implement the feature, along with automated tests that cover both common and edge-case uses of the system. These tests verify that the system behaves as expected after any code update. They are checked into source control and run with every build, ensuring no regressions in design or performance and no new bugs are introduced. Once the developer has completed their work, they open a pull request to have their code reviewed before it becomes part of the main software branch.
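
As a minimal sketch of what those automated tests can look like (the calculate_dose function, its dosing module, and its limits are hypothetical stand-ins for whatever clinical logic the feature actually implements), a developer might commit something like this alongside the feature code:

```python
# test_dosing.py -- illustrative only; calculate_dose and its limits are
# hypothetical stand-ins for the real feature under development.
import pytest

from dosing import calculate_dose  # hypothetical module


def test_typical_adult_dose():
    # Common workflow: a typical adult weight yields the expected dose.
    assert calculate_dose(weight_kg=70) == pytest.approx(350.0)


def test_dose_is_capped_at_safety_limit():
    # Edge case: an extreme weight must never exceed the safety ceiling.
    assert calculate_dose(weight_kg=500) <= 1000.0


def test_rejects_invalid_weight():
    # Edge case: nonsensical input is rejected rather than silently dosed.
    with pytest.raises(ValueError):
        calculate_dose(weight_kg=-1)
```

Because tests like these live in the repository and run on every build, a regression shows up as a failed build during development rather than as an anomaly during formal testing.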

Technical Lead review and testing

The next step in ensuring quality comes when a technical lead reviews the proposed pull request. The tech lead begins by pulling the code down to their own machine and running it locally, both to test it manually and to confirm that the user interface matches the expected design and workflow.

Next, they review the source code to ensure each line that is added or changed is appropriate for the feature. Then they review the test coverage written by the developer to make sure it covers all normal workflows and accounts for the possible edge cases. In addition to these quality checks, the tech lead looks for good design patterns and clean, maintainable code, which set the project up for testing success down the road. Not only are we creating a clear set of tests for the new feature, we're making sure future developers can work with the code and the tests without any blockages.
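
One lightweight way to support that coverage review, sketched here under the assumption that the project uses pytest with the pytest-cov plugin (and reusing the hypothetical dosing module from the earlier example), is a small script the reviewer can run against the pull request branch to see exactly which lines of new code no test exercises:

```python
# review_coverage.py -- illustrative reviewer helper; assumes pytest and
# the pytest-cov plugin are installed and the package is named "dosing".
import sys

import pytest

if __name__ == "__main__":
    # Run the full suite and print a line-by-line report of untested code,
    # so gaps in the new feature's coverage are visible during review.
    sys.exit(pytest.main(["--cov=dosing", "--cov-report=term-missing"]))
```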

Formal test case creation

While development is underway, we begin drafting the formal test cases we will execute at the end of development. Depending on the composition of the team, this can range from the developer making an initial draft of the test plan to one or more dedicated team members writing the test cases from scratch.
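
To make "formal test case" concrete, here is a minimal sketch of the fields such a case typically captures, expressed as a Python structure purely for illustration; the field names, identifiers, and requirement-tracing scheme are assumptions rather than a prescribed format:

```python
# A hypothetical formal test case record; the fields and ID scheme are
# illustrative only, not a regulatory template.
from dataclasses import dataclass


@dataclass
class FormalTestCase:
    case_id: str                 # unique, revision-controlled identifier
    requirement_ids: list[str]   # requirements this case traces back to
    preconditions: str           # system state before the test begins
    steps: list[str]             # numbered actions the tester performs
    expected_results: list[str]  # what the tester must observe to pass


tc_001 = FormalTestCase(
    case_id="TC-001",
    requirement_ids=["REQ-012"],
    preconditions="A clinician user is logged in.",
    steps=[
        "1. Open the dosing screen.",
        "2. Enter a patient weight of 70 kg.",
        "3. Press Calculate.",
    ],
    expected_results=["The calculated dose of 350 mg is displayed."],
)
```

Drafting these while the code is still being written keeps the tracing from each test case back to its requirements fresh, instead of reconstructing it at the end.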

Business user validation testing

Once the pull request is merged, we begin informal validation testing. In demos held roughly every two weeks, we walk the business stakeholders through what was built to get their feedback and ensure it fully solves the business problem. When features are implemented individually, it can be hard for business users to visualize how they all come together, and these demos are a great opportunity to catch any oversights in how the enhancements work as a whole. By validating early and often, we minimize the risk of uncovering major design issues during the final formal validation process, which would require unexpected re-work and set the project timeline back.

Formal validation testing

Once all of the software development is complete, we begin formal testing. This starts with a code freeze and a controlled build, after which the team manually executes all of the test scripts created during development. Because of the earlier quality gates and the regular informal validation, we tend to find very few anomalies at this stage.
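
One simple way to make that build "controlled" in a traceable sense, assuming the project lives in a Git repository (the script and file names here are illustrative, not part of any prescribed process), is to stamp the frozen revision into the build artifact so the exact code under test is unambiguous:

```python
# stamp_build.py -- illustrative only; records the frozen Git revision so
# the artifact being formally tested can be traced to an exact source state.
import subprocess

revision = subprocess.check_output(
    ["git", "rev-parse", "HEAD"], text=True
).strip()

with open("build_info.txt", "w") as f:
    f.write(f"frozen_revision={revision}\n")

print(f"Build stamped with frozen revision {revision}")
```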

It's easy to wait until the end of development to start thinking about testing, but that's an unnecessary risk. Planning test cases earlier in the process and incorporating informal, iterative testing throughout ensures the product is developed with as few anomalies as possible. That decreases the likelihood of delaying a 510(k) submission, or worse, having it rejected, and ultimately you end up with a higher-quality product.
