
Building a QbD Masterpiece with Six Sigma

In the May issue of The Medicine Maker, I wrote about “The Beginning of the End of Quality by Design”. I described the history of how Quality by Design (QbD) came about – and why there may come a day when the concept is so deeply entrenched in pharma manufacturing that it no longer exists. But we are certainly not at that point yet, and to get there we need to accelerate the use of QbD. A well-used – and excellent – set of ‘tools’ for QbD is Six Sigma. The term ‘Six Sigma’ was coined by Motorola in the 1980s, and involves using a data-driven methodology to reduce variability during manufacturing, resulting in process improvements.

The Five Sigma barrier

Less than two years after Six Sigma became established in manufacturing at Motorola and GE, practitioners found that the improvements it suggested were becoming too expensive to implement. They had encountered the “Five Sigma Barrier” (1), which equates to 233 defects per million opportunities, versus Six Sigma, where there would be no more than 3.4 defects per million opportunities (99.9997 percent error free). Six Sigma revolves around improving existing processes, but there can be limits to the level of improvement possible. Eventually, improvement efforts reach a point where the cost starts to negate the anticipated financial benefit. An existing process comes with certain inherent features. For example, if the design was not well defined at the outset, limitations, or even quality issues, may inadvertently have been designed into it. Even if design flaws are identified prior to product launch, they cannot always be rectified easily – the later in the development cycle they are discovered, the more costly they are to correct (2). Sometimes, the only option is to redesign the product, which can be too costly or come too late in the product lifecycle.
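As a quick illustration of where those figures come from, here is a minimal Python sketch that converts a sigma level into defects per million opportunities (DPMO), assuming the conventional 1.5-sigma long-term shift used in Six Sigma arithmetic.

```python
from scipy.stats import norm

def dpmo(sigma_level: float, shift: float = 1.5) -> float:
    """Defects per million opportunities at a given sigma level,
    assuming the conventional 1.5-sigma long-term shift."""
    return (1 - norm.cdf(sigma_level - shift)) * 1_000_000

for level in (3, 4, 5, 6):
    print(f"{level} sigma: {dpmo(level):,.1f} DPMO")
# 5 sigma gives roughly 233 DPMO; 6 sigma gives roughly 3.4 DPMO
```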

Some manufacturers may be content with Five Sigma, but many strive for the near perfection offered by Six Sigma, particularly in the pharma industry where quality is crucial. Six Sigma has traditionally been driven by the popular DMAIC methodology (define, measure, analyze, improve, control) and has focused on continuous improvement of an already existing process. Design for Six Sigma (DFSS), by contrast, uses the DMADV methodology (define, measure, analyze, design, verify) (3) to create new processes; it is used when no process exists, or when an existing process has already been optimized through DMAIC and still does not reach the required level. In other words, if you have hit the Five Sigma barrier, then DFSS can help you break through.

The aim of DFSS is to clearly understand the requirements at the outset and then to design a process that is highly capable of meeting or exceeding those requirements with minimal variation. DFSS also provides the tools and a structured approach to efficiently create these new processes by helping to minimize the effort, time and costs required to design and eventually manufacture the new product on an ongoing basis. The fundamental premise behind DFSS is that to effectively achieve these goals, we must thoroughly understand the process and product so that we can identify and appropriately control critical material and process parameters. The DFSS toolbox has a wide variety of tools and methods, some of which are shared with the conventional Six Sigma methodology of DMAIC.

Joseph Juran, the originator of QbD, distinguished “quality improvement” from “quality planning”: improvement is concerned with solving existing problems; planning is concerned with shutting down the hatchery that creates those problems in the first place. In the pharma industry, we know QbD as a systematic approach to development that starts with predefined objectives, and emphasizes product and process understanding, as well as process control based on sound science and risk management. The DFSS methodology and toolbox fit neatly into the QbD framework of developing robust products with good process understanding (see Figure 1).

Many Six Sigma tools, such as Design of Experiments and Control Charts, are used in the pharma industry at different stages of the product lifecycle. The tools described in this article are drawn from all stages of the DMADV cycle; I have selected them based on their relevance and potential for pharmaceutical applications, as well as their current level of use. However, they don’t necessarily need to be used in the stages described below for all products and processes. Exactly how they are used ultimately depends on you!

Figure 1. Juxtaposing the DMAIC, QbD and DFSS methodologies.

Define phase: quality function deployment

At the beginning of QbD-based development, pharma scientists create a Quality Target Product Profile (QTPP) to build quality, safety and efficacy into the product for patients (4). A quality function deployment (QFD) performed at the beginning of product development helps focus attention on the requirements of multiple stakeholders: not just patients, but also regulatory agencies, business targets, manufacturing sites and supply chain partners. A central part of QFD is the House of Quality. Many free templates for creating a House of Quality are available online, and the aim is to correlate desired aspects of the end product with specific processes and specific business outcomes. Figure 2 is an example of a very simple House of Quality.

Figure 2. A House of Quality in a QFD.

  • The “What” section on the left is the door to the house and should include all of the aspects desired in the final product. The Kano Model (explained later) can be used to build the customer prioritization scale, which is the blue window section on the left. In other words, what are the most important aspects?
  • The yellow “How” section is the set of product characteristics that would meet the customer’s requirements. As an illustration, a regulatory requirement of efficacy may be met by different in vivo studies and a business requirement of cost per unit may be met by a defined process yield. 
  • The relationship matrix is the main room in our House of Quality. This is where “What” meets “How”. It is a quantitative, risk-based assessment that uses a pre-defined scale to show how well each chosen product characteristic addresses the customer requirement. 
  • The roof of the house is the correlation matrix. This section captures the inter-relationships between product characteristics. These inter-relationships can ease and hasten product development, and facilitate investigations in case of failures. As an example, product characteristics such as moisture content and endotoxin level may have a complementary effect on one another, so controlling moisture content through shelf-life would be one way to ensure an endotoxin-free product. 
  • The “How Much” section is the foundation of the House: for each product characteristic, it is the sum of the customer importance ratings (“What”) multiplied by the corresponding relationship scores (“How”) – a simple calculation sketch follows this list. It should be a risk-based assessment for the product developer of what is most critical for product success. It is in this section that the critical quality attributes (CQAs) for the patient can be identified, and other key quality attributes for the business can also be seen here. 
  • The “Why” section on the right side of the house includes a window of competitor benchmarking; for example, if you’re developing a generic product you may wish to look at the innovator. In this section, the performance of the proposed product can be compared with that of competitors to understand what features need to be emphasized. You should also examine the performance of necessary features. For example, in the case of a generic product, specific characterization studies may be necessary to distinguish the in vivo performance from that of competitors, or to prove similar performance to that of the innovator. 
  • The door on the right is the back door and is a final assessment to check that all customer requirements have been adequately met by the first level of quality planning for product characteristics.
  • After building your first House of Quality, it will cascade into a whole estate of houses. As an illustration, critical material attributes (CMAs) of raw materials, in-process CQAs and critical process parameters (CPPs) can be built into subsequent Houses of Quality. It may sound obvious, but this is an excellent method for bringing QbD elements like the QTPP, CQAs, CMAs and CPPs together. It’s also effective at encouraging people to think about different areas, such as getting development scientists to think about manufacturing. And it isn’t just useful for defining product development at the outset; it can be used throughout a product’s lifecycle.
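To make the “How Much” arithmetic concrete, here is a minimal Python sketch; the requirements, characteristics, importance ratings and relationship scores are entirely hypothetical and are only meant to show how the technical importance row is calculated.

```python
import numpy as np

# Hypothetical customer requirements ("What") and their importance ratings,
# for example derived from a Kano assessment
whats = ["Efficacy", "Cost per unit", "Shelf life"]
importance = np.array([5, 3, 4])

# Hypothetical product characteristics ("How")
hows = ["In vivo response", "Process yield", "Moisture content"]

# Relationship matrix ("What" x "How"), scored on the common 9/3/1/0 scale
relationship = np.array([
    [9, 0, 1],   # Efficacy
    [1, 9, 0],   # Cost per unit
    [0, 1, 9],   # Shelf life
])

# "How Much": technical importance = sum over requirements of
# (importance rating x relationship score)
technical_importance = importance @ relationship

for how, score in sorted(zip(hows, technical_importance), key=lambda t: -t[1]):
    print(f"{how}: {score}")
```
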
Define phase: Kano model

A second tool that can be used for the define phase of DMADV is the Kano model. The Kano model was developed in the 1980s by Noriaki Kano, who is today a professor at the Tokyo University of Science. The aim of the model is to provide insight into product attributes that are perceived to be important to customers. Traditional ideas around quality assumed that customer satisfaction was simply proportional to how functional the product or service was. In the Kano diagram (see Figure 3), this proportional relationship is represented by the line passing through the origin at 45 degrees to the horizontal. But in reality, customer requirements are not one-dimensional; for instance, “wow” elements can also make an impact on a product’s attractiveness. In pharma, of course, the final customer is the patient. A well-designed product is not only effective, but also helps patient compliance.

Figure 3. A typical Kano model.

A good Kano assessment can help define product expectations at the very start of development, allowing you to prioritize the characteristics that will be most important to patients. To sum up, a Kano model can (5):

  1. Set priorities for development by understanding the product characteristics that have the greatest influence on customer satisfaction via the QFD described above.
  2. Provide valuable help in trade-off situations in product development.
  3. Make it easier to custom-tailor solutions to specific problems.

Moreover, discovering and fulfilling attractive requirements helps create a wide range of opportunities for differentiation.
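In practice, the model is often applied through a paired questionnaire – “How would you feel if the product had this feature?” and “How would you feel if it did not?” – with the answers mapped to Kano categories via a standard evaluation table. The sketch below shows that mapping in Python; the example responses are purely hypothetical.

```python
# Answer options to the functional ("has the feature") and
# dysfunctional ("does not have the feature") questions
ANSWERS = ["like", "must-be", "neutral", "live with", "dislike"]

# Standard Kano evaluation table: (functional, dysfunctional) -> category
# A = attractive, O = one-dimensional, M = must-be,
# I = indifferent, R = reverse, Q = questionable
TABLE = [
    # dysfunctional: like  must-be  neutral  live-with  dislike
    ["Q", "A", "A", "A", "O"],   # functional: like
    ["R", "I", "I", "I", "M"],   # functional: must-be
    ["R", "I", "I", "I", "M"],   # functional: neutral
    ["R", "I", "I", "I", "M"],   # functional: live with
    ["R", "R", "R", "R", "Q"],   # functional: dislike
]

def kano_category(functional: str, dysfunctional: str) -> str:
    return TABLE[ANSWERS.index(functional)][ANSWERS.index(dysfunctional)]

# Hypothetical patient responses about a product feature
print(kano_category("like", "dislike"))     # "O": one-dimensional (more is better)
print(kano_category("like", "neutral"))     # "A": attractive ("wow" factor)
print(kano_category("must-be", "dislike"))  # "M": must-be (expected)
```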

At the beginning of this article, I mentioned that quality issues or limitations can be inadvertently built into a product. The define phase of the DMADV philosophy is crucial because it helps avoid this issue and ensures that the right requirements are built in from the start.

Other development-phase tools, such as the Pugh matrix and the Hoshin Kanri method, are also useful. A Pugh matrix can help scientists evaluate multiple ideas or design concepts against each other in relation to a baseline. Hoshin Kanri can be used, once the right strategy has been selected, to deploy it and to monitor resources. For further reading on these, I recommend reference 6.

Measure phase: design of experiments-based Gage R&R

While all analytical methods in the pharma industry meet the prescribed ICH Q2(R1) standards for validation of analytical procedures, problems can still arise (7). Here is one scenario that many of you may have encountered. An analytical method involves an elaborate derivatization during sample preparation; the derivatized sample has limited stability, even under stringent conditions; the sample then goes through a specific (also elaborate) preparation for analysis, and is finally analyzed by LC-MS. The method is validated, but the measurement system starts to throw up a few surprises. Ordinarily, an equally elaborate risk assessment is performed, which helps to narrow down the list of probable causes, and a long list of risk mitigation measures is then established for all of them. Often, the problems go away with such an approach, but not always.

A design of experiments (DoE)-based gage repeatability and reproducibility (R&R) study can tell you exactly how much of the method’s variability comes from each of these unit operations, rather than simply setting acceptance criteria for each one individually. Gage R&R is a statistical tool that measures variation in the measurement system. Using this tool means that when something goes wrong, an analyst can identify a potential culprit based on sound science and statistical risk, guided by how the variance is distributed amongst the unit operations. A multivariate approach is also an excellent way to check your method for robustness, and is now encouraged by the FDA guidance on analytical procedures and methods validation (8). Results from a gage R&R study also represent opportunities for continuous improvement over the lifecycle of the analytical method.
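For readers who want to try this, here is a minimal sketch of the variance-component calculation for a balanced, crossed gage R&R study (samples x analysts x replicates). The data, factor names and study layout are hypothetical, and a real study would normally be designed and analyzed in dedicated statistical software.

```python
import numpy as np

def gage_rr(data: np.ndarray) -> dict:
    """ANOVA-based variance components for a balanced, crossed gage R&R study.
    data has shape (parts, operators, replicates)."""
    p, o, r = data.shape
    grand = data.mean()
    part_means = data.mean(axis=(1, 2))
    oper_means = data.mean(axis=(0, 2))
    cell_means = data.mean(axis=2)

    # Mean squares from the two-way ANOVA with interaction
    ms_part = o * r * np.sum((part_means - grand) ** 2) / (p - 1)
    ms_oper = p * r * np.sum((oper_means - grand) ** 2) / (o - 1)
    ms_inter = r * np.sum(
        (cell_means - part_means[:, None] - oper_means[None, :] + grand) ** 2
    ) / ((p - 1) * (o - 1))
    ms_error = np.sum((data - cell_means[:, :, None]) ** 2) / (p * o * (r - 1))

    # Variance components (negative estimates are truncated at zero)
    repeatability = ms_error
    interaction = max((ms_inter - ms_error) / r, 0.0)
    operator = max((ms_oper - ms_inter) / (p * r), 0.0)
    part_to_part = max((ms_part - ms_inter) / (o * r), 0.0)

    grr = repeatability + operator + interaction
    total = grr + part_to_part
    return {
        "repeatability": repeatability,
        "reproducibility": operator + interaction,
        "total gage R&R": grr,
        "part-to-part": part_to_part,
        "% contribution of gage R&R": 100 * grr / total,
    }

# Hypothetical study: 5 samples x 3 analysts x 2 replicate preparations
rng = np.random.default_rng(1)
data = (rng.normal(100, 2.0, size=(5, 1, 1))   # sample-to-sample variation
        + rng.normal(0, 0.5, size=(1, 3, 1))   # analyst-to-analyst variation
        + rng.normal(0, 0.3, size=(5, 3, 2)))  # repeatability
for name, value in gage_rr(data).items():
    print(f"{name}: {value:.3f}")
```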

Figure 4. How not to make a Fishbone diagram.

Figure 5. A better illustration of a Fishbone diagram.

Analysis phase: fishbone diagram

Fishbone diagrams are commonly used in QbD-based development at multiple stages to identify potential causes of a problem. A Google search for “Fishbone Diagram Pharmaceuticals” yields many results similar to Figure 4, which is a fishbone diagram to identify what is critical to achieving consistently correct assay results. An effective fishbone diagram should go through the six (or seven) Ms: man, machine, materials, measurements, methods, milieu and, for some applications, management. Unfortunately, I find that it is common practice to simply put the usual suspects on a fishbone-shaped diagram, which is the case in Figure 4. Figure 5 shows a better fishbone diagram for the same problem. Analyzing all of the six Ms and their effect on the expected quality attribute can lead to a better understanding of all possible causes of variations for a process under development, as well as all the possible reasons why a quality attribute misbehaves during manufacture.

Design phase: Monte Carlo Simulation

Allow me to present a typical scenario in pharma manufacturing. Manufacturing requests specifications on process parameters for a new product from the development team. The development team doesn’t really know what these limits should be. Realistic specifications are everyone’s desire, of course, but with little knowledge of how to set them, development usually opts to set specs so tight that they are guaranteed to work. Unfortunately, this makes life more difficult for manufacturing – and also, in turn, for development. Each time a particular lot of product does not meet spec, manufacturing must ask development to help with investigations. The specifications may then be modified, and eventually they are widened to realistic limits. Specifications should be robust, but also realistic – and it’s better to set them at the very start rather than relying on a back-and-forth approach.

The Monte Carlo method is a probabilistic technique based on generating a large number of random samples to simulate variability in a complex system. The objective is to simulate and test as early as possible to anticipate quality problems, to avoid costly design changes that might be required at a later phase and, more generally, to make life a lot easier on the shop floor. What is required?

  1. a good Transfer Function (Y=f(X)) from design of experiments,
  2. some knowledge about the distribution of data of the variables (Xs),
  3. a little bit of adventure.

The result? Figure 6 – and an understanding of exactly how changes in the Xs affect the Ys (9). This knowledge will help you to confidently set specifications for Y given the operating conditions or, the other way around, to find settings for the parameters (Xs) that achieve Y with a prescribed performance.
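As a minimal illustration, the sketch below simulates a hypothetical transfer function obtained from a DoE; the parameters, distributions, coefficients and specification limit are all invented for the example, and dedicated tools (such as Minitab Devize, shown in Figure 6) package the same idea more conveniently.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000  # number of simulated batches

# Hypothetical process parameters (Xs) with distributions estimated from development data
granulation_time = rng.normal(loc=12.0, scale=0.8, size=n)    # minutes
binder_level = rng.normal(loc=3.0, scale=0.15, size=n)        # percent w/w
compression_force = rng.triangular(14.0, 16.0, 18.0, size=n)  # kN

# Hypothetical transfer function Y = f(X) from a design of experiments, plus residual noise
dissolution = (65.0 + 1.2 * granulation_time + 4.5 * binder_level
               - 0.8 * compression_force + rng.normal(0, 1.0, size=n))

lsl = 75.0  # hypothetical lower specification limit (% dissolved at 30 minutes)
p_fail = np.mean(dissolution < lsl)
print(f"Predicted mean: {dissolution.mean():.1f}%  |  predicted failure rate: {p_fail:.2%}")
```

From the same simulation you can tighten or widen the distributions of individual Xs and immediately see the effect on the predicted failure rate, which is exactly the kind of evidence that helps set realistic specifications up front.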

Figure 6. An illustration of a Monte Carlo Simulation using Minitab Devize.

Verify phase: process performance and capability indices

The final stage of DMADV is to verify or validate that the design will meet the intended needs repeatedly. When it comes to verification, the terms Process Capability Index (CPK) and Process Performance Index (PPK) are common. I have seen and used statistics a lot (and not just in the pharmaceutical industry), but when it comes to pharma, the amount of debate over the terms CPK and PPK is mind-boggling!

It’s easy to get caught up in the debates about what these values mean, so here are a few simple points about what CPK and PPK represent. 

  1. CPK represents the potential process capability (which is to say, how well a given process could perform in the ideal situation of no special causes of variability).
  2. PPK addresses how the process has performed without demonstration of process stability. 
  3. In general, PPK is less than CPK.
  4. If there is a significant CPK–PPK difference, it implies that the process is not stable; thus, you will need to identify/eliminate special causes to reduce variability. 
  5. CPK can be used to forecast future batch failure rate and PPK cannot (10).

Using PPK as a metric of performance at the development stage is not common, but it can be extremely useful in terms of setting a benchmark for the product’s future performance. For example, if PPK is assessed at the laboratory or bench scale, then assessing the feasibility of technology transfer and manufacturing performance becomes easier. However, I would like to point out that PPK requires corrections for smaller sample sizes, which can either be done statistically or by using a higher number of samples (with caution) per batch.
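A minimal sketch of the CPK–PPK distinction in Python follows; the subgrouped data and specification limits are hypothetical. CPK is computed here from the within-batch (pooled) standard deviation, which is one common convention, while PPK uses the overall standard deviation across all results.

```python
import numpy as np

def cpk_ppk(subgroups: np.ndarray, lsl: float, usl: float) -> tuple:
    """subgroups has shape (n_batches, samples_per_batch)."""
    mean = subgroups.mean()

    # Within-subgroup (potential) variability -> CPK
    # (pooled standard deviation; assumes equal subgroup sizes)
    sigma_within = np.sqrt(subgroups.var(axis=1, ddof=1).mean())
    cpk = min((usl - mean) / (3 * sigma_within), (mean - lsl) / (3 * sigma_within))

    # Overall (long-term) variability, including batch-to-batch shifts -> PPK
    sigma_overall = subgroups.std(ddof=1)
    ppk = min((usl - mean) / (3 * sigma_overall), (mean - lsl) / (3 * sigma_overall))
    return cpk, ppk

# Hypothetical assay results: 20 batches x 5 samples, with a small batch-to-batch shift
rng = np.random.default_rng(7)
data = rng.normal(100.0, 1.0, size=(20, 5)) + rng.normal(0, 0.8, size=(20, 1))
cpk, ppk = cpk_ppk(data, lsl=95.0, usl=105.0)
print(f"CPK = {cpk:.2f}, PPK = {ppk:.2f}")  # the batch-to-batch shift pulls PPK below CPK
```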

Strive for the end

DFSS has been around for a long time now, but its use alongside QbD is not yet widespread. This may be attributed to DMAIC’s popularity over DFSS, as well as to the fact that any ‘beginning’ is always difficult. With no regulatory guidance or publications suggesting the use of these tools, I am sure that several companies would think twice before including them in their dossiers, even if they were used in product development. This will change gradually as their relevance for business becomes clearer – helped, I hope, by articles like this one.

So please go ahead; try out these DFSS tools and share your experiences. Together, we can drive QbD to its ‘end’.

My next article will focus on the right ways to use statistics in the product’s lifecycle.

Jasmine is Principal Scientist - Quality by Design at Dr. Reddy’s Laboratories SA. The views expressed are personal and do not necessarily reflect those of Jasmine’s employer or any other organization with which she is affiliated.


  1. G Gianacakes, “Breaking the 5-Sigma Barrier with Systems Engineering”, INCOSE International Symposium, 13, 313–323 (2003).
  2. M Adams, “Design for Six Sigma: A Potent Supplement to QbD”, Pharma Manufacturing (2010). Available at: bit.ly/2b2qCeg. Accessed: August 12, 2016.
  3. RD Snee and RW Hoerl, “Leading Six Sigma: A Step-by-Step Guide Based on Experience with GE and Other Six Sigma Companies”, Financial Times/Prentice Hall, 1st edition (2002).
  4. JM Juran, “Juran on Quality by Design: Planning, Setting and Reaching Quality Goals”, Simon and Schuster, Revised edition (2008).
  5. XX Shen, KC Tan and M Xie, “An integrated approach to innovative product development using Kano’s model and QFD”, European Journal of Innovation Management, 3, 91–99 (2000).
  6. B Emiliani, “Better Thinking, Better Results: Using the Power of Lean as a Total Business Solution”, Center for Lean Business Management, LLC, 2nd edition (2007).
  7. ICH, “Q2(R1): Validation of Analytical Procedures: Text and Methodology” (1994). Available at: bit.ly/29QmO31. Accessed: August 12, 2016.
  8. FDA, “Analytical Procedures and Methods Validation for Drugs & Biologics” (2015). Available at: bit.ly/2a1k7Ku. Accessed: August 12, 2016.
  9. B Scibilia, “How Could You Benefit from Monte Carlo Simulation to Design New Product?”, The Minitab Blog (2015). Available at: bit.ly/2agMxTh. Accessed: August 12, 2016.
  10. LX Yu, “Using Process Capability to Ensure Pharmaceutical Product Quality”. Presented at the FDA/PQRI Conference on Evolving Product Quality; September 16–17, 2014; Bethesda, MD, USA.
