The Bioprocess Model Maker

Industry is waking up to the fact that modeling can reduce time and costs in bioprocess development, while also helping to meet Quality by Design (QbD) initiatives. In recent years, data-driven models in particular have become a popular choice, but developing such models is not straightforward. It is crucial to remember that a model’s success hinges on the data used to create it. Jarka Glassey, Professor of Chemical Engineering Education at Newcastle University, UK, tells us more.

Why is modeling so important?

A significant problem in the biopharma industry is that new drugs are expensive. I understand why drugs must be sold at such high prices, but at the same time feel there must be something we can do to change this. Bioprocess modeling, optimization, monitoring and control have been a focus of research throughout my career. It’s a great field to work in because it can potentially accelerate the development of new medicines, as well as making them more effective and less costly – and therefore more accessible to society at large. Bioprocess modeling is all about using data obtained from the process to understand and improve that process – and it’s significantly faster and more efficient than relying solely on traditional (and somewhat limited) analysis of experimental data.

Modeling is not something new. Even back in the days of my PhD, artificial neural networks were very fashionable and there was interest in using simulations to predict process behavior under various conditions to help understand and optimize the process early on. By the time you start producing at large scale, the aim is to have consistent, high productivity – an obvious advantage. Over the years, in collaboration with a number of pharma companies, we have shown that a better understanding of the bioprocess also allows biologists to be much more effective at developing new biopharmaceuticals, as well as better bioprocessing methods to produce them.

QbD and Process Analytical Technology (PAT) initiatives have really whetted pharma’s appetite for modeling and statistical approaches, which are now more frequently used in industry than when I was doing my PhD. Principal component analysis, for example, is very commonplace – almost everyone in the industry has heard of it, even if they don’t know exactly what it is. But now that the industry is more aware of the power of modeling, there is the potential to take it further.
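
For readers who have heard the term but never seen it in action, below is a minimal sketch of principal component analysis using scikit-learn on synthetic sensor data; the channel names and numbers are purely illustrative, not from any real process.

```python
# Minimal sketch of principal component analysis (PCA) on multichannel
# bioprocess sensor data. The data are synthetic and the channel names
# are illustrative; this is not a production monitoring pipeline.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Pretend we logged 6 correlated channels (e.g. pH, dO2, temperature,
# feed rate, capacitance, off-gas CO2) at 200 time points, driven by
# two hidden process "drivers".
latent = rng.normal(size=(200, 2))
loadings = rng.normal(size=(2, 6))
X = latent @ loadings + 0.1 * rng.normal(size=(200, 6))

# Standardize each channel, then project onto two principal components.
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=2).fit(X_std)
scores = pca.transform(X_std)           # 200 x 2 summary of the run

print("variance explained:", pca.explained_variance_ratio_)
# Most of the variation in 6 channels collapses onto 2 components -
# that compression is what makes PCA useful for process overviews.
```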

How are bioprocesses currently monitored?

I’ve co-written a number of review articles that examine the current state of modeling (1, 2). The last few decades have seen incredible advances in analytical techniques, coupled with miniaturization. In particular, there has been a huge drive to develop new, noninvasive sensors, such as spectroscopic sensors or other fingerprinting techniques. Many of these new sensors enable multiple reagents or multiple intermediates to be measured in one go. Rather than having dedicated sensors for every single species, it is now possible to make some estimations about the state of the process as a whole, leading to greater process understanding. For example, we’ve published an article that links processing conditions upstream during fermentation to the glycosylation pattern of monoclonal antibodies (3).
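
As a rough illustration of how one spectroscopic measurement can estimate several species at once, here is a hedged sketch of a partial least squares (PLS) calibration on synthetic spectra; the analytes and all the data are invented for the example – a real calibration would pair measured spectra with reference assays.

```python
# Sketch of a multivariate calibration: estimating several analyte
# concentrations at once from a single spectrum using partial least
# squares (PLS). Spectra and concentrations are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

n_samples, n_wavelengths, n_analytes = 120, 300, 3
# Hypothetical analytes, e.g. glucose, lactate, product titer.
C = rng.uniform(0, 10, size=(n_samples, n_analytes))
pure_spectra = np.abs(rng.normal(size=(n_analytes, n_wavelengths)))
X = C @ pure_spectra + 0.05 * rng.normal(size=(n_samples, n_wavelengths))

X_tr, X_te, C_tr, C_te = train_test_split(X, C, random_state=0)
pls = PLSRegression(n_components=5).fit(X_tr, C_tr)
print("R^2 on held-out spectra:", pls.score(X_te, C_te))
# One model, one spectrum in - three concentration estimates out.
```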

Current methods have their limitations, however. Biopharma manufacture is complex and even a small change can have a significant impact on the end product. In an ideal world, we would be able to monitor changes in real time and have the ability to adjust the process in response. Although some of today’s sensing technology delivers rapid data, the analysis of that data can be an involved process, meaning that actionable information comes too late.

If we could measure not just the quantity, but also the quality of a product at each step of the bioprocess, we could begin to look at how to modify process conditions to achieve desired quality attributes, which is exactly what the FDA wants for QbD. To do this, we need to be able to measure in-line – and at low concentrations relative to all the other components that may be present in the biopharma broth. It is a significant challenge – even more so when we consider doing it cost effectively.

From my point of view, we either need remote sensing technology – and to this end we are actually working on disposable, printed sensors that can be used wherever needed (4) – or physical sensors that give immediate and reliable answers about product quality. We are not at this stage yet, but the rate of progress in the field of modeling and monitoring is accelerating. In five years’ time, things may be very different.

What different types of models are available?

Today, different sensors reveal different information about a process and provide many data points, but converting these into actionable knowledge and understanding is always going to be a challenge. Traditional modeling approaches tend to use mathematical equations based on fundamental principles to describe the process – and the model is gradually improved through testing, which takes time and work. An alternative is a data-driven model, which is built from existing process data. My work combines both approaches in a hybrid model – using fundamental principles, as well as information collected from various sensors. A hybrid model has the ability to exploit a broader knowledge base (5).
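
As a sketch of what such a hybrid structure can look like – and only one of many possible designs – the example below couples an assumed Monod growth equation (the mechanistic part) with a stand-in for a fitted, data-driven correction term; all kinetic parameters and the correction function are illustrative, not taken from any published model.

```python
# Sketch of a hybrid model: a mechanistic mass balance whose uncertain
# kinetic term is augmented by a data-driven component. The Monod
# parameters and the "learned" correction are illustrative only.
import numpy as np

def monod_mu(S, mu_max=0.4, Ks=0.5):
    """First-principles part: assumed Monod growth kinetics (1/h)."""
    return mu_max * S / (Ks + S)

def learned_correction(S, X):
    """Stand-in for a fitted data-driven term (e.g. a small neural
    network trained on historical batches); hypothetical here."""
    return 0.05 * np.tanh(X / 10.0)

def simulate(S0=20.0, X0=0.1, Yxs=0.5, dt=0.1, t_end=24.0):
    """Euler integration of biomass (X) and substrate (S) balances."""
    S, X = S0, X0
    for _ in np.arange(0.0, t_end, dt):
        mu = monod_mu(S) + learned_correction(S, X)  # hybrid rate
        dX = mu * X
        X, S = X + dX * dt, max(S - dX * dt / Yxs, 0.0)
    return X, S

print("final biomass, substrate:", simulate())
```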

What are the main challenges of working with bioprocessing models?

I would like to draw attention to one particular issue: many companies are jumping on the data-driven model bandwagon without fully understanding how the models work. It’s true that modeling approaches are becoming easier to use – some models operate with push-button ease, with the computer doing all the work. And though we want models to be accessible, there is a danger that users are unable to question the validity of the model – especially if the results correlate with what the user was expecting. When lecturing my students on data-based modeling techniques, I often give examples of where things go wrong. I’ve seen many instances where models have been developed with limited datasets – and where the queries being asked force the model to extrapolate beyond the range it was designed for.
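
The extrapolation trap is easy to demonstrate on toy data: a flexible model fitted on a narrow range looks excellent inside that range and can fail badly outside it. Everything in the sketch below is synthetic.

```python
# Toy demonstration of the extrapolation risk: a flexible polynomial
# fits training data well inside its range and diverges outside it.
# All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
x_train = np.linspace(0.0, 1.0, 30)            # narrow training range
y_train = np.sin(2 * np.pi * x_train) + 0.05 * rng.normal(size=30)

coeffs = np.polyfit(x_train, y_train, deg=9)   # deliberately flexible

for x in (0.5, 1.5):                           # inside vs. outside range
    pred, truth = np.polyval(coeffs, x), np.sin(2 * np.pi * x)
    print(f"x={x}: predicted {pred:+.2f}, true {truth:+.2f}")
# At x = 0.5 the fit is near-perfect; at x = 1.5 the prediction can be
# wildly wrong - the model was never designed for that query.
```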

Biologists who carry out experiments and understand the biology behind them can often tell by looking at a sensor reading if something is potentially wrong. Increasingly in bioprocessing today, sensors employ spectroscopy or other techniques that produce data that are very difficult for the human brain to interpret; such data are therefore fed into a model that makes sense of them – but we can’t expect everyone to be an expert in data modeling techniques. Take my example of principal component analysis; it’s a very common term in the industry, but even if you’ve seen its power in a particular context, you may not know exactly what it is or how it works. And why should you? If you are a specialist in bioprocessing, with the task of improving a particular process, modeling is just another tool – and learning everything there is to know about all the tools we use is unrealistic for most.

Getting the balance right is a big challenge for the modeling community, but I would like to see models becoming more robust and easier to use. The more people automatically turn to modeling, the faster we can build additional interest in the field and advance it.

On the other side of the fence, there is a danger that experts in model development may not know enough about bioprocessing to choose the best data to feed into the model – perhaps they will select raw measured variables rather than the derived variables a biologist will often use naturally, for example. Such a decision drastically limits the potential of the model – and I’ve seen many models written off without being given a proper chance. Data-based modeling techniques are only as good as the data used to develop the model. And one bad experience with a model can put a company off using models ever again...
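
To make the distinction concrete, here is a small, hypothetical example of derived variables: specific rates computed from raw biomass and product time courses. The numbers are invented for illustration.

```python
# Hypothetical example of "derived variables": specific growth rate
# mu = (1/X) dX/dt and specific productivity qP = (1/X) dP/dt, computed
# from raw biomass and product time courses. The numbers are invented.
import numpy as np

t = np.array([0.0, 4.0, 8.0, 12.0, 16.0])   # time, h
X = np.array([0.5, 1.1, 2.3, 4.4, 7.9])     # biomass, g/L
P = np.array([0.0, 0.1, 0.4, 1.0, 2.1])     # product, g/L

mu = np.gradient(X, t) / X                  # specific growth rate, 1/h
qP = np.gradient(P, t) / X                  # specific productivity, g/g/h

for ti, m, q in zip(t, mu, qP):
    print(f"t={ti:4.1f} h   mu={m:.3f} 1/h   qP={q:.4f} g/(g*h)")
# Feeding mu and qP to a model captures per-cell behavior, which is
# often more informative than the raw X and P trajectories.
```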

How can the industry capitalize on the potential of bioprocessing models?

The answer is obvious: we need modeling experts and bioprocessing experts to talk, which is why my PhD students in modeling always have a joint industrial supervisor. My own preference is always to work with industry because it is rewarding to see the impact – and you tend to see verification of your work very quickly in real-life process conditions.

I have always been fortunate to work with people in industry who are very forward-looking and have seen the benefits and value of modeling. But this isn’t necessarily the industry standard – and that needs to change. At Newcastle University, we’ve been making sure that our graduates, whether engineers or biologists, are aware of the power of modeling. It’s an important first step, because they will take their knowledge wherever they go.

What is the best way to get started with bioprocess modeling?

Many large biopharma companies already use models – and have the resources to create specific units and departments to invest in the approach. In some instances, they may also be fortunate enough to have well-established academic collaborations. Smaller companies (and especially start-ups), on the other hand, may not even have considered modeling as a valuable tool for the optimization of bioprocesses – and that means they are missing an opportunity to gain a competitive edge. Perhaps even more importantly, drawing on models in the early days of a new company can influence the entire approach to development.

How do you get started? Well, you could turn to a specialist company that performs multivariate data analysis or offers partial least squares models based on your data – you simply pay for the results.

You could also invest in a proprietary model, but fledgling companies tend not to have a solid understanding of their own process, let alone enough knowledge to explain those processes for the purpose of model development.

Working with academia can be another effective option – but expect much longer timelines; a PhD usually takes three years, whereas a business may only have six months to make a crucial decision on whether to go ahead with a project or not. That said, taking the academic route does develop solid process understanding along the way. Importantly, academics have the freedom to use whatever tools are most appropriate; companies that already have a specific approach to modeling tend to shoehorn everything into that approach, whereas academia tends to look more broadly at what will best suit each individual project.

Overall, I think it’s really important for industry and academia to work together more. Academia can come up with fantastic ideas, but they are not always feasible in the real world because of cost. Academia can learn realism by collaborating with industry, whereas industry benefits from academia’s freedom of exploration, which often results in breakthrough ideas as opposed to incremental improvements.

  1. J Glassey, “Multivariate data analysis for advancing the interpretation of bioprocess measurement and monitoring data”, Adv. Biochem. Eng. Biotechnol., 132, 167–191 (2013). PMID: 23292129.
  2. MJ Carrondo et al., “How can measurement, monitoring, modeling and control advance cell culture in industrial biotechnology?”, Biotechnol. J., 7, 1522–1529 (2012). PMID: 22949408.
  3. A Green, J Glassey, “Multivariate analysis of the effect of operating conditions on hybridoma cell metabolism and glycosylation of produced antibody”, J. Chem. Technol. Biotechnol., 90, 303–313 (2015).
  4. BioRapid (2016). Available at: www.bio-rapid.eu. Last accessed January 16, 2017.
  5. M von Stosch et al., “Hybrid modeling for quality by design and PAT – benefits and challenges of applications in biopharmaceutical industry”, Biotechnol. J., 9, 719–726 (2014). PMID: 24806479.
About the Author
Jarka Glassey

Jarka Glassey is Professor of Chemical Engineering Education at Newcastle University, UK.
