We have a solution for everything – and the experience to match. Across our current seven areas of expertise, we put our comprehensive know-how at your disposal.

All Solutions

Software solutions

With our Werum PAS-X MES - on-premises or cloud-based - and our software solutions for analysis, track & trace, networked factories, and intelligent packaging, we are the world's leading supplier and partner to the pharmaceutical and biotech industries.

Overview Software solutions

Handling systems

We are specialists in complete handling systems for pharmaceutical and medical products. Our solutions lead the field in the contactless, safe transport of delicate products such as glass syringes.

Overview Handling systems

Inspection machines


As the world's leading inspection expert, we develop solutions for the pharmaceutical and biotech industries. Our range of products extends from high-performance machines and semi-automatic machines to laboratory units and inspection applications for in-process control.

Overview Inspection

Machine finder

Packaging machines

We are a leading supplier of packaging machines for liquid and solid pharmaceutical and medical products. With our blister, sachet and stick packaging machines we offer solutions for primary packaging. Our side and topload cartoners set standards worldwide for secondary packaging.

Overview Packaging machines

K.Pak Topload Case Packer

Introducing our latest solution from Körber: the K.Pak Topload Case Packer! Created specifically for the pharmaceutical industry, the K.Pak solution provides operator-friendly machines to complete any production line. Our solution focuses on innovative technology, high-quality design, and expert handling and packaging of your product. It’s time to start connecting the dots with Körber!

Packaging solutions

As long-standing specialists, we develop innovative, high-quality secondary pharmaceutical packaging made of cardboard. We offer you solutions for counterfeit protection, standard folding boxes, and much more.

Overview Packaging solutions

Consulting

Our experts advise you during the analysis of your requirements, identify optimization potential, and support you during the implementation of projects in all areas of the pharmaceutical, biotech, and medical device industries.

Overview Consulting

Ken Fountain
Vice President Scientific Applications, TetraScience

Prof. Dr. Christoph Herwig
Senior Scientific Advisor, Körber Business Area Pharma


A new paradigm in bioprocessing intelligence to accelerate time to market

How can biopharmaceutical companies optimize manufacturing and accelerate time to market? The key lies in converting data into actionable insights using a holistic data strategy and AI-driven analytics, all while ensuring regulatory compliance.

In this Q&A, industry experts Prof. Dr. Christoph Herwig from Körber and Ken Fountain from TetraScience will dive into:

  • Challenges and solutions in transforming scientific data across product life cycle stages into analytics- and AI-friendly formats.
  • How this data fuels advanced analysis, enabling real-time process insights, prediction, verification, and correlation between critical process parameters (CPPs) and critical quality attributes (CQAs).
  • A compelling case study from a global biopharma company leveraging cloud-based data for real-time insights and faster time to market.

Christoph and Ken, could you tell us about your background and expertise in the biopharmaceutical industry?

Christoph: Of course! My name is Christoph Herwig, and I am a bioprocess engineer with a PhD in bioprocess identification. I've had the privilege of being a full professor of biochemical engineering at the Vienna University of Technology, focusing on data science methods for efficient bioprocess development. My industry experience, including working with Lonza, involved designing and commissioning major biopharmaceutical facilities. I'm now part of Körber, specifically focusing on "PAS-X Savvy," our groundbreaking data science software for the biopharmaceutical lifecycle.

Ken: I'm Ken Fountain, the Vice President of Scientific Applications at TetraScience. My career journey began as a chemist, developing analytical methods for biologics characterization. Over the years, I've worked in different roles at Waters Corporation and Genzyme Corporation, gaining extensive experience in analytical and biopharmaceutical technology. Currently, I lead a team focused on helping pharmaceutical and biotechnology companies replatform and reengineer their scientific data to the cloud for advanced analytics and AI/ML.

What are the current challenges in bioprocess development and manufacturing, specifically related to data integrity initiatives within Pharma 4.0?

Christoph: This question touches on multiple dimensions.

  • Time to market is slow; the full life cycle of a new product – from discovery through development to process validation and licensing for manufacturing – still takes 6-10 years and is accompanied by very high costs of up to $2 billion per successful commercialization.
  • Once in manufacturing, we still face an old process design with non-optimized productivity and high batch-to-batch variability. We know that process intensification, such as integrated continuous biomanufacturing, presents a business opportunity. However, to make these processes robust, we need a solid scientific CMC control strategy following ICH Q12, which is also applicable in real time.
  • Digital transformation is perceived as a central enabler to increase productivity, manufacturing flexibility, and product quality. This is summarized in the Pharma 4.0 initiatives, as promoted by the ISPE. They define data integrity and data maturity as the main enablers for successful digital manufacturing and the product life cycle. However, currently, we are encountering issues with data integrity, leading to the FDA and the EMA issuing many warning letters.

Ken, why do you think these pain points exist today?

Ken: The root cause of these challenges lies in fragmented data – data is stored in isolated silos, making it unsuitable for effective analysis and utilization with AI and machine learning. Why is that? 

  • One primary issue is the manual transfer of data between different sources and targets – for example, from experimental setups (ELN) to analytical software (CDS) or bioreactor systems. This manual transfer is not only time-consuming but also prone to errors. 
  • There’s a discrepancy in data types – continuous data from bioreactors needs to be matched with discrete analytical measurements, and this matching is currently done manually, leading to inefficiencies and non-reusability. 
  • Another problem is that uncontextualized data makes it challenging to search efficiently due to inconsistencies in data formats, naming conventions, metadata, and more. 
  • Data sharing and collaboration suffer due to these isolated data silos. Internally and externally, sharing data becomes cumbersome, resulting in duplicated experiments and wasted time. 
  • The visualization and analysis of data also present hurdles. Connecting time-series data from bioreactors to associated process analytical data often involves using USB drives and multiple Excel sheets. This disjointed process hinders a comprehensive visualization of these data sets together, impeding effective analysis.
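
The matching of continuous bioreactor data with discrete analytical measurements that Ken describes can be automated. Here is a minimal pandas sketch; the column names, timestamps, and 30-minute tolerance are illustrative assumptions, not any specific vendor's schema:

```python
import pandas as pd

# Continuous bioreactor signal (e.g. dissolved-oxygen readings every 30 min)
online = pd.DataFrame({
    "timestamp": pd.date_range("2024-01-01 08:00", periods=8, freq="30min"),
    "do_percent": [98.0, 96.5, 95.2, 93.8, 92.1, 90.7, 89.5, 88.9],
})

# Discrete offline analytics (e.g. HPLC titer samples, taken irregularly)
offline = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01 08:45", "2024-01-01 11:10"]),
    "titer_g_l": [0.8, 1.4],
})

# Match each offline sample to the nearest preceding online reading,
# within a 30-minute tolerance, instead of copy-pasting in Excel.
matched = pd.merge_asof(
    offline.sort_values("timestamp"),
    online.sort_values("timestamp"),
    on="timestamp",
    direction="backward",
    tolerance=pd.Timedelta("30min"),
)
print(matched)
```

The tolerance makes the join auditable: samples with no sufficiently close online reading come back as missing values rather than being silently matched.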

Christoph, what are possible solutions to this problem and what are the requirements to be successful?

Christoph: We all know that data is the new gold, and the biopharmaceutical industry must recognize its significance and adapt accordingly. To address the challenges effectively, we need a holistic approach towards utilizing data. This includes gathering data from every stage of the process chain, at least from upstream to downstream, and ideally also encompassing formulation, fill-finish, and ultimately the patient. In addition to the dimension of the process chain, we need to import data from the entire product life cycle as well.

However, just amassing data in a repository like a data lake or historian is not sufficient. We must contextualize the imported data – linking various levels of information to facilitate holistic data analysis. 

This involves:

  • Adding metadata: Incorporating details like shift personnel, lot numbers, and even the moon phase.
  • Identifying process phases and events: Understanding where the data fits in the process phases and events.
  • Converting data to information: Transforming raw data into entities that are independent of scale and initial conditions.
  • Handling data variability: Developing routines to manage diverse data types, dimensions, and frequencies, such as unfolding time series or extracting features from spectra or images.

This contextualization can be achieved by subject matter experts (SMEs) or by automated functions integrated into data import, analysis routines, or ontologies.
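
As a toy illustration of the first two contextualization steps – attaching metadata and identifying process phases – here is a minimal pandas sketch; the batch ID, site, and phase boundaries are invented for illustration:

```python
import pandas as pd

# Raw sensor data from one batch (illustrative values)
data = pd.DataFrame({
    "elapsed_h": [0, 4, 8, 12, 16, 20],
    "temp_c": [37.0, 37.1, 36.9, 33.0, 33.1, 33.0],
})

# Step 1: attach metadata as columns so every row stays linked
# to its batch, site, and shift.
data = data.assign(batch_id="B-0042", site="Vienna", shift="night")

# Step 2: identify process phases from batch events (here, a
# temperature shift at 12 h separates growth from production).
phase_bounds = [0, 12, 24]  # hours; boundaries taken from the event log
data["phase"] = pd.cut(data["elapsed_h"], bins=phase_bounds,
                       labels=["growth", "production"], right=False)
print(data)
```

Once phases are labeled, per-phase summaries (e.g. mean temperature during production) become one-line `groupby` operations instead of manual slicing.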

Importantly, there should be no interface between data management and data analysis. Otherwise, there is a risk of breaching data integrity. Therefore, as a product-agnostic technical requirement, the system must allow seamless data import, management, contextualization, and data analysis.

If such a holistic approach to utilizing data existed, what would a solution like this be able to do?

Ken: The solution could effectively address the outlined challenges, unlocking new insights and relationships within diverse datasets. It would enable AI/ML utilization to optimize processes, offering intelligence on sources of variability. Moreover, it could facilitate the creation of high-fidelity digital twins essential for bioprocess scale-up, tech transfer, and ongoing validation. The cloud-based data accessibility would ensure seamless sharing and scalability across the globe.

Christoph: Think about current solutions for multivariate data analysis: Data is copied from Excel into an isolated solution for principal component analysis. An outlier is identified, and the analyst goes back to the raw data, extracts data in Excel, and imports it again. After 6 months, no one will remember which data were used for a plot, which might have been included in an IND or BLA filing, or used to address an OOS. With a seamless solution, we avoid any doubts about data integrity, whether we are in process development or process characterization, which are not necessarily subject to Part 11 compliance, or in the regulated environment of process validation or manufacturing itself. It represents a consistent interpretation of the Pharma 4.0 guidelines.
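
The contrast Christoph draws can be made concrete. In the following scikit-learn sketch, data lineage (the batch index) travels with the principal-component scores, so it remains traceable which batches fed the plot; all values are synthetic:

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Batch-level process data; the index preserves lineage so any outlier
# can be traced back to its source batch (values are synthetic).
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(10, 4)),
                 index=[f"batch_{i:02d}" for i in range(10)],
                 columns=["temp", "ph", "do", "feed_rate"])
X.loc["batch_07"] += 5  # inject an outlier batch

scores = pd.DataFrame(PCA(n_components=2).fit_transform(X),
                      index=X.index, columns=["PC1", "PC2"])

# The outlier is identified *with* its batch ID attached -- no need to
# dig through Excel sheets to figure out which rows fed the plot.
dist = np.sqrt((scores ** 2).sum(axis=1))
print(dist.idxmax())  # -> batch_07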

Of course, in addition to the improved data quality by avoiding manual errors, we also achieve a significant acceleration in data handling. We free up SMEs from the manual task of copying and pasting data, allowing them to focus on what they are paid for: data analysis. This applies to all activities throughout the product life cycle.

On the data analysis side, we have the potential to use all data for holistic data analysis, such as:

  • analyzing interactions between unit operations rather than developing single unit operations in isolation
  • linking data across different stages of the product life cycle, leading to well-informed iterations in process characterization studies and true Continued Process Verification (CPV), thereby enabling continuous improvement – the true purpose of CPV.

In essence, we develop a comprehensive control strategy that remains up-to-date, as life cycling along ICH Q12 is enabled.

Christoph and Ken, can you explain the joint Körber/TetraScience solution to these issues?

Ken: TetraScience handles data replatforming to the cloud, and then reengineering that data to ensure it is compliant, liquid, and accessible for analytics and AI/ML. This enables comparing variables across various batches, units, and sites, as well as merging discrete and continuous data, something that is either not easily done today or not done at all.

Christoph: With PAS-X Savvy, Körber offers a comprehensive SaaS data intelligence suite, fueled by data from the TetraScience platform. This involves data contextualization and advanced analysis, including process prediction, continued process verification, correlation analysis between critical process parameters (CPPs) and critical quality attributes (CQAs), and real-time detection of process deviations. Additionally, it enables real-time control for optimizing process performance during manufacturing. The platform facilitates teamwork and sharing of results through automatic report generation for GMP or continued process verification (CPV) purposes.
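
As a toy illustration of the CPP/CQA correlation analysis mentioned above (not the actual PAS-X Savvy implementation), a simple pandas sketch with invented batch data:

```python
import pandas as pd

# Illustrative batch records: two critical process parameters (CPPs)
# and one critical quality attribute (CQA); values are invented.
batches = pd.DataFrame({
    "feed_rate":      [1.0, 1.2, 1.1, 1.4, 1.3, 1.5],  # CPP
    "temp_shift_h":   [10, 12, 11, 14, 13, 15],         # CPP
    "aggregates_pct": [1.1, 1.4, 1.2, 1.9, 1.6, 2.1],   # CQA
})

# Rank CPPs by strength of correlation with the CQA.
corr = batches.corr()["aggregates_pct"].drop("aggregates_pct")
print(corr.sort_values(ascending=False))
```

Correlation alone does not establish causation; in practice such a screen is the entry point to designed experiments and multivariate models, not their replacement.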

Brochure: Werum PAS-X Savvy

Werum PAS-X Savvy revolutionizes the management, analysis, and reporting of your pharma and biotech process data.


What are the benefits of the joint solution from Körber and TetraScience?

Christoph: The joint solution promises a comprehensive utilization of data aligned with Pharma 4.0's data maturity model. It goes beyond just visualizing what's happening; it provides insights into why it's happening, ensuring a holistic understanding of critical aspects like CMC and process transparency. Furthermore, the solution offers predictive capabilities, foreseeing future events. It extends to self-optimizing capabilities while upholding data integrity at every stage of data maturity, even from early process development.

You recently implemented this solution as a POC (Proof of Concept) with a top 25 global biopharma company. Can you tell us more about that?

Ken: Certainly. In a recent customer project in the field of biopharmaceutical process development, we encountered several data-related challenges: we had a vast amount of data coming from various sources, including high-throughput devices, USP and DSP, offline analytics, LIMS, and ELN.

The pain points we identified were numerous:

  • Existing macro-based spreadsheets required constant maintenance, and only a select few individuals in the group could use them due to the specialized knowledge involved, creating a bottleneck and yet another data silo.
  • Data analysis often required manual transfer between different software packages, complicating the workflow.
  • Acquiring DSP data on a batch level proved difficult due to changing nomenclature, necessitating manual data retrieval.
  • Comparing data across multiple experiment blocks was laborious and time-consuming.
  • Networking capabilities needed improvement to facilitate faster data transfer both within the company and with external partners.

Christoph: Our goal was clear – to establish one global repository for seamless data collection, harmonization, and access, enhancing the user interface for connecting and sharing data from various sources within the PharmScience data landscape. We aimed for comprehensive data contextualization, robust visualization tools, and efficient data analysis processes to replace the existing Excel-based workflows. Furthermore, we aimed to improve data distribution, publishing, and archiving processes. The final objective involved meticulous data modeling, linking data from diverse sources and manual entries logically to construct a coherent ontology and data cloud.

Ken, can you explain in more detail how the requirements were met?

Ken: Certainly. TetraScience played a pivotal role in meeting these requirements by seamlessly integrating with various scientific workflows in bioprocessing. The data collected from these workflows underwent a transformation, resulting in what we refer to as "Tetra Data." This transformed data became ready for utilization in downstream analytics applications.

Within this Tetra Data, crucial labels like project IDs, location/site, user, and compound IDs were meticulously embedded, providing essential context. To ensure efficient monitoring and retrieval of this contextualized data, PAS-X Savvy utilized calls to the search EQL API endpoint, enabling real-time tracking of newly ingested files and retrieval of pertinent labels.

The interface of PAS-X Savvy was designed to empower users, allowing them to select specific data for visualization based on defined data fields within the Tetra Data. This seamless integration and data transformation process ensured a robust foundation for comprehensive data visualization and analysis in alignment with the defined requirements.
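
In code, the polling Ken describes could look roughly like the following Python sketch. The endpoint path, auth header name, and label fields are illustrative assumptions, not the documented TetraScience API:

```python
import json
import urllib.request

# Hypothetical sketch of polling a cloud data platform's
# Elasticsearch-style search endpoint for newly ingested, labeled files.

def build_query(project_id: str, window: str = "now-15m") -> dict:
    """EQL-style query: files for one project indexed within `window`."""
    return {
        "query": {
            "bool": {
                "must": [
                    {"term": {"labels.project_id": project_id}},
                    {"range": {"indexedAt": {"gte": window}}},
                ]
            }
        }
    }

def poll_new_files(base_url: str, token: str, project_id: str) -> list:
    """POST the query and return the matching file hits."""
    req = urllib.request.Request(
        f"{base_url}/searchEql",
        data=json.dumps(build_query(project_id)).encode(),
        headers={"ts-auth-token": token,
                 "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        body = json.load(resp)
    return body.get("hits", {}).get("hits", [])
```

Each returned hit would carry the embedded labels (project ID, site, user, compound ID), which is what lets a downstream tool offer field-based selection for visualization.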

What benefits did your holistic approach bring to the Top 25 biotech company?

Christoph: The company achieved significant outcomes, including:

  1. Time savings: We managed to boost productivity by 50% through automatic data collection.
  2. Reduced risk: By eliminating manual transcription of data, we achieved a 75% reduction in errors. Additionally, long-term data trending helped proactively flag OOS, OOT, and OOE events.
  3. Faster market entry and extended exclusivity: Our approach sped up the process development and tech transfer to manufacturing by 6-12 months, potentially leading to an extra $250-$550 million in revenue due to accelerated market entry and extended market exclusivity.

Ready to revolutionize your data approach?

Request a free demo!

