6 expert essays on the future of biotech – World Economic Forum


What exactly is biotechnology, and how could it change our approach to human health?

As the age of big data transforms the potential of this emerging field, members of the World Economic Forum's Global Future Council on Biotechnology tell you everything you need to know.

Elizabeth Baca, Specialist Leader, Deloitte Consulting, and former Deputy Director, California Governor's Office of Planning and Research & Elizabeth O'Day, Founder, Olaris, Inc.

What if your doctor could predict your heart attack before you had it and prevent it? Or what if we could cure a child's cancer by exploiting the bacteria in their gut?

These types of biotechnology solutions aimed at improving human health are already being explored. As more and more data (so-called "big data") becomes available across disparate domains such as electronic health records, genomics, metabolomics, and even lifestyle information, further insights and opportunities for biotechnology will become apparent. However, to achieve the maximal potential, both technical and ethical issues will need to be addressed.

As we look to the future, let's first revisit previous examples of where combining data with scientific understanding has led to new health solutions.

Biotechnology is a rapidly changing field that continues to transform both in scope and impact. Karl Ereky first coined the term biotechnology in 1919. However, biotechnology's roots trace back to as early as the 1600s, when the Prussian physician Georg Ernst Stahl pioneered a new fermentation technology referred to as zymotechnology.

Over the next few centuries, biotechnology was primarily focused on improving fermentation processes to make alcohol and, later, on food production. With the discovery of penicillin, new applications emerged for human health. In 1981, the Organization for Economic Cooperation and Development (OECD) defined biotechnology as "the application of scientific and engineering principles to the processing of materials by biological agents to provide goods and services".

Today, the Biotechnology Innovation Organization (BIO) defines biotechnology as "technology based on biology - biotechnology harnesses cellular and biomolecular processes to develop technologies and products that help improve our lives and the health of our planet".

In the Fourth Industrial Revolution, biotechnology is poised for its next transformation. It is estimated that between 2010 and 2020 there will be a 50-fold growth of data.

Just a decade ago, many did not even see a need for a smartphone, whereas today each click we make, step we take, meal we eat, and more is documented, logged and analyzed at a level of granularity not possible a decade ago.

Concurrent with the collection of personal data, we are also amassing a mountain of biological data (such as genomics, microbiome, proteomics, exposome, transcriptome, and metabolome). This biological big data, coupled with advanced analytical tools, has led to a deeper understanding of fundamental human biology. Further, digitization is revolutionizing health care, allowing patient-reported symptoms, feelings and health outcomes, as well as records such as radiographs and pathology images, to be captured as mineable data.

As these datasets grow and can be combined, what is the potential impact on biotechnology and human health? And, just as importantly, what is the impact on individual privacy?

Disclaimer: The views of the authors above do not necessarily reflect the policies or positions of the organizations with which they are affiliated.

Image: Infographic developed by the California Biotechnology Foundation. A special thank you to Patricia Cooper, Executive Director, California Biotechnology Foundation.

Daniel Heath, Senior Lecturer in the University of Melbourne's Department of Biomedical Engineering & Elizabeth Baca & Elizabeth O'Day

One of the most fundamental and powerful data sets for human health is the human genome. DNA is our biological instruction set, composed of billions of repeating chemical groups (thymine, adenine, guanine, and cytosine) that are connected to form a code. A person's genome is the complete set of his or her DNA code, i.e. the complete instructions to make that individual.

DNA acts as a template to produce a separate molecule called RNA through the process of transcription. Many RNA molecules in turn act as a template for the production of proteins, a process referred to as translation. These proteins then go on to carry out many of the fundamental cellular tasks required for life. Therefore, any unwanted changes in DNA can have downstream effects on RNA and proteins. This can have little to no effect or result in a wide range of diseases such as Huntington's disease, cystic fibrosis, sickle cell anaemia, and many more.
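
To make the transcription and translation steps above concrete, here is a minimal Python sketch with a deliberately tiny codon table; it illustrates the flow from DNA to RNA to protein and is not a bioinformatics tool.

```python
# Minimal sketch of the central dogma described above: DNA is transcribed
# into RNA, and RNA codons are translated into amino acids.
# The codon table is only a small fragment of the full genetic code.

CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly", "UAA": "STOP",
}

def transcribe(dna: str) -> str:
    """Transcribe a DNA coding strand into messenger RNA (T -> U)."""
    return dna.upper().replace("T", "U")

def translate(rna: str) -> list[str]:
    """Translate RNA into amino acids, one three-base codon at a time."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        amino_acid = CODON_TABLE.get(rna[i:i + 3], "?")
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

if __name__ == "__main__":
    dna = "ATGTTTGGCTAA"          # toy coding sequence
    rna = transcribe(dna)         # -> "AUGUUUGGCUAA"
    print(rna, translate(rna))    # -> AUGUUUGGCUAA ['Met', 'Phe', 'Gly']
```

A disease-causing mutation can be pictured in the same terms: changing a single DNA letter changes the codon, which can change the amino acid the cell produces.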

Genomic sequencing involves mapping the complete set, or part, of an individual's DNA code. Being able to detect unwanted changes in DNA not only provides powerful insight for understanding disease but can also lead to new diagnostic and therapeutic interventions.

The first human genome sequence was finished in 2003, took 13 years to complete, and cost billions of dollars. Today, due to biotech and computational advancements, sequencing a person's genome costs approximately $1,000 and can be completed in about a day.

Important milestones in the history of genomics

1869 - DNA was first identified

1953 - Structure of DNA established

1977 - DNA Sequencing by chemical degradation

1986 - The first semi-automated DNA sequencing machine produced

2003 - Human Genome Project sequenced the first entire human genome at a cost of $3 billion

2005 - Canada launches personal genome project

2007 - 23andMe markets the first direct-to-consumer genetic test for ancestry based on autosomal DNA

2008 - First personal genome sequenced

2012 - England launched the 100,000 Genomes Project (completed in 2018)

2013 - Saudi Arabia launched the Saudi Human Genome Program

2015 - US launched plan to sequence one million genomes

2015 - Korea launched plan to sequence 10K genomes

2016 - US launched the All of Us research program to enroll one million or more participants and collect lifestyle, environment, genetic, and biological data

2016 - China launched the Precision Medicine initiative with 60 billion RMB

2016 - France started Genomic Medicine 2025 Project

Treatments available today due to DNA technology

Knowing the structure and function of DNA has also enabled us to develop breakthrough biotechnology solutions that have greatly improved the quality of life of countless individuals. A few examples include:

Genetic screenings for diseases. An individual can scan his or her DNA code to look for known mutations linked to disease. Newborns are often screened at birth to identify treatable genetic disorders. For instance, all newborns in the US are screened for a disease called severe combined immunodeficiency (SCID). Individuals with this genetic disease lack a fully functional immune system and usually die within a year if not treated. However, thanks to routine screening, these newborns can receive a bone marrow transplant, which has a success rate of more than 90% in treating SCID. A well-known example in adults is screening women for mutations in the BRCA1 and BRCA2 genes as a risk factor for developing breast cancer or ovarian cancer (a simple code sketch of this matching step follows this list).

Recombinant protein production. This technology allows scientists to introduce human genes into microorganisms to produce human proteins that can be introduced back to patients to carry out vital functions. In 1978, the company Genentech developed a process to recombinantly produce human insulin, a protein needed to regulate blood glucose. Recombinant insulin is still used to treat diabetes.

CAR T cells. CAR T cell therapy is a technique to help your immune system recognize and kill cancer cells. Immune cells called T cells are isolated from a cancer patient and genetically engineered to express receptors that allow them to identify cancer cells. When these modified T cells are put back into the patient, they can help find and kill the cancer cells. Kymriah, used to treat a type of leukemia, and Yescarta, used to treat a type of lymphoma, are examples of FDA-approved CAR T cell treatments.

Gene therapy. The goal of gene therapy is to replace a missing or defective gene with a normal one to correct the disorder. The first in vivo gene therapy drug, Luxturna, was approved by the FDA in 2017 to treat an inherited degenerative eye disease called Leber congenital amaurosis.
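
As a concrete illustration of the screening idea in the first example above, the following Python sketch compares an individual's reported variants against a small catalog of known disease-linked mutations. The catalog entries and the example genotype are placeholders chosen for illustration, not clinical data.

```python
# Sketch of genetic screening: match an individual's observed variants
# against a catalog of known disease-linked mutations. The variant
# identifiers and the example genotype below are illustrative placeholders.

KNOWN_PATHOGENIC = {
    ("BRCA1", "c.68_69delAG"): "elevated breast/ovarian cancer risk",
    ("BRCA2", "c.5946delT"): "elevated breast/ovarian cancer risk",
}

def screen(observed_variants: list[tuple[str, str]]) -> list[str]:
    """Return a report line for each observed variant found in the catalog."""
    findings = []
    for gene, variant in observed_variants:
        risk = KNOWN_PATHOGENIC.get((gene, variant))
        if risk is not None:
            findings.append(f"{gene} {variant}: {risk}")
    return findings

# Hypothetical genotype for one individual
person = [("BRCA1", "c.68_69delAG"), ("TP53", "c.215C>G")]
report = screen(person)
print("\n".join(report) if report else "no catalogued pathogenic variants detected")
```

Real screening pipelines work from sequencer output and curated clinical databases, but the core step is this same lookup of observed variants against known risk variants.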

Disclaimer: The views of the authors above do not necessarily reflect the policies or positions of the organizations with which they are affiliated.

Frontiers in DNA technology

Our understanding of genetic data continues to lead to new and exciting technologies with the potential to revolutionize and improve our health outcomes. A few examples being developed are described below.

Organoids for drug screening. Organoids are miniature and simplified organs that can be developed outside the body with a defined genome. Organoid systems may one day be used to discover new drugs, tailor treatments to a particular person's disease or even serve as treatments themselves.

CRISPR-Cas9. This is a form of gene therapy - also known as genetic engineering - where the genome is cut at a desired location and existing genes can be either turned off or modified. Animal models have shown that this technique has great promise for the treatment of many hereditary diseases such as sickle cell disease, haemophilia, Huntington's disease, and more.
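
To make the "cut at a desired location" idea more concrete, here is a minimal Python sketch of one piece of CRISPR-Cas9 target selection: finding 20-base candidate protospacers immediately followed by the NGG PAM motif that Cas9 requires. The input sequence is invented, and real guide design also weighs off-target risk and cutting efficiency.

```python
# Find candidate Cas9 target sites: a 20-nt protospacer followed by an
# "NGG" PAM. The example sequence is made up for illustration.

import re

def find_cas9_targets(dna: str) -> list[tuple[int, str, str]]:
    """Return (position, 20-nt protospacer, PAM) for every NGG PAM site."""
    dna = dna.upper()
    hits = []
    # A lookahead is used so that overlapping sites are all reported.
    for match in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
        hits.append((match.start(), match.group(1), match.group(2)))
    return hits

example = "TTGACCTGAAGCTTACCGGATCAGGATTACGGCCTAAGTGGAGTCAA"
for pos, protospacer, pam in find_cas9_targets(example):
    print(f"site at {pos}: {protospacer} | PAM {pam}")
```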

We believe sequencing will become a mainstay in the future of human health.

While genomic data is incredibly insightful, it is important to realize that genomics rarely tells the complete story.

Except in rare cases, just because an individual has a particular genetic mutation does not mean they will develop a disease. Genomics provides information on what could happen to an individual. Additional datasets such as the microbiome, metabolome, lifestyle data and others are needed to answer what will happen.

Disclaimer: The views of the authors above do not necessarily reflect the policies or positions of the organizations with which they are affiliated.

Elizabeth O'Day & Elizabeth Baca

The microbiome is sometimes referred to as the 'essential organ', the 'forgotten organ', our 'second genome' or even our 'second brain'. It includes the catalog of approximately 10-100 trillion microbial cells (bacteria, archaea, fungi, viruses and eukaryotic microbes) and their genes that reside in each of us. Estimates suggest we carry 150 times more microbial DNA than human DNA, from more than 10,000 different species of known bacteria.

Microbes reside everywhere (mouth, stomach, intestinal tract, colon, skin, genitals, and possibly even the placenta). The function of the microbiome differs according to its location in the body and with the age, sex, race and diet of the host. Bacteria in the gut digest foods, absorb nutrients, and produce beneficial products that would otherwise not be accessible. On the skin, microbes provide a physical barrier against foreign pathogens through competitive exclusion and the production of antimicrobial substances. In addition, microbes help regulate and influence the immune system. When there is an imbalance in the microbiome, known as dysbiosis, disease can develop. Chronic diseases such as obesity, inflammatory bowel disease, diabetes mellitus, metabolic syndrome, atherosclerosis, alcoholic liver disease (ALD), nonalcoholic fatty liver disease (NAFLD), cirrhosis, hepatocellular carcinoma and other conditions are linked to improper microbiome functioning.

Milestones in our understanding of the microbiome

1680s - Dutch scientist Antonie van Leeuwenhoek compared his oral and fecal microbiota. He noted striking differences in microbes between these two habitats and also between samples from individuals in different states of health.

1885 - Theodor Escherich first describes and isolates Escherichia coli (E. coli) from the feces of newborns in Germany

1908 - Elie Metchnikoff, a Russian zoologist, theorized that health could be enhanced and senility delayed by bacteria found in yogurt

1959 - Germ-free animals (mice, rats, rabbits, guinea pigs, and chicks) are reared in stainless-steel and plastic housing to study health in microbe-free environments

1970 - Dr. Thomas D. Luckey estimates 100 billion microbes in one gram of human intestinal fluid or feces

1995 - Craig Venter and a team of researchers sequence the genome of the bacterium Haemophilus influenzae, the first free-living organism to have its genome completely sequenced

1996 - The first human fecal sample is sequenced using 16S rRNA sequencing.

2001 - Scientist Joshua Lederberg is credited with coining the term microbiome

2005 - Researchers identify bacteria in amniotic fluid of babies born via C-section

2006 - First metagenomic analysis of the human gut microbiome is conducted

2007 - The NIH-sponsored Human Microbiome Project (HMP) launches a study to define how microbial species affect humans and their relationship to health

2009 - First microbiome study shows an association between the gut microbiome and obesity by comparing lean and obese adults

2011 - German researchers identify three enterotypes in the human gut microbiome: Bacteroides, Prevotella, and Ruminococcus

2011 - Gosalbes and colleagues perform the first metatranscriptomic analysis of the healthy human gut microbiota

2012 - HMP unveils the first map of microbes inhabiting healthy humans. Results generated by 80 collaborating scientific institutions found that more than 10,000 microbial species occupy the human ecosystem, comprising trillions of cells and making up 1-3% of the body's mass.

2012 - American Gut Project founded, providing an open-to-the-public platform for citizen scientists seeking to analyze their microbiome and compare it to the microbiomes of others.

2014 - The Integrative Human Microbiome Project (iHMP) begins with the goal of studying three microbiome-associated conditions

2016 - The Flemish Gut Flora Project, one of the world's largest population-wide studies of variation in gut microbiota, publishes an analysis of more than 1,100 human stool samples

2018 - The American Gut Project publishes the largest study to date on the microbiome. The results include microbial sequence data from 15,096 samples provided by 11,336 participants across the US, UK, Australia and 42 other countries.

What solutions are already (or could be) derived from this dataset?

Biotechnology solutions based on microbiome data have already been developed or are in the process of development. A few key examples are highlighted below:

Probiotics. Probiotics are beneficial bacteria that may prevent or treat certain diseases. They were first theorized in 1908 and are now a common food additive. From yogurts to supplements, various probiotics are available for purchase in grocery stores and pharmacies, claiming various benefits. For example, the probiotic VSL#3 has been shown to reduce liver disease severity and hospitalization in patients with cirrhosis.

Diagnostics. Changes in the composition of particular microbes have been noted as potential biomarkers. An example is the ratio of Bifidobacterium to Enterobacteriaceae, known as the B/E ratio. A B/E ratio greater than 1 suggests a healthy microbiome, and a B/E ratio less than 1 could suggest cirrhosis or particular types of infection (a small code sketch of this calculation follows this list).

Fecal microbiome transplantation (FMT). Although not FDA-approved, fecal microbiome transplantation (FMT) is a widely used method in which a fecal preparation from a healthy stool donor is transplanted into the colon of a patient via colonoscopy, naso-enteric tube, or capsules. FMT has been used to treat Clostridium difficile infections with 80-90% cure rates (far better efficacy than antibiotics).

Therapeutics. The microbiome dataset is also producing several innovative therapies. Bacterial consortia and single strains (both natural and engineered) are in clinical development. Efforts are also underway to identify and isolate microbiome metabolites with important functions, such as antibiotics active against methicillin-resistant bacteria that were identified by sequencing the human gut microbiome.
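
As mentioned in the Diagnostics example, here is a minimal Python sketch of the B/E ratio calculation. The read counts are invented for illustration; a real analysis would start from sequencing data and apply proper normalization.

```python
# Compute the B/E ratio (Bifidobacterium / Enterobacteriaceae) from
# taxon read counts. The sample counts below are invented placeholders.

def be_ratio(counts: dict[str, int]) -> float:
    """Return the Bifidobacterium-to-Enterobacteriaceae ratio."""
    bifido = counts.get("Bifidobacterium", 0)
    entero = counts.get("Enterobacteriaceae", 0)
    if entero == 0:
        raise ValueError("no Enterobacteriaceae reads; ratio undefined")
    return bifido / entero

sample = {"Bifidobacterium": 4200, "Enterobacteriaceae": 1800, "Prevotella": 900}
ratio = be_ratio(sample)
print(f"B/E ratio = {ratio:.2f} ->", "above 1" if ratio > 1 else "1 or below")
```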

By continuing to build the microbiome dataset and expand our knowledge of host-microbiome interactions, we may be able to correct various states of disease and improve human health.

Disclaimer: The views of the authors above do not necessarily reflect the policies or positions of the organizations with which they are affiliated.

Pam Randhawa, CEO and founder of Empiriko Corporation, Andrew Steinberg, Watson Institute for International and Public Affairs, Brown University, Elizabeth Baca & Elizabeth O'Day

For centuries, physicians were limited by the data they were able to obtain via external examination of an individual patient or an autopsy.

More recently, technological advancements have enabled clinicians to identify and monitor internal processes which were previously hidden within living patients.

One of the earliest examples of applied technology occurred in the 1890s when German physicist Wilhelm Röntgen discovered the potential medical applications of X-rays.

Since that time, new technologies have expanded clinical knowledge in imaging, genomics, biomarkers, response to medications, and the microbiome. Collectively, this extended database of high-quality, granular information has enhanced the physician's diagnostic capabilities and has translated into improved clinical outcomes.

Today's clinicians increasingly rely on medical imaging and other technology-based diagnostic tools to non-invasively look below the surface to monitor treatment efficacy and screen for pathologic processes, often before clinical symptoms appear.

In addition, the clinician's senses can be extended by electronic data capture systems, interactive voice response systems (IVRS), wearable devices, remote monitoring systems, sensors and iPhone applications. Despite access to this new technology, physicians continue to obtain a patient's history in real time, followed by a hands-on assessment of physical findings, an approach which can be limited by communication barriers, time, and the physician's ability to gather or collate data.

One of the largest examples of clinical data collection, integration and analysis occurred in the 1940s with the National Heart Act, which created the National Heart Institute and the Framingham Heart Study. The Framingham Original Cohort was started in 1948 with 5,209 men and women between the ages of 30 and 62 with no history of heart attack or stroke.

Over the next 71 years, the study evolved to gather clinical data for cardiovascular and other medical conditions over several generations. Prior to that time, the concepts of preventive medicine and risk factors (a term coined by the Framingham study) were not part of the medical lexicon. The Framingham study enabled physicians to harness observations gathered from individuals' physical examination findings, biomarkers, imaging and other physiologic data on an unparalleled scale.

The adoption of electronic medical records helped improve data access, but in their earliest iterations only partially addressed the challenges of data compartmentalization and interoperability (silos).

Recent advances in AI applications, EMR data structure and interoperability have enabled clinicians and researchers to improve their clinical decision making. However, accessibility, cost and delays in implementing global interoperability standards have limited data accessibility from disparate systems and have delayed introduction of EMRs in some segments of the medical community.

To this day, limited interoperability, the learning curve and costs associated with implementation are cited as major contributors to physician frustration, burnout and providers retiring early from patient care settings.

However, an interoperability platform known as Fast Healthcare Interoperability Resources (FHIR, pronounced "FIRE") is being developed to exchange electronic health records and unlock silos. The objective of FHIR is to facilitate interoperability between legacy health care systems. The platform facilitates easier access to health data on a variety of devices (e.g., computers, tablets, cell phones), and allows developers to provide medical applications which can be easily integrated into existing systems.
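
To illustrate what FHIR-based access can look like in practice, here is a minimal Python sketch that reads a single Patient resource over the FHIR REST API. The base URL points to a public HAPI FHIR test server and the resource ID is a placeholder; both are assumptions for illustration rather than a reference to any production system.

```python
# Read one Patient resource from a FHIR server as JSON.
# BASE_URL and the patient ID are illustrative assumptions.

import requests

BASE_URL = "http://hapi.fhir.org/baseR4"   # public test server (assumed reachable)

def get_patient(patient_id: str) -> dict:
    """Fetch a single FHIR Patient resource and return it as a dict."""
    response = requests.get(
        f"{BASE_URL}/Patient/{patient_id}",
        headers={"Accept": "application/fhir+json"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    patient = get_patient("example")       # hypothetical resource ID
    print(patient.get("resourceType"), patient.get("name"))
```

Because conformant systems expose the same resource types and REST conventions, the same few lines can work against any FHIR endpoint an application is authorized to reach.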

As the capacity to gather information becomes more meaningful, the collection, integration, analysis and format of clinical data submission requires standardization. In the late 1990s, the Clinical Data Interchange Standards Consortium (CDISC) was formed to develop and support global, platform-independent data standards which enable information system interoperability to improve medical research. Over the past several years, CDISC has developed several models to support the organization of clinical trial data.

Milestones in the discovery/development of clinical data and technologies

500 BC - The world's first clinical trial is recorded in the Book of Daniel in the Bible

1747 - Lind's scurvy trial, which contained most characteristics of a controlled trial

1928 - American College of Surgeons sought to improve record standards in clinical settings
