Data Key to Unlocking Precision Medicine Potential

Standardization and novel tools will enable R&D to progress effectively from data generation to analysis.

With the emergence of new technology, the time and cost involved in dissecting reams of data have been significantly reduced. Drug researchers, however, still face a significant challenge in fully analyzing the data available to them. Without a real understanding of what the data means, precision medicine will struggle to reach its full potential.
A good example of how studies can fail to deliver because experimental findings cannot be translated into useful data is the world's largest "human sequencing operation." In an exploratory study titled "Clinical Interpretation and Implications of Whole-Genome Sequencing," 12 adults from Stanford University Medical Center had their entire genomes sequenced, with the aim of detecting clinically meaningful genetic variations. The results showed incomplete coverage of inherited disease genes, low reproducibility of detection of clinically relevant genetic variation, and disagreement among experts about which findings were most significant. Despite the scale of the project, the findings were, for the most part, not actionable.
Precision medicine gained renewed focus when President Barack Obama launched his Precision Medicine Initiative, backed by a $215 million investment in the 2016 budget "to broadly support research, development, and innovation." The initiative highlighted how successful research could revolutionize the entire health care system and drastically change disease outcomes. But what does this mean in practice, and how can the increased collection of data lead to improved and more effective life sciences R&D?
All the gear, but still no idea
According to recent research from Forrester, the big data market is set to increase by 13 percent over the next five years. The continuous growth of big data has allowed current and emerging technologies to generate complex data specific to each individual. This technology, however, far outpaces the ability to effectively mine the data, draw conclusions, and develop clinically relevant products. To achieve this, scientists must be able to search and analyze various sources of evidence, including experimental, clinical, and published data, to find out what has already been discovered.
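For the published-data part of that search, even a small script against a public index can surface prior findings. The sketch below is a minimal illustration rather than anything prescribed by the article: it queries NCBI's public E-utilities API for PubMed records on a gene-disease pair, where the query term, result limit, and use of the requests library are all illustrative choices.

```python
import requests

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def search_pubmed(term: str, max_results: int = 5) -> list[str]:
    """Return PubMed IDs matching a query via NCBI's ESearch endpoint."""
    resp = requests.get(
        f"{EUTILS}/esearch.fcgi",
        params={"db": "pubmed", "term": term,
                "retmax": max_results, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

def fetch_summaries(pmids: list[str]) -> dict:
    """Fetch article summaries (titles, journals, dates) for the given IDs."""
    resp = requests.get(
        f"{EUTILS}/esummary.fcgi",
        params={"db": "pubmed", "id": ",".join(pmids), "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["result"]

if __name__ == "__main__":
    # Illustrative query: prior evidence linking a gene to a disease.
    pmids = search_pubmed("BRCA1 AND breast cancer AND pharmacogenomics")
    for pmid, summary in fetch_summaries(pmids).items():
        if pmid != "uids":  # esummary's result dict includes a bookkeeping key
            print(pmid, "-", summary["title"])
```

In practice this kind of literature lookup would be only one feed among several, sitting alongside queries into internal experimental and clinical data stores.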
Scientists must also process multiple types of datasets and use sophisticated entity recognition and pattern-matching software to identify meaningful associations between targets and molecules. The difficulty is not the data alone, even though an estimated 85 percent of medical data is unstructured yet still clinically relevant; it is knowing what the data means and how it can be applied to different research projects.
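To make that concrete, here is a deliberately simple sketch of dictionary-based entity recognition and pattern matching: it scans unstructured free-text notes for gene and drug mentions and counts co-occurrences as candidate target-molecule associations. The dictionaries, sample notes, and scoring are hypothetical stand-ins; production systems rely on curated ontologies and statistical or machine-learned matchers.

```python
import re
from collections import Counter
from itertools import product

# Hypothetical mini-dictionaries; real systems draw on curated
# ontologies (e.g., standard gene symbols and drug vocabularies).
GENES = {"EGFR", "BRCA1", "KRAS"}
DRUGS = {"erlotinib", "olaparib", "cetuximab"}

# Unstructured but clinically relevant free text (illustrative).
NOTES = [
    "Patient with EGFR-mutant tumor showed partial response to erlotinib.",
    "BRCA1 carrier started on olaparib; EGFR status pending.",
    "KRAS mutation detected; cetuximab unlikely to benefit.",
]

TOKEN = re.compile(r"[A-Za-z0-9]+")

def extract_entities(text: str) -> tuple[set[str], set[str]]:
    """Dictionary-based entity recognition: find gene and drug mentions."""
    tokens = set(TOKEN.findall(text))
    genes = {t.upper() for t in tokens if t.upper() in GENES}
    drugs = {t.lower() for t in tokens if t.lower() in DRUGS}
    return genes, drugs

def cooccurrence_counts(notes: list[str]) -> Counter:
    """Count gene-drug pairs mentioned in the same note as
    candidate target-molecule associations."""
    pairs: Counter = Counter()
    for note in notes:
        genes, drugs = extract_entities(note)
        pairs.update(product(sorted(genes), sorted(drugs)))
    return pairs

if __name__ == "__main__":
    for (gene, drug), n in cooccurrence_counts(NOTES).most_common():
        print(f"{gene} <-> {drug}: {n} co-mention(s)")
```

Co-occurrence counting is only the crudest form of pattern matching, but it shows why structure must first be extracted from unstructured text before any target-molecule association can be measured at all.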
Lifting boundaries
Precision medicine R&D can only progress with specific tools that can deal with the complex challenges the field brings. For biopharmaceutical products to reliably deliver, researchers need a better understanding not only of genomics, but also of molecular pathways, proteomics, and the impact of epigenetics (i.e., changes in gene expression driven by environmental factors rather than by DNA sequence variants) on disease susceptibility, development, and progression.
To overcome issues with data quality, tools are needed that can standardize disparate datasets and make them ready for analysis.