In the near future, genome sequencing, among other biological measures, will be as routine as X-rays and cholesterol testing. The challenge, though, will be accurately interpreting the vast amount of data and effectively using it to guide decisions about health care.
In a position statement published in Hepatology, Mayo Clinic researchers lay out the perspectives of various stakeholders (e.g., providers, payors, governments, and health care institutions) on the clinical questions to be answered using big data in genomics and how innovative analytical methods such as machine learning may help with interpretation.
"Accumulation of big data is increasing at an unprecedented pace in medicine — doubling the sum of medical knowledge every 73 days in 2020 compared to every 50 years in 1950," says Konstantinos Lazaridis, M.D., the Everett J. and Jane M. Hauck Associate Director of Mayo Clinic's Center for Individualized Medicine and co-author of the article. Dr. Lazaridis is the William O. Lund, Jr. and Natalie C. Lund Program Director for Clinomics.
In the article, Dr. Lazaridis and co-author Arjun Athreya, Ph.D., M.S., an electrical and computer engineer within Mayo Clinic's Department of Molecular Pharmacology and Experimental Therapeutics, explore the complexity of generated data, the level of sophistication in analytical approaches, and the degree of interpretation from the different methods.
"As computing technologies evolve, we have to jointly address the complexity posed by medical data, emphasizing the need for close collaborative partnerships," says Dr. Athreya. "We need to gather additional information and apply analytical innovations to be able to uncover answers for patients and improve health outcomes."
The review summarizes the importance and challenges of managing and analyzing big data for patients, providers, health care institutions, payors, and government agencies.
Patients need to know how their data affects their health outcomes
Drs. Lazaridis and Athreya state that an increasing number of people are willing to participate in large-scale population studies and accept that their risk scores, even if not immediately related to disease, pave the way for routine screening. Study participants expect simple interpretations of their genetic data that convey risks of health care outcomes, such as disease prognosis and drug response. They also expect the interpretation of their genomic data, lifestyle, and clinical characteristics, including age and family history, to factor into their health outcomes.
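The kind of combined interpretation described above can be illustrated with a toy sketch. Everything in this example is hypothetical — the weights, the intercept, and the choice of factors are invented for illustration and are not from the article; real risk models are fit to population data:

```python
# Toy illustration of combining a genomic score with clinical and
# lifestyle factors into a single risk estimate via a logistic link.
# All weights below are invented for illustration only.
import math

def combined_risk(genetic_score: float, age: int,
                  family_history: bool, smoker: bool) -> float:
    """Return a probability-like risk in [0, 1]."""
    linear = (
        -4.0                     # hypothetical baseline intercept
        + 1.2 * genetic_score    # polygenic-style genomic score
        + 0.03 * age             # clinical characteristic
        + 0.8 * family_history   # family-history flag
        + 0.6 * smoker           # lifestyle factor
    )
    return 1.0 / (1.0 + math.exp(-linear))

risk = combined_risk(genetic_score=1.5, age=55,
                     family_history=True, smoker=False)
print(f"{risk:.2f}")  # prints 0.56
```

The point of the sketch is only the structure: genomic, clinical, and lifestyle inputs enter one shared model, which is why participants expect all three to be interpreted together.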
They also emphasize that once the risk for a disease is known, people can be enrolled in routine screening programs or advised to make lifestyle changes that delay the onset of anticipated outcomes. Routine screening visits would allow biospecimen samples to be collected for study and predictive methods to be developed from data about a person over a long period of time. The researchers point out that there are still unresolved bioethics questions about whether such data need to be shared with payors if they are reimbursing the cost of tests.
Health care institutions need to invest in analytical methods
The authors suggest that health care institutions view multi-omics data and data-driven analytical technologies as investments in cutting-edge health care technology, much like surgical robots, to advance practice and disease outcomes. Multi-omics data sets combine multiple "omes," such as the genome, proteome, transcriptome, epigenome, metabolome, and microbiome.
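As a rough sketch of what assembling such a data set can look like in practice (the patient identifier, feature names, and values below are all hypothetical), each "ome" can be thought of as a separate feature table keyed by patient ID and merged into one profile:

```python
# Minimal sketch of multi-omics integration: each "ome" is a separate
# feature table keyed by patient ID, merged into one patient profile.
# All identifiers and values are invented for illustration.
genome     = {"patient_001": {"BRCA1_variant": "c.68_69delAG"}}
proteome   = {"patient_001": {"CRP_mg_per_L": 3.1}}
metabolome = {"patient_001": {"glucose_mmol_per_L": 5.4}}

def merge_omics(patient_id, *layers):
    """Flatten several per-ome tables into one profile for a patient."""
    profile = {}
    for layer in layers:
        profile.update(layer.get(patient_id, {}))
    return profile

profile = merge_omics("patient_001", genome, proteome, metabolome)
print(sorted(profile))  # feature names drawn from all omes
```

Even this toy version hints at the operational burden the authors describe: every added "ome" multiplies the data to collect, standardize, store, and retrieve.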
Dr. Athreya says patients often are invited to participate in research protocols where biospecimens are collected. Still, there is a cost to collect, measure, analyze, and store data — all of which warrant a rich data platform.
"Even if costs of individual technical components, including processing and storage, as well as testing biospecimens drop in the coming years, the volume of tests, subsequent data generated, and storing-retrieving them will still impose high operational costs," says Dr. Athreya.
Dr. Athreya also says such a data deluge imposes the need for skilled data analysts, computer engineers, and information technology specialists to work closely with clinical units to integrate technologies at the point of care.
"While these operating and personnel costs seem high upfront, there is a silver lining opportunity to generate rich insights 'know-how' of care management, which can then be shared among other providers and researchers," says Dr. Athreya.
Payors need best practices for standardized tests
While payors develop actuarial models to assess disease risk and care costs, it is plausible that future health care risk models will consist of cross-sectional and long-term multi-omics and clinical measures.
Researchers say it is essential that testing platforms produce data in standard formats and on standard scales across all participating diagnostic and clinical laboratories. Just as genetic marker testing requires Clinical Laboratory Improvement Amendments (CLIA) certification, best practices for using other new and nonstandard biological measures are required to produce consistent and verifiable results.
According to Dr. Lazaridis, as novel biomarkers are identified and pursued in laboratories of academic and commercial research settings, developing best practices to derive standardized tests requires close coordination among laboratories.
"Our clinicians need easily interpretable data for each patient so they can understand the data and also be in a position to explain the potential implications to the patient," says Dr. Lazaridis.
Government agencies need to invest in analytical technologies
Researchers explain that government agencies have an incentive to invest in and develop new analytical technologies to improve disease outcomes for the public's health. Existing policy barriers, ethical issues, and limited infrastructure or funding for prospective validation of novel analytical tools at the government and institutional levels may limit progress.
Drs. Lazaridis and Athreya say that investigators and health care institutions will look to regulatory agencies to define the processes and standards needed to approve analytics-driven technologies for adoption in routine care. More financial support is also necessary for investigators to design and run prospective studies testing analytical methods that promise to individualize therapy using multi-omics measures.
Researchers need to improve genetic data interpretation, diagnosis, and treatment
Mayo Clinic's Translational Omics Program and Clinomics Program develop innovative processes to improve genetic data interpretation. Multidisciplinary teams work on diagnosing and treating patients with rare and undiagnosed genetic diseases, healthy genome screening, and testing for pre-myeloid disease (cancer that starts in the blood).
"Interpreting and translating the results of this testing is highly challenging, requiring teams to combine bioinformatics algorithms, multi-omics data, and scientific knowledge to generate actionable findings," says Dr. Lazaridis.
The programs also improve patient care by turning genomic research into real-world personalized medicine applications, particularly new and better genomics-based diagnostic tests.
"Physicians can search genetic code quickly and effectively for clues that help diagnose and optimally treat a condition — or keep a person healthy by preventing future disease," says Dr. Lazaridis.
Financial support: This study is supported by National Science Foundation Award 2041339 (APA) and RC2 DK118619 (KNL), and the Mayo Clinic Center for Individualized Medicine.
Tags: Artificial Intelligence, Mayo Clinic, Mayo Clinic Center for Individualized Medicine, multi-omics, Precision Medicine, predictive genomics, Rare and undiagnosed diseases