Advancing Risk Assessment through the Application of Systems Toxicology
Toxicological Research 2016;32:5−8
Published online January 31, 2016;  https://doi.org/10.5487/TR.2016.32.1.005
© 2016 Korean Society of Toxicology.

John Michael Sauer1, André Kleensang2, Manuel C. Peitsch3, and A. Wallace Hayes4

1Predictive Safety Testing Consortium, Critical Path Institute, Tucson, AZ, USA, 2Johns Hopkins University, Center for Alternatives to Animal Testing (CAAT), Bloomberg School of Public Health, Baltimore, MD, USA, 3PMI Research and Development, Neuchâtel, Switzerland, 4Harvard University and Michigan State University Institute for Integrative Toxicology, Andover, MA 01810, USA
Correspondence to: A. Wallace Hayes, Harvard University and Michigan State University Institute for Integrative Toxicology, Andover, MA 01810, USA, E-mail: awallacehaeys@comcast.net
Received: December 2, 2015; Accepted: January 13, 2016
This is an Open Access article distributed under the terms of the Creative Commons Attribution Non-Commercial License (http://creativecommons.org/licenses/by-nc/3.0) which permits unrestricted non-commercial use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

Risk assessment is the process of quantifying the probability of a harmful effect to individuals or populations from human activities. Mechanistic approaches to risk assessment have been generally referred to as systems toxicology. Systems toxicology makes use of advanced analytical and computational tools to integrate classical toxicology and quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization. Three presentations, including two case studies involving both in vitro and in vivo approaches, described the current state of systems toxicology and the potential for its future application in chemical risk assessment.

Keywords : Systems toxicology, Risk assessment
INTRODUCTION

Risk assessment, in the context of public health, is the process of quantifying the probability of a harmful effect to individuals or populations from human activities. The approach to quantitatively assessing the health risks of chemical exposure has not changed appreciably in the past 80 years; the focus remains on low-throughput, high-dose studies that measure adverse outcomes in homogeneous animal populations. Conservative extrapolations are relied upon to relate these animal studies to much lower-dose human exposures. The relevance of this approach to translational safety, when applied to predict risks to humans at typical low exposures, is questionable. Furthermore, it has made little use of mechanistic understanding of how chemicals perturb biological processes in human cells and tissues.
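As a minimal, hypothetical illustration of the conservative extrapolation step mentioned above, the sketch below derives a reference dose from an animal NOAEL using the conventional default 10-fold uncertainty factors; the values and factor choices are invented, not taken from the article.

```python
# Illustrative sketch (invented numbers): conventional NOAEL-to-reference-dose
# extrapolation with default 10-fold uncertainty factors.

def reference_dose(noael_mg_per_kg_day: float,
                   uf_interspecies: float = 10.0,   # animal-to-human extrapolation
                   uf_intraspecies: float = 10.0,   # human variability
                   uf_other: float = 1.0) -> float: # e.g., database deficiencies
    """Divide the animal NOAEL by the product of the uncertainty factors."""
    return noael_mg_per_kg_day / (uf_interspecies * uf_intraspecies * uf_other)

# Example: a 50 mg/kg/day NOAEL in rats gives a 0.5 mg/kg/day reference dose
# under the default 100-fold combined factor.
print(reference_dose(50.0))  # 0.5
```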

With increasing public health concerns regarding the potential risks associated with chemical exposure, there is a clear need for more predictive and accurate approaches to risk assessment (1). Developing these approaches requires a mechanistic understanding of the process by which xenobiotic substances perturb biological systems and lead to toxicity. Supplementing the shortfalls of traditional risk assessment with mechanistic biological data has been widely discussed, but not routinely implemented in the evaluation of chemical exposure. These mechanistic approaches to risk assessment have been generally referred to as systems toxicology. Systems toxicology borrows heavily from systems biology and attempts to model the chemically induced pathophysiology of the body with computational tools (2). Systems toxicology makes use of advanced analytical and computational tools to integrate classical toxicology and quantitative analysis of large networks of molecular and functional changes occurring across multiple levels of biological organization (3).

Systems toxicology enables the integration of quantitative systems-wide molecular changes in the context of chemical exposure measurements and a causal succession of molecular events linking exposures with toxicity. Computational models are then built to describe these processes in a quantitative manner. This integrated data analysis leads to the determination of how biological pathways are perturbed by chemical exposure, and ultimately enables the development of predictive computational models of toxicological processes, thereby improving the accuracy of risk assessment (Fig. 1).

In a recent symposium at the ASIATOX 2015 meeting, supported by an educational donation provided by Philip Morris International R&D, three presentations described the current state of systems toxicology and the potential for its future application in chemical risk assessment. A summary of each presentation is outlined below.

PRESENTATION SUMMARIES

Toxicology as now used in the drug development/regulatory process remains descriptive, having changed little over the last several decades, and must become more mechanistic in its approach if the process is to reach the goals of the 21st century. Several examples were presented of compounds for which human safety issues were not anticipated from non-clinical study data. It was proposed that we lack the tools and understanding required to implement a true translational safety strategy in drug discovery and development. The missing or incomplete tools include in vitro assays that predict human safety, well-characterized selective and sensitive safety biomarkers, and an integrated understanding of systems biology/toxicology. Each of these tools was discussed, and a holistic approach to their integration was described as quantitative translational safety.

Insufficient therapeutic index is a major cause of candidate attrition in drug development. The lack of adequate prediction of safety liabilities results in unforeseen adverse events in clinical trials or the unwarranted abandonment of potentially safe and effective therapies. Key to any translational strategy is the ability to monitor basic biological processes in both animals and humans, and in the case of chemically induced tissue injury, biomarkers are essential for bridging toxicity findings between species.
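As a rough, hypothetical illustration of the therapeutic-index concept (not drawn from the presentation), the sketch below expresses a preclinical safety margin as the ratio of exposure at the no-adverse-effect level to the projected clinical exposure; the names and numbers are invented.

```python
# Hypothetical sketch: an exposure-based safety margin of the kind weighed
# during candidate selection; not a method described in the article.

def safety_margin(noael_exposure_auc: float, clinical_exposure_auc: float) -> float:
    """Ratio of exposure (e.g., plasma AUC) at the preclinical NOAEL to the
    projected exposure at the intended clinical dose; larger is safer."""
    return noael_exposure_auc / clinical_exposure_auc

# Example with invented numbers: 120 ug*h/mL at the rat NOAEL vs. 4 ug*h/mL at
# the human efficacious dose gives a 30-fold margin.
print(safety_margin(120.0, 4.0))  # 30.0
```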

Throughout the drug discovery process, therapeutic and toxic exposures are determined, and clinical safety biomarkers are essential for maximizing therapeutic index and clinical safety in several ways. For example, safety biomarkers can be applied to guide candidate selection and manage risk by monitoring the no-adverse-effect levels of exposure in preclinical and clinical studies. Safety biomarkers are also useful for assessing the human relevance of preclinical safety findings and enabling the development of safe or safer dosing paradigms. In nonclinical studies, target organ toxicity is assessed using histopathological analysis; in clinical trials, however, histopathological analysis is rarely available, and biomarkers are critical for assessing potential target tissue toxicity in humans. Thus, the most impactful safety biomarkers will be those used in clinical trials with direct translational ties to nonclinical safety studies.

The roles of the individual scientific stakeholders, including regulators, academic scientists and industry scientists, were discussed in the context of translational safety. Regulators/health authorities will continue to play a dual role in this process as both supporters of innovation and one of the causes of stagnation through necessary regulation. Academic researchers will need to better support translational safety objectives and applied science. Industry scientists will need to embrace a more mechanistic approach to safety assessment and not just follow the box-checking exercise that current regulations promote. Finally, it was acknowledged that the implementation and optimization of quantitative translational safety strategies will take years to complete, and therefore must be implemented in a progressive manner that allows the regulatory environment to keep pace with scientific thinking and innovation.

Toxicology testing in the drug development process costs $3 billion to regulate $10 trillion of trade. Issues with the current paradigm for toxicology testing include throughput, cost, use of animals, mixtures, the need for high-dose to low-dose extrapolation, applicability to new products and hazards (e.g., nanomaterials), and inter-individual and inter-species differences. In addition, the current paradigm has low predictive capacity, and the process has become too precautionary. For example, nonclinical toxicology testing of aspirin in animals today would yield an LD50 of 150~200 mg/kg in rats and labeling as “harmful if swallowed”, “irritant to eyes”, “respiratory irritant”, and “irritant to skin”; aspirin would be classified as non-carcinogenic but co-carcinogenic (a promoter), of unclear mutagenicity, and associated with embryonic malformations in the cat, dog, rat, mouse, rabbit, and monkey. These findings would make it very difficult to bring aspirin to market today.
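For readers unfamiliar with how an LD50 such as the aspirin figure above is obtained, the following sketch fits a simple logistic dose-mortality curve to invented data; the doses, response fractions, and model choice are illustrative only.

```python
# Illustrative sketch with invented data: estimating an LD50 by fitting a
# two-parameter logistic dose-mortality model on log-dose.
import numpy as np
from scipy.optimize import curve_fit

def mortality(dose, ld50, slope):
    """Probability of death as a logistic function of log-dose."""
    return 1.0 / (1.0 + np.exp(-slope * (np.log(dose) - np.log(ld50))))

doses = np.array([50.0, 100.0, 150.0, 200.0, 300.0])   # mg/kg, hypothetical
died = np.array([0.0, 0.2, 0.5, 0.8, 1.0])             # fraction of animals dying

params, _ = curve_fit(mortality, doses, died, p0=[150.0, 2.0])
print(f"estimated LD50 ~ {params[0]:.0f} mg/kg")
```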

In 2008, the NIH proposed a shift from primarily in vivo animal studies to in vitro assays, in vivo assays with lower organisms, and computational modeling for toxicity assessments. In 2011, the FDA stated that, thanks to advancements in regulatory science and new tools including functional genomics, proteomics, metabolomics, high-throughput screening, and systems biology, there is the potential to replace current toxicology assays with tests that incorporate the mechanistic underpinnings of disease and of underlying toxic side effects.

Currently, what is measured are the adverse events that occur in response to a toxicological insult. What should be measured instead are the upstream events that cause the observed toxicity, including macromolecular interactions such as receptor/ligand binding, DNA binding, and protein oxidation, as well as cellular responses such as gene activation, protein production, and altered signaling.

In addition to differential transcriptomics data, there is also a need for genomics (e.g., methylation), proteomics (e.g., phosphorylation), metabolomics, visualization, interpretation, text mining, and systems/network approaches. Systems/network approaches are important because differential/ANOVA statistics alone are not adequate for understanding complex systems, and enrichment analysis (KEGG, BioCyc, etc.) depends on existing annotations and knowledge. Systems/network approaches allow biology to be examined as a system rather than on a part-by-part basis and offer a useful dimensionality reduction.
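To make the annotation-dependence point concrete, the sketch below shows the kind of over-representation test that underlies KEGG/BioCyc-style enrichment; the gene counts are hypothetical, and the hypergeometric test is a common choice rather than one named in the article.

```python
# Minimal sketch with hypothetical counts: over-representation of a KEGG-style
# gene set within a co-expression module, tested with the hypergeometric law.
from scipy.stats import hypergeom

background = 20000        # genes measured on the platform
pathway_size = 130        # genes annotated to the pathway
module_size = 400         # genes in the module of interest
hits_in_module = 18       # pathway genes that fall in the module

# P(X >= hits_in_module) when drawing module_size genes from the background
p_value = hypergeom.sf(hits_in_module - 1, background, pathway_size, module_size)
print(f"enrichment p-value = {p_value:.2e}")
```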

A systems toxicology approach was used to reanalyze an MPTP study in mice (4). Twelve transcriptomics arrays were evaluated in total (four per group: 24 hr and 7 days, MPTP 30 mg/kg and saline i.p.). First, a weighted gene correlation network analysis (WGCNA) was used to build initial de novo networks/modules. Next, transcription factor (TF) binding-site enrichment was assessed within each module. Text mining for the TF candidates together with MPTP and/or Parkinson's disease was then performed, and candidates were validated against databases of published interactions.
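The following is a highly simplified sketch of the WGCNA idea only, not the analysis of reference (4): it builds a signed co-expression network from a random gene-by-sample matrix, applies a soft-thresholding power, and cuts a hierarchical tree into modules; the matrix dimensions, power, and module count are arbitrary.

```python
# Simplified WGCNA-style sketch on random data (topological overlap and dynamic
# tree cutting, used by the real method, are omitted for brevity).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expr = rng.normal(size=(200, 12))            # 200 genes x 12 arrays, synthetic

power = 6                                     # soft-thresholding power
corr = np.corrcoef(expr)                      # gene-gene correlation matrix
adjacency = ((1 + corr) / 2) ** power         # signed adjacency in [0, 1]
dissimilarity = 1 - adjacency

# condensed upper triangle -> average-linkage tree -> at most 5 modules
tree = linkage(dissimilarity[np.triu_indices(200, k=1)], method="average")
modules = fcluster(tree, t=5, criterion="maxclust")
print("genes per module:", np.bincount(modules)[1:])
```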

The WGCNA found 1,247 genes in five significant clusters. The clusters captured the relevant pathways, including lysosomes, mitochondria (the KEGG Parkinson's disease and oxidative phosphorylation pathways), negative regulation of apoptosis or programmed cell death, molecular transport, and protein localization. Text mining for TF candidates together with MPTP and/or Parkinson's disease recovered several well-known TFs, including JUN, NRF2, and ELK1. In addition, SP1, which had not been described before in this context, was found in almost all modules, suggesting that it may play a central role in MPTP toxicity. Additional evidence for a central role of SP1 emerged when candidates were validated against databases of published interactions, including FANTOM4 and the EdgeExpressDB database of ChIP, siRNA, and other data.

In summary, the systems toxicology approach was validated by the reanalysis of an MPTP study in mice, which recovered known pathways of MPTP toxicity even though the key module containing the transcription factors had no significant annotations. Beyond confirming known pathways, the exercise combined text mining with published interactions to derive a new hypothesis: that the TF SP1 plays a central role in MPTP toxicity.

Manuel Peitsch presented an overview of systems biology and defined it as the study of biological networks. Mutations in a component of a biological network can cause disease, and drugs can likewise cause an adverse event by perturbing a biological network. Biological network perturbations are therefore pivotal to understanding the causal link between toxicants and their effects on health.

Systems toxicology is the integration of the classic toxicology paradigm with the quantitative analysis of the many molecular and functional changes occurring across multiple levels of biological organization. Systems toxicology research is aimed at developing a detailed, mechanistic, and dynamic understanding of toxicological processes. A systems toxicology assessment leverages this detailed mechanistic knowledge to enable a new paradigm of product assessment, including inter-species and system translation at the mechanistic level.

The process of developing a biological network model starts with a static model, which maps and visualizes molecular interactions through high-level interpretation of experimental data. A computable biological network model that is able to quantify biological impact is then built by adding new data, biological expression language, and gene ontology to the static model. With the addition of kinetic data, an executable biological network model is built, which enables the prediction of outcomes. To translate data into a workable predictive biological network model, it is important to understand the utility of animal models in toxicology, which biology can be applied across species, and which in vitro systems are necessary to recapitulate a meaningful part of whole-body biology.
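As a minimal sketch of what "computable" can mean in this context (this is not PMI's implementation), the example below stores a static causal network as a signed, directed edge list and scores a node's inferred activity from measured fold-changes of its downstream targets; all gene names, signs, and values are hypothetical.

```python
# Hypothetical sketch: a static causal network held as signed, directed edges,
# made "computable" by scoring upstream nodes against measured fold-changes.
from typing import Dict, List, Tuple

# (upstream, downstream, sign): +1 = increases, -1 = decreases (invented edges)
edges: List[Tuple[str, str, int]] = [
    ("NRF2", "HMOX1", +1),
    ("NRF2", "NQO1", +1),
    ("TP53", "BCL2", -1),
]

# measured log2 fold-changes of downstream genes (invented values)
fold_change: Dict[str, float] = {"HMOX1": 1.8, "NQO1": 0.9, "BCL2": -1.2}

def node_activity(node: str) -> float:
    """Sign-weighted average of measured changes downstream of a node."""
    downstream = [(tgt, sign) for src, tgt, sign in edges if src == node]
    return sum(sign * fold_change.get(tgt, 0.0) for tgt, sign in downstream) / len(downstream)

print("inferred NRF2 activity:", node_activity("NRF2"))  # (1.8 + 0.9) / 2
print("inferred TP53 activity:", node_activity("TP53"))  # -(-1.2) / 1
```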

A five-step approach for systems toxicology-based product testing was proposed. First, experimental data are collected to measure multiple perturbations across multiple experimental systems. Second, system response profiles are computed as differential response profiles derived from the large number of measured biological variables. Third, the biological processes perturbed by the product are identified. Fourth, the perturbations of the individual networks are quantified to compute network perturbation amplitudes. Fifth, the product's biological impact factor is computed by quantifying the overall perturbation induced by the product.
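The toy sketch below illustrates only the arithmetic shape of steps four and five, not the published network perturbation amplitude or biological impact factor algorithms; the gene assignments, fold-changes, and aggregation rule are all assumptions made for illustration.

```python
# Toy sketch of steps 4-5: per-network perturbation amplitudes aggregated into
# a single impact score (invented data; not the published NPA/BIF methods).
import numpy as np

# step 2 output: differential response profile (log2 fold-changes), invented
fold_changes = {"CXCL8": 2.1, "TNF": 1.7, "CDKN1A": 0.9, "GPX1": 1.2, "SOD2": 0.4}

# step 3 output: perturbed biological processes mapped to measured genes
networks = {
    "Inflammation": ["CXCL8", "TNF"],
    "Cell stress": ["GPX1", "SOD2"],
    "Cell cycle": ["CDKN1A"],
}

def perturbation_amplitude(genes):
    """Step 4 (toy version): mean squared differential response of the network."""
    return float(np.mean([fold_changes[g] ** 2 for g in genes]))

npa = {name: perturbation_amplitude(genes) for name, genes in networks.items()}
bif = sum(npa.values())  # step 5 (toy version): overall impact as a simple sum
print({k: round(v, 2) for k, v in npa.items()}, "impact =", round(bif, 2))
```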

Experimental data collection requires the conduct of systematic experiments in several test systems. This includes adequate selection of systems (primary cells whenever possible and animal models of disease), measurement of dose responses, time-resolved data, deep analysis of exposure data, and multi-omics coverage of the mechanisms of toxicity. It is important to build and maintain these biological networks in order to identify the mechanisms affected by product exposure and to build computable network models, which must be maintained over time as knowledge accumulates.

The computational process involves feeding the systems biology data into the biological network models to obtain network perturbation amplitude scores, which are then aggregated into a biological impact factor.

Two case studies were presented to demonstrate the utility of systems toxicology-based product testing. Both examined the serious diseases that smoking causes, such as cardiovascular disease, lung cancer, and chronic obstructive pulmonary disease (COPD). The first case study used a systems toxicology in vitro approach to compare the biological impacts of a reference combustible cigarette (3R4F) and a prototypic reduced-risk product (RRP) on primary human and rat lung epithelial cells. The second case study used a systems toxicology in vivo approach to compare the biological impact of 3R4F and a prototypic RRP on the development of emphysema in C57BL/6 mice. From the data presented, it was clear that the systems toxicology-based approach was able to differentiate the impact of the two products.

CONCLUSION

At the most fundamental level, systems toxicology will eventually be integrated into our current approach to translational safety. For this to occur, more quantitative biomarkers are needed to assess human safety following chemical exposure. Owing to a lack of sensitivity and/or specificity, the current battery of commonly used safety biomarkers cannot reliably detect chemically induced organ injury. Without accurate prediction by safety biomarkers, it is impossible to construct and apply a reliable translational safety strategy for use in quantitative risk assessment, regardless of the strength of nonclinical and in vitro safety data.

Figures
Fig. 1. An integration of systems biology and systems toxicology as a graphical representation of a quantitative translational safety strategy for evaluating adverse effects of chemicals in humans.
References
  1. Sauer, JM, Hartung, T, Leist, M, Knudsen, TB, Hoeng, J, and Hayes, AW (2015). Systems toxicology: the future of risk assessment. Int J Toxicol. 34, 346-348.
  2. Hartung, T, van Vliet, E, Jaworska, J, Bonilla, L, Skinner, N, and Thomas, R (2012). Systems toxicology. Altex. 29, 119-128.
  3. Sturla, SJ, Boobis, AR, FitzGerald, RE, Hoeng, J, Kavlock, RJ, Schirmer, K, Whelan, M, Wilks, MF, and Peitsch, MC (2014). Systems toxicology: from basic research to risk assessment. Chem Res Toxicol. 27, 314-329.
  4. Miller, RM, Callahan, LM, Casaceli, C, Chen, L, Kiser, GL, Chui, B, Kaysser-Kranich, TM, Sendera, TJ, Palaniappan, C, and Federoff, HJ (2004). Dysregulation of gene expression in the 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine-lesioned mouse substantia nigra. J Neurosci. 24, 7445-7454.

