Mechanistic validation

Thomas Hartung, Sebastian Hoffmann, Martin Stephens

Abstract

Validation of new approaches in regulatory toxicology is commonly defined as the independent assessment of the reproducibility and relevance (i.e., the scientific basis and predictive capacity) of a test for a particular purpose. In large ring trials, the emphasis to date has been mainly on reproducibility and predictive capacity (comparison against the traditional test), with less attention given to the scientific, or mechanistic, basis. Assessing predictive capacity is difficult for novel, mechanism-based approaches such as pathways of toxicity or the complex networks within the organism (systems toxicology). This is highly relevant for implementing Toxicology for the 21st Century, whether by high-throughput testing in the ToxCast/Tox21 project or by omics-based testing in the Human Toxome Project. This article explores the largely neglected assessment of a test’s scientific basis, which moves mechanism and causality to the foreground when validating/qualifying tests. Such mechanistic validation faces the problem of establishing causality in complex systems; however, pragmatic adaptations of the Bradford Hill criteria, as well as bioinformatic tools, are emerging. Because toxic mechanisms perturb critical infrastructures of the organism, we argue that focusing on the target of toxicity and its vulnerability, in addition to the way it is perturbed, can anchor both the identification of a mechanism and its verification.
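
In ring trials, predictive capacity is conventionally quantified by cross-tabulating the new test's classifications against those of the traditional reference test. A minimal sketch of that arithmetic in Python follows; the function name and the study counts are hypothetical, invented here for illustration and not taken from the article:

    def predictive_capacity(tp, fp, fn, tn):
        """Agreement of a new test with a reference test, from 2x2 counts.

        tp: reference-positive, new-test-positive
        fp: reference-negative, new-test-positive
        fn: reference-positive, new-test-negative
        tn: reference-negative, new-test-negative
        """
        total = tp + fp + fn + tn
        return {
            "sensitivity": tp / (tp + fn),    # toxicants correctly flagged
            "specificity": tn / (tn + fp),    # non-toxicants correctly cleared
            "concordance": (tp + tn) / total, # overall agreement with the reference
        }

    # Hypothetical counts from a notional 100-substance validation study.
    print(predictive_capacity(tp=42, fp=8, fn=6, tn=44))
    # {'sensitivity': 0.875, 'specificity': 0.846..., 'concordance': 0.86}

Note that such a table captures only agreement with the traditional test; it says nothing about mechanism, which is precisely the gap the article argues mechanistic validation should close.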

Article Details

How to Cite
Hartung, T., Hoffmann, S. and Stephens, M. (2013) “Mechanistic validation”, ALTEX - Alternatives to animal experimentation, 30(2), pp. 119–130. doi: 10.14573/altex.2013.2.119.
Section
Food for Thought ...
