This article provides a comprehensive overview of read-across approaches for chemical safety assessment, exploring their foundational principles, methodological applications, and optimization strategies. Tailored for researchers, scientists, and drug development professionals, it examines how structural and biological similarity can predict toxicity for data-poor chemicals. The content covers integrative frameworks combining traditional read-across with New Approach Methodologies (NAMs), addresses common implementation challenges, and analyzes validation criteria and global regulatory acceptance patterns to support confident application in biomedical and chemical development.
Read-across is a defined methodology used in chemical risk assessment to predict the (eco)toxicological properties of a target substance for which little or no experimental data exists by using information from one or several similar, well-characterized substances, known as source substances [1] [2]. It functions as a data gap-filling strategy within a broader weight-of-evidence evaluation [3] [2]. As a New Approach Methodology (NAM), read-across is part of a transformative shift in toxicology, aiming to increase the efficiency of safety assessments, lower testing costs, reduce reliance on animal testing, and improve the human relevance of data [3].
The fundamental principle underpinning read-across is that structurally similar compounds are likely to exhibit similar biological properties and toxicological effects [3]. This principle allows risk assessors to make informed predictions about the safety of a data-poor target substance. The European Food Safety Authority (EFSA) has developed formal guidance to standardize the application of read-across in food and feed safety risk assessment, detailing a structured workflow to ensure transparency and scientific robustness [2]. This guide objectively compares the core principles, regulatory expectations, and practical implementation of the read-across approach against traditional toxicological methods.
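In practice, "structural similarity" is made quantitative with a similarity coefficient computed over molecular fingerprints. The following minimal Python sketch, with hypothetical feature sets standing in for real fingerprint bits, illustrates the Tanimoto coefficient that underlies most analogue searches:

```python
def tanimoto(fp_a: set, fp_b: set) -> float:
    """Tanimoto (Jaccard) similarity between two sets of structural features."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical feature sets (e.g., fingerprint bit indices) for illustration only.
target = {1, 2, 3, 5, 8}
source = {1, 2, 3, 5, 9}
print(round(tanimoto(target, source), 3))  # 4 shared of 6 total features -> 0.667
```

A real assessment would derive the feature sets from a cheminformatics toolkit (e.g., RDKit fingerprints) rather than hand-written sets; only the coefficient itself is shown here.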
The practice of read-across is built upon several key concepts and a structured workflow. Understanding this terminology is essential for researchers and regulators.
The EFSA guidance outlines a systematic workflow to ensure reliable and transparent assessments [2]. The following diagram visualizes this multi-step process, which forms the logical backbone of a robust read-across assessment.
The regulatory landscape for read-across is evolving, with significant developments in the European Union setting a precedent for its standardized application.
Regulatory bodies provide structured frameworks to guide the application of read-across, emphasizing scientific rigor and transparency.
A primary challenge in regulatory submission is adequately demonstrating that the source and target substances are sufficiently similar for the specific endpoint being assessed, as minor structural differences can lead to significant changes in toxicological behavior [3]. Regulators expect read-across to be supported not only by structural similarity but also by mechanistic evidence, such as data on the mode of action or kinetics [3]. Consequently, stand-alone evidence from read-across may not be considered sufficient to conclude on the toxicity of a target substance; it is generally more acceptable when presented as part of a weight-of-evidence approach in conjunction with other lines of evidence (e.g., in vivo, in vitro, OMICs data) [1].
Transitioning from principle to practice requires specific tools and an understanding of how read-across compares to traditional toxicological testing.
The following table details key resources and tools that are essential for developing and justifying a read-across assessment.
Table 1: Key Research Reagents and Tools for Read-Across Assessments
| Tool / Resource Name | Function / Purpose | Example Platforms / Sources |
|---|---|---|
| Chemical Databases | For searching structurally similar compounds and accessing experimental data. | eChemPortal, CompTox Chemicals Dashboard [3] |
| Grouping & Read-Across Tools | Software to systematically compare molecular structures, properties, and toxicity data. | OECD QSAR Toolbox, CEFIC AMBIT tool, EPA Analog Identification Methodology (AIM) Tool [3] |
| In Vitro Data Platforms | Provide mechanistic toxicology data to bolster biological plausibility of the read-across. | Tox21, ToxCast [3] |
| Uncertainty Analysis Template | A structured framework to document and evaluate uncertainties in the assessment. | Provided in EFSA's draft guidance [3] |
An objective comparison of the performance and characteristics of read-across against traditional animal testing reveals distinct advantages and limitations.
Table 2: Comparison of Read-Across and Traditional Animal Testing
| Feature | Read-Across Approach | Traditional Animal Testing |
|---|---|---|
| Fundamental Principle | Predicts properties based on similarity to known substances [3]. | Directly measures effects in a live animal model. |
| Primary Objective | To fill data gaps without conducting new animal tests [3]. | To generate hazard data for a specific substance. |
| Data Output | Predicted data, with associated uncertainties [1]. | Empirical experimental data. |
| Time Requirement | Generally faster, leveraging existing data [3]. | Can take months to years per substance. |
| Financial Cost | Lower, as it avoids costly in vivo studies [3]. | Very high, due to husbandry and procedural costs. |
| Animal Use | Significantly reduces or eliminates animal use [3]. | High reliance on animal models. |
| Human Relevance | Can be enhanced by integrating human-relevant NAMs data [3]. | Limited by interspecies differences. |
| Key Challenge | Justifying similarity and managing uncertainty to gain regulatory acceptance [3] [1]. | Ethical concerns, cost, time, and translatability to humans. |
| Regulatory Acceptance | Evolving, guided by new frameworks (e.g., EFSA 2025); requires robust justification [3] [2]. | Well-established and historically standardized. |
Read-across is a scientifically sound and practical approach for chemical safety assessment, defined by its core principle of leveraging data from similar substances to fill knowledge gaps. Its structured workflow, as detailed in EFSA's 2025 guidance, emphasizes problem formulation, rigorous substance characterization, and critical uncertainty assessment to ensure reliable and transparent predictions [2]. While the approach offers significant advantages in reducing animal testing and accelerating the assessment process, its successful application and regulatory acceptance depend on a robust justification of similarity, often supported by integrating data from New Approach Methodologies. For researchers and drug development professionals, mastering the principles and practices of read-across is increasingly essential for navigating the future landscape of evidence-based chemical safety research.
Read-across is a widely used data gap-filling technique within category and analogue approaches for regulatory purposes, playing a critical role in chemical safety assessment under frameworks such as the European Union's Registration, Evaluation, Authorisation and Restriction of Chemicals (REACH) regulation [4] [5]. The fundamental principle underpinning traditional read-across is the chemical similarity principle, which posits that chemically similar compounds are likely to exhibit similar biological effects and toxicity profiles [4]. This principle has provided the foundation for predicting chemical-induced responses based primarily on chemical structure alone, enabling hazard assessment without the need for extensive animal testing [4] [5].
Despite its regulatory acceptance and widespread application (evidenced by its use in up to 75% of analyzed REACH dossiers for at least one endpoint), traditional read-across faces significant challenges [5]. The accuracy of predictions based solely on chemical structural similarity often proves inadequate due to the complex mechanisms of toxicity underlying many adverse outcomes [4] [6]. Regulatory acceptance remains a major hurdle primarily due to the lack of objectivity and clarity about how to practically address uncertainties in what has largely been a subjective expert judgment-driven assessment [5].
This article traces the evolution from traditional chemical structure-based read-across to more advanced integrative approaches that combine chemical structural information with biological activity data. We will objectively compare the performance of these methodologies, provide detailed experimental protocols, and analyze how the integration of multiple data streams addresses the limitations of traditional approaches while enhancing prediction accuracy and regulatory acceptance.
Traditional read-across approaches rely exclusively on chemical structural similarity to predict the toxicity of a target compound by inferring from structurally similar source chemicals with available toxicity data [4] [5]. The methodological foundation involves identifying a set of structural analogues and using their known toxicological properties to estimate the properties of the target chemical. This process typically employs chemical descriptors and similarity metrics to quantify the degree of structural resemblance between compounds [4].
The quantitative foundation for traditional read-across predictions is expressed in the following equation, where the predicted activity of a compound (Apred) is calculated from the similarity-weighted aggregate of the activities Ai of k nearest neighbors:
Equation 1: Traditional Read-Across Prediction

$$A_{\text{pred}} = \frac{\sum_{i=1}^{k} S_i \, A_i}{\sum_{i=1}^{k} S_i}$$
In this equation, Si represents the pairwise Tanimoto similarity between the target molecule and its ith neighbor, calculated from chemical descriptor space using the Jaccard distance [4]. The similarity-weighted aggregate ensures that the activities of more similar neighbors receive higher weights when calculating the predicted activity, providing a quantitative basis for what has often been treated as a qualitative assessment.
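The similarity-weighted aggregate of Equation 1 can be sketched in a few lines of Python. The similarity and activity values below are hypothetical, chosen only to show how higher-similarity neighbors dominate the prediction:

```python
def read_across_predict(similarities, activities):
    """Similarity-weighted average of source-substance activities (Equation 1)."""
    assert len(similarities) == len(activities) and similarities
    num = sum(s * a for s, a in zip(similarities, activities))
    den = sum(similarities)
    return num / den

# Hypothetical Tanimoto similarities and known activities of k = 3 nearest neighbors.
sims = [0.9, 0.7, 0.5]
acts = [1.2, 0.8, 2.0]
print(round(read_across_predict(sims, acts), 3))  # -> 1.257
```

Relative to the unweighted mean (about 1.33), the prediction is pulled toward the activities of the two most similar neighbors, which is exactly the behavior Equation 1 is meant to encode.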
The application of traditional read-across has been particularly valuable in regulatory contexts where data gaps exist for specific endpoints. Under the REACH regulation, more than 20% of high production volume chemicals submitted for the first deadline relied on read-across for hazard information on various toxicity endpoints necessary for registration [4]. Similarly, a comparable proportion of High Production Volume chemicals submitted to the US EPA under the Toxic Substances Control Act have been evaluated using read-across approaches [5].
The OECD QSAR Toolbox represents one of the most widely used implementations of traditional read-across methodology, enabling users to identify structural analogues and fill data gaps through systematic similarity searching and grouping [4] [5]. Other software tools such as ToxMatch and ToxRead further facilitate nearest neighbor predictions using different similarity indices, providing the toxicology community with practical resources for implementing read-across in various decision contexts [5].
Despite its utility, traditional read-across faces several significant limitations that impact its predictive accuracy and regulatory acceptance. The approach fundamentally struggles with addressing complex mechanisms of toxicity that cannot be adequately captured by structural similarity alone [4] [6]. This limitation becomes particularly problematic when predicting complex in vivo outcomes from chemical structure, where similar structures may exhibit different metabolic pathways or biological interactions.
The subjective nature of analogue selection and similarity assessment introduces substantial variability and uncertainty into predictions [5]. Without objective criteria for defining similarity thresholds and selecting appropriate analogues, different experts may arrive at divergent read-across conclusions for the same target chemical, undermining regulatory confidence. Furthermore, traditional approaches offer limited capability for mechanistic interpretation, as they lack the biological context necessary to explain why certain structural features correlate with specific toxicological outcomes [4].
The limitations of traditional read-across have spurred the development of integrative approaches that combine chemical structural information with biological activity data. The conceptual foundation for these methods rests on the recognition that toxicity pathways and adverse outcome pathways provide a mechanistic bridge between chemical structure and toxicological effects that cannot be fully captured by structural similarity alone [5]. By incorporating biological response data, these approaches aim to enhance the biological relevance of read-across predictions while reducing uncertainty.
Integrative methods leverage the growing availability of high-throughput screening data from programs such as ToxCast and the Toxicogenomics Project-Genomics Assisted Toxicity Evaluation system (TG-GATES) [4] [5] [7]. These data streams provide information on biological responses at molecular and cellular levels that can serve as indicators of potential toxicity mechanisms, offering a complementary dimension to traditional structural similarity assessments [4]. The integration of chemical and biological information enables a more comprehensive characterization of a chemical's potential hazard, moving beyond what can be inferred from structure alone.
The Chemical-Biological Read-Across (CBRA) approach represents a significant methodological advancement in integrative read-across [4] [6]. This approach infers each compound's toxicity from those of both chemical and biological analogs, with similarities determined by the Tanimoto coefficient applied to both descriptor types [4]. The CBRA prediction is calculated using an expanded version of the traditional read-across equation:
Equation 2: Chemical-Biological Read-Across Prediction

$$A_{\text{pred}} = \frac{\sum_{i=1}^{k_{\text{chem}}} S_{\text{chem},i} \, A_i + \sum_{j=1}^{k_{\text{bio}}} S_{\text{bio},j} \, A_j}{\sum_{i=1}^{k_{\text{chem}}} S_{\text{chem},i} + \sum_{j=1}^{k_{\text{bio}}} S_{\text{bio},j}}$$
This equation incorporates both biological neighbors (kbio) and chemical neighbors (kchem) in a unified similarity-weighted prediction framework [4]. The method employs radial plots to visualize the relative contribution of analogous chemical and biological neighbors, enhancing the transparency and interpretability of predictions [4] [6].
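Assuming Equation 2 simply pools both neighbor types into one similarity-weighted average, a minimal sketch looks like this; all similarity and activity values are hypothetical:

```python
def cbra_predict(chem_neighbors, bio_neighbors):
    """
    Chemical-Biological Read-Across (Equation 2): one similarity-weighted
    average pooled over chemical and biological analogues.
    Each neighbor is a (similarity, activity) pair.
    """
    pooled = list(chem_neighbors) + list(bio_neighbors)
    den = sum(s for s, _ in pooled)
    return sum(s * a for s, a in pooled) / den

# Hypothetical neighbors as (Tanimoto similarity, known activity) pairs.
chem = [(0.8, 1.0), (0.6, 0.0)]   # k_chem = 2 structural analogues
bio  = [(0.9, 1.0)]               # k_bio  = 1 bioactivity analogue
print(round(cbra_predict(chem, bio), 3))  # -> 0.739
```

Note how the single high-similarity biological neighbor shifts the prediction relative to what the chemical analogues alone would give; this is the complementarity CBRA exploits.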
Another significant development is the Generalized Read-Across (GenRA) approach, which provides a systematic framework for predicting toxicity across structurally similar neighborhoods in large chemical libraries [5]. This method enables objective evaluation of read-across performance using chemical structure and bioactivity information to define local validity domains, the specific sets of nearest neighbors used for prediction [5] [8].
The integration of biological data in read-across has been enabled by advances in high-throughput screening technologies that allow comprehensive profiling of chemical effects on biological systems. The ToxCast program and related initiatives have generated bioactivity data for thousands of chemicals across hundreds of assay endpoints, capturing effects on diverse biological targets and pathways [5] [7]. These data provide a rich source of biological descriptors for integrative read-across.
Specific biological data types used in integrative read-across include gene expression profiling from toxicogenomics studies, cytotoxicity screening data measuring intracellular ATP and caspase-3/7 activation, and targeted assays measuring specific pathway activities [4] [7]. For example, the study by Lock et al. screened 240 compounds across 81 human lymphoblast cell lines, measuring both cytotoxicity and apoptosis induction to generate biological response profiles that capture interindividual variability in chemical susceptibility [7].
Table 1: Data Types Used in Integrative Read-Across Approaches
| Data Type | Specific Endpoints | Example Sources | Application in Read-Across |
|---|---|---|---|
| Gene Expression | 2,923 transcripts from TG-GATES | Toxicogenomics Project [4] | Hepatotoxicity prediction |
| Cytotoxicity Screening | Intracellular ATP, caspase-3/7 activation | ToxCast, qHTS [4] [7] | Acute toxicity classification |
| Pathway-Based Assays | 821 ToxCast assay endpoints | ToxCast Program [5] | Mechanistic profiling for various toxicity endpoints |
| Chemical Descriptors | Dragon descriptors, structural fingerprints | Dragon Software, RDKit [4] [8] | Structural similarity assessment |
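A common way to turn screening results like these into biological descriptors is a binary hit-call fingerprint: each assay contributes a 1 if the compound produced a concentration-response (a finite AC50) and a 0 otherwise. The sketch below uses invented AC50 values purely for illustration:

```python
import math

def bioactivity_fingerprint(ac50s):
    """Binary hit-call vector: 1 if the assay produced a finite AC50 (a 'hit')."""
    return [0 if math.isinf(x) else 1 for x in ac50s]

def tanimoto_bits(a, b):
    """Tanimoto similarity between two equal-length binary vectors."""
    both = sum(1 for x, y in zip(a, b) if x and y)
    either = sum(1 for x, y in zip(a, b) if x or y)
    return both / either if either else 1.0

# Hypothetical AC50s (uM) across five assays; inf = no response observed.
target_ac50 = [1.2, math.inf, 0.5, math.inf, 3.0]
source_ac50 = [0.9, math.inf, 0.7, 2.1, math.inf]
t = bioactivity_fingerprint(target_ac50)   # [1, 0, 1, 0, 1]
s = bioactivity_fingerprint(source_ac50)   # [1, 0, 1, 1, 0]
print(round(tanimoto_bits(t, s), 2))       # 2 shared hits of 4 -> 0.5
```

Published pipelines often use potency-weighted rather than binary encodings; the binary form is shown only because it is the simplest bridge from assay outcomes to a similarity calculation.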
Rigorous comparison of traditional and integrative read-across approaches requires standardized evaluation across multiple toxicity endpoints and chemical domains. Low et al. conducted a comprehensive assessment using four distinct data sets with different toxicity endpoints: sub-chronic hepatotoxicity (127 compounds from TG-GATES), hepatocarcinogenicity (132 compounds from DrugMatrix), mutagenicity (185 compounds from CCRIS), and acute lethality (122 compounds with rat oral LD50 data) [4]. This experimental design enabled direct comparison of classification accuracy between traditional read-across (using chemical descriptors alone) and CBRA (using both chemical and biological descriptors).
Similarly, the GenRA framework was systematically evaluated for predicting up to ten different in vivo repeated dose toxicity study types using a set of 1778 chemicals from the ToxCast library [5] [8]. The approach utilized 3239 different chemical structure descriptors supplemented with outcomes from 821 in vitro assays, with prediction performance assessed for 600 chemicals with in vivo data [5] [8]. This large-scale evaluation provided robust statistical power for comparing method performance across diverse chemical spaces and toxicity endpoints.
The comparative performance assessment reveals consistent advantages for integrative read-across approaches across multiple toxicity endpoints. In the CBRA study, the integrated chemical-biological approach demonstrated superior classification accuracy compared to methods using either chemical or biological descriptors alone [4] [6]. The performance advantage was particularly notable for complex endpoints such as hepatotoxicity and hepatocarcinogenicity, where mechanisms involve multiple biological pathways that cannot be fully captured by structural alerts alone.
The GenRA evaluation demonstrated that incorporating bioactivity descriptors from ToxCast assays improved prediction performance for many in vivo toxicity endpoints compared to using chemical descriptors alone [5]. This systematic analysis established a performance baseline for read-across predictions and highlighted the value of bioactivity data in reducing prediction uncertainty, particularly for data-poor chemicals where structural analogues are limited or insufficiently similar.
Table 2: Performance Comparison of Read-Across Approaches Across Different Endpoints
| Toxicity Endpoint | Traditional RA (Chem Only) | Biological Similarity Only | Integrative CBRA | Key Study Findings |
|---|---|---|---|---|
| Hepatotoxicity | Moderate accuracy | Moderate accuracy | High accuracy | CBRA exhibited consistently high external classification accuracy [4] |
| Hepatocarcinogenicity | Variable performance | Improved over chemical | Most reliable | Biological data provided complementary predictive information [4] |
| Mutagenicity | Good performance | Good performance | Enhanced performance | Both approaches benefited from integration [4] |
| Acute Lethality | Moderate accuracy | Moderate accuracy | Substantial improvement | Cytotoxicity profiles enhanced prediction [4] |
| Repeated Dose Toxicity | Limited applicability | Mechanistic relevance | Uncertainty reduction | Bioactivity data addressed key uncertainties [5] |
A critical aspect of performance comparison involves assessing uncertainty and defining applicability domains for different read-across approaches. Traditional read-across typically defines applicability based on chemical structural similarity within a local validity domain [5]. While this approach identifies structurally related analogues, it may miss important biological considerations that affect toxicity potential.
Integrative approaches enable a more comprehensive definition of applicability domains that incorporates both chemical and biological similarity [4] [5]. This expanded domain characterization helps identify situations where structural similarity may not translate to similar biological activity, or conversely, where structurally diverse chemicals may share common toxicity mechanisms through different structural features. The transparency of the CBRA approach, aided by radial plots showing the relative contribution of chemical and biological neighbors, facilitates more informed uncertainty assessment by explicitly representing the evidence basis for predictions [4] [6].
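The radial-plot idea can be summarized numerically: for a given prediction, report what fraction of the total similarity weight comes from chemical versus biological analogues. This simplified sketch, not the published plotting code, uses hypothetical similarity values:

```python
def evidence_fractions(chem_sims, bio_sims):
    """Share of total similarity weight from chemical vs. biological analogues."""
    total = sum(chem_sims) + sum(bio_sims)
    chem_frac = sum(chem_sims) / total
    return chem_frac, 1.0 - chem_frac

# Hypothetical neighbor similarities for one target substance.
chem_frac, bio_frac = evidence_fractions([0.8, 0.6], [0.9, 0.7])
print(round(chem_frac, 2), round(bio_frac, 2))  # -> 0.47 0.53
```

A roughly even split, as here, signals that both evidence streams support the prediction; a heavily one-sided split flags where the uncertainty assessment should concentrate.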
The experimental foundation for both traditional and integrative read-across begins with comprehensive chemical structure curation and descriptor calculation. The standard protocol involves:
1. **Structural standardization:** Chemical structures undergo rigorous curation procedures including standardization of representation, removal of salts and duplicates, and filtering of problematic structures (e.g., metal-containing compounds or those with molecular weight >2000) [4] [5].
2. **Descriptor calculation:** Dragon software (v.5.5 or later) is typically used to compute a comprehensive set of chemical descriptors capturing diverse structural and physicochemical properties [4]. Alternative approaches may employ extended-connectivity fingerprints or other structural representation methods [8].
3. **Descriptor preprocessing:** All chemical descriptors undergo range scaling to values between 0 and 1, followed by removal of low-variance descriptors (standard deviation <10^(-6)) and highly correlated descriptors (pairwise r² >0.9) to reduce dimensionality and minimize multicollinearity [4].
4. **Similarity calculation:** Pairwise similarity between compounds is quantified using the Tanimoto coefficient, derived from Jaccard distance calculations across the descriptor space [4]. The similarity values are normalized between 0 and 1, with 1 indicating identical pairs.
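The preprocessing and similarity steps above can be sketched as follows. This is a simplified illustration on a hypothetical three-descriptor matrix; the continuous Tanimoto here uses the weighted-Jaccard form (sum of minima over sum of maxima), one common generalization to non-binary descriptors:

```python
def range_scale(col):
    """Scale one descriptor column to [0, 1]."""
    lo, hi = min(col), max(col)
    return [0.0 if hi == lo else (v - lo) / (hi - lo) for v in col]

def continuous_tanimoto(x, y):
    """Weighted-Jaccard similarity for continuous descriptors (= 1 - Jaccard distance)."""
    num = sum(min(a, b) for a, b in zip(x, y))
    den = sum(max(a, b) for a, b in zip(x, y))
    return num / den if den else 1.0

# Hypothetical descriptor matrix: rows = compounds, columns = descriptors.
raw = [
    [120.0, 1.3, 7.0],
    [150.0, 2.1, 7.0],
    [300.0, 0.4, 7.0],
]
# Step 3: range-scale each column, then drop low-variance columns
# (the constant third descriptor is removed, mirroring the sd < 1e-6 filter).
cols = [range_scale(col) for col in zip(*raw)]
keep = [i for i, col in enumerate(cols) if max(col) - min(col) > 1e-6]
vecs = [[cols[i][r] for i in keep] for r in range(len(raw))]
# Step 4: pairwise similarity between compounds 0 and 1.
print(round(continuous_tanimoto(vecs[0], vecs[1]), 3))  # -> 0.454
```

A full pipeline would also apply the pairwise r² >0.9 correlation filter, omitted here for brevity; with only two retained descriptors it would change nothing in this toy example.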
The generation of biological descriptors for integrative read-across follows standardized protocols tailored to specific assay technologies:
Gene Expression Profiling (TG-GATES Protocol):
Cytotoxicity Screening (qHTS Protocol):
The implementation of integrative read-across follows a systematic workflow that can be visualized as follows:
Successful implementation of integrative read-across requires access to specialized computational tools, data resources, and experimental systems. The following table summarizes key resources that form the essential toolkit for researchers in this field.
Table 3: Essential Research Resources for Integrative Read-Across
| Resource Category | Specific Tools/Resources | Key Functionality | Application in Read-Across |
|---|---|---|---|
| Chemical Structure Tools | Dragon Software [4] | Calculation of chemical descriptors | Structural representation and similarity assessment |
| | RDKit [8] | Open-source cheminformatics | Chemical fingerprint generation and manipulation |
| Biological Data Resources | ToxCast/Tox21 [5] [7] | High-throughput screening data | Source of bioactivity descriptors for mechanism inference |
| | TG-GATES [4] [8] | Toxicogenomics database | Gene expression profiles for hepatotoxicity prediction |
| | DrugMatrix [4] | Toxicogenomics resource | Gene expression data for hepatocarcinogenicity assessment |
| Similarity Assessment Tools | OECD QSAR Toolbox [4] [5] | Read-across and category formation | Structural analogue identification and data gap filling |
| | ToxMatch [5] | Similarity profiling | Alternative similarity metrics and neighbor identification |
| Data Analysis Environments | R/Python with specialized packages [8] | Statistical analysis and modeling | Implementation of similarity calculations and prediction models |
| Reference Databases | CCRIS [4] | Chemical Carcinogenesis Research Information System | Mutagenicity reference data for model training and validation |
| | CPDB [4] | Carcinogenicity Potency Database | Hepatocarcinogenicity reference data |
The evolution from traditional to integrative read-across approaches represents a significant advancement in chemical safety assessment methodology. The integration of chemical structural information with biological activity data has consistently demonstrated improved prediction accuracy across multiple toxicity endpoints while addressing key limitations of traditional structure-based approaches [4] [5] [6]. The quantitative performance assessments summarized in this article provide compelling evidence for the value of incorporating bioactivity data to reduce prediction uncertainty and enhance mechanistic interpretability.
The regulatory acceptance of read-across stands to benefit substantially from these methodological advances [5]. Integrative approaches address several key challenges that have hindered confidence in traditional read-across, including the subjective nature of analogue selection, limited mechanistic basis for predictions, and inadequate characterization of uncertainty. By providing a more transparent, objective, and biologically grounded framework for data gap filling, integrative read-across can support more reliable chemical safety decisions while reducing animal testing requirements.
Future developments in integrative read-across will likely focus on several key areas. First, the incorporation of adverse outcome pathway frameworks will strengthen the mechanistic basis for biological similarity assessments, enabling more targeted selection of bioactivity descriptors relevant to specific toxicity endpoints [5]. Second, advances in high-content screening and transcriptomics technologies will expand the breadth and depth of biological data available for integration, capturing more complex biological responses and pathway perturbations. Finally, standardized performance benchmarking frameworks and best practice guidelines will be essential for establishing confidence in these methods and promoting their consistent application in regulatory contexts [5] [8].
As chemical safety assessment continues to evolve toward more mechanistic and human-relevant approaches, integrative read-across methodologies will play an increasingly important role in bridging between traditional toxicology and emerging paradigms based on pathway-based risk assessment. The continued refinement and validation of these approaches will be essential for addressing the growing need for efficient and reliable chemical safety evaluation in both regulatory and product development contexts.
Read-across is a fundamental methodology in chemical risk assessment used to predict the properties of a data-poor substance by leveraging existing data from similar, data-rich substances [9]. This approach is grounded in the principle that structurally similar substances are likely to have comparable physicochemical properties, environmental fate, and toxicological effects [10]. Under regulatory frameworks like the Toxic Substances Control Act (TSCA) in the United States and the European Food Safety Authority (EFSA) in the EU, read-across serves as a critical alternative to animal testing for filling data gaps, thereby streamlining the safety evaluation of new chemicals [11] [9]. This guide details the core concepts of source and target substances and the two primary grouping approaches, providing a structured comparison for professionals in chemical safety research.
The target substance is the chemical entity under assessment for which specific property or toxicity data is lacking [11] [10]. This is the data-poor chemical that requires evaluation before it can enter the marketplace or be approved for use.
The source substance (or source analogue) is a chemically similar compound for which the necessary experimental data on the relevant properties or endpoints is already available [9] [12]. Data from the source substance is used to make predictions about the target substance.
Table 1: Core Definitions in Read-Across
| Term | Definition | Role in Assessment |
|---|---|---|
| Target Substance | The data-poor chemical being assessed [10]. | The subject of the safety evaluation; its unknown properties need to be predicted. |
| Source Substance | The data-rich, structurally similar chemical used for comparison [9]. | Provides the experimental data to fill the data gaps for the target substance. |
The two main methodological frameworks for grouping chemicals in read-across are the analogue approach and the category approach. The choice between them depends on the number of suitable source substances available and the desired robustness of the prediction.
The analogue approach involves a direct, one-to-one comparison between a target substance and a single source substance that is considered to be its closest match [11] [9]. This method is typically chosen when one particularly strong analogue is available. It relies on a high degree of structural and mechanistic similarity to justify the direct extrapolation of data from the source to the target [10].
The category approach is a more robust method that involves grouping the target substance with at least two or more source substances that form a chemically similar category [11] [9]. This approach allows for the identification of trends or patterns in the data across the category. Predictions for the target substance can then be made through interpolation or extrapolation within these established trends, which can lead to more reliable and nuanced estimates than a single analogue [10].
Table 2: Analogue Approach vs. Category Approach
| Feature | Analogue Approach | Category Approach |
|---|---|---|
| Definition | Direct comparison of a target with a single source chemical [9]. | Grouping of a target with multiple source chemicals [11]. |
| Basis for Grouping | High degree of structural and mechanistic similarity [10]. | Common functional group, incremental change (e.g., carbon chain length), or common mode of action [11] [10]. |
| Prediction Method | Direct extrapolation of data from source to target [9]. | Interpolation, extrapolation, or trend analysis within the category [11]. |
| Data Robustness | Relies on the strength of a single analogue; can be less robust. | Leverages multiple data points; generally considered more robust and reliable [10]. |
| Best Use Case | When one exceptionally well-matched source analogue is available. | When several similar chemicals exist, allowing for trend analysis and a stronger weight of evidence. |
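The trend analysis that distinguishes the category approach can be illustrated with a toy homologous series: fit a trend over the source substances' property values (here against carbon chain length) and interpolate for the data-poor target. All numbers are invented for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical category: carbon chain length vs. a measured endpoint value
# for four data-rich source substances (illustrative numbers, not real data).
chain_len = [4, 6, 8, 10]
endpoint  = [2.0, 3.1, 4.0, 5.1]
a, b = fit_line(chain_len, endpoint)

target_len = 7                        # data-poor member inside the category
print(round(a * target_len + b, 2))   # interpolated prediction -> 3.55
```

Because the target falls inside the range spanned by the sources, this is interpolation within the category; extrapolating beyond chain length 10 would carry markedly higher uncertainty and would need separate justification.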
The following workflow diagram illustrates the decision process and steps involved in selecting and applying these two approaches.
Executing a scientifically valid read-across assessment requires a structured workflow. The following protocols, synthesized from regulatory guidance, ensure a systematic and transparent process [9] [10].
The following tools and databases are critical for conducting a robust read-across assessment.
Table 3: Essential Research Tools for Read-Across
| Tool / Resource | Type | Function in Read-Across |
|---|---|---|
| EPA CompTox Chemicals Dashboard | Database | Provides access to a wealth of physicochemical, toxicity, and bioassay data for thousands of chemicals to identify and characterize source substances [12]. |
| Generalized Read-Across (GenRA) | Software Tool | An algorithmic, web-based application within the CompTox Dashboard that helps identify candidate analogues and make objective predictions of in vivo toxicity based on structural and bioactivity similarity [12]. |
| OECD QSAR Toolbox | Software Tool | A comprehensive tool to profile chemicals, identify structural analogues and metabolic pathways, and fill data gaps by grouping chemicals into categories [9]. |
| EPI Suite | Software Tool | A suite of physicochemical property and environmental fate estimators used to predict key properties for the target and source substances when experimental data is missing [11]. |
| New Approach Methodologies (NAMs) | Experimental Methods | A suite of non-animal methods (e.g., in vitro assays, high-throughput screening, omics technologies) used to generate mechanistic data that supports the biological similarity between source and target substances [9]. |
Read-across is a fundamental methodology in chemical risk assessment used to predict the toxicological properties of a target substance with limited data by using information from structurally and mechanistically similar source substances [2]. This approach operates on the principle that structurally similar compounds exhibit similar biological effects, making it a reliable tool for prediction when experimental data is scarce [13]. As regulatory bodies worldwide increasingly aim to reduce animal testing, read-across has become an essential component of New Approach Methodologies (NAMs) for filling critical data gaps while maintaining human health protection standards [14]. The European Food Safety Authority (EFSA) has developed comprehensive guidance for using read-across in food and feed risk assessment, providing a step-by-step framework for problem formulation, substance characterization, uncertainty analysis, and conclusion reporting [2].
The scientific foundation of read-across rests on three interconnected pillars: structural similarity, which establishes the fundamental comparability between chemicals; toxicokinetics (what the body does to a chemical), which describes absorption, distribution, metabolism, and excretion; and toxicodynamics (what the chemical does to the body), which encompasses the biological interactions and effects at target sites [14]. Understanding these interrelated components allows researchers to make more accurate predictions about chemical safety, supporting the transition toward innovative, human-relevant risk assessment strategies that reduce reliance on traditional animal testing [15] [14]. This guide provides a comparative analysis of experimental approaches and computational methodologies that form the scientific basis for modern read-across applications in chemical safety research.
Structural similarity assessment forms the foundational basis for read-across, predicated on the principle that compounds with analogous molecular structures are likely to exhibit comparable biological activities and toxicological profiles [13]. Multiple computational approaches have been developed to quantify and evaluate structural similarity, each with distinct methodologies, strengths, and limitations. The accurate assessment of structural similarity is crucial for establishing valid read-across hypotheses and ensuring reliable toxicity predictions.
Table 1: Comparative Performance of Structural Similarity Assessment Methods
| Method Category | Specific Approach | Key Metrics/Descriptors | Reported Performance | Primary Applications |
|---|---|---|---|---|
| Cheminformatic Fingerprints | Extended Connectivity Fingerprint (ECFP), Atom Pair (AP), Pharmacophore Fingerprint (PHFP) [16] | Binary structural features, topological atom environments, pharmacophoric points | Varies by fingerprint type; Multi-representation fusion improves recall-precision balance [16] | Initial similarity screening, chemical space characterization |
| Multi-Representation Data Fusion | AgreementPred framework combining 22 molecular representations [16] | Combined similarity scores from multiple fingerprints, agreement scores | Recall: 0.74, Precision: 0.55 (agreement score threshold: 0.1) [16] | Drug and natural product category recommendation |
| Quantitative Read-Across Structure-Activity Relationship (q-RASAR) | Integration of QSAR descriptors with read-across predictions [13] | 0D-2D molecular descriptors, similarity-based predictions | Enhanced predictive accuracy vs. QSAR alone; Reduced mean absolute error (MAE) [13] | Predicting human toxicity endpoints (e.g., TDLo) |
| Explainable AI Integration | SHAP analysis with machine learning models [13] | Feature importance values, mechanistic interpretability | Improved model transparency and mechanistic insights [13] | Identifying key structural features linked to toxicity |
Recent advancements in structural similarity assessment have focused on multi-representation approaches and hybrid methodologies. The AgreementPred framework demonstrates that combining similarity search results from multiple molecular representations significantly improves the recall-precision balance in category recommendation tasks compared to single-representation methods [16]. Similarly, the development of q-RASAR models represents a substantive advancement by integrating traditional quantitative structure-activity relationship (QSAR) descriptors with similarity-based read-across predictions, resulting in enhanced predictive accuracy for human toxicity endpoints such as the toxic dose low (TDLo) [13]. These hybrid approaches effectively address the limitation of conventional read-across, which often struggles with interpreting key structural features responsible for observed toxicological effects.
A standardized workflow for structural similarity assessment in read-across applications typically involves several key stages. First, target substance characterization entails compiling comprehensive molecular information, including chemical structure, functional groups, and physicochemical properties. Subsequently, source substance identification involves searching chemical databases for structurally analogous compounds using multiple fingerprint methods and similarity metrics (e.g., Tanimoto coefficient). The third phase encompasses similarity validation, which assesses not only structural similarity but also mechanistic plausibility through biological pathway analysis. Finally, uncertainty quantification evaluates the confidence in similarity hypotheses using quantitative measures and potential adjustments through additional data from New Approach Methodologies (NAMs) [2] [13].
Figure 1: Structural similarity assessment workflow for read-across.
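The similarity-screening stage of this workflow can be illustrated with a minimal, dependency-free sketch of the Tanimoto coefficient. In practice the fingerprints would be generated by a cheminformatics toolkit (e.g., Morgan/ECFP fingerprints in RDKit); the bit positions below are hypothetical.

```python
# Minimal sketch of fingerprint-based similarity screening using the
# Tanimoto coefficient. Fingerprints are represented as sets of "on" bit
# positions; real workflows derive them with a cheminformatics toolkit.

def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient between two fingerprints given as bit sets."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Hypothetical fingerprints: target vs. two candidate source substances
target = {1, 4, 9, 17, 23, 42}
candidates = {
    "source_A": {1, 4, 9, 17, 23, 57},   # close analogue
    "source_B": {2, 4, 30, 41},          # weak analogue
}

# Rank candidate source substances by similarity to the target
ranked = sorted(candidates.items(),
                key=lambda kv: tanimoto(target, kv[1]), reverse=True)
for name, fp in ranked:
    print(name, round(tanimoto(target, fp), 3))
```

A ranking like this is only the entry point of the workflow: as the text emphasizes, top-ranked analogues must still pass mechanistic-plausibility and uncertainty checks before they qualify as source substances.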
Toxicokinetics (TK) describes the time course of chemical absorption, distribution, metabolism, and excretion (ADME) within biological systems. In read-across applications, comparing the TK profiles of source and target substances provides critical insights into their internal dosimetry and potential biological effects. Significant advances in computational toxicokinetics have enabled more reliable extrapolations between structurally similar compounds, enhancing the predictive capability of read-across for human health risk assessment.
Table 2: Comparison of Toxicokinetic Modeling Approaches for Read-Across
| Methodology | Technical Approach | Data Requirements | Regulatory Applications | Key Advantages |
|---|---|---|---|---|
| Physiologically Based Pharmacokinetic (PBPK) Modeling [14] | Mathematical representation of ADME processes in physiological compartments | Chemical-specific parameters, in vitro metabolism data, physiological constants | Risk translation, exposure reconstruction, chemical-chemical interactions [14] | Species extrapolation, route-to-route extrapolation, quantitative dose-response prediction |
| High-Throughput TK (HTTK) Models [14] | High-throughput in vitro data integration with simplified TK models | High-throughput in vitro clearance data, chemical properties | Chemical prioritization and screening, initial TK parameter estimation [14] | Rapid screening of large chemical libraries, cost-effective initial assessment |
| Toxicogenomics Integration [14] | OMICS data analysis for metabolic pathway identification | Transcriptomic, proteomic, metabolomic data | Mechanism identification, point of departure (PoD) calculation [14] | Comprehensive pathway analysis, identification of susceptible populations |
| In Vitro-In Vivo Extrapolation (IVIVE) [14] | Mathematical extrapolation from in vitro systems to in vivo responses | In vitro bioactivity data, protein binding, metabolic stability | Benchmark dose modeling, risk assessment [14] | Reduction of animal testing, human-relevant data generation |
The integration of toxicokinetic data into read-across assessments significantly strengthens the scientific basis for extrapolations between source and target substances. For instance, the European Food Safety Authority (EFSA) recently utilized a PBPK model to establish a tolerable weekly intake (TWI) for four per- and polyfluoroalkyl substances (PFAS) based on immunotoxicity endpoints [14]. This application demonstrates how TK modeling can support quantitative risk assessment for chemical groups within a read-across framework. Similarly, high-throughput toxicokinetic tools such as httk and TK-plate are gaining prominence in chemical screening and prioritization, enabling more efficient evaluation of structurally related compounds [14].
The generation of toxicokinetic data for read-across applications typically follows a tiered approach. Initial screening employs high-throughput in vitro methods to assess fundamental ADME properties, including metabolic stability in liver microsomes or hepatocytes, cellular permeability in Caco-2 or MDCK models, and plasma protein binding [14]. For higher-tier assessments, more comprehensive investigations utilize advanced tissue models such as 3D spheroids, organoids, or microphysiological systems (MPS) that better recapitulate in vivo tissue complexity and metabolic capacity [14]. The resulting data are subsequently integrated into PBPK models for in vitro-to-in vivo extrapolation (IVIVE), enabling prediction of human exposure scenarios and internal tissue doses [14]. This integrated approach facilitates direct comparison of TK behaviors between source and target substances, strengthening the scientific basis for read-across conclusions.
Figure 2: Toxicokinetic data generation workflow for read-across.
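As a worked illustration of the IVIVE step in this workflow, the sketch below scales a hypothetical hepatocyte intrinsic-clearance measurement to whole-body hepatic clearance using the standard well-stirred liver model. All parameter values are illustrative defaults, not measurements from the studies cited above.

```python
# Sketch of a core IVIVE step: scaling in vitro intrinsic clearance
# (measured in hepatocytes) to hepatic clearance with the well-stirred
# liver model. Parameter values are illustrative, not chemical-specific.

def scale_clint(clint_ul_min_per_1e6_cells,
                hepatocellularity=99e6,   # cells per g liver (typical human value)
                liver_weight_g=1500):
    """Scale in vitro CLint (uL/min/10^6 cells) to whole-liver CLint (L/h)."""
    ul_per_min = (clint_ul_min_per_1e6_cells
                  * (hepatocellularity / 1e6) * liver_weight_g)
    return ul_per_min * 60 / 1e6  # uL/min -> L/h

def well_stirred_cl(clint_l_h, fu, q_h=90):
    """Hepatic clearance (L/h) from the well-stirred model.
    fu: fraction unbound in plasma; q_h: hepatic blood flow (L/h)."""
    return q_h * fu * clint_l_h / (q_h + fu * clint_l_h)

clint = scale_clint(10)          # 10 uL/min/10^6 cells from a hepatocyte assay
cl_hepatic = well_stirred_cl(clint, fu=0.1)
print(round(clint, 1), round(cl_hepatic, 1))
```

Running the same calculation for source and target substances gives a first, quantitative handle on whether their internal dosimetry is comparable; full PBPK models extend this with distribution and excretion compartments.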
Toxicodynamics encompasses the biochemical and physiological effects of chemicals on biological systems, including the molecular interactions and subsequent cascades of events leading to adverse outcomes. Mechanism-based read-across represents a significant advancement beyond structural similarity alone, as it focuses on establishing common modes of action between source and target substances. The Adverse Outcome Pathway (AOP) framework has emerged as a particularly valuable tool for organizing toxicodynamic knowledge and supporting mechanistic read-across predictions [14].
The AOP framework provides a structured representation of biologically plausible sequences of events spanning multiple levels of biological organization, from molecular initiating events to cellular, organ, and organism-level responses [14]. By mapping both source and target substances onto relevant AOPs, researchers can establish mechanistic similarity even in cases where structural similarity is moderate. This approach is particularly valuable for addressing complex toxicity endpoints where multiple structural classes may converge on common biological pathways. Computational toxicology tools such as molecular docking, molecular dynamics simulations, and systems biology models contribute significantly to characterizing molecular initiating events and key intermediate steps in AOPs [14].
Comprehensive toxicodynamic characterization for read-across applications typically employs a combination of in vitro and in silico approaches. Initial assessment involves identifying molecular initiating events through target-based assays, such as receptor binding studies or enzyme inhibition assays [14]. For nuclear receptors, which represent important targets for many endocrine-active chemicals, structural characterization using both experimental methods (X-ray crystallography) and computational approaches (AlphaFold 2 predictions) can provide insights into ligand-receptor interactions [17]. Subsequent evaluation of key events along relevant AOPs utilizes specialized in vitro models, including high-content screening in cell cultures, 3D tissue models, and transcriptomic or proteomic analyses [14]. Integration of these data points within the AOP framework enables a weight-of-evidence assessment of mechanistic similarity between source and target substances, substantially strengthening the scientific basis for read-across.
Figure 3: Toxicodynamic characterization workflow for read-across.
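One simple way to operationalize AOP-based similarity is to compare activity calls across a pathway's key events. The sketch below scores the fraction of concordant calls between a source and a target substance; the key-event names and activity calls are hypothetical, and a real weight-of-evidence assessment would weight events by their position and evidence strength rather than treating them equally.

```python
# Toy weight-of-evidence sketch for mechanistic (AOP-based) similarity:
# represent each substance by the set of AOP key events in which it tested
# active, then score agreement along a pathway of interest.

AOP_KEY_EVENTS = ["receptor_binding", "gene_activation",
                  "cell_proliferation", "organ_effect"]

source_hits = {"receptor_binding", "gene_activation", "cell_proliferation"}
target_hits = {"receptor_binding", "gene_activation"}

def mechanistic_concordance(hits_a, hits_b, key_events):
    """Fraction of key events where the two activity calls agree."""
    agree = sum((ke in hits_a) == (ke in hits_b) for ke in key_events)
    return agree / len(key_events)

score = mechanistic_concordance(source_hits, target_hits, AOP_KEY_EVENTS)
print(score)  # 3 of 4 key events concordant -> 0.75
```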
The most robust read-across assessments integrate all three scientific pillars (structural similarity, toxicokinetics, and toxicodynamics) within a cohesive framework. This integrated approach aligns with the Integrated Approaches for Testing and Assessment (IATA) advocated by regulatory agencies such as the OECD [14]. By combining evidence from multiple sources, researchers can develop a comprehensive weight-of-evidence that substantially reduces uncertainty in read-across predictions.
The ASPIS cluster, comprising three major EU projects (ONTOX, PrecisionTox, and RISK-HUNT3R), exemplifies this integrated strategy with a collective investment of €60 million aimed at revolutionizing chemical safety assessment [15]. ONTOX develops innovative NAMs for predicting systemic repeated-dose toxicity effects by integrating AI-driven computational approaches with biological, toxicological, and kinetic data [15]. PrecisionTox employs a comparative toxicogenomics approach across multiple species to identify conserved molecular toxicity pathways and understand susceptibility variations within human populations [15]. RISK-HUNT3R focuses on implementing integrated, human-centric risk assessment tools using in vitro and in silico NAMs to evaluate chemical exposure, toxicokinetics, and toxicodynamics [15]. Together, these initiatives represent the cutting edge of mechanism-based read-across that transcends traditional structural similarity approaches.
Table 3: Performance Comparison of Read-Across Approaches
| Approach | Structural Basis | TK Consideration | TD Consideration | Uncertainty Management | Regulatory Acceptance |
|---|---|---|---|---|---|
| Traditional Structural Read-Across [18] | Primary focus | Limited | Limited | Qualitative assessment | Mixed success; often challenged [18] |
| q-RASAR Approach [13] | Quantitative | Indirect via descriptors | Indirect via endpoints | Statistical confidence measures | Emerging, with promising applications |
| Mechanism-Based Read-Across [14] | Foundation | Integrated TK modeling | AOP-informed | Weight-of-evidence framework | Growing acceptance for specific endpoints |
| Integrated TK/TD Approach [15] [14] | Comprehensive | PBPK modeling | AOP network analysis | Quantitative uncertainty analysis | High potential, currently developing |
Table 4: Key Research Reagents and Platforms for Read-Across Applications
| Tool Category | Specific Tools/Platforms | Primary Function | Application in Read-Across |
|---|---|---|---|
| Chemical Databases | TOXRIC, DrugBank, LOTUS, NPASS, HERB2.0 [13] [16] | Source of chemical structures, annotations, and toxicity data | Provides curated data for source and target substances |
| Cheminformatics Tools | KNIME Cheminformatics Extensions, OECD QSAR Toolbox [13] [14] | Molecular descriptor calculation, structural similarity assessment | Enables quantitative similarity assessment and descriptor generation |
| Toxicogenomics Platforms | ToxCast, comparative toxicogenomics databases [14] | Bioactivity screening, mechanistic data generation | Supports AOP development and mechanistic similarity assessment |
| TK Modeling Platforms | httk, TK-plate, PBPK modeling software [14] | TK parameter estimation, IVIVE, dose-response prediction | Facilitates TK similarity assessment and internal dose estimation |
| Structural Biology Tools | AlphaFold Protein Structure Database, RCSB PDB [17] | Protein-ligand interaction analysis, binding pocket characterization | Enables assessment of molecular initiating events |
| Machine Learning Frameworks | Random Forest, SVM, SHAP analysis [13] | Pattern recognition, toxicity prediction, model interpretability | Enhances prediction accuracy and provides mechanistic insights |
The implementation of robust read-across requires specialized computational tools and databases. The OECD QSAR Toolbox represents a particularly valuable resource that integrates multiple NAMs approaches, including in vitro data, OMICS, PBPK, and QSAR, to build weight of evidence for different chemicals and endpoints [14]. For structural similarity assessment, the AgreementPred framework demonstrates how combining multiple molecular representations (22 in its implementation) can improve the recall-precision balance in category recommendation tasks [16]. For toxicokinetic modeling, open-source tools such as httk provide high-throughput toxicokinetic parameters for chemical prioritization and initial risk assessment [14]. These tools, when used in combination, create a comprehensive ecosystem for implementing scientifically rigorous read-across that addresses all three key scientific pillars.
Read-across is a sophisticated method used in chemical risk assessment to predict the toxicological properties of a target substance by using experimental data from structurally and mechanistically similar substances, known as source substances [2]. This approach has gained significant traction within global regulatory frameworks as a New Approach Methodology (NAM) that can potentially reduce reliance on traditional animal testing while maintaining rigorous safety standards. For researchers and drug development professionals, understanding the nuanced acceptance patterns of read-across across different regulatory agencies is critical for successful chemical safety evaluation and regulatory submission.
The European Food Safety Authority (EFSA) has developed a systematic framework for applying read-across in food and feed safety assessment, emphasizing a weight-of-evidence evaluation for individual substances [2]. This framework provides a step-by-step workflow encompassing problem formulation, target substance characterization, source substance identification, source substance evaluation, data gap filling, uncertainty assessment, and conclusion reporting. The ultimate goal is to equip risk assessors and applicants with a comprehensive methodology to carry out read-across assessments systematically and transparently, thereby supporting the safety evaluation of chemicals throughout the food and feed chain.
Regulatory agencies worldwide have increasingly accepted specific alternative methods and defined approaches that align with the read-across paradigm. The table below summarizes the acceptance of selected methodologies across major international agencies:
Table 1: Regulatory Acceptance of Selected Alternative Methods and Defined Approaches
| Toxicity Area | Method/Approach | U.S. Acceptance | EU Acceptance | Applicable Regulations/Guidelines |
|---|---|---|---|---|
| Skin Sensitization | Defined approaches on skin sensitization | Accepted [19] | Accepted [19] | OECD Guideline 497 (2021, updated 2025) |
| Ocular Irritation/Corrosion | Defined approaches for serious eye damage and eye irritation | Accepted [19] | Accepted [19] | OECD Test Guideline 467 (2022, updated 2025) |
| Endocrine Disruption | Rapid androgen disruption activity reporter assay | Accepted [19] | Accepted [19] | OECD Test Guideline 251 (2022) |
| Developmental Neurotoxicity | Evaluation of data from developmental neurotoxicity testing battery | Accepted [19] | Accepted [19] | OECD Guidance Document 377 (2023) |
| Ecotoxicity | Fish cell line acute toxicity - RTgill-W1 cell line assay | Accepted [19] | Accepted [19] | OECD Test Guideline 249 (2021) |
| Immunotoxicity | In vitro immunotoxicity: IL-2 Luc assay | Accepted [19] | Accepted [19] | OECD Test Guideline 444A (2023, updated 2025) |
While harmonization through OECD test guidelines is evident, implementation varies by region and regulatory context. The U.S. EPA, FDA, and CPSC have issued agency-specific guidance documents that incorporate these methodologies into their chemical assessment frameworks [19]. Similarly, the European Union has established extensive protocols through EFSA for implementing read-across within food and feed safety assessment [2]. The year 2025 represents a significant milestone in regulatory evolution, with international agencies accelerating implementation of stricter rules that redefine global standards, particularly in sustainability, AI governance, and data privacy [20].
A notable trend across regulatory agencies is the emphasis on transparency and data quality in read-across submissions. EFSA's guidance specifically highlights the importance of clarity, impartiality, and quality to derive transparent and reliable read-across conclusions [2]. The analysis of uncertainty and strategies to reduce it to tolerable levels through standardized approaches and/or additional data from NAMs represents a critical component of regulatory acceptance across all major agencies.
The EFSA read-across framework provides a systematic methodology for chemical safety assessment that can be adapted across regulatory contexts. The workflow consists of sequential phases:
Table 2: Key Phases in Read-Across Assessment Methodology
| Phase | Key Activities | Methodological Considerations |
|---|---|---|
| Problem Formulation | Define assessment scope, data requirements, and chemical categories | Establish assessment goals and identify knowledge gaps |
| Target Substance Characterization | Comprehensive characterization of physicochemical properties, structural features, and metabolic pathways | Identify potential metabolites and impurities; determine adequacy of existing data |
| Source Substance Identification | Identify structurally and mechanistically similar substances | Establish similarity justification based on structural, metabolic, and mechanistic criteria |
| Source Substance Evaluation | Evaluate quality and adequacy of source substance data | Assess data reliability, relevance, and completeness for endpoint prediction |
| Data Gap Filling | Use source substance data to predict target substance properties | Justify applicability of data for specific endpoints; address uncertainties |
| Uncertainty Assessment | Evaluate and characterize uncertainties in read-across prediction | Identify sources of uncertainty and strategies for reduction |
| Conclusion and Reporting | Document rationale, evidence, and conclusions | Ensure transparency and reproducibility of assessment |
Successful read-across applications require robust experimental protocols to substantiate the similarity hypothesis between source and target substances. Key methodological considerations include:
Structural Similarity Assessment: Computational approaches including QSAR models, molecular fingerprinting, and functional group analysis establish structural similarity between source and target substances. The assessment must demonstrate that differences in structure do not significantly impact toxicological properties for the endpoints being assessed.
Metabolic Pathway Characterization: Comparative metabolism studies using in vitro systems such as hepatocytes or microsomal preparations identify similar metabolites and metabolic pathways between source and target substances. Discrepancies in metabolism may necessitate additional data or invalidate the read-across hypothesis.
Mechanistic Profiling: Mechanistic similarity is established through in vitro bioactivity profiling across multiple pathways relevant to the target endpoint. High-throughput screening assays and omics technologies provide mechanistic evidence supporting the similarity hypothesis.
Toxicokinetic Considerations: Comparative assessment of absorption, distribution, metabolism, and excretion (ADME) properties ensures similar internal exposure patterns between source and target substances. Physiologically based pharmacokinetic (PBPK) modeling may be employed to extrapolate internal doses across substances.
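At its simplest, the internal-exposure comparison described above reduces to a one-compartment steady-state estimate, C_ss = dose rate x absorbed fraction / clearance. The parameter values below are hypothetical, chosen to show how modest ADME differences between source and target can still yield comparable internal doses.

```python
# Illustrative internal-dose comparison between a source and a target
# substance using a one-compartment steady-state approximation.
# All parameter values are hypothetical.

def css_mg_per_l(dose_mg_per_day, f_abs, cl_l_per_day):
    """Average steady-state plasma concentration for repeated oral dosing."""
    return dose_mg_per_day * f_abs / cl_l_per_day

substances = {
    # name: (oral dose mg/day, absorbed fraction, clearance L/day)
    "source": (10, 0.8, 40),
    "target": (10, 0.7, 35),
}

for name, (dose, f_abs, cl) in substances.items():
    # both work out to 0.2 mg/L: similar internal exposure despite
    # different absorption and clearance values
    print(name, round(css_mg_per_l(dose, f_abs, cl), 3))
```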
The methodological workflow for read-across assessment can be visualized as follows:
Read-Across Assessment Workflow
When applying read-across to evaluate chemical alternatives, several standardized tools facilitate systematic comparison of health, environmental, and physical hazards. These methodologies enable researchers to make informed decisions about safer substitutions:
Table 3: Standardized Tools for Chemical Alternative Assessment
| Assessment Tool | Developer | Key Features | Application Context |
|---|---|---|---|
| Column Model | German Federation of Institutions for Statutory Accident Insurance and Prevention (IFA) | Six hazard categories divided into risk levels from negligible to very high; uses GHS classifications | Small and medium-sized businesses assessing substitute substances with limited information [21] |
| Quick Chemical Assessment Tool (QCAT) | Washington Department of Ecology | Nine high-priority hazard endpoints; grades chemicals A-F along a continuum of concern | Rapid identification of chemicals equally or more toxic than the chemical being assessed [21] |
| Pollution Prevention Options Analysis System (P2OASys) | Massachusetts Toxics Use Reduction Institute | Scores chemicals based on quantitative and qualitative data for multiple hazard types; indicates very low to very high risk | Determining potential negative impacts of alternatives on workers, public health, or environment [21] |
| Green Screen for Safer Chemicals | Clean Production Action | Comprehensive hazard assessment using authoritative and screening data sources; identifies preferred chemicals | Benchmarking chemicals against specific hazard criteria to identify safer alternatives [21] |
A comprehensive read-across assessment for chemical alternatives requires evaluation across multiple hazard domains, including:
Acute Health Hazards: Acute toxicity, eye damage, skin damage, and sensitization (skin, respiratory) represent critical endpoints for comparative assessment [21]. These endpoints are particularly relevant for worker safety evaluation during chemical handling and use.
Chronic Health Hazards: Chronic toxicity, target organ toxicity, carcinogenicity, mutagenicity/genotoxicity, reproductive toxicity, developmental toxicity, endocrine disruption, neurotoxicity, and immune system effects require careful evaluation in alternatives assessment [21].
Physical and Environmental Hazards: Flammability, reactivity, explosivity, corrosivity, oxidizing properties, and pyrophoric properties must be considered alongside environmental fate and ecotoxicity endpoints [21].
The relationship between assessment components and hazard considerations can be visualized as:
Chemical Alternative Assessment Framework
Successful implementation of read-across approaches requires specific methodological tools and resources. The following table details essential components of the regulatory scientist's toolkit for read-across applications:
Table 4: Essential Research Reagents and Methodologies for Read-Across Applications
| Tool/Resource | Function | Application Context |
|---|---|---|
| OECD QSAR Toolbox | Grouping of chemicals into categories and filling data gaps | Systematic identification of structurally similar compounds and metabolic pathways |
| EPA's CompTox Chemicals Dashboard | Access to chemistry, toxicity, and exposure data for thousands of chemicals | Preliminary assessment of chemical similarities and data availability |
| VEGA (Virtual models for property Evaluation of chemicals within a Global Architecture) | Platform integrating QSAR models for toxicity prediction | Hazard assessment for multiple endpoints when experimental data are limited |
| OECD Test Guidelines | Standardized methodologies for specific toxicity endpoints | Generation of reliable data for read-across justification [19] |
| ToxTrack & High-Throughput Screening | Mechanistic bioactivity profiling across multiple pathways | Establishing mechanistic similarity between source and target substances |
| Toxicogenomics Platforms | Gene expression profiling for mode-of-action analysis | Understanding mechanistic similarities at molecular level |
| In Vitro ADME Systems | Hepatocytes, microsomes, permeability assays | Comparative assessment of metabolic fate and toxicokinetics |
| Chemotyping Approaches | Structural alert identification and categorization | Grouping chemicals based on reactive moieties and potential mechanisms |
The global regulatory landscape for chemical safety assessment demonstrates increasing convergence in the acceptance of read-across and New Approach Methodologies. The harmonization through OECD test guidelines and guidance documents provides a foundation for global alignment, while region-specific implementation frameworks reflect local regulatory priorities and historical contexts [19].
For researchers and drug development professionals, success in regulatory submission requires robust methodological execution of read-across assessments, with particular emphasis on transparent documentation of the similarity hypothesis, comprehensive uncertainty analysis, and integration of appropriate NAMs to strengthen the weight of evidence. The ongoing development of resources such as the Collection of Alternative Methods for Regulatory Application (CAMERA), with its planned public Beta release in Q3 2025, promises to further streamline regulatory acceptance of these approaches [19].
As international regulatory cooperation intensifies, particularly in response to emerging challenges in sustainability, AI governance, and chemical management, the read-across approach is positioned to play an increasingly central role in efficient, human-relevant chemical safety assessment across global markets.
In chemical safety assessment, read-across has emerged as a primary method for filling data gaps by predicting the toxicological properties of a data-poor target substance using information from structurally and mechanistically similar, data-rich source substances [9]. This guide compares the performance of traditional and advanced read-across methodologies, providing researchers with a clear framework for implementation.
A comparative study evaluated traditional chemical-based read-across against a hybrid chemical-biological method using two large toxicity datasets: Ames mutagenicity (3,979 compounds) and rat acute oral toxicity (7,332 compounds) [22]. The experimental design is summarized below.
| Parameter | Ames Mutagenicity Dataset | Rat Acute Oral Toxicity Dataset |
|---|---|---|
| Total Compounds | 3,979 | 7,332 |
| Toxic Compounds | 1,718 | Quantitative LD50 values |
| Non-Toxic Compounds | 2,261 | Quantitative LD50 values |
| Chemical Descriptors | 192 standardized 2-D MOE descriptors [22] | 192 standardized 2-D MOE descriptors [22] |
| Biological Profiles (Bioprofiles) | PubChem bioassays; biosimilarity calculated via CIIPro portal [22] | PubChem bioassays; biosimilarity calculated via CIIPro portal [22] |
| Prediction Method | Nearest neighbor in training set [22] | Nearest neighbor in training set [22] |
| Methodology | Dataset | Sensitivity | Specificity | CCR (Balanced Accuracy) |
|---|---|---|---|---|
| Traditional Read-Across (Chemical Similarity Only) | Ames Mutagenicity | 0.79 | 0.73 | 0.76 |
| Hybrid Read-Across (Chemical + Biological Similarity) | Ames Mutagenicity | 0.85 | 0.80 | 0.83 |
| Traditional Read-Across (Chemical Similarity Only) | Acute Oral Toxicity | 0.71 | 0.69 | 0.70 |
| Hybrid Read-Across (Chemical + Biological Similarity) | Acute Oral Toxicity | 0.78 | 0.75 | 0.77 |
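The CCR column above is balanced accuracy: the mean of sensitivity (true-positive rate) and specificity (true-negative rate). A minimal sketch, with confusion-matrix counts chosen only to reproduce the traditional-method Ames row:

```python
# Balanced accuracy (CCR) as reported in the comparison table: the mean of
# sensitivity and specificity computed from a confusion matrix.
def ccr(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # fraction of toxic compounds caught
    specificity = tn / (tn + fp)  # fraction of non-toxic compounds cleared
    return (sensitivity + specificity) / 2

# Illustrative counts (not from the cited study) that reproduce the
# traditional read-across Ames row: sensitivity 0.79, specificity 0.73.
print(ccr(tp=79, fn=21, tn=73, fp=27))  # -> 0.76
```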
The following table summarizes key software tools and data resources that support read-across assessments.

| Item | Function in Read-Across Assessment |
|---|---|
| MOE (Molecular Operating Environment) Software | Generates essential 2-D chemical descriptors for calculating structural similarity between compounds [22]. |
| PubChem Database | Provides public repository of biological assay data used to generate bioactivity profiles (bioprofiles) for biosimilarity assessments [22]. |
| CIIPro (Chemical In Vitro-In Vivo Profiling) Portal | A specialized tool for obtaining and processing PubChem bioassay data to calculate biosimilarity metrics [22]. |
| OECD QSAR Toolbox | Software that facilitates the systematic grouping of chemicals into categories using chemical similarity read-across and trend analysis [9] [22]. |
| ToxMatch | An open-source software application that encodes chemical similarity calculation tools to support the development of chemical groupings and read-across [22]. |
The European Food Safety Authority (EFSA) has developed a structured workflow to standardize the read-across process, ensuring transparency and reliability in chemical safety assessments [9]. This workflow is visualized below.
The core of a successful read-across assessment lies in establishing a robust similarity justification between the source and target substances. This involves multiple, interconnected factors [9].
The hybrid read-across method demonstrates a statistically significant improvement in predictive performance over the traditional approach for complex toxicity endpoints. By integrating publicly available biological data with traditional chemical descriptors, the hybrid method partially resolves the "activity cliff" issue and offers a more robust, data-driven framework for chemical safety assessment [22]. The standardized EFSA workflow provides a transparent, systematic structure for applying these methodologies, ensuring reliable and defensible read-across conclusions in regulatory contexts [9].
In toxicology, chemical grouping provides a science-based framework for organizing structurally or functionally related substances, facilitating more efficient evaluations and strengthening the overall weight of evidence in risk assessments [23]. This approach is particularly valuable in regulatory submissions for Extractables and Leachables (E&L) assessments, where grouping supports a read-across strategy for data-poor substances by establishing biological similarity with data-rich analogues [23]. Chemical grouping allows the classification of newly identified E&L compounds by shared structural features and toxicological profiles, enabling researchers to make informed decisions throughout the product lifecycle [23].
While grouping strategies based on structural similarity or broad class-level clustering provide initial categorization, further scientific justification of chemical groupings is normally required, typically including considerations of bioavailability, metabolism, and biological/mechanistic plausibility [24]. A systematic approach to category formation and read-across prediction is essential for regulatory acceptance, requiring careful assessment of both similarity and uncertainty in the grouping rationale [24].
Chemical grouping strategies can be implemented at multiple levels, with structural similarity serving as the foundational element for initial category formation. Previous studies have examined classification of chemicals in the E&L space at both structural and functional levels [23]. A two-tiered clustering approach based on broad class-level (Tier 1) and more granular subclass distinctions (Tier 2) has shown promise for developing triaging strategies to rapidly categorize and identify compounds that pose critical risk [23].
Table 1: Tiered Clustering Approach for Chemical Grouping
| Tier Level | Scope | Primary Function | Similarity Assessment |
|---|---|---|---|
| Tier 1 | Broad class-level | Initial categorization and triage | Structural features, basic properties |
| Tier 2 | Granular subclass | Detailed risk assessment | Toxicological profiles, mechanistic data |
The structural approach organizes substances based on shared molecular frameworks, common functional groups, or similar physicochemical properties. This method is particularly effective for identifying structural alerts: specific molecular arrangements associated with particular toxicological effects. Functional grouping extends beyond structural similarities to categorize chemicals based on their biological activity or mechanistic behavior, which may include shared metabolic pathways, receptor binding affinities, or mode of action information [23].
Read-across represents a powerful application of chemical grouping where data from one or more source substances are used to predict the properties of a similar target substance with missing data. A scientifically justified read-across argument requires more than just structural similarity; it demands thorough assessment of toxicokinetic and toxicodynamic similarities between source and target substances [24]. The uncertainty associated with read-across predictions must be systematically characterized, considering both the similarity justification and the completeness of the overall read-across argument [24].
Table 2: Read-Across Justification Framework
| Assessment Domain | Key Considerations | Data Requirements |
|---|---|---|
| Chemical Similarity | Structural features, physicochemical properties, reactivity | Molecular structure, log P, pKa, molecular weight |
| Toxicokinetic Similarity | Absorption, distribution, metabolism, excretion | ADME studies, metabolic pathways, bioavailability |
| Toxicodynamic Similarity | Mechanism of action, biological effects, molecular initiating events | In vitro assays, omics data, pathological findings |
Templates have been developed to assist in assessing similarity across chemistry, toxicokinetics, and toxicodynamics, as well as to guide the systematic characterization of uncertainty [24]. These templates help researchers document the scientific rationale for category membership and provide transparent justification for regulatory submissions. The workflow for reporting a read-across prediction should clearly articulate the hypothesis, similarity justification, data gap filling, and uncertainty characterization [24].
Confidently establishing the equivalence of measurement processes for chemical category development requires careful experimental design. When evaluating multiple materials or substances, comparisons should employ a linear model approach that examines the relationship between assigned values and measurement results made under repeatability conditions [25]. This methodology, refined over more than a decade of experience in high-level metrological comparisons, involves four critical steps: (1) design, (2) measurement, (3) definition of a reference function, and (4) estimation of degrees of equivalence [25].
The experimental design and measurement tasks are most critical to the eventual utility of the comparison, as creative mathematics cannot fully compensate for fundamental design flaws [25]. Researchers should prioritize proper design of comparisons, particularly when substances differ in analyte quantity or matrix composition. The reference function serves as a benchmark against which individual substances or measurements can be evaluated, providing a quantitative basis for assessing category membership and identifying outliers that may not fit within the proposed grouping [25].
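The reference-function step can be illustrated with an ordinary least-squares fit of measurement results against assigned values, taking residuals as rough degrees of equivalence. This is a simplified sketch; a real metrological comparison [25] also propagates measurement uncertainties:

```python
# Minimal sketch of steps (3) and (4) of the comparison workflow: fit a
# linear reference function by ordinary least squares, then treat each
# material's residual as its degree of equivalence. Uncertainty
# propagation, essential in real comparisons, is omitted here.
def fit_linear(xs, ys):
    """Ordinary least-squares fit; returns (intercept, slope)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return my - slope * mx, slope

def degrees_of_equivalence(xs, ys):
    """Residuals of each measurement from the fitted reference function."""
    a, b = fit_linear(xs, ys)
    return [y - (a + b * x) for x, y in zip(xs, ys)]

# Assigned values (x) and measured results (y) for five materials (toy data).
assigned = [1.0, 2.0, 3.0, 4.0, 5.0]
measured = [1.1, 1.9, 3.2, 3.9, 5.1]
print(degrees_of_equivalence(assigned, measured))
```

Large residuals flag materials (or measurement processes) that do not fit the proposed grouping, which is how the reference function supports outlier identification.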
Quantitative data analysis methods are crucial for evaluating chemical categories, facilitating the discovery of trends, patterns, and relationships within datasets [26]. These methods employ mathematical, statistical, and computational techniques to uncover patterns, test hypotheses, and support decision-making regarding category membership. The process involves two main categories of statistical approaches: descriptive statistics that summarize and describe dataset characteristics, and inferential statistics that use sample data to make generalizations, predictions, or decisions about a larger population [26].
Table 3: Quantitative Analysis Methods for Chemical Category Assessment
| Method Type | Specific Techniques | Application in Chemical Grouping |
|---|---|---|
| Descriptive Statistics | Mean, median, mode, range, variance, standard deviation | Characterizing central tendency and dispersion of category member properties |
| Inferential Statistics | T-tests, ANOVA, regression analysis, correlation analysis | Testing significant differences between groups, predicting properties across categories |
| Cross-Tabulation | Contingency table analysis | Analyzing relationships between categorical variables in chemical groups |
| Data Mining | Pattern recognition algorithms | Detecting hidden patterns, relationships, and correlations within category data |
For chemical category development, regression analysis is particularly valuable for examining relationships between dependent and independent variables to predict outcomes for data-poor substances [26]. Correlation analysis measures the strength and direction of relationships between different molecular descriptors or toxicological endpoints, helping to establish meaningful category boundaries. Effective data visualization through charts, graphs, and other visual tools transforms complex datasets into understandable formats, highlighting trends and patterns that support robust category formation [26].
The experimental workflow for chemical category development relies on specific reagents, software tools, and reference materials to generate reliable data for grouping decisions. The following table details essential research solutions and their functions in supporting category development and read-across assessments.
Table 4: Essential Research Reagent Solutions for Chemical Category Development
| Solution Category | Specific Tools/Materials | Function in Category Assessment |
|---|---|---|
| In Silico Profiling Tools | QSAR software, read-across platforms, toxicity predictors | Predicting toxicological endpoints across chemical clusters, identifying compounds of concern based on structural features [23] |
| Chemical Reference Materials | Certified Reference Materials (CRMs), proficiency testing materials | Assessing equivalence of measurement processes, method validation, quality control [25] |
| Statistical Analysis Software | R Programming, Python (Pandas, NumPy, SciPy), SPSS, Excel | Handling large datasets, statistical computing, data visualization, automated quantitative analysis [26] |
| Data Visualization Tools | ChartExpo, specialized graphing software | Creating advanced visualizations without coding, highlighting trends and patterns in category data [26] |
| Chemical Database Systems | Structure-searchable databases, toxicological data repositories | Supporting category formation with existing experimental data, identifying potential source substances for read-across [23] |
These research solutions enable the implementation of tiered classification strategies and in silico profiling within a clustering approach, which is especially powerful when applied to well-characterized chemical classes [23]. By analyzing predicted toxicological endpoints across a cluster (such as mutagenicity and potent dermal sensitization), chemical groups of potential concern can be identified, along with newly associated substances that share both the cluster chemical features and concerning properties [23].
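The cluster-level triage described above can be sketched as a simple filter over predicted endpoints; the compound identifiers and predictions here are hypothetical:

```python
# Triage sketch: within a chemical cluster, flag members whose in silico
# profile predicts any endpoint of concern (here mutagenicity or potent
# dermal sensitization). Names and predictions are hypothetical.
cluster = [
    {"id": "E&L-001", "mutagenic": False, "potent_sensitizer": False},
    {"id": "E&L-002", "mutagenic": True,  "potent_sensitizer": False},
    {"id": "E&L-003", "mutagenic": False, "potent_sensitizer": True},
]

def compounds_of_concern(members):
    """Return identifiers of members with any flagged endpoint."""
    return [m["id"] for m in members
            if m["mutagenic"] or m["potent_sensitizer"]]

print(compounds_of_concern(cluster))  # -> ['E&L-002', 'E&L-003']
```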
The following diagram illustrates the systematic workflow for developing and justifying chemical categories using read-across methodology, incorporating similarity assessment, uncertainty characterization, and regulatory reporting elements.
Chemical Category Development Workflow
This workflow emphasizes the iterative nature of category development, where similarity assessments may need refinement based on emerging data or uncertainty analyses. The process begins with clear definition of the category purpose and scope, followed by systematic assessment of structural, physicochemical, and biological similarities. The identification of data gaps triggers the selection of appropriate source substances and development of a robust similarity justification that addresses both chemical and biological domains [24]. Finally, comprehensive uncertainty characterization and transparent documentation ensure the category approach meets regulatory standards for chemical safety assessment [24].
Chemical category development represents a powerful strategy for efficient and scientifically robust safety assessment of substances, particularly when supported by systematic read-across methodologies. By implementing tiered clustering approaches that integrate structural features with toxicological profiles, researchers can organize chemically related substances into meaningful categories that support data gap filling through scientifically justified read-across [23]. The success of these approaches depends on rigorous experimental design, comprehensive data evaluation, and transparent uncertainty characterization [24].
As chemical grouping strategies continue to evolve, the integration of in silico profiling and computational toxicology methods will further strengthen category approaches by providing mechanistic insights and supporting biological plausibility arguments [23]. Embedding grouping strategies and predictive toxicology into decision-making workflows provides an evidence-based, efficient risk management approach throughout the product lifecycle [23]. When properly implemented with appropriate scientific justification and uncertainty assessment, chemical category development serves as a valuable framework for advancing chemical safety assessment while reducing animal testing and resource requirements.
Read-across is a fundamental methodology in chemical risk assessment that predicts the toxicological properties of a data-poor "target" substance by using known information from one or more data-rich "source" substances that are structurally and mechanistically similar [9]. This approach operates on the fundamental tenet that substances sharing similar chemical structures and behaviors can be expected to elicit similar biological effects, providing a scientifically valid alternative to traditional animal testing for addressing data gaps in hazard assessment [9]. As regulatory agencies worldwide increasingly emphasize the reduction of animal testing, read-across has become one of the most common alternative approaches, supported by frameworks from organizations such as the European Food Safety Authority (EFSA), the European Chemicals Agency (ECHA), and the U.S. Environmental Protection Agency (EPA) [9] [27].
The read-across approach is typically applied through two primary chemical grouping strategies: the analogue approach, which compares a target substance with a limited number of closely related source substances, and the category approach, which relies on patterns or trends among several source substances to predict a target substance's properties [9]. Both methodologies require a systematic evaluation of similarity across multiple parameters to establish scientific confidence in the predictions. This guide provides a comprehensive comparison of the three critical similarity assessment parameters (structural, metabolic, and toxicological) that form the cornerstone of robust read-across assessments for researchers, scientists, and drug development professionals.
The credibility of any read-across assessment depends on a rigorous, multi-dimensional similarity justification between source and target substances. The table below summarizes the key aspects, assessment methods, and regulatory considerations for the three primary similarity parameters.
Table 1: Comprehensive Comparison of Similarity Assessment Parameters in Read-Across
| Parameter | Key Assessment Aspects | Common Methodologies & Tools | Regulatory Acceptance Considerations |
|---|---|---|---|
| Structural Similarity | Functional groups; carbon skeleton; molecular size/weight; substituent patterns; reactivity | Tanimoto/Dice indices; OECD QSAR Toolbox; chemoinformatic analysis; expert judgment | Foundation for most assessments; rarely sufficient alone; requires complementary data; well-established in guidance |
| Metabolic Similarity | Primary metabolic pathways; bioactivation/detoxification routes; key enzyme systems; reactive metabolite formation; toxicokinetic profiles | In vitro metabolism studies; in silico simulators (OASIS, TIMES); comparative metabolic mapping; experimental metabolite identification | Increasingly critical for acceptance; explains dissimilar toxicological outcomes; requires documented or simulated metabolic maps; strengthens mechanistic plausibility |
| Toxicological Similarity | Mechanism/mode of action; Adverse Outcome Pathways; target organs; potency and severity; dose-response relationships | In vitro bioassays (NAMs); high-throughput screening; toxicogenomics; historical toxicity data; WoE integration | Ultimate validation of read-across; requires concordance across endpoints; NAMs data increasingly valued; case-specific evidence requirements |
Objective: To establish and quantify the degree of structural similarity between target and source substances using computational and expert-driven approaches.
Methodology:
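As an illustration of the Tanimoto and Dice indices named in the table above, here is a minimal sketch operating on binary fingerprints represented as sets of "on" bit positions (the fingerprints are toy data, not from any real compound):

```python
# Tanimoto and Dice similarity on binary fingerprints represented as sets
# of "on" bit positions -- the standard set-based formulas.
def tanimoto(a, b):
    """|A ∩ B| / |A ∪ B| for two fingerprint bit sets."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def dice(a, b):
    """2|A ∩ B| / (|A| + |B|) for two fingerprint bit sets."""
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

# Toy fingerprints: 4 shared bits out of 6 total set bits.
source_fp = {1, 4, 7, 9, 12}
target_fp = {1, 4, 7, 11, 12}

print(tanimoto(source_fp, target_fp))  # 4/6 -> 0.666...
print(dice(source_fp, target_fp))      # 8/10 -> 0.8
```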
Objective: To compare metabolic pathways and transformations between target and source substances to establish toxicokinetic consistency.
Methodology:
Objective: To establish concordance in toxicological profiles and mechanisms of action between target and source substances.
Methodology:
The following diagram illustrates the integrated workflow for conducting a comprehensive similarity assessment in read-across, highlighting the interrelationships between the three key parameters.
Figure 1: Workflow for Integrated Similarity Assessment in Read-Across
The successful implementation of similarity assessments requires specific research tools and platforms. The table below details key reagent solutions and their applications in read-across testing.
Table 2: Essential Research Reagents and Platforms for Similarity Assessment
| Tool/Platform | Primary Application | Key Features & Utility |
|---|---|---|
| OECD QSAR Toolbox | Structural grouping & category formation | Chemical category formation; structural similarity assessment; hazard profiling; metabolic pathway screening [9] [28] |
| OASIS TIMES Platform | Metabolic similarity & toxicity prediction | Metabolic simulator; metabolic similarity quantification; read-across justification; hazard endpoint prediction [28] |
| EPA CompTox Dashboard | Chemical data integration & analogue identification | Chemical property database; structural analogue identification; toxicity value database; biomolecular screening data [27] |
| MetaPath Database | Metabolic pathway analysis | Documented metabolic pathways; metabolic tree representation; enzyme kinetics data; species-specific metabolism [28] |
| In Vitro Metabolism Kits | Experimental metabolic profiling | Hepatocyte incubation systems; metabolite identification; metabolic stability assessment; enzyme phenotyping [28] |
| NAMs Test Systems | Toxicological mechanism screening | ToxCast/Tox21 assay batteries; high-throughput screening; pathway-based assessment; mechanistic toxicology [27] [29] |
Case Overview: This assessment addressed data gaps for pentamethylphosphoramide (PMPA) and N,N,N',N"-tetramethylphosphoramide (TMPA) using hexamethylphosphoramide (HMPA) as a source analogue [27].
Integrated Similarity Analysis:
Regulatory Outcome: HMPA was accepted as a suitable analogue for deriving screening-level toxicity values based on the integrated metabolic and mechanistic similarity justification [27].
Case Overview: Assessment of 4-methyl-2-pentanol (MIBC) using multiple structural analogues including its ketone derivative, methyl isobutyl ketone (MIBK) [27].
Integrated Similarity Analysis:
Experimental Support: Metabolic studies confirmed reversible metabolism between alcohol and ketone forms, supporting the biological relevance of structural similarity [27].
The scientific rigor and regulatory acceptance of read-across assessments fundamentally depend on a comprehensive, multi-parameter similarity justification that integrates structural, metabolic, and toxicological evidence. Structural similarity provides the initial foundation but is rarely sufficient alone; metabolic consistency often explains disparate toxicological outcomes among structurally similar compounds, while toxicological concordance ultimately validates the read-across hypothesis. The experimental protocols and case studies presented in this guide demonstrate that successful read-across applications systematically evaluate all three parameters through complementary methodologies, from computational predictions to experimental verification. As regulatory frameworks increasingly incorporate New Approach Methodologies, the integration of these similarity dimensions will continue to evolve, enabling more scientifically robust and animal-free chemical safety assessments for researchers and drug development professionals.
Read-across is a fundamental data gap-filling technique in chemical safety assessment, where the known properties of a well-studied "source" chemical are used to predict the unknown properties of a similar, data-poor "target" chemical [30]. This methodology plays an increasingly vital role in complying with chemical regulations worldwide, such as the European REACH regulation, while potentially offering significant savings in animal testing, product development time, and costs [30]. As a core component of New Approach Methodologies (NAMs), read-across represents a paradigm shift toward more efficient and human-relevant safety assessments.
Within this practice, two distinct methodological approaches have emerged: qualitative read-across, which assesses whether a target chemical is likely to exhibit a particular hazard (e.g., skin sensitization), and quantitative read-across, which predicts the potency or specific dose-response level at which effects may occur [11]. The distinction is critical for regulatory decision-making. Qualitative approaches answer "why" or "how" questions about the presence of a hazard, while quantitative approaches address "how much" or "how often" to determine safe exposure levels [31]. This article provides a comprehensive comparison of these methodologies, examining their applications, limitations, and implementation frameworks to guide researchers and chemical safety professionals.
Qualitative read-across involves identifying a chemical substructure common to both source and target substances and inferring the presence or absence of a property or activity for the target based on the same property or activity in the source analogue [11]. This approach is fundamentally binary and descriptive, focusing on whether a particular hazard exists without quantifying its potency. It relies on expert judgment to establish similarity based on structural attributes, functional groups, or mechanistic considerations.
The European Food Safety Authority (EFSA) emphasizes that qualitative read-across requires demonstrating structural and mechanistic similarity between substances, supported by a weight-of-evidence evaluation [2]. For example, if several structurally similar chemicals all demonstrate skin sensitization potential due to a common protein-reactive functional group, one might qualitatively predict that a new chemical sharing that functional group would also be a skin sensitizer, without specifying its precise potency.
Quantitative read-across extends beyond hazard identification to predict numerical values for toxicological properties or points of departure (PODs) such as benchmark doses or no-observed-adverse-effect levels (NOAELs) [27] [32]. This approach assumes that the potency of an effect shared by different analogous substances is similar, allowing for the estimation of specific threshold values for risk assessment.
The U.S. EPA has developed sophisticated frameworks for quantitative read-across, particularly for deriving screening-level toxicity values for data-poor chemicals encountered in programs like Superfund [27] [32]. For instance, in a case study involving pentamethylphosphoramide (PMPA) and N,N,N',N"-tetramethylphosphoramide (TMPA), the POD from their metabolic precursor hexamethylphosphoramide (HMPA) was adopted to establish quantitative toxicity values based on shared metabolic pathways and target organ toxicity [27].
Table 1: Core Conceptual Differences Between Qualitative and Quantitative Read-Across
| Aspect | Qualitative Read-Across | Quantitative Read-Across |
|---|---|---|
| Primary Question | Why? How? (mechanism) [31] | How much? How often? (potency) [31] |
| Data Type | Descriptive, categorical [31] | Numerical, continuous [31] |
| Analysis Method | Categorization, thematic analysis [31] | Statistical analysis [31] |
| Typical Output | Hazard identification/classification [11] | Point of departure (POD), toxicity values [27] |
| Uncertainty Focus | Similarity justification, mechanistic plausibility [30] | Potency extrapolation, dose-response alignment [27] |
Qualitative read-across finds particularly strong application in hazard identification and classification and labelling (C&L) under regulatory frameworks like REACH [30]. Its utility is most established for endpoints with clear structural alerts, where the presence of specific functional groups reliably predicts toxicological activity.
For genotoxicity and skin sensitization, the presence of functional groups associated with covalent reactivity (to DNA and proteins, respectively) provides a scientifically justifiable basis for qualitative read-across [30]. The Research Institute for Fragrance Materials (RIFM) extensively employs qualitative read-across in safety assessments, with over 80% of published fragrance ingredient assessments using read-across to address at least one endpoint [30].
The EFSA guidance for food and feed safety assessment outlines a structured workflow for qualitative read-across, emphasizing problem formulation, substance characterization, source identification, and uncertainty analysis [2]. This approach is particularly valuable for prioritizing chemicals for further testing or for making definitive hazard classifications when supported by strong mechanistic evidence.
Quantitative read-across is indispensable for risk assessment and deriving safe exposure levels when experimental dose-response data are unavailable for a target chemical. The U.S. EPA's read-across framework has been successfully applied to derive Provisional Peer-Reviewed Toxicity Values (PPRTVs) for Superfund site contaminants, enabling quantitative risk assessment for data-poor chemicals [27] [32].
Case studies demonstrate the power of quantitative read-across based on metabolic and mechanistic similarity. For instance, 4-methyl-2-pentanol (MIBC) and its ketone derivative methyl isobutyl ketone (MIBK) undergo bidirectional metabolism to a common metabolite, 4-methyl-4-hydroxy-2-pentanone, supporting quantitative read-across of toxicity values between these compounds [27]. Similarly, aliphatic alcohol/ketone pairs with shared metabolic pathways enable quantitative predictions across category members [27].
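When a source point of departure is carried over to a target, it is sometimes adjusted on a molar basis using the ratio of molecular weights. The sketch below illustrates that arithmetic only; whether such an adjustment is scientifically appropriate is a case-specific expert judgment, and the values shown are hypothetical rather than those of the cited assessments:

```python
# Illustrative molar-basis adjustment of a source point of departure
# (mg/kg-day) to a target analogue: the dose is preserved on a molar
# basis by scaling with the molecular-weight ratio. All values are
# hypothetical; real derivations involve expert, case-specific judgment.
def molar_adjusted_pod(source_pod_mg_kg_day, mw_source, mw_target):
    """Scale a mass-based POD so the molar dose is unchanged."""
    return source_pod_mg_kg_day * (mw_target / mw_source)

source_noael = 10.0  # hypothetical source-analogue NOAEL, mg/kg-day
print(molar_adjusted_pod(source_noael, mw_source=179.2, mw_target=165.2))
```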
Beyond traditional toxicology, read-across methodologies are expanding into novel applications such as estimating chemical process emissions by leveraging data from structurally similar chemicals with known emission profiles [11]. This innovative extension demonstrates the versatility of the approach across multiple domains of chemical safety assessment.
Table 2: Typical Application Contexts for Read-Across Approaches
| Application Context | Qualitative Read-Across | Quantitative Read-Across |
|---|---|---|
| Regulatory Classification | Primary application [30] | Limited application |
| Risk Assessment | Screening level only [30] | Primary application [27] |
| Dose-Response Assessment | Not applicable | Essential tool [27] |
| Chemical Prioritization | Highly suitable | Less suitable |
| Food Additive Safety | Supported by EFSA guidance [2] | Case-specific application [2] |
| Environmental Contaminants | Limited utility | Critical for PPRTV derivation [27] |
The following diagram illustrates the generalized workflow for conducting read-across assessments, integrating elements from both qualitative and quantitative approaches:
Establishing scientifically defensible similarity between source and target chemicals requires a systematic, multi-faceted approach:
Structural Similarity Analysis: Begin by identifying structural analogues using computational tools (OECD QSAR Toolbox, EPA CompTox Chemicals Dashboard). Evaluate functional groups, carbon chain length, branched vs. linear structures, and position of substituents [30] [3]. Structural similarity is necessary but often insufficient alone for a robust read-across.
Metabolic and Toxicokinetic Evaluation: Assess metabolic pathways using in silico metabolism predictors and literature data. The case of HMPA, PMPA, and TMPA demonstrates the power of shared metabolic pathways (HMPA is a metabolic precursor to both target compounds through sequential demethylation) as justification for read-across [27]. Evaluate absorption, distribution, metabolism, and excretion (ADME) properties.
Mechanistic and Toxicodynamic Similarity: For the specific endpoint of concern, investigate whether source and target chemicals operate through shared molecular initiating events and key events in adverse outcome pathways [27] [32]. This may incorporate New Approach Methodologies (NAMs) such as in vitro bioactivity profiling (Tox21, ToxCast) and high-throughput screening data [27] [32].
Physicochemical Properties Comparison: Compare key properties influencing bioavailability and toxicity, including log P (octanol-water partition coefficient), water solubility, vapor pressure, and molecular weight [11]. Significant discrepancies may challenge read-across justification.
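The four assessment dimensions above are sometimes summarized informally as a weighted similarity score when ranking candidate analogues. The sketch below is purely illustrative: the weights and sub-scores are hypothetical, and in practice each dimension is evaluated by expert judgment rather than collapsed into one number:

```python
# Illustrative weighted similarity score across the four assessment
# dimensions described above. Weights and sub-scores (0-1) are
# hypothetical; this is a ranking aid, not a regulatory metric.
WEIGHTS = {"structural": 0.3, "metabolic": 0.3,
           "mechanistic": 0.25, "physicochemical": 0.15}

def integrated_similarity(scores, weights=WEIGHTS):
    """Weighted sum of per-dimension similarity sub-scores."""
    return sum(weights[k] * scores[k] for k in weights)

# Hypothetical candidate analogue scored against a target substance.
candidate = {"structural": 0.9, "metabolic": 0.8,
             "mechanistic": 0.7, "physicochemical": 0.95}
print(round(integrated_similarity(candidate), 3))
```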
The following diagram contrasts the specific methodological approaches for qualitative versus quantitative read-across:
Implementing robust read-across requires leveraging specialized databases, software tools, and methodological frameworks. The following toolkit categorizes essential resources for conducting read-across assessments:
Table 3: Essential Research Tools and Resources for Read-Across
| Tool/Resource | Type | Primary Function | Application Context |
|---|---|---|---|
| OECD QSAR Toolbox [30] [3] | Software | Chemical categorization, structural similarity, hazard profiling | Both qualitative and quantitative |
| EPA CompTox Chemicals Dashboard [27] [32] | Database | Chemical property data, toxicity values, bioactivity data | Both qualitative and quantitative |
| EPA AIM Tool [3] | Software | Analog Identification Methodology for systematic analogue search | Both qualitative and quantitative |
| Tox21/ToxCast [30] [32] | Database | High-throughput screening bioactivity data | Mechanistic support for both approaches |
| ECHA REACH Database [30] | Database | Registered substance information under REACH | Both qualitative and quantitative |
| EFSA Read-Across Guidance [2] | Framework | Methodological framework for food and feed safety | Both qualitative and quantitative |
| Adverse Outcome Pathway (AOP) Knowledge Base | Framework | Mechanistic support for grouping hypotheses | Primarily qualitative |
| ECETOC Category Framework [30] | Framework | Technical guidance for chemical categorization | Both qualitative and quantitative |
Both qualitative and quantitative read-across face significant scientific challenges that impact their application and regulatory acceptance:
Uncertainty in Similarity Arguments: The core assumption that "similar chemicals have similar properties" contains inherent uncertainty. Activity cliffs, where small structural changes cause dramatic toxicity differences, pose particular challenges [30]. While uncertainty can be characterized qualitatively (low, medium, high), consensus on acceptable uncertainty levels remains elusive [30].
Endpoint Specificity: An analogue suitable for one endpoint may be inappropriate for another. For example, chemicals grouped for acute toxicity may not share carcinogenic potential [3]. This necessitates endpoint-by-endpoint assessment rather than blanket categorization.
Data Quality and Coverage: Large gaps in chemical space coverage persist, particularly for high-quality in vivo data [30]. Even when data exist, variability in test protocols, reporting standards, and reliability assessment complicates comparison across chemicals.
Quantitative Extrapolation Challenges: Quantitative read-across faces additional hurdles in potency extrapolation, as similar chemicals may differ in toxicokinetics that modify effective target tissue doses [27]. The U.S. EPA's experience implementing read-across for the Superfund program revealed challenges in identifying analogues with appropriate dose-response data [27].
Regulatory acceptance of read-across has been described as "slow and unpredictable" despite its potential [30]. A retrospective analysis of REACH submissions found that registrants have "often failed to satisfy regulatory requirements" from ECHA's perspective [18].
Key regulatory concerns include:
Insufficient Similarity Justification: Regulatory authorities frequently reject read-across cases due to inadequate demonstration of structural, mechanistic, or metabolic similarity [18]. ECHA's Read-Across Assessment Framework (RAAF) establishes rigorous standards that many submissions fail to meet [30].
Limited NAMs Acceptance: Analysis of ECHA Final Decisions revealed "no example for acceptance of read-across based on non-animal New Approach Methodologies" [18]. This highlights the gap between scientific innovation and regulatory practice.
Inconsistent Uncertainty Communication: Regulators report that read-across justifications often fail to transparently characterize and document uncertainties [2]. The EFSA guidance emphasizes uncertainty analysis as a critical component, providing templates to standardize this process [2].
Variable Regulatory Standards: Different regulatory programs apply different standards for read-across acceptance. While EFSA has developed detailed guidance for food and feed safety [2], and the U.S. EPA employs structured frameworks for Superfund assessments [27], other jurisdictions may apply read-across on a more ad hoc basis [3].
Qualitative and quantitative read-across represent complementary approaches with distinct applications in chemical safety assessment. Qualitative read-across serves as a powerful tool for hazard identification and classification, particularly when supported by structural alerts and mechanistic understanding. Quantitative read-across enables dose-response assessment and derivation of safe exposure levels, extending the utility of read-across to risk assessment contexts where quantitative values are essential.
The scientific rigor and regulatory acceptance of both approaches continue to evolve through frameworks like those developed by EFSA [2] and the U.S. EPA [27] [32]. Successful implementation requires systematic assessment of structural, metabolic, and mechanistic similarity, transparent characterization of uncertainties, and appropriate integration of New Approach Methodologies. As regulatory guidance matures and scientific methodologies advance, read-across promises to play an increasingly central role in next-generation chemical safety assessment, potentially reducing animal testing while enhancing human relevance.
Read-across has evolved from an expert-driven technique based primarily on structural analogy into a rigorously documented and mechanistically informed cornerstone of modern chemical safety assessment [33]. This approach predicts the toxicological properties of a target substance with limited or no data by using information from structurally and mechanistically similar source substances [2]. The maturation of read-across frameworks, including the European Food Safety Authority's (EFSA) 2025 guidance and the European Chemicals Agency's (ECHA) Read-Across Assessment Framework (RAAF), alongside the development of New Approach Methodologies (NAMs), has significantly enhanced the scientific robustness and regulatory acceptance of read-across predictions [33]. This guide examines successful read-across case studies across toxicological endpoints, comparing methodological frameworks, performance metrics, and practical applications to inform researchers and regulatory scientists.
| Framework | Scope | Key Features | Applicability |
|---|---|---|---|
| EFSA 2025 Guidance [2] [33] | Food and feed safety | Seven-step, uncertainty-anchored workflow; actively embeds NAMs and Adverse Outcome Pathways (AOPs) | Provides a transparent "how-to" template for applicants |
| ECHA RAAF [33] | Industrial chemicals under REACH | Six scenario types and assessment elements; defines evidence requirements | Functions as an evaluator's rubric, standardizing regulatory scrutiny |
| Good Read-Across Practice (GRAP) [33] | Cross-domain | Emphasizes mechanistic plausibility, exhaustive analogue selection, and uncertainty characterization | Supplies conceptual best practices influencing other frameworks |
| Tool/Platform | Type | Key Features | Application in Read-Across |
|---|---|---|---|
| intelligent Read Across (iRA) [34] | Python-based tool | Similarity-based algorithms; calculates pairwise similarity, optimizes read-across, identifies important features | Nanotoxicity prediction using molecular descriptors |
| OrbiTox [35] | Read-across platform | Chemistry-based similarity searching, Saagar molecular descriptors, >1 million data points, >100 QSAR models, built-in metabolism predictor | Streamlining regulatory submissions for chemicals |
| Quantitative Read-Across Structure-Activity Relationship (q-RASAR) [36] | Modeling approach | Combines QSAR with similarity-based read-across; uses similarity values and molecular descriptors | Predicting acute human toxicity (pTDLo endpoint) for diverse chemicals |
| Generalized Read-Across (GenRA) [37] | Computational approach | Based on similarity-weighted activity predictions; implemented in R using chemical fingerprints | Predicting acute oral toxicity (LD50) for chemicals like 1-chloro-4-nitrobenzene |
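The GenRA entry above rests on similarity-weighted activity predictions. A minimal sketch of that weighting scheme follows; the similarity and activity values are invented for illustration and are not data from the cited study.

```python
# Minimal sketch of a similarity-weighted activity prediction, the core idea
# behind GenRA-style read-across. All numeric values are illustrative.

def similarity_weighted_prediction(similarities, activities):
    """Average of source-analogue activities, weighted by similarity to target."""
    total = sum(similarities)
    if total == 0:
        raise ValueError("no similar analogues available")
    return sum(s * a for s, a in zip(similarities, activities)) / total

# Three hypothetical source analogues: fingerprint similarity to the target
# and their measured log10(LD50) values
sims = [0.9, 0.6, 0.3]
log_ld50 = [2.0, 2.5, 3.0]
print(round(similarity_weighted_prediction(sims, log_ld50), 3))
```

Closer analogues dominate the estimate, so the prediction here lands nearer the most similar analogue's activity than a plain mean would.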
A 2024 next-generation risk assessment (NGRA) case study was conducted to determine the highest safe concentration of daidzein in a body lotion based on similarities with its structural analogue, genistein [38]. This study established a proof-of-concept for the value added by NAMs in read-across, using in silico information together with in vitro toxicodynamic and toxicokinetic data to support biological similarity and establish potency [38].
The safety assessment followed a 10-step tiered workflow evaluating systemic toxicity [38]:
The workflow integrated data from various NAMs, including PBPK modeling, cell stress assays, pharmacology profiling, transcriptomics, and EATS assays for endocrine disruption endpoints [38].
The case study successfully established a safe use concentration for daidzein in a body lotion:
This case study demonstrated that NAMs can provide valuable support for read-across assessments and help foster their regulatory acceptance [38]. The approach highlighted the use of NAMs in a tiered workflow to conclude on the highest safe concentration of an ingredient without animal testing, showcasing a viable path for animal-free safety assessments [38].
The widespread use of nanoparticles (NPs) in medicine, sensors, and cosmetics presents potential human health and environmental risks [34]. Experimental evaluation of NP toxicity is resource-intensive and raises ethical concerns, necessitating computational methods for toxicity assessment [34].
This study introduced a Python-based tool called "intelligent Read Across" (iRA) for evaluating nanoparticle toxicity [34]:
The tool was validated using three small datasets (≤30 samples) containing nanotoxicity data [34]. The methodology followed basic similarity-based read-across approaches to perform predictions and identified structural characteristics and properties contributing to toxicity [34].
The iRA tool demonstrated significant improvements in prediction accuracy:
The iRA tool provides a computational solution for prioritizing data-poor nanoparticles, addressing a critical gap in nanotechnology risk assessment [34]. Its ability to identify structural features contributing to toxicity helps guide the development of safer nanomaterials [34].
A 2024 study introduced a novel read-across concept for ecotoxicological risk assessment of phosphate chemicals, considering species sensitivity differences within structurally similar compound groups [39]. This approach addressed limitations of traditional read-across, which can show significant variations between predicted and observed toxic values (up to 3.2 times in fish and 5.1 times in crustaceans for aromatic amines) [39].
The study developed a novel read-across concept through several key steps:
The novel read-across concept demonstrated strong predictive performance:
This approach demonstrated that considering specific modes of action and species sensitivity improves the reliability and accuracy of read-across predictions for ecotoxicological assessments [39]. The method provides a framework for addressing aquatic toxicity data gaps while reducing reliance on animal testing [39].
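One way to sketch the species-sensitivity idea described above, under the simplifying assumption that a constant geometric-mean sensitivity ratio holds within a structurally similar group; the numbers are invented for illustration and are not data from the cited study.

```python
import math

# Invented illustration: within a structurally similar group, estimate a
# fish-to-crustacean sensitivity ratio from analogues that have data in both
# species, then use it to fill the crustacean data gap for a target chemical
# that only has a fish LC50. A constant ratio within the group is an assumption.

def geometric_mean_ratio(lc50_fish, lc50_crustacean):
    """Geometric mean of per-chemical fish/crustacean LC50 ratios."""
    ratios = [f / c for f, c in zip(lc50_fish, lc50_crustacean)]
    return math.exp(sum(math.log(r) for r in ratios) / len(ratios))

fish_lc50 = [10.0, 8.0, 12.0]   # mg/L, hypothetical analogues
crust_lc50 = [2.0, 1.6, 3.0]    # mg/L, same analogues
ratio = geometric_mean_ratio(fish_lc50, crust_lc50)

target_fish_lc50 = 9.0          # only fish data available for the target
print(round(target_fish_lc50 / ratio, 2))  # predicted crustacean LC50 (mg/L)
```

The geometric mean is used because toxicity ratios are multiplicative; an arithmetic mean would be biased by the largest ratio.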
| Case Study | Endpoint | Performance Metrics | Key Advantages |
|---|---|---|---|
| iRA for Nanotoxicity [34] | Nanoparticle toxicity | Improved external validation metrics vs. previous models | Handles very small datasets (≤30 samples); identifies toxicity drivers |
| q-RASAR for Acute Toxicity [36] | Human acute toxicity (pTDLo) | R² = 0.710, Q² = 0.658; external validation: Q²F1 = 0.812, Q²F2 = 0.812 | Combines QSAR with similarity-based read-across; screens large chemical libraries |
| GenRA for Acute Oral Toxicity [37] | Rodent acute oral toxicity (LD50) | Predicts LD50 for data-poor chemicals | Uses similarity-weighted activity predictions; implemented in open-source R package |
| NAM-based for Systemic Toxicity [38] | Endocrine disruption (ERα assay) | Correlation between in vitro PoD and in vivo NOAEL | Integrated NAMs workflow; animal-free safety assessment |
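The external-validation statistics quoted for q-RASAR in the table above (Q²F1, Q²F2) follow standard definitions: both compare prediction error on the test set against variance about a reference mean (the training-set mean for Q²F1, the test-set mean for Q²F2). A minimal sketch with invented data:

```python
# Sketch of the external-validation metrics reported for q-RASAR models.
# Standard definitions; the numeric data below are illustrative, not from [36].

def q2_f1(y_test, y_pred, y_train_mean):
    """External Q2F1: 1 - PRESS / SS about the TRAINING-set mean."""
    press = sum((o - p) ** 2 for o, p in zip(y_test, y_pred))
    ss = sum((o - y_train_mean) ** 2 for o in y_test)
    return 1 - press / ss

def q2_f2(y_test, y_pred):
    """External Q2F2: 1 - PRESS / SS about the TEST-set mean."""
    mean_test = sum(y_test) / len(y_test)
    press = sum((o - p) ** 2 for o, p in zip(y_test, y_pred))
    ss = sum((o - mean_test) ** 2 for o in y_test)
    return 1 - press / ss

y_test = [1.0, 2.0, 3.0, 4.0]   # observed endpoint values (illustrative)
y_pred = [1.1, 1.9, 3.2, 3.8]   # model predictions (illustrative)
print(round(q2_f1(y_test, y_pred, 2.0), 3), round(q2_f2(y_test, y_pred), 3))
```

Values approaching 1 indicate that test-set predictions explain most of the variance; regulatory-style reporting typically quotes both statistics, as the table does.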
Analysis of successful read-across cases reveals several critical success factors:
| Tool/Resource | Type | Function in Read-Across | Example Applications |
|---|---|---|---|
| PBPK Modeling [38] | Computational tool | Extrapolates external to internal doses; supports toxicokinetic similarity | Converting in vitro PoD to external safe doses in NAM-based assessments |
| ECOTOX Knowledgebase [39] | Database | Provides curated ecological toxicity data for aquatic species | Source of LC50 data for fish, crustaceans, and insects in ecotoxicity read-across |
| OECD QSAR Toolbox [41] | Software | Profiling, categorization, and filling data gaps for chemicals | Identifying structural analogues and metabolic pathways for read-across |
| TAME 2.0 [37] | Computational platform | Conducts generalized read-across (GenRA) predictions | Predicting acute oral toxicity (LD50) for data-poor chemicals |
| OrbiTox [35] | Read-across platform | Chemistry-based similarity searching with extensive database and QSAR models | Streamlining regulatory submissions for chemicals with data gaps |
| GLORYx / Meteor Nexus (in silico metabolism) [38] | Software | Predicts likely metabolites of target and source chemicals | Assessing metabolic similarity in read-across justifications |
The case studies presented demonstrate that modern read-across approaches have matured into scientifically robust and regulatory-relevant tools for toxicity prediction. Key advancements include the integration of New Approach Methodologies, the development of quantitative frameworks (q-RASAR), and the implementation of structured uncertainty assessment. The convergence of regulatory frameworks (EFSA, ECHA RAAF, GRAP) signals an emerging international consensus on defensible read-across practices [33]. For researchers and regulatory scientists, successful implementation requires careful attention to problem formulation, comprehensive similarity assessment (structural, metabolic, mechanistic), transparent documentation, and appropriate uncertainty characterization. As these methodologies continue to evolve, read-across promises to play an increasingly vital role in addressing chemical data gaps while reducing animal testing, ultimately supporting the development of safer chemicals and products.
In the evolving landscape of chemical safety assessment, read-across approaches have emerged as powerful new approach methodologies (NAMs) that enable prediction of toxicological properties for data-poor chemicals using information from structurally similar, data-rich substances [2] [9]. While these methodologies offer significant potential to reduce reliance on animal testing and accelerate safety evaluations, their implementation faces considerable challenges regarding uncertainty quantification and regulatory acceptance. This guide examines the core pitfalls in read-across applications and provides structured frameworks to enhance scientific robustness, drawing from recent EFSA guidance and practical implementation case studies.
A standardized workflow is fundamental to implementing reliable read-across assessments. The European Food Safety Authority (EFSA) outlines a structured process encompassing key stages from problem formulation to uncertainty analysis [2] [9]. The following diagram illustrates this systematic workflow:
The Challenge: Simply demonstrating structural similarity through basic molecular descriptors is insufficient for regulatory acceptance. Studies show that small structural differences can significantly impact toxicological behavior, leading to potentially inaccurate predictions [3].
Experimental Protocol for Robust Characterization:
Supporting Data: A comparative analysis of 65 risk/safety assessments revealed that 20 of them showed 30-fold variability in derived values, primarily attributable to insufficient characterization of structural-functional relationships [42].
The Challenge: Failure to adequately quantify and document uncertainty remains a primary source of regulatory skepticism. EFSA emphasizes that uncertainty analysis must determine whether overall uncertainty can be "lowered to tolerable levels" through standardized approaches [2].
Experimental Framework for Uncertainty Quantification:
Table 1: Uncertainty Assessment Framework for Read-Across Applications
| Uncertainty Source | Assessment Method | Quantification Approach | Tolerability Threshold |
|---|---|---|---|
| Structural Analogy | Tanimoto similarity index | Distance metrics in chemical descriptor space | >0.8 for high confidence |
| Toxicological Relevance | In vitro bioactivity profiling | Concordance analysis of ToxCast/Tox21 assay results | >85% similarity in bioactivity profiles |
| Metabolic Concordance | In vitro metabolomics | Comparative metabolic stability and metabolite identification | >70% shared major metabolites |
| Dose-Response Consistency | Benchmark dose modeling | Point of departure comparison across analogs | <3-fold difference in POD values |
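The Tanimoto index in the table's structural-analogy row can be computed directly from fingerprint bit sets; the sketch below checks two invented fingerprint pairs against the table's illustrative >0.8 high-confidence threshold.

```python
# Minimal sketch of the Tanimoto (Jaccard) index over fingerprint bit sets.
# The bit positions below are invented; real workflows would derive them from
# a cheminformatics toolkit (e.g., hashed structural fingerprints).

def tanimoto(fp_a, fp_b):
    """Tanimoto index: |intersection| / |union| of set bit positions."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

close_pair = (set(range(10)), set(range(9)) | {10})  # 9 shared of 11 bits
distant_pair = ({1, 2, 3, 4, 5}, {1, 2, 6, 7, 8})    # 2 shared of 8 bits

for a, b in (close_pair, distant_pair):
    sim = tanimoto(a, b)
    print(round(sim, 3), sim > 0.8)  # similarity, passes threshold?
```

Note that the 0.8 cut-off is fingerprint-dependent: the same chemical pair can score very differently under different fingerprint schemes, which is one reason the table pairs structural analogy with bioactivity and metabolic concordance.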
The Challenge: Overreliance on traditional read-across without complementary NAMs limits mechanistic understanding and reduces regulatory confidence [43] [3].
Experimental Protocol for NAMs Integration:
Table 2: Research Reagent Solutions for Enhanced Read-Across
| Research Tool Category | Specific Tools/Platforms | Primary Function | Regulatory Application |
|---|---|---|---|
| Chemical Database Platforms | PubChem [44], Reaxys [44], SciFinder [44] | Chemical property data acquisition | Source substance identification and characterization |
| Toxicogenomics Resources | Tox21 [3], ToxCast [3] | High-throughput screening data | Mechanistic similarity assessment |
| QSAR and Read-Across Tools | OECD QSAR Toolbox [3], CEFIC AMBIT [3], EPA AIM Tool [3] | Structural similarity assessment and analogue identification | Category formation and hypothesis generation |
| Physiologically Based Kinetic Models | QIVIVE, PBK modeling [43] | In vitro to in vivo extrapolation | Dose-response extrapolation and kinetic consistency |
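The QIVIVE entry above is often implemented as simple reverse dosimetry under a steady-state assumption: find the external dose whose predicted steady-state plasma concentration matches the in vitro point of departure. A minimal sketch, with illustrative numbers that are not outputs of any cited tool:

```python
# Minimal reverse-dosimetry (QIVIVE) sketch under a linear, steady-state
# kinetics assumption: Css scales proportionally with dose, so the oral
# equivalent dose is the in vitro point of departure divided by the Css
# predicted for a unit (1 mg/kg/day) intake. Values are illustrative.

def oral_equivalent_dose(in_vitro_pod_uM, css_uM_at_1mg_kg_day):
    """Dose (mg/kg/day) whose steady-state plasma conc. matches the PoD."""
    return in_vitro_pod_uM / css_uM_at_1mg_kg_day

pod = 3.0        # in vitro point of departure, micromolar (illustrative)
unit_css = 1.5   # predicted Css at 1 mg/kg/day, micromolar (illustrative)
print(oral_equivalent_dose(pod, unit_css))  # 2.0 mg/kg/day
```

Full PBK workflows relax the linearity assumption and add population variability, but this proportional form is the usual first-tier calculation.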
Implementation Workflow: The integration of multiple evidence streams follows a logical progression from basic characterization to advanced mechanistic support, as illustrated below:
Table 3: Performance Comparison of Read-Across Implementation Approaches
| Assessment Dimension | Traditional Read-Across | Enhanced Read-Across (NAM-Integrated) | Regulatory Impact |
|---|---|---|---|
| Structural Similarity | Basic functional group comparison | Multi-descriptor similarity with toxicophore mapping | Reduces uncertainty in category justification |
| Mechanistic Evidence | Limited or inferred | Comprehensive AOP-based analysis with in vitro confirmation | Addresses mode of action consistency requirements |
| Metabolic Considerations | Often omitted or superficial | Experimental metabolite profiling and PBK modeling | Mitigates cross-species extrapolation uncertainties |
| Uncertainty Characterization | Qualitative description | Quantitative uncertainty bounds with sensitivity analysis | Enables transparent risk-based decision making |
| Regulatory Acceptance Rate | Variable (case-specific) | Consistently higher with documented precedents | Streamlines submission review process |
Analysis of regulatory assessment patterns demonstrates that applications incorporating comprehensive uncertainty assessment and NAMs integration show significantly improved outcomes [42] [3]:
Addressing uncertainty and regulatory skepticism in read-across applications requires systematic implementation of structured workflows, comprehensive uncertainty assessment, and strategic integration of new approach methodologies. The experimental protocols and comparative data presented herein provide researchers with evidence-based frameworks to enhance the scientific robustness of read-across cases. As regulatory guidance continues to evolve, with EFSA's final read-across guidance anticipated by the end of 2025, the emphasis on transparency, mechanistic relevance, and quantified uncertainty will increasingly determine successful implementation. By adopting these advanced approaches, researchers can transform read-across from a data gap-filling exercise into a scientifically rigorous component of modern chemical safety assessment.
New Approach Methodologies (NAMs) are revolutionizing chemical safety assessment by providing innovative, human-relevant tools for hypothesis strengthening in read-across approaches. This guide compares the performance of integrated NAMs frameworks against traditional single-method assessments, demonstrating through experimental data how combining in vitro, in silico, and in chemico methods enhances predictive accuracy, reduces uncertainty, and supports regulatory acceptance. By examining specific case studies and providing detailed protocols, we illustrate how hypothesis-driven integration of NAMs creates a robust weight-of-evidence framework for chemical safety evaluation.
Read-across is a fundamental technique in chemical safety assessment that involves using data from chemically or biologically similar substances (source substances) to predict the properties of a data-poor target substance [2] [3]. This approach has evolved from simple structural comparisons to sophisticated hypothesis-driven frameworks incorporating multiple lines of evidence from New Approach Methodologies. NAMs encompass a broad suite of innovative tools, including in vitro models, computational approaches, omics technologies, and mechanistic frameworks, designed to provide more human-relevant safety data while reducing reliance on traditional animal testing [45].
The integration of NAMs into read-across represents a paradigm shift from traditional toxicology toward mechanistically informed, predictive safety assessment. By generating targeted data on specific biological pathways and key events, NAMs strengthen the scientific justification for read-across hypotheses, address uncertainties in similarity justifications, and provide human-relevant biological context [45] [46]. Regulatory agencies worldwide, including the EPA, EFSA, and OECD, are increasingly encouraging this integrated approach through updated guidance documents and training initiatives [2] [47] [48].
Recent studies have systematically evaluated the performance of various NAMs combinations for specific toxicological endpoints. The table below summarizes experimental data from a comparative study assessing Defined Approaches (DAs) for eye hazard identification, particularly for surfactants, a challenging chemical class in which structural similarity alone may be insufficient for accurate read-across [49].
Table 1: Performance Comparison of Defined Approaches for Eye Hazard Identification of Surfactants
| Defined Approach (DA) | Test Methods Included | UN GHS Category 1 Sensitivity | UN GHS Category 2 Sensitivity | No Category Accuracy | Applicability Domain |
|---|---|---|---|---|---|
| DASF | Recombinant human cornea-like epithelium (TG 492) + modified Short Time Exposure (TG 491) | 90.9% (N=23) | 77.8% (N=9) | 76.0% (N=17) | Surfactants |
| Other DA Combinations | Various OECD-adopted NAMs | Variable; often below minimum performance criteria | Variable; often below minimum performance criteria | Variable; often below minimum performance criteria | Primarily non-surfactants |
| Minimum Performance Criteria | As per OECD TG 467 | ≥75% | ≥50% | ≥70% | - |
The experimental data demonstrate that the DASF approach, which strategically combines human tissue models with a modified animal cell assay, meets all minimum performance criteria for surfactants, whereas other NAM combinations show variable and often insufficient performance [49]. This highlights the importance of fit-for-purpose method selection and integration rather than simply combining available tests.
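The minimum performance criteria in Table 1 can be checked mechanically; the small sketch below applies them to the DASF figures reported above, plus one invented defined approach that misses the No Category criterion.

```python
# Sketch: checking a defined approach's results against the minimum
# performance criteria cited from OECD TG 467 in Table 1
# (Cat 1 sensitivity >= 75%, Cat 2 sensitivity >= 50%, No Category
# accuracy >= 70%). Figures are expressed as fractions.

def meets_tg467_minimums(cat1_sensitivity, cat2_sensitivity, nocat_accuracy):
    """True if all three Table 1 minimum criteria are met."""
    return (cat1_sensitivity >= 0.75
            and cat2_sensitivity >= 0.50
            and nocat_accuracy >= 0.70)

# DASF figures reported in Table 1 for surfactants
print(meets_tg467_minimums(0.909, 0.778, 0.760))  # True

# A hypothetical DA that fails only on No Category accuracy (invented values)
print(meets_tg467_minimums(0.80, 0.60, 0.65))  # False
```

Because the criteria are conjunctive, a defined approach must clear all three thresholds simultaneously, which is exactly where the "other DA combinations" row in Table 1 falls short.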
The integration of multiple NAMs creates significant advantages over single-method approaches by providing complementary data streams that address different aspects of chemical-biological interactions. The comparative analysis below illustrates these advantages across key evaluation parameters.
Table 2: Comprehensive Comparison of NAMs Integration vs. Single-Method Approaches
| Evaluation Parameter | Traditional Single-Method Approach | Integrated NAMs Framework | Experimental Evidence |
|---|---|---|---|
| Predictive Accuracy | Limited to specific endpoint; may miss complex biology | Comprehensive coverage of multiple toxicity pathways | DASF achieved 90.9% Cat. 1 sensitivity vs. variable performance of single methods [49] |
| Human Relevance | Variable depending on model system | High through human cell systems, tissue models, and computational biology | Organ-on-chip platforms replicate organ-level functions with human cells [45] |
| Mechanistic Insight | Limited to assay design | Deep mechanistic data via omics, pathway analysis, and AOP networks | Transcriptomics reveals specific pathways perturbed by chemical exposure [45] |
| Uncertainty Management | High uncertainty from methodological limitations | Reduced uncertainty through weight-of-evidence and convergent findings | EFSA guidance emphasizes uncertainty assessment in read-across using NAMs data [2] |
| Regulatory Acceptance | Established for validated single methods | Growing through systematic case studies and OECD guidelines | OECD TG 467 adoption of Defined Approaches for eye irritation [49] |
| Hypothesis Testing Strength | Limited to narrow biological questions | Robust hypothesis testing through orthogonal verification | Read-across supported by in vitro, in silico, and in chemico data [3] |
| Data Gap Addressing | Limited to specific data gaps | Comprehensive data gap filling through predictive modeling | QSAR, PBPK modeling, and ToxCast data fill kinetic and dynamic data gaps [47] [48] |
This protocol outlines a systematic approach for strengthening read-across hypotheses using complementary NAMs, based on EFSA's guidance for chemical safety assessment in food and feed [2] [3].
Workflow Overview:
Step-by-Step Methodology:
Problem Formulation: Define the specific regulatory endpoint and data requirements. Identify the knowledge gaps in the target substance and formulate testable hypotheses about similarity to potential source substances [2].
Target Substance Characterization: Conduct comprehensive characterization of the target substance using:
Source Substance Identification: Identify candidate source substances using:
NAMs Data Collection: Generate complementary experimental data using:
Similarity Assessment: Evaluate the weight-of-evidence for similarity using:
Uncertainty Analysis: Systematically evaluate uncertainty using EFSA's uncertainty template [2]. Identify key uncertainty sources (structural analogs, metabolic differences, assay limitations) and use additional NAMs data to reduce uncertainty to tolerable levels.
Conclusion and Reporting: Document the hypothesis, all data sources, similarity justification, uncertainty analysis, and final conclusion in a transparent report suitable for regulatory submission [2].
This specialized protocol details the experimental methodology for the DASF approach that demonstrated superior performance in surfactant eye hazard identification [49].
Workflow Overview:
Step-by-Step Methodology:
Test System Preparation:
Test Article Application:
Endpoint Measurement:
Data Integration and Classification:
Successful implementation of NAMs-enhanced read-across requires access to specialized reagents, tools, and platforms. The following table details essential research solutions with their specific functions in hypothesis-driven safety assessment.
Table 3: Essential Research Reagent Solutions for NAMs-Enhanced Read-Across
| Tool/Reagent Category | Specific Examples | Function in Read-Across Hypothesis Testing | Key Features |
|---|---|---|---|
| Computational Chemistry Tools | OECD QSAR Toolbox, EPA CompTox Dashboard, AIM Tool, Chemical Transformation Simulator | Structural similarity assessment, property prediction, metabolite identification | Chemical category formation, read-across analogue identification, metabolic pathway prediction [47] [3] |
| In Vitro Tissue Models | EpiOcular EIT, SkinEthic HCE, Organ-on-Chip systems, 3D organoids | Human-relevant tissue response assessment, mechanism of action studies | Replicates complex tissue architecture and function, species-specific responses [45] [49] |
| Bioactivity Screening Platforms | ToxCast, Tox21, invitroDB | High-throughput bioactivity profiling, pathway perturbation assessment | Screening across hundreds of pathways and targets, concentration-response data [47] |
| Omics Technologies | Transcriptomics, proteomics, metabolomics platforms | Mechanistic similarity assessment, adverse outcome pathway evaluation | Identifies molecular initiating events and key events in toxicity pathways [45] |
| Toxicokinetic Tools | httk R package, PBPK modeling, SHEDS-HT | Absorption, distribution, metabolism, excretion prediction | Estimates internal dose, species extrapolation, exposure assessment [47] |
| Data Integration & Analysis | SeqAPASS, ECOTOX Knowledgebase, AOP Wiki | Cross-species extrapolation, pathway analysis, data integration | Supports weight-of-evidence assessment, uncertainty reduction [47] |
| Specialized Assay Kits | DPRA (Direct Peptide Reactivity Assay), KeratinoSens, h-CLAT | Specific endpoint assessment (skin sensitization, etc.) | Standardized protocols, OECD validation, high predictivity for specific endpoints [45] |
The integration of New Approach Methodologies represents a transformative advancement in read-across-based chemical safety assessment. As demonstrated through comparative performance data and detailed experimental protocols, strategically combined NAMs provide a powerful framework for strengthening hypotheses through convergent lines of evidence, human-relevant mechanistic data, and systematic uncertainty reduction. The superior performance of defined approaches like DASF for challenging chemical classes such as surfactants underscores that methodological integration, not merely replacement of animal tests, delivers the most scientifically robust and regulatory-ready solutions.
The ongoing development of standardized protocols, coupled with increasing regulatory acceptance and the growing toolkit of research solutions, positions NAMs-enhanced read-across as the future paradigm for efficient, ethical, and human-relevant chemical safety evaluation. Success in this evolving landscape will depend on continued pre-competitive data sharing, validation against human biological responses, and the development of integrated testing strategies that leverage the complementary strengths of multiple NAMs platforms [50].
The assessment of chemical safety is undergoing a fundamental transformation, moving away from traditional animal studies toward a new paradigm centered on New Approach Methodologies (NAMs). This modern framework integrates in vitro data, omics technologies, and computational tools to enable faster, more human-relevant safety decisions [51]. Within this framework, the read-across approach has emerged as a pivotal methodology. Read-across is a technique used in chemical risk assessment to predict the toxicological properties of a target substance by using data from structurally and mechanistically similar substances, known as source substances [2]. The integration of omics and computational tools is crucial for building scientific confidence in read-across, as it provides a mechanistic understanding that supports the hypothesis of similarity between source and target chemicals, thereby reducing uncertainty in the assessment [2].
Omics methodologies represent cutting-edge molecular techniques that provide comprehensive insights into biological systems by analyzing all components of a particular biological domain simultaneously [52]. They offer a holistic, top-down approach to investigating biological systems, enabling the systematic interrogation of complex disorders and chemical effects through multi-layer modifications at genomic, transcriptomic, proteomic, and metabolic levels [53] [54].
Omics technologies can be broadly classified into two categories: technology-based and knowledge-based omics. The foundational, technology-based omics follow the "central dogma" of biology and can be further divided into three groups [54]:
Knowledge-based omics, such as immunomics and microbiomics, are developed to understand a particular knowledge domain by integrating multiple omics information [54]. The diagram below illustrates the hierarchy and relationships between these different omics fields.
The generation of multi-omics data relies on a wide array of techniques specific to each omics level. The table below compares the key high-throughput platforms for genomics, transcriptomics, proteomics, and metabolomics.
Table 1: Comparison of High-Throughput Omics Platforms [53] [52] [54]
| Omics Field | Core Technology | Example Platforms/Methods | Key Advantages | Key Limitations |
|---|---|---|---|---|
| Genomics | Sequencing | DNA Microarray, Sanger Sequencing, Next-Generation Sequencing (NGS, e.g., Illumina), Third-Generation Sequencing (TGS, e.g., PacBio, Oxford Nanopore) | TGS provides long reads for resolving complex genomic regions; NGS offers high throughput at lower cost [52] [54]. | Microarrays cannot detect de novo transcripts; NGS has short read lengths; TGS can have higher error rates [54]. |
| Transcriptomics | Sequencing | RNA Microarray, RNA-Seq, Single-Cell RNA-Seq (e.g., CEL-seq2, Drop-seq) | RNA-Seq allows detection of novel transcripts and alternative splicing; single-cell provides resolution at cellular level [53] [54]. | Microarrays rely on predefined probes; tag-based methods can be prone to batch effects [54]. |
| Proteomics | Mass Spectrometry (MS) | High-Resolution MS (Orbitrap, FT-ICR), Tandem MS (CID, ECD, ETD) | High resolution and mass accuracy (FT-ICR); can identify post-translational modifications (ETD/ECD) [53] [55]. | High cost and maintenance (FT-ICR); low scan speeds; can struggle with unstable modifications (CID) [53]. |
| Metabolomics | Spectroscopy / MS | NMR Spectroscopy, FT-IR Spectroscopy, GC/MS or LC/MS | Simple sample prep, highly reproducible (NMR); high sensitivity (LC/MS/GC/MS) [53]. | Lower sensitivity than MS (NMR); long preparation may lead to errors (FT-IR) [53]. |
The vast and complex datasets generated by omics technologies necessitate advanced computational tools for analysis, integration, and interpretation. These tools are essential for extracting biologically meaningful insights and building predictive models for chemical safety.
With hundreds of computational omics methods available, systematic benchmarking is critical for guiding researchers to select the best tools for their specific analytical tasks and data types [56]. A robust benchmarking study uses gold standard data sets as ground truth and well-defined scoring metrics to assess the performance and accuracy of each tool [56]. Key principles for rigorous benchmarking include the use of independent gold-standard datasets, task-appropriate scoring metrics, and transparent, reproducible evaluation procedures [56].
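The core of such a benchmark can be sketched in a few lines: score each tool's calls against the gold-standard labels with well-defined metrics. The tool predictions and ground-truth labels below are hypothetical illustrations, not results from any published benchmark.

```python
# Sketch: scoring a computational omics tool against a gold-standard data set.
# Predictions and labels are hypothetical.

def benchmark(predictions, gold_standard):
    """Compare a tool's binary calls against ground-truth labels."""
    tp = sum(1 for p, g in zip(predictions, gold_standard) if p and g)
    fp = sum(1 for p, g in zip(predictions, gold_standard) if p and not g)
    fn = sum(1 for p, g in zip(predictions, gold_standard) if not p and g)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Ground truth: which features are truly positive (hypothetical).
gold = [True, True, False, False, True, False]
tool_a = [True, False, False, False, True, True]  # one miss, one false positive
print(benchmark(tool_a, gold))
```

The same scoring function would be applied to every tool under comparison, so that differences reflect the methods rather than the evaluation procedure.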
Multi-omics integration is a prevailing trend for constructing a comprehensive causal relationship between molecular signatures and phenotypic manifestations [52] [54]. Advanced computational tools are pivotal in cancer research, complex brain disorders, and metabolic diseases by unraveling molecular pathways and identifying biomarkers [52]. These tools leverage machine learning and artificial intelligence to integrate diverse datatypes, such as genomic, transcriptomic, proteomic, and clinical data, to distinguish distinct patient cohorts and foster personalized treatment approaches [52].
In the context of chemical safety and read-across, computational tools also extend to process simulation and modeling. These tools help in understanding chemical properties and process-related hazards.
Table 2: Comparison of Selected Chemical Process Simulation Tools [57]
| Tool Name | Best For | Standout Feature | Pros | Cons |
|---|---|---|---|---|
| Aspen Plus | Large-scale industrial applications (petrochemicals, chemicals) | Advanced thermodynamics for highly accurate simulations | Highly accurate and reliable results; extensive data library [57]. | Expensive; steep learning curve; high computational resource demand [57]. |
| COMSOL Multiphysics | Complex, multiphysics problems (R&D) | Simulates multiple physical phenomena (heat transfer, fluid dynamics, reactions) | Highly versatile and customizable; ideal for research [57]. | Expensive; requires advanced knowledge; heavy computational needs [57]. |
| CHEMCAD | Pharmaceutical, energy, and petrochemical process design | User-friendly interface suitable for non-experts | Affordable compared to high-end alternatives; fast learning curve [57]. | Limited features for complex multi-phase simulations; basic thermodynamics [57]. |
| DWSIM | Educational and professional use with budget constraints | Open-source and highly customizable via Python | Free and accessible; user-friendly; customizable [57]. | Limited advanced features; lacks industry-standard support; performance issues with large models [57]. |
The true power of in vitro data, omics, and computational tools is realized when they are integrated into a cohesive workflow for chemical safety assessment. This is particularly relevant for strengthening the scientific basis of read-across. The following diagram outlines a robust experimental and computational workflow that leverages these modern methodologies.
This workflow aligns with regulatory guidance, which emphasizes a step-by-step approach to read-across, including problem formulation, target and source substance characterization, and a particular emphasis on uncertainty analysis [2]. Data from New Approach Methodologies (NAMs), including omics, can be integrated to lower the overall uncertainty to tolerable levels [2].
To generate data that supports a read-across hypothesis, a typical transcriptomics experiment following the workflow above proceeds from controlled chemical exposure of the in vitro model, through RNA isolation, quality control, and sequencing, to differential expression and pathway analysis.
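The downstream analysis stage of such an experiment can be sketched as a per-gene screen of treated versus control expression. This is a minimal effect-size illustration (not a full differential-expression pipeline such as DESeq2), and the gene names and normalized counts are hypothetical.

```python
import math
from statistics import mean, stdev

# Sketch: per-gene screening of treated vs control expression following
# RNA-Seq quantification. Gene names and counts are hypothetical.

def log2_fold_change(treated, control, pseudocount=1.0):
    """log2 ratio of mean expression, with a pseudocount to avoid log(0)."""
    return math.log2((mean(treated) + pseudocount) / (mean(control) + pseudocount))

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2 / len(a), stdev(b) ** 2 / len(b)
    return (mean(a) - mean(b)) / math.sqrt(va + vb)

# Normalized counts per replicate (hypothetical exposure study).
genes = {
    "CYP1A1": ([250.0, 270.0, 260.0], [30.0, 35.0, 32.0]),   # induced
    "ACTB":   ([100.0, 98.0, 103.0], [101.0, 99.0, 100.0]),  # unchanged
}
for name, (treated, control) in genes.items():
    lfc = log2_fold_change(treated, control)
    t = welch_t(treated, control)
    print(f"{name}: log2FC={lfc:+.2f} t={t:.1f}")
```

Dysregulated genes flagged at this stage would then feed into pathway analysis tools to provide the mechanistic evidence supporting source-to-target similarity.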
Successful execution of integrated testing strategies requires specific reagents and platforms. The following table details key solutions used in the field.
Table 3: Essential Research Reagent Solutions for Omics-Based Chemical Safety Assessment
| Item / Solution | Function / Application | Examples / Specifications |
|---|---|---|
| Cell Culture Systems | Providing biologically relevant in vitro models for chemical exposure. | Primary hepatocytes, hepatic cell lines (e.g., HepaRG), induced pluripotent stem cell (iPSC)-derived cells, 3D organoids [51]. |
| Nucleic Acid Extraction Kits | Isolating high-quality RNA/DNA for downstream sequencing applications. | Qiagen RNeasy Kit, Thermo Fisher MagMAX Kit. Must provide RNA free of genomic DNA with RIN > 8.5 [58]. |
| Library Prep Kits | Preparing sequencing libraries from nucleic acids for NGS/TGS platforms. | Illumina TruSeq (RNA-Seq), Takara Bio SMART-Seq (single-cell), PacBio SMRTbell kits (long-read) [52] [54]. |
| Mass Spectrometry Reagents | Enabling proteomic and metabolomic analysis, including sample prep and separation. | Trypsin (for protein digestion), TMT/Isobaric tags (for multiplexed quantitation), stable isotope-labeled internal standards [53]. |
| Pathway Analysis Software | Interpreting dysregulated gene/protein lists in the context of biological pathways. | Ingenuity Pathway Analysis (IPA), MetaboAnalyst, clusterProfiler [52]. |
| Process Simulation Software | Modeling chemical processes and properties to understand physicochemical behavior. | Aspen Plus, COMSOL Multiphysics, CHEMCAD [57]. |
The convergence of in vitro data, omics technologies, and advanced computational tools is reshaping the landscape of chemical safety assessment. This integrated approach provides a powerful, mechanistic foundation for read-across and other New Approach Methodologies, moving the field toward more human-relevant, efficient, and informative risk assessments. For researchers, the critical steps are the careful selection of appropriate omics platforms based on the biological question, the application of rigorously benchmarked computational tools for data analysis, and the transparent integration of all data streams within a structured assessment framework like that outlined for read-across. As these technologies continue to evolve, they promise to further reduce uncertainty and build greater confidence in the safety decisions that protect human health and the environment.
In modern chemical safety assessment, tiered testing strategies provide a structured framework for efficiently and ethically evaluating substance toxicity. By integrating data from multiple sources, these strategies enable a weight-of-evidence determination that supports robust regulatory decisions. This guide compares the performance of a tiered testing approach against traditional, standalone testing methods, with a specific focus on its application within read-across assessments for filling data gaps. Experimental data and case studies demonstrate that a systematic, tiered methodology enhances predictivity, reduces reliance on animal testing, and accelerates the safety evaluation process for researchers and drug development professionals.
A tiered testing strategy is a sequential approach to chemical safety assessment that begins with simple, rapid, and cost-effective methods and progresses to more complex testing only as needed. This process is intrinsically linked to the weight-of-evidence framework, a systematic approach for making decisions by integrating, reconciling, and interpreting all available data to reach a conclusion that is greater than the sum of its parts [59].
In the context of chemical safety research, particularly for the evaluation of skin corrosion and irritation, a tiered approach might start with the determination of a substance's pH and acid/alkaline reserve. Substances with extreme pH values (≤ 2 or ≥ 11.5) warrant further investigation. The subsequent tiers can incorporate in vitro methods, such as the EpiDerm skin corrosion/irritation test and the Hen's Egg Test-Chorioallantoic Membrane (HET-CAM), to build a sufficient body of evidence for classification and labeling under systems like the Globally Harmonized System of Classification and Labelling of Chemicals (GHS) [59]. This methodology is not only more efficient but also aligns with the global push towards New Approach Methodologies (NAMs) that reduce animal testing [3].
Read-across is a powerful data gap-filling technique within chemical safety assessment. It involves predicting the properties of a target substance with limited or no data by using information from one or more source substances that are considered structurally and mechanistically similar [2]. The reliability of a read-across prediction is highly dependent on the strength of the evidence establishing similarity and the plausibility of the prediction.
A tiered testing strategy provides the ideal framework for building this robust evidence base. It allows researchers to systematically gather data to support the hypothesized similarity between the source and target substances. The European Food Safety Authority (EFSA) has developed comprehensive guidance for using read-across in food and feed safety assessment, outlining a step-by-step workflow that includes problem formulation, substance characterization, source identification, and uncertainty assessment [2]. This structured process ensures clarity, impartiality, and quality, leading to transparent and reliable read-across conclusions.
The integration of data from NAMs (including in chemico, in vitro, and in silico methods) at various tiers of testing is crucial for strengthening read-across justifications. These data can provide mechanistic evidence (e.g., on metabolism or biological activity) that bolsters the argument for similarity beyond mere structural appearance, thereby increasing regulatory confidence [3].
The following table summarizes the key performance differences between a tiered, WoE-based strategy and a traditional, checklist-based testing approach.
Table 1: Performance Comparison of Testing Strategies
| Feature | Tiered Testing & Weight-of-Evidence | Traditional Linear Testing |
|---|---|---|
| Testing Philosophy | Sequential, hypothesis-driven; progresses based on interim results [59]. | Fixed, checklist-based; often follows a prescribed battery of tests. |
| Data Integration | Holistic; integrates all available data (e.g., physicochemical, in silico, in vitro) into a unified conclusion [59] [2]. | Siloed; data from different tests may be considered in isolation. |
| Animal Testing | Significantly reduced by prioritizing non-animal methods and avoiding unnecessary tests [3]. | Typically high reliance, as animal studies are often the default for regulatory requirements. |
| Regulatory Confidence | High, when supported by transparent documentation and mechanistic data [2] [3]. | Variable; can be high for standard data sets but may lack flexibility for novel substances. |
| Efficiency & Cost | Higher initial planning overhead, but lower overall cost and time due to targeted testing [3]. | Predictable but often higher overall cost and resource use due to comprehensive testing requirements. |
| Adaptability | Highly adaptable to novel substances and new scientific knowledge [3]. | Low adaptability; struggles with substances that do not fit standard testing paradigms. |
| Uncertainty Handling | Explicitly assessed and documented at each stage; guides further testing needs [2]. | Often implicit; may not be systematically evaluated or used to guide the testing process. |
A 2011 study applied a tiered testing strategy to classify 20 industrial products with extreme pH, combining the Tier 1 physicochemical measurements with the Tier 2 and Tier 3 in vitro tests detailed in the protocols below [59].
Results: The strategy successfully classified all 20 products and nine of their dilutions without the need for animal testing. The study demonstrated that by combining data from these tiers in a WoE approach, reliable classification and labeling decisions could be made, showcasing a practical application of the methodology summarized in Table 1 [59].
To ensure reproducibility and regulatory acceptance, clearly defined experimental protocols are essential for each tier. Below are detailed methodologies for key tests commonly employed in a tiered strategy for skin and eye irritation/corrosion.
Objective: To identify substances with extreme pH that have a high potential to be corrosive.
Methodology:
Data Interpretation: A substance with pH ≤ 2 and acid reserve > 0.1 mmol/g, or pH ≥ 11.5 and alkaline reserve > 0.1 mmol/g, is considered to have a high corrosive potential and may be classified as such, or proceed to Tier 2 for confirmation.
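The Tier 1 decision rule stated above is simple enough to express directly in code. This sketch uses only the thresholds given in the text (pH ≤ 2 / ≥ 11.5, reserve > 0.1 mmol/g); the function name and interface are illustrative.

```python
# Sketch of the Tier 1 decision rule from the text; thresholds are
# pH <= 2 or >= 11.5 combined with an acid/alkaline reserve > 0.1 mmol/g.

def tier1_corrosive_potential(ph, acid_reserve=0.0, alkaline_reserve=0.0):
    """Return True if the substance has high corrosive potential and should
    proceed to Tier 2 for confirmation. Reserves are in mmol/g."""
    if ph <= 2.0 and acid_reserve > 0.1:
        return True
    if ph >= 11.5 and alkaline_reserve > 0.1:
        return True
    return False

print(tier1_corrosive_potential(1.5, acid_reserve=0.5))        # strong acid -> True
print(tier1_corrosive_potential(12.0, alkaline_reserve=0.05))  # low reserve -> False
print(tier1_corrosive_potential(7.0))                          # neutral -> False
```

Note that an extreme pH alone does not trigger classification here; the reserve measurement distinguishes genuinely corrosive substances from those with extreme pH but little buffering capacity.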
Objective: To identify substances that cause reversible skin damage (irritation) or irreversible skin damage (corrosion).
Methodology:
Prediction Model:
Objective: To assess the potential of a substance to cause eye irritation.
Methodology:
Prediction Model: The IS is used to classify substances into categories such as severe irritant, moderate irritant, or non-irritant, which can be extrapolated to the GHS eye irritation categories.
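A classification step of this kind can be sketched as a threshold mapping from the irritation score (IS) to a category. The cut-off bands below are those commonly cited for the Luepke HET-CAM scoring scheme and are an assumption here, not values taken from the text.

```python
# Sketch: mapping a HET-CAM irritation score (IS, range 0-21) to a category.
# The band boundaries are assumed (commonly cited Luepke scheme), not sourced
# from this document.

def classify_het_cam(irritation_score):
    if irritation_score < 1.0:
        return "non-irritant"
    if irritation_score < 5.0:
        return "slight irritant"
    if irritation_score < 9.0:
        return "moderate irritant"
    return "severe irritant"

print(classify_het_cam(0.4))   # non-irritant
print(classify_het_cam(12.3))  # severe irritant
```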
The following diagram illustrates the logical flow and decision-making process within a generalized tiered testing strategy for skin and eye irritation/corrosion assessment.
The successful implementation of a tiered testing strategy relies on a suite of reliable reagents and models. The table below details key materials and their functions in the featured experimental protocols.
Table 2: Key Research Reagents and Materials for Tiered Testing
| Item | Function in Experimental Protocol | Application Tier |
|---|---|---|
| EpiDerm Model | A reconstructed human epidermis model used to assess skin corrosion and irritation by measuring cell viability post-exposure (e.g., via MTT assay) [59]. | Tier 2 (Skin) |
| HET-CAM Assay | The Hen's Egg Test on the Chorioallantoic Membrane; used to evaluate eye irritation potential by observing vascular damage (hemorrhage, lysis, coagulation) [59]. | Tier 3 (Eye) |
| MTT Reagent | (3-[4,5-Dimethylthiazol-2-yl]-2,5-diphenyltetrazolium bromide); a yellow tetrazole that is reduced to purple formazan in viable cells, allowing for quantitative measurement of cell viability [59]. | Tier 2 |
| OECD QSAR Toolbox | A software application used to fill data gaps by grouping chemicals into categories and predicting properties from structurally similar substances (source substances) [3]. | Read-Across |
| CEFIC AMBIT Tool | An open-source software for chemical structure management and similarity searching, supporting the identification of suitable source substances for read-across [3]. | Read-Across |
| Tox21/ ToxCast Data | High-throughput screening data from US federal agencies providing bioactivity profiles for thousands of chemicals, useful for mechanistic support in read-across [3]. | WoE Analysis |
The adoption of tiered testing strategies, firmly grounded in a weight-of-evidence framework, represents a paradigm shift in chemical safety assessment. As demonstrated through comparative analysis and experimental data, this approach offers a more efficient, ethical, and scientifically robust pathway to hazard identification and classification compared to traditional linear testing. Its synergy with read-across methodologies is particularly powerful, providing a structured means to leverage existing data and reduce uncertainty. For researchers and drug development professionals, mastering these strategies is no longer optional but essential for navigating the evolving landscape of regulatory science, which increasingly prioritizes the principles of the 3Rs (Replacement, Reduction, and Refinement of animal testing) and the integration of New Approach Methodologies.
In chemical safety assessments, particularly for the evaluation of food and feed, read-across has emerged as a pivotal New Approach Methodology (NAM). It enables the prediction of a target substance's properties by using data from structurally and biologically similar source substances [2] [3]. The regulatory landscape is evolving to support this approach, with the European Food Safety Authority (EFSA) issuing definitive guidance to standardize its application. This guide outlines the best practices for documenting read-across comparisons, ensuring they are transparent, robust, and regulator-ready.
Read-across is a data gap-filling strategy founded on the principle that chemically similar substances are expected to exhibit similar biological properties and toxicological effects [3]. Its application is gaining global momentum, driven by the goal of reducing reliance on animal testing while maintaining high safety standards [2] [3].
The regulatory expectation for transparency and impartiality is paramount. EFSA's guidance emphasizes a structured workflow that must be clearly documented to justify the similarity between the target and source substances and to account for any uncertainties [2]. Furthermore, recent policy shifts, such as the one from the U.S. National Institutes of Health, underscore a broader movement to prioritize non-animal methodologies, making the mastery of read-across documentation increasingly essential for researchers and regulatory affairs professionals [3].
Adhering to the following core principles in your documentation is critical for building regulatory confidence and facilitating acceptance.
When creating comparison guides that evaluate a target substance against source analogues, a systematic and well-documented approach is necessary. The following workflow, adapted from regulatory guidance, ensures a thorough evaluation.
Effective presentation of complex data is crucial for transparency. Adhering to visualization best practices ensures that your comparisons are clear, accessible, and honest.
Structured tables are ideal for presenting precise quantitative comparisons. The table below exemplifies how to clearly compare experimental data for a target substance and its analogues.
Table: Comparative Subacute Toxicity Data for Target Substance X and Source Analogues
| Substance | Molecular Weight (g/mol) | Log P | LD50 (mg/kg) | NOAEL (mg/kg/day) | Key Target Organ |
|---|---|---|---|---|---|
| Target X | 245.3 | 2.1 | Data Gap | Data Gap | Data Gap |
| Source A | 231.2 | 1.9 | 550 | 25 | Liver |
| Source B | 259.4 | 2.3 | 620 | 30 | Liver |
| Source C | 248.1 | 3.5 | 480 | 15 | Kidney |
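One illustrative way to fill Target X's NOAEL data gap from the table above is to weight each source analogue by its closeness in a simple descriptor space (molecular weight and log P). The weighting scheme and descriptor scaling below are illustrative assumptions, not a prescribed regulatory method; in practice the extrapolation and its uncertainty must be justified case by case.

```python
import math

# Sketch: similarity-weighted gap filling for Target X using the source
# analogues from the table. Weighting scheme and MW scaling are assumptions.

target = {"mw": 245.3, "logp": 2.1}
sources = [
    {"name": "A", "mw": 231.2, "logp": 1.9, "noael": 25.0},
    {"name": "B", "mw": 259.4, "logp": 2.3, "noael": 30.0},
    {"name": "C", "mw": 248.1, "logp": 3.5, "noael": 15.0},
]

def distance(t, s, mw_scale=100.0):
    # Scale MW so both descriptors contribute comparably (assumed scale).
    return math.hypot((t["mw"] - s["mw"]) / mw_scale, t["logp"] - s["logp"])

weights = [1.0 / (distance(target, s) + 1e-6) for s in sources]
predicted = sum(w * s["noael"] for w, s in zip(weights, sources)) / sum(weights)
print(f"Similarity-weighted NOAEL estimate for Target X: {predicted:.1f} mg/kg/day")
```

Note how Source C, despite its similar molecular weight, contributes least because its log P diverges most from the target; such descriptor choices directly shape the prediction and must be documented transparently.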
A visualized workflow diagram helps readers quickly understand the complex, multi-step experimental methodology.
A robust read-across assessment relies on specific databases and tools to identify analogues and gather supporting data.
Table: Essential Research Tools for Read-Across Assessments
| Tool Name | Type | Primary Function in Read-Across |
|---|---|---|
| OECD QSAR Toolbox | Software | Automates the identification of structural analogues and metabolic pathways; profiles chemicals for potential toxicological effects [3]. |
| EPA AIM Tool | Database/Algorithm | Implements a systematic methodology to identify and rank chemical analogues based on structure and properties [3]. |
| eChemPortal | Database | Provides a single point of access to chemical properties and toxicity data collected by various international agencies [3]. |
| CompTox Chemicals Dashboard | Database | Provides access to a wealth of EPA-curated data, including physicochemical properties, in vitro bioassay data, and in vivo toxicity data [3]. |
| Tox21/ToxCast | Database | Provides high-throughput screening in vitro data for thousands of chemicals, useful for generating mechanistic evidence to support similarity [3]. |
Mastering the documentation best practices for read-across is fundamental for its successful application in chemical safety assessment. By adhering to a structured workflow, presenting data with clarity and transparency, and leveraging the available research tools, scientists can build compelling, defensible, and regulator-accepted cases. This not only advances the adoption of New Approach Methodologies but also contributes to a more efficient and humane safety evaluation ecosystem for food, feed, and drug development.
Read-across is a cornerstone technique in modern chemical safety assessment, used to predict the properties of a target substance by using data from similar, well-characterized source substances [2]. As regulatory bodies increasingly accept this approach to reduce animal testing, the focus has shifted to establishing robust validation frameworks that can objectively measure its prediction accuracy. A successfully validated read-across hypothesis must demonstrate that the chemical and toxicological similarities between source and target substances are sufficient to provide reliable predictions for regulatory decision-making [3].
The validation of read-across is inherently complex because it requires evaluating not just the structural similarity between chemicals, but also their mechanistic biological properties and the uncertainty associated with extrapolating data across substances. Leading regulatory bodies have developed structured frameworks to guide this process, including the European Food Safety Authority's (EFSA) 2025 guidance for food and feed safety and the European Chemicals Agency's (ECHA) Read-Across Assessment Framework (RAAF) for industrial chemicals [33]. These frameworks provide systematic approaches for demonstrating the scientific validity of read-across predictions, though they differ in their specific requirements and implementation strategies.
The landscape of read-across validation is shaped by several influential frameworks that establish standards for measuring prediction accuracy. The EFSA 2025 guidance, ECHA's RAAF, and the community-driven Good Read-Across Practice (GRAP) principles represent the most comprehensive approaches currently available [33]. Each framework brings distinct priorities and methodologies to the validation challenge, reflecting their specific regulatory contexts and scientific philosophies.
Table 1: Comparison of Major Read-Across Validation Frameworks
| Framework | Primary Regulatory Context | Core Structure | Uncertainty Assessment | NAM Integration |
|---|---|---|---|---|
| EFSA 2025 Guidance | Food and feed safety | Seven-step, uncertainty-anchored workflow | Systematic uncertainty analysis with tolerance evaluation | Active embedding of NAMs and AOP reasoning |
| ECHA RAAF | Industrial chemicals (REACH) | Six scenario-based assessment elements | Standardized regulatory scrutiny for evidence requirements | Evaluator's rubric focusing on evidence delivery |
| GRAP Principles | Cross-domain application | Conceptual best practices | Emphasis on explicit uncertainty characterization | Strategic use of NAMs and mechanistic plausibility |
The EFSA framework offers a transparent "how-to" template that guides applicants through a seven-step workflow, actively embedding New Approach Methodologies (NAMs) and adverse outcome pathway (AOP) reasoning to improve robustness [2] [33]. In contrast, ECHA's RAAF operates as an evaluator's rubric that delineates what evidence must be delivered without prescribing how to construct the dossier. GRAP supplies the conceptual foundation for both, emphasizing mechanistic plausibility, exhaustive analogue selection, and explicit uncertainty characterization [33].
Measuring the accuracy of read-across predictions requires both qualitative and quantitative metrics that can be consistently applied across different chemical categories and toxicological endpoints. Regulatory experience under REACH demonstrates that dossier quality and acceptance rates rise markedly when RAAF criteria are met, providing one indirect metric of validation success [33]. However, more direct measures of prediction accuracy depend on the specific type of read-across being performed.
Table 2: Validation Metrics Across Read-Across Approaches
| Validation Aspect | Analogue Approach | Grouping Approach | Mechanistic Read-Across |
|---|---|---|---|
| Structural Validation | Pairwise similarity metrics | Category consistency evaluation | Structural alerts for mechanism |
| Toxicological Concordance | Endpoint-specific bridging | Trend analysis across category | AOP key event concordance |
| Uncertainty Quantification | Source-to-target extrapolation | Intracategory variability | Mechanistic coverage assessment |
| NAM Integration | Targeted in vitro assays | Battery testing across category | Pathway-based testing systems |
For analogue approaches, which use data from one or a few source substances, validation focuses on demonstrating sufficient similarity for the specific endpoint being assessed [3]. This requires careful documentation of structural differences and their potential toxicological significance. In grouping approaches, which apply data from a larger set of related substances, validation involves showing that the known toxicological properties follow a predictable trend that can be used to infer the properties of the target substance [3]. The emerging paradigm of mechanistic read-across places the greatest emphasis on biological pathway similarity, using AOP frameworks to validate predictions based on shared modes of action [33].
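The trend-analysis flavour of the grouping approach can be sketched as a regression of the endpoint against a category-defining descriptor, with the fitted trend used to interpolate the data-poor target. The descriptor (carbon chain length) and endpoint values below are hypothetical.

```python
from statistics import mean

# Sketch: trend analysis across a chemical category. A property is regressed
# against a descriptor (here, carbon chain length) and the fitted trend is
# used to interpolate the data-poor target. All values are hypothetical.

chain_length = [4, 6, 8, 10, 12]          # category members with data
log_toxicity = [1.2, 1.6, 2.1, 2.5, 3.0]  # measured endpoint (hypothetical)

def fit_line(xs, ys):
    """Ordinary least squares slope and intercept."""
    mx, my = mean(xs), mean(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

slope, intercept = fit_line(chain_length, log_toxicity)
target_chain = 9  # data-poor target sits inside the category trend
print(f"Interpolated value at C{target_chain}: {slope * target_chain + intercept:.2f}")
```

Interpolation within the category range, as here, carries less uncertainty than extrapolation beyond it, which is one reason category consistency evaluation is central to validating grouping approaches.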
The EFSA 2025 guidance establishes a comprehensive seven-step workflow designed to ensure transparent and scientifically justified read-across within a weight-of-evidence framework [2] [33]. This protocol provides a standardized methodology for validating read-across predictions in food and feed safety assessments.
Step 1: Problem Formulation - Clearly define the data gap being addressed, the specific endpoint requiring prediction, and the purpose of the read-across within the broader risk assessment context. This includes specifying the uncertainty tolerance for the assessment [2].
Step 2: Target Substance Characterization - Thoroughly characterize the target substance's chemical structure, physicochemical properties, and potential metabolic pathways. This establishes the basis for identifying appropriate source substances [2].
Step 3: Source Substance Identification - Systematically identify potential source substances using structural similarity searches, database mining, and category definition. EFSA emphasizes using established tools like the OECD QSAR Toolbox, eChemPortal, and EPA's Analog Identification Methodology (AIM) Tool [3] [33].
Step 4: Source Substance Evaluation - Critically evaluate the quality and relevance of data available for source substances, considering factors such as test method reliability, dose-response relationships, and mechanistic information [2].
Step 5: Data Gap Filling - Use data from source substances to predict the target substance's properties, providing clear justification for the extrapolation. This may involve qualitative predictions, quantitative interpolation, or trend analysis [2].
Step 6: Uncertainty Assessment - Systematically evaluate uncertainties arising from chemical similarities, mechanistic understanding, data quality, and methodological approaches. EFSA provides specific templates for uncertainty assessment [2] [33].
Step 7: Conclusion and Reporting - Document the read-across hypothesis, supporting evidence, uncertainty characterization, and final conclusion in a transparent and reproducible manner [2].
The following diagram illustrates EFSA's structured workflow for validating read-across predictions:
EFSA 7-Step Workflow Diagram
Measuring the predictive accuracy of read-across requires rigorous statistical approaches that account for variability in chemical structures, biological responses, and experimental systems. Cross-validation (CV) procedures are particularly valuable for assessing model performance, especially when dealing with small-to-medium-sized datasets common in toxicology [63].
The fundamental challenge in statistical validation lies in distinguishing true predictive superiority from random variation or methodological artifacts. A robust framework for comparing model accuracy must control for multiple factors that can influence outcomes, including the number of CV folds (K), the number of repetitions (M), sample size, and intrinsic data properties [63].
Recommended Statistical Validation Protocol:
Dataset Preparation - Ensure balanced representation across chemical categories and toxicological endpoints. Stratify data to maintain consistent proportions of different activity classes across training and test sets.
Cross-Validation Setup - Select appropriate K-fold structure based on dataset size. For small datasets (N<100), use leave-one-out or leave-many-out approaches to minimize variance. For larger datasets, 5-fold or 10-fold CV typically provides stable estimates.
Model Training - Train read-across models using consistent parameters across all folds. Document all assumptions, similarity metrics, and weighting schemes applied.
Performance Evaluation - Calculate accuracy, sensitivity, specificity, and concordance metrics for each fold. Use multiple performance indicators to capture different aspects of predictive capability.
Significance Testing - Apply appropriate statistical tests that account for dependencies in CV results. Avoid commonly misused procedures like simple paired t-tests on K × M accuracy scores, as these can produce misleading p-values due to violation of independence assumptions [63].
Uncertainty Quantification - Calculate confidence intervals for performance metrics using bootstrapping or other resampling techniques. Document sources of uncertainty and their potential impact on predictions.
This protocol helps mitigate the reproducibility crisis in machine learning-based toxicology by providing standardized procedures for comparing read-across models and quantifying their predictive accuracy [63].
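One published remedy for the independence problem noted in the significance-testing step is the Nadeau-Bengio "corrected resampled t-test", which inflates the variance term to account for the overlap between training sets in repeated CV. The sketch below applies it to per-fold accuracy differences between two models; the scores, fold counts, and sample sizes are hypothetical.

```python
import math
from statistics import mean, stdev

# Sketch: comparing two read-across models on M x K repeated cross-validation
# scores. A naive paired t-test over the K*M per-fold differences violates
# independence; the Nadeau-Bengio corrected resampled t-test inflates the
# variance term (1/n + n_test/n_train) to compensate. Scores are hypothetical.

def corrected_resampled_t(diffs, n_train, n_test):
    """t statistic over per-fold score differences with variance correction."""
    n = len(diffs)  # K * M folds in total
    sd = stdev(diffs)
    corrected_var = (1.0 / n + n_test / n_train) * sd ** 2
    return mean(diffs) / math.sqrt(corrected_var)

# Per-fold accuracy differences (model A - model B), 2 repeats of 5-fold CV.
diffs = [0.03, 0.05, 0.02, 0.04, 0.01, 0.04, 0.03, 0.06, 0.02, 0.05]
t_stat = corrected_resampled_t(diffs, n_train=80, n_test=20)
print(f"corrected t = {t_stat:.2f}")
```

Because the correction widens the effective variance, it yields more conservative p-values than the naive test, which is precisely the behavior needed to avoid over-claiming one model's superiority.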
Implementing robust validation frameworks for read-across requires access to specialized databases, software tools, and experimental resources. The following table compiles essential research reagents and their applications in measuring prediction accuracy.
Table 3: Essential Research Reagents for Read-Across Validation
| Resource Category | Specific Tools/Platforms | Primary Function in Validation | Regulatory Recognition |
|---|---|---|---|
| Chemical Database | OECD QSAR Toolbox, eChemPortal, CompTox Chemical Dashboard | Structural similarity assessment, category formation, analogue identification | High (referenced in EFSA guidance) |
| In Silico Tools | CEFIC AMBIT tool, EPA AIM Tool, OECD QSAR Toolbox | Automated analogue identification, chemical category development | Medium to High (accepted with justification) |
| Experimental Data Platforms | Tox21, ToxCast, PubChem | Access to high-throughput screening data for mechanistic support | Growing (particularly for NAMs) |
| Toxicogenomics Resources | CEBS, Comparative Toxicogenomics Database | Pathway-based similarity assessment, mechanistic reasoning | Emerging (for advanced read-across) |
| Uncertainty Assessment | EFSA uncertainty template, RAAF assessment elements | Systematic evaluation of uncertainty sources | High (required in submissions) |
These resources provide the foundational infrastructure for constructing and validating read-across hypotheses. The OECD QSAR Toolbox is particularly valuable for identifying structurally similar compounds and forming chemical categories, while platforms like Tox21 and ToxCast provide mechanistic data from high-throughput screening assays that can strengthen read-across justifications [3]. The EFSA uncertainty template offers a standardized approach for documenting and evaluating uncertainties, which is crucial for regulatory acceptance [2].
The performance of read-across predictions varies significantly across different chemical classes and toxicological endpoints. Retrospective analyses of regulatory decisions provide valuable insights into the factors that influence prediction accuracy and regulatory acceptance.
A comprehensive review of 72 ECHA Final Decisions on Compliance Checks and Testing Proposal Evaluations covering 24 major surfactant groups identified key drivers of regulatory acceptance or rejection [18]. The presence or absence of detailed composition information emerged as a critical factor, with complete characterization significantly increasing acceptance rates. Structural similarity considerations and the availability of appropriate bridging studies also strongly influenced outcomes [18].
Notably, this analysis found no examples of read-across acceptance based solely on non-animal New Approach Methodologies (NAMs), highlighting the ongoing challenge of validating these emerging approaches for regulatory purposes [18]. This suggests that while NAMs show great promise for enhancing read-across predictions, their validation as standalone tools for regulatory decision-making requires further development and standardization.
All read-across predictions contain inherent uncertainties that must be characterized and evaluated as part of the validation process. The EFSA guidance emphasizes assessing whether overall uncertainty can be reduced to tolerable levels through standardized approaches and additional data from NAMs [2].
Table 4: Uncertainty Sources in Read-Across Validation
| Uncertainty Category | Impact on Prediction Accuracy | Common Mitigation Strategies |
|---|---|---|
| Structural Uncertainty | Small structural differences can significantly impact toxicological behavior | Use multiple similarity metrics, consider functional group equivalence |
| Mechanistic Uncertainty | Similar structures may act through different biological pathways | Incorporate pathway-based assays, ADME comparison |
| Data Quality Uncertainty | Inconsistent test methods or reporting affect reliability | Apply Klimisch scoring, use standardized protocols |
| Extrapolation Uncertainty | Quantitative differences despite qualitative similarity | Use trend analysis, establish quantitative structure-activity relationships |
| Coverage Uncertainty | Gaps in available data for critical endpoints | Implement tiered testing strategies, read-across within categories |
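The mitigation "use multiple similarity metrics" in Table 4 can be made concrete: different coefficients computed on the same fingerprints can rank the same analogues differently, and disagreement between them is itself a signal of structural uncertainty. A minimal sketch using set-based fingerprints (the fragment sets are hypothetical):

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) coefficient on fingerprint bit sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

def dice(a, b):
    """Dice coefficient; weights shared bits more heavily than Tanimoto."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

# Hypothetical structural-fragment fingerprints for a target and two analogues
target   = {"C-OH", "C=O", "ring6", "CH3"}
source_1 = {"C-OH", "C=O", "ring6"}
source_2 = {"C-OH", "CH3", "NH2", "ring6"}

for name, fp in [("source_1", source_1), ("source_2", source_2)]:
    print(name, round(tanimoto(target, fp), 2), round(dice(target, fp), 2))
```

In practice the bit sets would come from a fingerprinting tool rather than hand-coded fragments; the point is that reporting several coefficients side by side guards against over-reliance on any single similarity definition.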
The convergence of regulatory frameworks from EFSA, ECHA, and GRAP principles signals an emerging international consensus on what constitutes defensible read-across [33]. This harmonization enables more consistent validation approaches across regulatory domains and facilitates the development of standardized metrics for assessing prediction accuracy.
The validation of read-across predictions has evolved from expert-driven judgment based largely on structural analogy to a rigorously documented, mechanistically informed process supported by structured frameworks [33]. The EFSA 2025 guidance, ECHA's RAAF, and GRAP principles collectively provide comprehensive approaches for measuring and demonstrating prediction accuracy, though challenges remain in standardizing validation metrics across chemical categories and regulatory jurisdictions.
The future of read-across validation lies in further developing and standardizing New Approach Methodologies that can reduce uncertainty and improve prediction accuracy. As noted in the comparative appraisal of frameworks, harmonizing EFSA's procedural roadmap with RAAF's evaluative rigor and GRAP's best-practice ethos can mainstream reliable, animal-saving read-across across regulatory domains [33]. This convergence, reinforced by OECD initiatives and NAM-enhanced case studies, points toward increasingly sophisticated validation approaches that leverage artificial intelligence, pathway-based reasoning, and integrated testing strategies to ensure chemical safety while reducing animal testing.
The scientific community continues to develop more robust statistical methods for quantifying prediction accuracy, addressing issues such as cross-validation variability and significance testing limitations that have historically complicated model comparison [63]. Through continued refinement of validation frameworks and their application across diverse chemical spaces, read-across will solidify its position as a scientifically rigorous and regulatory-accepted approach for chemical safety assessment.
Within the paradigm of modern chemical safety research, the reliance on New Approach Methodologies (NAMs) has become imperative to overcome the limitations of traditional animal testing, including ethical concerns, high costs, and prolonged timelines [64] [14]. Read-across and Quantitative Structure-Activity Relationship (QSAR) models are two pivotal in silico NAMs used for predicting the toxicological properties of data-poor chemicals. This guide provides a comparative analysis of these methodologies, grounded in experimental data and their application within integrated chemical safety assessments.
Read-across is a data gap-filling technique used to predict the toxicological properties of a target substance by using existing information from structurally and mechanistically similar source substances [65] [2]. The core hypothesis is that similarity in chemical structure implies similarity in biological activity and toxicological effects. Its application is central to regulatory submissions under frameworks like the EU's REACH regulation [66] [67].
QSAR is a model-dependent methodology that relates a quantitative numerical description of a chemical's structure (descriptors) to a specific toxicological or biological endpoint through a mathematical model [68] [69]. The robustness of a QSAR model is governed by the ratio of training compounds to modeling descriptors, ideally not lower than 5:1, to avoid overfitting [68].
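The 5:1 compounds-to-descriptors rule of thumb can be enforced programmatically before any model is fit. A minimal ordinary least-squares sketch, with hypothetical descriptors and a noiseless toy endpoint (real QSAR workflows add descriptor selection and external validation on top of this):

```python
import numpy as np

def fit_qsar(X, y, min_ratio=5.0):
    """Ordinary least-squares QSAR with a compounds-to-descriptors ratio check."""
    n_compounds, n_descriptors = X.shape
    if n_compounds / n_descriptors < min_ratio:
        raise ValueError(
            f"ratio {n_compounds}/{n_descriptors} is below {min_ratio}:1 "
            "-> overfitting risk")
    X1 = np.hstack([X, np.ones((n_compounds, 1))])   # add intercept column
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

# Hypothetical data: 12 compounds, 2 descriptors (e.g. logP, MW/100)
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(12, 2))
y = 0.8 * X[:, 0] - 0.3 * X[:, 1] + 1.0              # noiseless toy endpoint
coef = fit_qsar(X, y)
print(np.round(coef, 3))                             # slope, slope, intercept
```

With 12 compounds and 3 descriptors the ratio drops to 4:1 and the guard raises, which is exactly the "curse of dimensionality" failure mode the 5:1 heuristic is meant to avoid.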
While read-across and QSAR are foundational, the broader ecosystem of NAMs extends well beyond these two approaches.
Table 1: Fundamental Characteristics of Read-Across and QSAR
| Feature | Read-Across | QSAR |
|---|---|---|
| Core Principle | Infers properties from similar, data-rich "source" analogues [65] [2] | Derives properties from a mathematical model based on structural descriptors [68] |
| Basis of Prediction | Chemical and biological similarity [70] [67] | Statistical correlation between descriptors and an endpoint [68] |
| Typical Output | Qualitative or semi-quantitative prediction; can fill multiple endpoints simultaneously [2] | Quantitative prediction for a single, specific endpoint [68] |
| Key Strength | Does not require a formal training model; applicable to small datasets and multiple endpoints [68] [2] | Provides a transparent, quantitative relationship between structure and activity [68] |
| Key Limitation | Can be subjective; quantitative interpretation of feature contributions is challenging [68] [66] | Risk of overfitting, especially with small datasets ("curse of dimensionality") [68] |
The fundamental difference in approach leads to distinct workflows for read-across and QSAR, which are increasingly integrated into consolidated frameworks.
Diagram 1: Comparative Workflows of QSAR and Read-Across. The workflows often converge in modern hybrid approaches.
A standard protocol for developing a validated QSAR model, as exemplified in carcinogenicity prediction studies [68], involves:
A systematic read-across assessment, as outlined by EFSA and the US EPA, follows these steps [2] [70]:
A 2025 study directly compared QSAR and advanced read-across-derived models for predicting carcinogenicity potency (Oral Slope Factor and Inhalation Slope Factor) [68]. The results demonstrate the evolution and integration of these methodologies.
Table 2: Model Performance in Predicting Carcinogenicity Potency [68]
| Model Type | Key Characteristics | Reported Performance (External Validation) |
|---|---|---|
| Conventional QSAR | Relies solely on structural/physicochemical descriptors. | Baseline performance (Reference) |
| q-RASAR | Integrates QSAR descriptors with read-across-derived similarity information. | Enhanced external predictivity compared to QSAR. |
| Hybrid ARKA | Uses a supervised dimensionality reduction technique (ARKA) on QSAR descriptors. | Improved robustness and predictivity. |
| ARKA-RASAR | Combines ARKA framework with read-across (RASAR) descriptors. | Best performance: Enhanced internal validation and external predictivity. |
| Stacking Regression | Combines predictions from multiple model types using machine learning. | Highest overall performance and reliability. |
The study concluded that the ARKA-RASAR approach mitigated the slight lowering of internal validation performance sometimes associated with conventional q-RASAR models, achieving superior results [68].
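The stacking idea in Table 2 can be sketched with scikit-learn. The base learners below (a ridge model standing in for a QSAR-style learner and a k-nearest-neighbours model standing in for similarity-based read-across) and all data are illustrative, not those of the cited study:

```python
import numpy as np
from sklearn.ensemble import StackingRegressor
from sklearn.linear_model import Ridge, RidgeCV
from sklearn.neighbors import KNeighborsRegressor

# Hypothetical descriptor matrix and potency endpoint for 30 compounds
rng = np.random.default_rng(1)
X = rng.normal(size=(30, 4))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=30)

# A meta-learner (RidgeCV) combines out-of-fold predictions from the
# two base learners, mirroring the stacking-regression row of Table 2.
stack = StackingRegressor(
    estimators=[("qsar_like", Ridge(alpha=1.0)),
                ("readacross_like", KNeighborsRegressor(n_neighbors=3))],
    final_estimator=RidgeCV(),
)
stack.fit(X, y)
print(round(stack.score(X, y), 2))   # in-sample R^2 of the stacked model
```

Because the meta-learner sees only cross-validated base predictions during fitting, stacking can exceed either base model without simply memorizing the training set.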
The Generalized Read-Across (GenRA) tool was used to investigate the impact of different similarity measures on predicting repeat-dose toxicity from the ToxRefDB database [67].
This evidence strongly indicates that integrating biological data with chemical similarity enhances the performance and confidence in read-across predictions, particularly for organ-specific toxicities.
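The core GenRA-style calculation is a similarity-weighted average of source-substance activities. A minimal sketch, with hypothetical similarity scores and activity calls:

```python
def read_across_predict(similarities, activities):
    """Similarity-weighted activity prediction (GenRA-style read-across).

    similarities: similarity of each source analogue to the target (0..1)
    activities:   binary activity calls (1 active / 0 inactive) per source
    Returns a similarity-weighted probability of activity for the target.
    """
    total = sum(similarities)
    if total == 0:
        raise ValueError("no similar source substances available")
    return sum(s * a for s, a in zip(similarities, activities)) / total

# Hypothetical nearest analogues for a data-poor target chemical
similarities = [0.9, 0.7, 0.4]   # e.g. Tanimoto similarity to the target
activities   = [1, 1, 0]         # observed activity of each source substance
p = read_across_predict(similarities, activities)
print(round(p, 2))  # → 0.8
```

Replacing the structural similarities with bioactivity-based or hybrid similarities, as in the ToxRefDB investigation above, changes only the inputs, not the weighting scheme.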
The application and advancement of these in silico methods rely on specific software tools and databases.
Table 3: Key Research Tools and Resources
| Tool / Resource | Function | Relevance to Method |
|---|---|---|
| VEGA Platform | A freely available software platform integrating multiple (Q)SAR models for various endpoints like persistence, bioaccumulation, and toxicity [69]. | QSAR, Read-Across (via KNN-Read Across models) |
| OECD QSAR Toolbox | A software designed to fill data gaps by grouping chemicals into categories and supporting read-across predictions. It integrates multiple data sources and methodologies [64]. | Read-Across, Category Formation |
| EPI Suite | A suite of physical/chemical property and environmental fate estimation programs, often used for initial chemical profiling [69]. | QSAR |
| GenRA (genra-py) | A Python package for performing automated, generalized read-across predictions based on chemical and biological similarities [67]. | Read-Across |
| iRA (intelligent Read Across) | A Python-based tool for similarity-based read-across predictions, including optimization and feature importance analysis. Validated on nanotoxicity data [34]. | Read-Across |
| ToxCast/Tox21 Database | A large repository of high-throughput screening bioactivity data for thousands of chemicals, used to inform biological similarity [64] [67]. | Read-Across, NAM Integration |
Both QSAR and read-across are indispensable tools in the NAMs toolkit for chemical safety assessment. The choice between them is not mutually exclusive. QSAR provides a robust, quantitative framework for endpoint prediction when sufficient training data exists, while read-across offers flexibility for data-poor chemicals and complex endpoints. The most powerful and modern approach, as evidenced by recent experimental data, is their integration. Frameworks like ARKA-RASAR and tools that incorporate biological data like GenRA demonstrate that hybrid models leverage the strengths of both methodologies, leading to more predictive, reliable, and regulatory-acceptable outcomes for protecting human health and the environment.
This guide objectively compares the regulatory benchmarks for chemical safety assessment, with a specific focus on the application of read-across approaches by major international agencies: the European Food Safety Authority (EFSA), the U.S. Environmental Protection Agency (EPA), and counterparts in key Asian markets. The comparison is framed within the context of advancing read-across methodologies in chemical safety research.
The read-across approach is a method used in chemical risk assessment to predict the toxicological properties of a data-poor target substance by using known information from one or more data-rich source substances that are structurally and mechanistically similar [9]. It remains one of the most common alternatives to animal testing for addressing data gaps.
EFSA: In 2025, EFSA's Scientific Committee published comprehensive guidance on the use of read-across for chemical safety assessment in food and feed [9] [2]. This guidance provides a structured workflow and emphasizes the integration of New Approach Methodologies (NAMs) to improve the robustness of the assessment. The framework is designed to be systematic and transparent, focusing on reducing uncertainty.
EPA: The EPA utilizes read-across and related approaches within its broader risk assessment paradigm. While the sources reviewed here do not describe a standalone EPA read-across guidance document comparable to EFSA's 2025 release, the agency employs benchmark dose (BMD) modeling as a preferred approach for analyzing toxicological dose-response data [71]. The EPA's BMDS software is a key tool in this process, and efforts to harmonize methods with European agencies have been noted [71].
Asian Agencies: While Japan implements GHS through Japanese Industrial Standards (JIS), which align closely with GHS principles in a more voluntary framework [72], China's implementation through its GB standards system is mandatory and integrates with broader chemical registration and notification requirements [72]. The "building block" nature of GHS has led to significant regional variations in implementation, creating a complex regulatory landscape for multinational research and development [72].
Table 1: Comparison of Key Regulatory Frameworks and Read-Across Implementation
| Agency / Region | Primary Regulatory Framework | Formal Read-Across Guidance | Approach to New Methodologies (NAMs) |
|---|---|---|---|
| EFSA (European Union) | CLP Regulation, REACH [72] [9] | Yes (2025 Comprehensive Guidance) [9] [2] | Explicitly encourages integration of NAMs to support read-across and reduce uncertainty [9]. |
| EPA (United States) | OSHA Hazard Communication Standard (HCS) [72] | Not specified in the sources reviewed | Utilizes advanced tools like Benchmark Dose Software (BMDS); focus on workplace hazards [71] [72]. |
| Japan | Japanese Industrial Standards (JIS Z 7252/7253) [72] | Not specified in the sources reviewed | Voluntary adoption framework; balances international alignment with national priorities [72]. |
| China | GB (Guobiao) National Standards [72] | Not specified in the sources reviewed | Mandatory implementation; often requires additional data beyond basic GHS for complex mixtures [72]. |
Table 2: Technical and Operational Benchmarks in Risk Assessment
| Benchmark Category | EFSA | EPA | Asian Agencies (Examples) |
|---|---|---|---|
| Dermal Absorption Default Value | 10% (under specific conditions), or tiered approach [73] | Tiered approach; "Triple Pack" method (in vitro human/rat & in vivo rat) [73] | Varies by country; often relies on international guidelines (OECD, EPA, EFSA) [73]. |
| Key Software Tools | PROAST (for BMD modeling) [71] | Benchmark Dose Software (BMDS), Electronic Reporting Tool (ERT) [71] [74] | Not specified in the sources reviewed. |
| Emission Performance Evaluation | Not applicable (Food Safety focus) | National PM2.5 Performance Evaluation Program [75] | Not applicable (Food Safety focus) |
| GHS Implementation Specificity | Unique EU Hazard (EUH) statements; comprehensive environmental hazards [72]. | Excludes environmental hazards from OSHA HCS; focuses on workplace safety [72]. | China: different flash point thresholds [72]; Canada (WHMIS): bilingual (En/Fr) SDS and labels, unique hazard classes [72]. |
EFSA's 2025 guidance details a structured, step-by-step workflow for conducting a read-across assessment [9]. The methodology is designed to be transparent and systematic, ensuring reliable conclusions.
Workflow Steps:
For assessing dermal absorption, a key endpoint in pesticide and chemical risk assessment, international guidelines recommend specific experimental methodologies [73].
Core Methodology:
Table 3: Key Reagents and Materials for Regulatory Safety Experiments
| Item/Solution | Functional Role in Experiment |
|---|---|
| Diffusion Cell | Core apparatus to measure skin penetration rates; consists of donor and receptor chambers [73]. |
| Excised Human/Rat Skin | The biological membrane used as a barrier to study the percutaneous absorption of test substances [73]. |
| Physiological Receptor Solution | Aqueous solution (e.g., saline pH 7.4) in the receptor chamber to maintain tissue viability and dissolve penetrated test material [73]. |
| Polyethylene Glycol Oleyl Ether Solution | Used as a receptor fluid for non-polar test substances to increase their solubility and ensure accurate measurement [73]. |
| Benchmark Dose Software (BMDS) | EPA-developed software for performing benchmark dose modeling on toxicological dose-response data [71]. |
| PROAST Software | Software developed by the Dutch National Institute for Public Health and the Environment (RIVM), used for BMD modeling, particularly in the European context [71]. |
Read-across is a foundational methodology in chemical risk assessment used to predict the toxicological properties of a data-poor target substance by using known information from one or more data-rich source substances that are structurally and mechanistically similar [9]. It remains one of the most common alternatives to animal testing for addressing data gaps in chemical safety assessments [9]. The scientific justification for read-across rests on the principle that substances sharing similar chemical structures and metabolic pathways can be expected to elicit similar biological effects [9] [76].
The critical importance of metabolic and mechanistic similarity has been emphasized in recent regulatory frameworks. Assessments must demonstrate common kinetic elements, including similar patterns of metabolic activation and transformation, to establish acceptable read-across justifications [76] [27]. This analysis explores how metabolic precursor relationships and shared mechanistic pathways provide a robust scientific basis for read-across predictions in chemical safety assessment, offering case studies and methodological frameworks applicable to pharmaceutical development and regulatory science.
The European Food Safety Authority (EFSA) has established a structured workflow for read-across applications, emphasizing transparency, systematic assessment, and comprehensive uncertainty analysis [9]. This process is particularly crucial when utilizing metabolic relationships for read-across justification.
The standardized read-across workflow comprises several critical stages [9]:
Metabolic considerations are paramount in establishing valid read-across relationships. Xenobiotic metabolism typically occurs in three stages: Phase I (oxidation, reduction, hydrolysis) increases electrophilicity; Phase II (conjugation with water-soluble groups) enhances excretion; and Phase III involves further processing for elimination [76]. Understanding these pathways is essential because while metabolism generally detoxifies compounds, in some cases it can activate substances to more toxic metabolites [76].
The fundamental premise for using metabolic information in read-across is that if a target substance and its source analogue share common biotransformation pathways and produce similar metabolic intermediates, they are likely to exhibit comparable toxicological profiles [76] [27]. This principle is particularly robust when the metabolic relationship involves precursor-product relationships where the source chemical is metabolized to the target compound.
The assessment of pentamethylphosphoramide (PMPA) and N,N,N',N"-tetramethylphosphoramide (TMPA) demonstrates a robust metabolic precursor approach [27]. Hexamethylphosphoramide (HMPA) was identified as the sole candidate analogue based on structural similarity and existing toxicity data. The metabolic relationship proved pivotal to the read-across justification.
Metabolic Pathway: HMPA undergoes sequential demethylation via cytochrome P450 (CYP450), producing PMPA and TMPA as primary intermediate metabolites [27]. Each demethylation step generates formaldehyde as a byproduct. This metabolic relationship established HMPA as a *metabolic precursor* of both target compounds.
Toxicological Significance: Both HMPA and its metabolic byproduct formaldehyde target the upper respiratory tract, causing degenerative nasal lesions in rats [27]. HMPA-induced nasal toxicity results from preferential deposition in nasal tissue and local metabolism. Since PMPA and TMPA share the same bioactivation pathway, the mechanism of HMPA-induced nasal toxicity is considered plausible for both target compounds.
Read-Across Application: Based on this metabolic relationship, HMPA served as the source chemical for deriving screening-level oral toxicity values for both PMPA and TMPA [27]. This case exemplifies how a metabolic precursor can function as a suitable analogue for its metabolites in read-across assessment.
The assessment of 4-methyl-2-pentanol (methyl isobutyl carbinol, MIBC) illustrates read-across based on bidirectional metabolic relationships [27]. Multiple structural analogues were identified, including its ketone derivative methyl isobutyl ketone (MIBK) and related aliphatic alcohol/ketone pairs.
Metabolic Pathway: MIBC and MIBK undergo bidirectional metabolism, converging to form 4-methyl-4-hydroxy-2-pentanone (HMP) as a major metabolite with comparable pharmacokinetics [27]. Similar bidirectional metabolism was observed in other candidate analogues (2-propanol/2-propanone, 2-butanol/2-butanone).
Toxicokinetic Similarity: All candidate analogues displayed rapid absorption, wide tissue distribution, and generally low acute toxicity in rodent studies [27]. The shared metabolic pathways and similar toxicokinetic profiles supported the category formation and read-across justification.
Category Approach: The bidirectional metabolism between alcohol-ketone pairs allowed formation of a category based on structural similarity and common metabolic fate. This approach enabled read-across predictions for MIBC based on data from multiple source analogues.
Table 1: Comparative Analysis of Metabolic Read-Across Case Studies
| Case Study | Target Chemical | Source Chemical | Metabolic Relationship | Key Metabolic Pathway | Toxicological Endpoint |
|---|---|---|---|---|---|
| Phosphoramide Compounds | Pentamethylphosphoramide (PMPA) | Hexamethylphosphoramide (HMPA) | Metabolic precursor | Sequential demethylation via CYP450 | Nasal toxicity (degenerative lesions) |
| Phosphoramide Compounds | N,N,N',N"-tetramethylphosphoramide (TMPA) | Hexamethylphosphoramide (HMPA) | Metabolic precursor | Sequential demethylation via CYP450 | Nasal toxicity (degenerative lesions) |
| Aliphatic Alcohol-Ketone Pairs | 4-Methyl-2-pentanol (MIBC) | Methyl isobutyl ketone (MIBK) | Bidirectional metabolism | Oxidation/reduction to 4-methyl-4-hydroxy-2-pentanone | Systemic toxicity (low acute toxicity) |
Table 2: Experimental Evidence Supporting Metabolic Read-Across
| Case Study | Metabolic Evidence | Experimental Support | Toxicological Concordance | Regulatory Application |
|---|---|---|---|---|
| Phosphoramide Compounds | CYP450-mediated demethylation | In vitro metabolism studies | Nasal lesions from parent and metabolite (formaldehyde) | U.S. EPA PPRTV assessment |
| Aliphatic Alcohol-Ketone Pairs | Bidirectional metabolism | Pharmacokinetic studies in animals | Consistent low acute toxicity profile across category | Screening-level risk assessment |
Objective: To characterize the metabolic stability of test compounds and identify major metabolites using hepatocyte-based systems [76].
Protocol Details:
Application to Read-Across: Metabolic half-life (t½) and intrinsic clearance (CLint) values provide quantitative measures of metabolic similarity. Shared major metabolites indicate common biotransformation pathways [76].
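The half-life and intrinsic clearance values referenced above follow from first-order depletion of the parent compound. A minimal sketch of the substrate-depletion (in vitro half-life) calculation, with hypothetical hepatocyte incubation data and an illustrative incubation volume per million cells:

```python
import math

def half_life_and_clint(times_min, remaining_pct, ml_per_million_cells=0.5):
    """Estimate t1/2 (min) and CLint (uL/min/10^6 cells) from parent depletion.

    Fits ln(% remaining) vs time by least squares (first-order kinetics),
    then scales the elimination rate constant by the incubation volume
    per million hepatocytes.
    """
    n = len(times_min)
    x = times_min
    y = [math.log(p) for p in remaining_pct]
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    slope = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
             / sum((xi - x_bar) ** 2 for xi in x))
    k = -slope                               # elimination rate constant, 1/min
    t_half = math.log(2) / k
    clint = k * ml_per_million_cells * 1000  # uL/min per 10^6 cells
    return t_half, clint

# Hypothetical incubation: % parent remaining over 60 min (t1/2 ~ 30 min)
times = [0, 15, 30, 45, 60]
remaining = [100.0, 70.7, 50.0, 35.4, 25.0]
t_half, clint = half_life_and_clint(times, remaining)
print(f"t1/2 = {t_half:.1f} min, CLint = {clint:.1f} uL/min/10^6 cells")
```

Comparable t½ and CLint values for target and source compounds, obtained under matched incubation conditions, are the quantitative similarity evidence this protocol feeds into a read-across justification.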
Objective: To comprehensively identify and characterize metabolites formed from target and source compounds.
Protocol Details:
Application to Read-Across: Confirmed structural identity of shared metabolites provides strong evidence for metabolic similarity and supports mechanistic plausibility [76].
Objective: To predict potential metabolic pathways using in silico tools.
Protocol Details:
Application to Read-Across: Computational predictions guide experimental design and provide supporting evidence for metabolic similarity in read-across justifications [76].
Recent advances in genome-scale metabolic models (GEMs) enable sophisticated analysis of drug-induced metabolic changes [78]. The TIDE (Tasks Inferred from Differential Expression) algorithm provides a framework for inferring pathway activity changes from transcriptomic data, allowing researchers to identify specific metabolic processes altered by chemical exposures [78].
Application to Read-Across: By comparing metabolic pathway activities between target and source compounds, researchers can identify shared metabolic vulnerabilities and functional similarities that extend beyond structural analogies [78].
A novel approach combining untargeted metabolomics with mechanistic modeling enables comprehensive metabolic characterization without predefined metabolite selection [79]. This methodology uses elementary flux modes (EFMs) and column generation techniques to identify and simulate underlying metabolic pathways.
Application to Read-Across: This unbiased approach can reveal unexpected metabolic connections and provide objective evidence for metabolic similarity between chemically related compounds [79].
Table 3: Essential Research Reagents and Methods for Metabolic Read-Across
| Tool/Reagent | Function | Application in Read-Across | Key Features |
|---|---|---|---|
| Cryopreserved Hepatocytes | In vitro metabolism studies | Metabolic stability assessment and metabolite profiling | Species-relevant metabolism, maintain cytochrome P450 activity |
| Absolute IDQ p180 Kit | Targeted metabolomics | Quantitative analysis of 188 metabolites | Simultaneous quantification of amino acids, biogenic amines, lipids |
| UPLC-MS/MS Systems | Metabolite separation and detection | High-resolution metabolite identification | High sensitivity, wide dynamic range, structural characterization capability |
| OECD QSAR Toolbox | Computational metabolism prediction | In silico simulation of metabolic pathways | Rule-based transformations, documented metabolic maps |
| TIMES System | Tissue metabolism simulation | Prediction of organ-specific metabolism | Quantitative estimates of metabolite formation |
| Authentic Metabolite Standards | Metabolite identification and quantification | Confirmation of shared metabolites | Reference materials for structural verification |
The EFSA guidance emphasizes thorough uncertainty assessment as an essential component of read-across [9]. Key uncertainties specific to metabolic read-across include:
Regulatory frameworks emphasize that structural similarity alone is insufficient for read-across justification [9] [24]. Additional evidence must include:
The Read-across Assessment Framework (RAAF) developed by ECHA specifically highlights the importance of demonstrating common kinetic elements for acceptable read-across [76].
Metabolic precursor relationships and mechanistic similarity provide a robust scientific foundation for read-across assessments in chemical safety evaluation. The case studies presented demonstrate how understanding metabolic pathways, particularly precursor-product relationships and bidirectional metabolism, can support scientifically justified read-across predictions that meet regulatory standards.
As new approach methodologies (NAMs) continue to evolve, including advanced in vitro systems, computational models, and high-resolution metabolomics, the ability to characterize metabolic relationships with greater precision will further enhance the reliability and regulatory acceptance of metabolic read-across approaches. The integration of these advanced methodologies with the established frameworks discussed provides a pathway toward more efficient and scientifically rigorous chemical safety assessment while reducing reliance on animal testing.
Read-across is a fundamental method in chemical risk assessment used to predict the toxicological properties of a data-poor target substance by using known information from one or more data-rich source substances that are structurally and mechanistically similar [9]. It remains one of the most common alternatives to animal testing for addressing data gaps in chemical safety assessments [9]. The technique is applied through two primary chemical grouping approaches: the analogue approach, which compares a target substance with a limited number of closely related chemicals, and the category approach, which relies on patterns or trends among several source substances to predict the target substance's properties [9]. The fundamental tenet of read-across is that substances sharing similar chemical structures can be expected to elicit similar biological effects, though this principle extends beyond simple structural similarity to include mechanistic understanding and biological activity [9] [80].
The evolving regulatory landscape for chemicals, including the EU Chemicals Strategy for Sustainability and updates to US TSCA requirements, is increasing pressure on industries to provide robust safety data for more chemicals with greater efficiency [81]. This has accelerated the need for reliable, standardized read-across protocols that can provide defensible hazard assessments while reducing animal testing. Contemporary read-across approaches are increasingly incorporating New Approach Methodologies (NAMs), including in vitro assays and in silico tools, to strengthen scientific justification and reduce uncertainty in predictions [9] [2]. This comparison guide examines the current state of standardized protocols and performance metrics for read-across, providing researchers with a framework for evaluating and implementing these approaches in chemical safety assessment.
Major regulatory bodies have developed structured workflows to standardize read-across applications. The European Food Safety Authority (EFSA) outlines a comprehensive workflow including problem formulation, target substance characterization, source substance identification and evaluation, data gap filling, uncertainty assessment, and conclusion and reporting [9]. Similarly, the European Chemicals Agency (ECHA) Read-Across Assessment Framework (RAAF) provides detailed guidance on building scientifically valid read-across cases [82]. These frameworks emphasize transparency, systematic documentation, and critical uncertainty assessment as essential components of reliable read-across assessments [9].
The Organization for Economic Cooperation and Development (OECD) also provides guidance on grouping chemicals, establishing a standardized approach for developing chemical categories that can be used for read-across [82]. These frameworks share common elements despite differing in specific terminology and emphasis, particularly regarding the need for rigorous hypothesis-driven approaches and comprehensive uncertainty characterization.
The following diagram illustrates the generalized workflow for conducting a read-across assessment, integrating elements from the major regulatory frameworks:
Figure 1: Generalized Read-Across Assessment Workflow
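The workflow above can be sketched as an ordered checklist. This is a minimal illustrative sketch, not an implementation of any regulatory tool; the step names paraphrase the EFSA guidance, and the data structure is an assumption for demonstration purposes.

```python
from enum import Enum

class ReadAcrossStep(Enum):
    """Workflow steps paraphrased from the EFSA read-across guidance."""
    PROBLEM_FORMULATION = 1
    TARGET_CHARACTERIZATION = 2
    SOURCE_IDENTIFICATION = 3
    DATA_GAP_FILLING = 4
    UNCERTAINTY_ASSESSMENT = 5
    CONCLUSION_REPORTING = 6

def next_step(completed):
    """Return the first workflow step not yet completed (enum order = workflow order)."""
    for step in ReadAcrossStep:
        if step not in completed:
            return step
    return None  # all steps done: assessment complete

done = {ReadAcrossStep.PROBLEM_FORMULATION, ReadAcrossStep.TARGET_CHARACTERIZATION}
print(next_step(done).name)  # SOURCE_IDENTIFICATION
```

Encoding the steps explicitly makes it easy to document, in the assessment report, which stages were completed and in what order, which supports the transparency requirements the frameworks emphasize.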
Modern read-across increasingly relies on biological data to substantiate similarity hypotheses. The following experimental protocols represent key approaches for generating supporting biological evidence:
- ToxCast High-Throughput Screening Protocol
- Embryonic Stem Cell Test (EST) Protocol
- Cross-Omics Similarity Assessment Protocol
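The core computation behind a cross-omics similarity assessment can be illustrated with a correlation-based ranking of candidate source substances. The sketch below is a simplified, hypothetical example (the gene profiles and analogue names are invented); real assessments use far larger profiles and more sophisticated multivariate statistics.

```python
import math

def pearson(x, y):
    """Pearson correlation between two equal-length expression profiles."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical log2 fold-change profiles over the same five genes
target = [1.2, -0.4, 0.8, 2.1, -1.0]
sources = {
    "analogue_A": [1.0, -0.3, 0.9, 1.8, -0.8],   # similar transcriptional response
    "analogue_B": [-0.9, 1.1, -0.5, -1.7, 0.9],  # roughly opposite response
}

# Rank candidate source substances by biological similarity to the target
ranked = sorted(sources, key=lambda s: pearson(target, sources[s]), reverse=True)
print(ranked[0])  # analogue_A
```

A high correlation between transcriptional responses is one line of biological evidence that the target and source substances perturb similar pathways; on its own it does not establish similarity and must be weighed with structural and toxicokinetic evidence.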
The performance of different read-across approaches can be evaluated using standardized metrics that assess both predictive accuracy and uncertainty. The following table summarizes key performance indicators for major read-across methodologies:
Table 1: Performance Metrics for Read-Across Approaches
| Methodology | Predictive Accuracy Range | Uncertainty Quantification | Applicability Domain | Regulatory Acceptance |
|---|---|---|---|---|
| Structural Similarity-Based | 65-75% | Qualitative assessment | Broad chemical space | Limited as standalone |
| ToxCast Bioactivity Profiling | 78-85% | Similarity-weighted confidence scores | Chemicals with complete HTS data | Emerging acceptance |
| GenRA (EPA) | 75-82% | Quantitative uncertainty metrics | Defined by training set | Pilot acceptance phase [12] |
| Integrated WoE Framework | 80-90% | Systematic confidence scoring | Case-specific | High for documented cases [82] |
| Omics-Based Similarity | 70-80% | Multivariate statistical confidence | Limited to available omics data | Pre-regulatory research |
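To make the "similarity-weighted confidence scores" row concrete, the sketch below shows the general idea behind similarity-weighted read-across prediction, in the spirit of approaches such as GenRA: the target's endpoint value is estimated as a similarity-weighted mean over source substances. The fingerprints and endpoint values are entirely hypothetical, and this is not the actual GenRA algorithm.

```python
def jaccard(a, b):
    """Jaccard similarity between two fingerprint bit sets."""
    return len(a & b) / len(a | b)

def weighted_read_across(target_fp, sources):
    """Predict a continuous endpoint as the similarity-weighted mean of
    source values; also return the mean similarity as a crude confidence
    indicator (higher = closer analogues)."""
    sims = {name: jaccard(target_fp, fp) for name, (fp, _) in sources.items()}
    total = sum(sims.values())
    prediction = sum(sims[n] * val for n, (_, val) in sources.items()) / total
    confidence = total / len(sources)
    return prediction, confidence

# Hypothetical structural fingerprints (sets of "on" bits) and endpoint values
target = {1, 2, 3, 5, 8}
sources = {
    "src_A": ({1, 2, 3, 5}, 120.0),   # close analogue
    "src_B": ({1, 2, 9, 10}, 300.0),  # weaker analogue
}
pred, conf = weighted_read_across(target, sources)
```

Because src_A is the closer analogue, the prediction is pulled toward its value (about 167 here rather than the unweighted mean of 210), and the confidence score reflects how close the available analogues are overall.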
A critical component of read-across performance is comprehensive uncertainty assessment. The EFSA guidance emphasizes analyzing whether overall uncertainty can be reduced to tolerable levels through standardized approaches and additional data from NAMs [9].
Systematic Weight of Evidence (WoE) approaches provide a structured methodology for weighing and integrating diverse types of evidence, ranging from structural attributes to toxicokinetic data and mechanistic understanding [82]. Both the conclusion and the confidence in that conclusion are then determined by the accumulated evidence weights [82].
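The mechanics of such a WoE integration can be sketched as a weighted scoring exercise. The evidence lines, weights, and scores below are purely illustrative assumptions (real WoE schemes are expert-driven and framework-specific); the point is only that conclusion and confidence both fall out of the accumulated weights.

```python
# Each line of evidence gets a relevance weight (0-1) and a score in [-1, 1];
# negative scores count against the similarity hypothesis. All values here
# are hypothetical, for illustration only.
evidence = [
    ("structural similarity",      0.9,  0.8),
    ("common metabolites",         0.7,  0.6),
    ("shared bioactivity (HTS)",   0.8,  0.7),
    ("toxicokinetic differences",  0.5, -0.3),
]

def weigh_evidence(lines):
    """Weighted mean score, plus total weight as a rough confidence proxy."""
    total_w = sum(w for _, w, _ in lines)
    score = sum(w * s for _, w, s in lines) / total_w
    return score, total_w

score, weight = weigh_evidence(evidence)
verdict = "supports read-across" if score > 0.5 else "insufficient support"
print(f"{verdict} (score={score:.2f}, total weight={weight})")
```

Note how the negative toxicokinetic line pulls the overall score down without vetoing the conclusion outright; a discordant but low-weight line of evidence reduces confidence rather than overturning the assessment.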
Implementing robust read-across requires specialized tools and databases. The following table details key resources for conducting read-across assessments:
Table 2: Essential Research Tools for Read-Across Assessment
| Tool/Resource | Function | Key Features | Access |
|---|---|---|---|
| OECD QSAR Toolbox | Chemical categorization and analogue identification | Structural alerts, metabolite prediction, database integration | Free of charge |
| EPA CompTox Chemicals Dashboard | Chemical data aggregation and GenRA access | ~900,000 chemical records, property data, bioactivity links [12] | Public |
| ToxCast/Tox21 Database | Bioactivity profiling for similarity assessment | ~1,700 chemicals screened in ~700 assays [80] | Public |
| REACH Dossier Information | Source substance data for read-across | Comprehensive study summaries for registered substances | Limited public access |
| Adverse Outcome Pathway (AOP) Wiki | Mechanistic framework for WoE assessment | Structured toxicity pathway knowledge | Public |
The following diagram illustrates the integration of different data types in a comprehensive read-across assessment, highlighting how standardized protocols apply across evidence streams:
Figure 2: Integrated Read-Across Assessment Pathway
The future direction of read-across in chemical safety assessment points toward increasingly standardized protocols and performance metrics that enhance reliability and regulatory acceptance. Key developments include greater integration of NAMs to support read-across hypotheses, more systematic uncertainty quantification, and computational tools such as GenRA that provide objective, reproducible read-across predictions [9] [12]. The movement toward standardized performance metrics, particularly those quantifying predictive accuracy and uncertainty, gives researchers clearer benchmarks for method evaluation and selection.
As regulatory frameworks continue to evolve globally, with particular emphasis on reducing animal testing while maintaining rigorous safety standards, the role of well-validated read-across approaches will continue to expand [81]. The ongoing development of Good Read-Across Practice guidance and the strategic integration of big data approaches position read-across as a cornerstone of next-generation chemical safety assessment [80]. For researchers and drug development professionals, mastering these standardized protocols and performance metrics is becoming essential for navigating the future landscape of chemical regulation and safety evaluation.
Read-across has evolved from a simple chemical similarity-based approach to a sophisticated, evidence-driven methodology that integrates structural, metabolic, and mechanistic data. The successful implementation of read-across requires rigorous assessment of similarity and thorough characterization of uncertainty, increasingly supported by New Approach Methodologies. As regulatory guidance continues to develop globally, particularly with EFSA's upcoming framework and growing U.S. agency engagement, read-across is poised to play an increasingly central role in chemical safety assessment. Future advancements will likely focus on standardizing validation approaches, expanding the integration of high-throughput screening data, and developing internationally harmonized protocols to further enhance regulatory acceptance and scientific confidence in these powerful predictive methods for biomedical and chemical research.