Background Information

Molecular modelling and chemoinformatics have been used for decades for "drug discovery" purposes, that is, the selection and optimization of new chemical entities with improved therapeutic properties. The application of these computational methods in predictive toxicology is more recent, and they are attracting increasing interest because of the new legal requirements imposed by European Union regulations such as REACH or the BPR (Biocidal Products Regulation). A huge amount of animal testing is needed under these rules to demonstrate the safety of new compounds submitted for registration, and these trials can be significantly reduced by using alternative in vitro and in silico methods, provided they meet specific conditions clearly defined by the OECD and ECHA to ensure their quality and predictive power.

Computational toxicology is a subdiscipline of toxicology that uses mathematics, statistics, chemistry and computer modelling tools to predict the toxic effects of chemicals on human health and/or the environment. In vivo experiments require much time for preparation and implementation, and are expensive and ethically questionable. In contrast, computer models can predict the physical, chemical or biological properties of compounds without necessarily carrying out chemical synthesis in the laboratory. The use of in silico approaches therefore represents significant savings in time, resources and money, with the added advantage that the resulting models can be applied easily and immediately to new structures. Additionally, computational studies can also help to better understand the mechanisms by which a given chemical induces harm.

The most powerful and robust application of computational toxicology is undoubtedly the prediction of toxicity from chemical structure through so-called QSAR models. This technique involves the construction of a mathematical model that relates, by means of statistical tools, the chemical structures of a previously characterized series of molecules (encoded as sets of numerical molecular descriptors) to an (eco)toxicological parameter. Once this correlation is established, it can be used to predict that toxicological feature for new molecules whose chemical structures are known.
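The core idea can be illustrated with a minimal sketch: fit a statistical model on descriptor values of characterized molecules, then apply it to a new structure. The descriptors (logP, molecular weight), the endpoint values and the linear form are all hypothetical choices for illustration; real QSAR models use curated experimental data and validated descriptor sets.

```python
import numpy as np

# Hypothetical training set: each row is one previously characterized
# molecule, described by two numerical descriptors (here: logP and
# molecular weight); y holds a measured toxicity endpoint.
X = np.array([
    [1.2, 120.0],
    [2.8, 180.0],
    [3.5, 210.0],
    [0.9, 100.0],
    [4.1, 250.0],
])
y = np.array([2.1, 3.4, 4.0, 1.8, 4.6])

# Add an intercept column and fit a linear QSAR model
# y ≈ b0 + b1*logP + b2*MW by ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(descriptors):
    """Predict the endpoint for a new molecule from its descriptors."""
    return float(coefs[0] + descriptors @ coefs[1:])

# Once the correlation is established, predict the endpoint for a
# new, untested structure from its descriptors alone.
new_molecule = np.array([2.0, 150.0])
prediction = predict(new_molecule)
```

In practice the statistical tool may be multiple linear regression, as sketched here, or more complex methods (partial least squares, random forests, neural networks), but the workflow of descriptors → model → prediction is the same.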

Different pieces of software and web applications have been developed to help end users make choices regarding the registration and use of substances under international regulatory norms. Nevertheless, when considering the ecotoxicological prediction of biocides and the replacement of toxic ones with compounds without ecotoxicological effects, these tools present several important drawbacks:

  • None of these tools was developed from databases composed exclusively of biocides, and thus their applicability domains are broad and not tailored to this class of products. In fact, most of them are REACH-oriented and were not developed with biocidal products in mind. This is a very significant problem, since a well-defined applicability domain is one of the OECD-ECHA conditions for a valid QSAR model and is indispensable for its acceptance for regulatory purposes.
  • Most of the endpoints predicted by these tools are physicochemical parameters or human toxicity values (especially carcinogenicity and mutagenicity), while the number of ecotoxicological endpoints is very small, which is very limiting given the high impact of these products on the environment.
  • No experimental assays were conducted to assess the actual predictive performance of the QSAR models implemented in these platforms.
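The applicability-domain condition mentioned above can be made concrete with one of the simplest published approaches, a descriptor-range ("bounding box") check: a query molecule is considered inside the domain only if each of its descriptors falls within the range spanned by the model's training set. The descriptor values below are hypothetical, and real tools often use more refined criteria (leverage, distance-based metrics).

```python
import numpy as np

# Hypothetical descriptor matrix for the training molecules of a
# QSAR model (columns: e.g. logP and molecular weight).
X_train = np.array([
    [1.2, 120.0],
    [2.8, 180.0],
    [3.5, 210.0],
    [0.9, 100.0],
    [4.1, 250.0],
])

# Per-descriptor minima and maxima define the bounding box.
lo, hi = X_train.min(axis=0), X_train.max(axis=0)

def in_domain(descriptors):
    """True if every descriptor lies within the training ranges."""
    return bool(np.all((descriptors >= lo) & (descriptors <= hi)))

in_domain(np.array([2.0, 150.0]))   # inside the training ranges
in_domain(np.array([6.0, 400.0]))   # outside: prediction unreliable
```

A tool trained only on REACH-type industrial chemicals would place many biocides outside such a box, which is precisely why predictions from those models for biocidal products may not be acceptable for regulatory purposes.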

The consequence of all of these limitations is that computational approaches sometimes perform only modestly, and their usefulness may be questioned.

The LIFE-COMBASE project can be classified as a full-scale application project, since it involves the development of an advanced tool to support the implementation of the BPR and the replacement of biocides of high concern.