I would like to show some of the conceptual and tool development that took place in this project over the course of five years. You really have to understand that this was a lot of learning on the road. Learning on the road required a forum in which we could exchange our ideas and describe what we were doing, and these are a few of the articles, because we converted the journal ALTEX into such a platform. This was good for us and good for the journal, which in the meantime has become one of the highest-impact-factor journals in toxicology, with a current impact factor of 5.8.

The concept of Pathways of Toxicity was covered, for example, by a dedicated workshop published in ALTEX. This workshop, which took place here in Baltimore, tried to figure out what the nature of the Pathways of Toxicity we want to include in the Human Toxome Database would be. How different is it from existing pathway databases? What content from molecular biology, biochemistry, the omics signatures of toxicity, or toxicological mechanisms has to fit into it? What are other use scenarios? What do regulators need? What is required to allow, for example, probabilistic risk assessment, systems toxicology, or even very ambitious projects on virtual patients and virtual organs that would make use of this information? We started to define what a Pathway of Toxicity actually is.

Another example of tool development is the Toxome Collaboratorium we created. This is an Amazon cloud server solution into which various types of omics data can be uploaded. It is combined with an electronic laboratory notebook, so that the experiments can be described, and with a suite of, at this moment, 47 different algorithms and programs to analyze these data. This cloud server solution allowed the consortium to work with the data all in one place, with proper documentation of any data manipulation that took place: any normalization, any change could be traced and tracked, and also reverted if necessary (a small sketch of this provenance idea follows at the end of this part). This lent itself to publication, and this suite of methodology is now also available to other consortia. The Toxome Collaboratorium is our solution for connecting servers for the toxome analysis. It combines commercial and open-source software. It is a compendium of consortium-generated and relevant public-domain data sets. It is portable and reusable; it is a reference infrastructure. As I said, it can easily be redeployed and used by future efforts and other consortia, as has already started.

Agilent also developed a variety of tools in the context of our project, which I do not want to, or do not have the time to, discuss in much detail. Here is a list of some of the programs that have been impacted by our project. Additional tools are now available that specifically target pathway identification and typical problems in toxicology.

Now, we needed an assay, something with which to study our Pathways of Toxicity, and we chose an assay for endocrine disruption known as the MCF-7 cell assay. MCF-7 cells are a breast cancer cell line that is responsive to estrogen and proliferates faster in the presence of estrogens. It is a standard tool for the identification of endocrine disruptors. At the time the project was conceived, this assay had been successfully pre-validated by the U.S. validation bodies ICCVAM and NICEATM, and robust protocols were available.
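To illustrate the provenance idea behind such a shared workspace, here is a minimal Python sketch of how data transformations can be logged so that they remain traceable and revertible. This is not the Collaboratorium's actual implementation; the class, dataset, and step names are hypothetical.

```python
# Minimal sketch of traceable, revertible data transformations, in the spirit
# of the Collaboratorium's provenance tracking (not its actual code).
import copy
import math
from datetime import datetime, timezone

class TrackedDataset:
    """A dataset that records every transformation applied to it."""

    def __init__(self, name, values):
        self.name = name
        self.values = list(values)   # e.g. raw microarray intensities
        self.history = []            # provenance log: (timestamp, step name, snapshot)

    def apply(self, step_name, func):
        """Apply a transformation (e.g. a normalization) and log it with a snapshot."""
        snapshot = copy.deepcopy(self.values)
        self.history.append((datetime.now(timezone.utc), step_name, snapshot))
        self.values = func(self.values)

    def revert_last(self):
        """Undo the most recent transformation by restoring its snapshot."""
        if self.history:
            _, _, snapshot = self.history.pop()
            self.values = snapshot

# Hypothetical usage: log2-transform intensities, then revert if needed.
data = TrackedDataset("MCF7_estradiol_24h", [120.0, 340.0, 15.5])
data.apply("log2 transform", lambda xs: [math.log2(x) for x in xs])
data.revert_last()   # restores the raw intensities
```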
It was considered a good cell model, warranting international validation and ring trials, which started around the same time as our project. It also had the advantage that reference substances had been chosen by ICCVAM, the Interagency Coordinating Committee on the Validation of Alternative Methods, and we picked from these. We started establishing this assay in two laboratories and began doing some transcriptomics experiments. Then the disappointment started. We did not see many clear signatures of estrogen effects at the transcriptomics level, and especially we had tremendous problems reproducing findings between the two laboratories.

Still, there was always a glimpse of hope, as you can see on the next slide, where the heat map of effects is shown. Here you see gene alterations in response to estrogen for around 100 genes, which were picked because they were known to be estrogen-responsive. Each line represents a different experiment: those labeled BU were done at Brown University, and those labeled JHU were done at Johns Hopkins. You can see there is certainly some correspondence; there are areas of intense coloration or no coloration, which I have circled here, that are similar throughout the different experiments. But at the same time, there are other areas where we saw reactions only in the Hopkins cells and not in the cells at Brown University. We speculated at the time that there was a problem with genomic instability, and we will come back to this in a second: that we had problems between the two laboratories because the cells were not actually the same.

Another set of tools are the software tools. First of all, we made use of a series of available pathway tools: Cytoscape, SMPDB, WikiPathways, and KEGG, a large-scale Japanese database of metabolic and genomic pathways. Using such tools, we could actually see again that there was activity. You see here the estrogen pathways; in both our estrogen experiments and those using PPT (propyl pyrazole triol, a selective estrogen receptor alpha agonist), we could map changes to the estrogen receptor pathways. There was something happening in these cells, but it was largely covered by noise, by differences between the cell systems, which we observed across the large number of experiments. The pathways we could deduce from these tools were not so exciting: they were the pathways obviously linked to growth and to energy metabolism. But we could identify that some of these pathways, as we would expect, were linked to estrogen treatment at the transcriptomics level. Overall, the disappointment was stronger than the excitement produced by this data set.

At the same time, we found publications, which we then also confirmed with our own data, showing very convincingly that transcriptomics experiments doing exactly what we did led to very little correspondence in the overall transcriptomics changes. This paper by Ochsner et al., for example, compared experiments done in different laboratories, seven in total, where MCF-7 cells had been treated for 24 hours with 1 nM estradiol and a whole-genome microarray had been conducted. More than 2,000 genes were altered with statistical significance in the first laboratory, but the overlap with the additional laboratories was very, very low, and there was not a single gene whose expression was significantly different in more than four laboratories.
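To make this kind of cross-laboratory comparison concrete, here is a small Python sketch that counts in how many laboratories each gene appears as significantly altered. The laboratory names and gene symbols below are invented placeholders, not the published Ochsner et al. data.

```python
# Sketch of a cross-laboratory overlap count for differentially expressed genes,
# in the spirit of the comparison described above; all data below are invented.
from collections import Counter

lab_hits = {
    "lab_1": {"GREB1", "PGR", "MYC", "CCND1"},
    "lab_2": {"GREB1", "TFF1", "MYC"},
    "lab_3": {"PGR", "TFF1"},
    "lab_4": {"GREB1", "CCND1"},
}

# Count how many laboratories reported each gene as significantly altered.
recurrence = Counter(gene for hits in lab_hits.values() for gene in hits)

def reproduced_in_at_least(k):
    """Genes reported as significant in at least k laboratories."""
    return sorted(gene for gene, n in recurrence.items() if n >= k)

print(reproduced_in_at_least(2))  # genes seen in two or more labs
print(reproduced_in_at_least(5))  # in the published comparison, no gene recurred in more than four labs
```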
This shows the dramatic problem of transcriptomics, and keep in mind, I call this the most standardized of the omics technologies. Interestingly, the MCF-7 test failed the parallel validation study by ICCVAM. It was also not reproducible from the literature, as you have seen here. Obviously, we have a problem with this test system.

We went a little bit deeper and looked into the MCF-7 cells. You have to imagine, this is a cell line from the 70s, which has been used in 23,000 scientific articles. It is one of the workhorses of in vitro work. If we do a karyotype, that is, look into the chromosomal apparatus of the cell, you see that many of the chromosomes are not found in the typical two copies: some are present in three or four copies, and some are no longer identifiable. A lot of changes took place, with additions and deletions in these chromosomes. In order to quantify this better, we carried out a comparative genome hybridization, and we observed that 10% of the genome was actually lost, 50% of the genome was present in fewer than the regular two copies, and 30%, on the contrary, was found in more than two copies; up to 30 copies were measured (a small illustrative sketch of this bookkeeping follows at the end of this part). So this is not only a deck of cards that has been reshuffled completely; it is a deck of cards where some cards have been eliminated and others have been duplicated several times. You could say that this is actually a Frankencell. It has nothing to do with normal physiology.

The problem went even further. We were able to publish most recently in Nature's online journal Scientific Reports that the genetic instability goes so far that even cells from one and the same frozen batch show dramatic differences in six chromosomes, with impacts on transcriptomics, metabolomics, growth behavior, as well as the morphology of the cells. We have a tremendous problem here. This was part of a recent press release. This is, for me, a moment to recall that quality assurance not only of the metabolomics but also of the cell system is just as important. You will have the opportunity to hear more in our subsequent lecture series about Good Cell Culture Practice, an initiative for quality assurance in cell culture which we are driving. It has most recently been reactivated with two workshops of our Transatlantic Think Tank together with a variety of agencies and organizations, and we created the International Good Cell Culture Practice Collaboration in order to promote quality assurance standards in cell culture.

So, as take-home messages from this part: cell models do not become better by adding a sophisticated endpoint! Applying omics technologies to a cell culture only exposes its weaknesses, because we now get a much fuller picture of how the cells are not reproducible, how they vary. Any kind of 21st-century toxicology starts with 21st-century cell culture.
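Returning to the comparative genome hybridization numbers mentioned above, here is an illustrative Python sketch of how copy-number segments can be binned into those categories (lost, fewer than two copies, more than two copies). The segment lengths and copy numbers are invented so that the proportions come out roughly as reported; this is not the study's actual pipeline.

```python
# Illustrative summary of copy-number segments into the categories mentioned
# above. Segment data are invented to roughly reproduce the reported proportions.
# Each segment: (length in megabases, estimated copy number)
segments = [
    (310.0, 0),    # lost entirely
    (1240.0, 1),   # under-represented
    (620.0, 2),    # regular diploid
    (830.0, 4),    # amplified
    (100.0, 30),   # highly amplified
]

total_length = sum(length for length, _ in segments)

def genome_fraction(predicate):
    """Fraction of the genome whose copy number satisfies the predicate."""
    return sum(length for length, cn in segments if predicate(cn)) / total_length

print(f"lost:                  {genome_fraction(lambda cn: cn == 0):.0%}")
print(f"fewer than two copies: {genome_fraction(lambda cn: cn < 2):.0%}")
print(f"more than two copies:  {genome_fraction(lambda cn: cn > 2):.0%}")
```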