How are the data used?

The texts are initially provided as digital images, transcribed using the double-keying technique, and finally transformed into digital texts enriched with extensive TEI-compliant metadata. This metadata originates from a hermeneutical analysis of the corpus; in cooperation with information technology experts, tools are being developed to capture extensive metadata automatically. The metadata therefore begins with simple bibliographical and structural data, but reaches well beyond it.
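The core of double-keying is that two typists transcribe the same page independently and the two versions are then compared, so that discrepancies can be flagged for review. As a minimal sketch (function name and sample lines are hypothetical; only Python's standard difflib is used):

```python
import difflib

def diff_keyings(first: str, second: str):
    """Compare two independent transcriptions line by line.

    Returns a list of (line number, version A, version B, similarity)
    tuples for every line on which the two keyings disagree.
    """
    diffs = []
    for i, (a, b) in enumerate(zip(first.splitlines(), second.splitlines()), start=1):
        if a != b:
            # Similarity ratio helps triage: near-1.0 values are likely typos.
            ratio = difflib.SequenceMatcher(None, a, b).ratio()
            diffs.append((i, a, b, round(ratio, 2)))
    return diffs

# Hypothetical sample: the two keyings differ only in one spelling.
key_a = "Die Poetik beschreibt die Regeln.\nSie beruft sich auf Goethe."
key_b = "Die Poetik beschreibt die Regeln.\nSie beruft sich auf Göthe."
print(diff_keyings(key_a, key_b))
```

Lines on which both keyings agree are accepted; the remaining discrepancies are resolved by a human editor against the digital image.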

The corpus is integrated into virtual research infrastructures such as TextGrid, DARIAH and CLARIN, so that its texts can be linked with one another and with other reference texts – e.g. the works of Goethe and Schiller cited in the poetics, which are available via the TextGridRepository and whose historical spelling variants can be covered by fuzzy string matching. In addition, trainable interactive tools for analysis and annotation are being developed and attached to these infrastructures, so that both the edited corpus and the tools can be made available to the whole scientific community for further studies.
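The role of fuzzy string matching here is to map historical or inconsistent spellings of a cited name to its canonical form, so that variant citations still resolve to the same reference text. A minimal sketch of the idea, assuming a hypothetical list of canonical author names and using Python's standard difflib rather than any specific library from the project:

```python
import difflib

# Hypothetical canonical name list; the real system would use the
# authority data behind the TextGridRepository.
CANONICAL = ["Goethe", "Schiller", "Lessing"]

def match_author(variant: str, cutoff: float = 0.7):
    """Map a spelling variant to the closest canonical author name.

    Returns None when no candidate reaches the similarity cutoff.
    """
    hits = difflib.get_close_matches(variant, CANONICAL, n=1, cutoff=cutoff)
    return hits[0] if hits else None

for variant in ["Göthe", "Schiler", "Göethe"]:
    print(variant, "->", match_author(variant))
```

A production system would tune the cutoff (and likely normalize umlauts first); the point is only that exact string equality is too strict for texts with unstandardized orthography.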