TedEval: An Architecture for Cross-Experiment Parse Evaluation

TedEval is a software package for evaluating parsing experiments. It can transparently convert syntactic parse trees between representation types while retaining their linguistic content, and it provides a set of formal tools and efficient algorithms for comparing the linguistic content of parse trees across representation types. Together, this conversion engine and these comparison algorithms make TedEval an effective and robust platform for cross-parser and cross-annotation evaluation.
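As the name suggests, TedEval's comparisons build on tree edit distance. As a rough illustration only (this is not TedEval's actual code, cost model, or metric), the sketch below computes the Zhang-Shasha edit distance between two small ordered labeled trees and derives a size-normalized similarity score; the `Node` class, unit edit costs, and the normalization are all illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """Minimal ordered labeled tree node (illustrative, not TedEval's format)."""
    label: str
    children: list = field(default_factory=list)

def postorder(root):
    """Return all nodes of the tree in postorder."""
    out = []
    def walk(n):
        for c in n.children:
            walk(c)
        out.append(n)
    walk(root)
    return out

def annotate(root):
    """Postorder labels, leftmost-leaf indices, and keyroots (Zhang-Shasha)."""
    nodes = postorder(root)
    index = {id(n): i for i, n in enumerate(nodes)}
    lml = []
    for n in nodes:
        m = n
        while m.children:          # descend to the leftmost leaf
            m = m.children[0]
        lml.append(index[id(m)])
    # A keyroot is a node no later node (in postorder) shares a leftmost leaf with.
    keyroots = [i for i in range(len(nodes))
                if not any(lml[k] == lml[i] for k in range(i + 1, len(nodes)))]
    return [n.label for n in nodes], lml, keyroots

def tree_edit_distance(t1, t2):
    """Zhang-Shasha tree edit distance with unit insert/delete/relabel costs."""
    L1, l1, kr1 = annotate(t1)
    L2, l2, kr2 = annotate(t2)
    td = [[0] * len(L2) for _ in range(len(L1))]
    for i in kr1:
        for j in kr2:
            li, lj = l1[i], l2[j]
            m, n = i - li + 2, j - lj + 2
            fd = [[0] * n for _ in range(m)]      # forest-distance table
            for a in range(1, m):
                fd[a][0] = fd[a - 1][0] + 1       # deletions
            for b in range(1, n):
                fd[0][b] = fd[0][b - 1] + 1       # insertions
            for a in range(1, m):
                for b in range(1, n):
                    i1, j1 = li + a - 1, lj + b - 1
                    if l1[i1] == li and l2[j1] == lj:
                        sub = 0 if L1[i1] == L2[j1] else 1
                        fd[a][b] = min(fd[a - 1][b] + 1,
                                       fd[a][b - 1] + 1,
                                       fd[a - 1][b - 1] + sub)
                        td[i1][j1] = fd[a][b]     # a full subtree-pair distance
                    else:
                        fd[a][b] = min(fd[a - 1][b] + 1,
                                       fd[a][b - 1] + 1,
                                       fd[l1[i1] - li][l2[j1] - lj] + td[i1][j1])
    return td[len(L1) - 1][len(L2) - 1]

def ted_score(gold, pred):
    """Size-normalized similarity in [0, 1]; illustrative, not TedEval's metric."""
    n = len(postorder(gold)) + len(postorder(pred))
    return 1.0 - tree_edit_distance(gold, pred) / n
```

For example, comparing `Node("S", [Node("NP"), Node("VP")])` against an identical tree gives distance 0 (score 1.0), while relabeling one child or deleting it gives distance 1.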

TedEval was developed by Reut Tsarfaty, Evelina Andersson, and Joakim Nivre at Uppsala University, Sweden.

The first version of TedEval can be downloaded from the TedEval download page.