[epe-users] towards a high-level summary of participating parsers

Stephan Oepen oe at ifi.uio.no
Tue Aug 8 12:20:42 CEST 2017


colleagues,

in preparing our task summary, and to support everyone in trying to
interpret our empirical results, we would like to gather some
high-level information about all participating systems, viz. for each
run:

(0) representation: brief characterization of the type of
dependencies, e.g. LTH, DM, UD, etc.
(1) training: source (nature) of your training data, e.g. PTB
converted by CoreNLP 3.3.1.
(2) tokens: size of the training data, in tokens (according to your
tokenization scheme).
(3) input: whether the parser operated on ‘raw’ texts or our
pre-processed input files.
(4) reference: a pointer (web link or bibliographic reference) to more
background.

we have added these columns to our master spreadsheet; see:

  http://goo.gl/ZTZxXW

in some cases, i was able to make informed guesses from the README
files provided with each run.  but there is a lot of information
missing that i would like to ask each team to fill in, preferably over
the next few days.

could one representative per team please email ‘epe-organizers’ the
answers you would like to appear for your submissions, i.e. for
questions (0) to (4) above?

semi-relatedly, thanks to everyone who has filled in our short
questionnaire about how to best wrap up EPE 2017 and plan for the
future.  a few teams are still missing, and we would like to share a
summary with everyone at the end of this week.

if you have not done so already, please take a minute and provide your
perspective to us:

  http://goo.gl/tpDK3o

best wishes, oe
