Task
The task for participants is: given one or several RDF datasets and natural language questions, return either the correct answers or a SPARQL query that retrieves them.
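A minimal sketch of the expected input/output pairing, written in Python. The question and query below are illustrative examples (not taken from the QALD data); the DBpedia namespaces `dbo:` and `dbr:` are the standard ontology and resource prefixes.

```python
# Illustrative example of the task: map a natural language question to a
# SPARQL query over DBpedia that retrieves its answer.
# The question text and query are assumptions for illustration, not QALD data.
question = "What is the capital of Germany?"

sparql = """
PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX dbr: <http://dbpedia.org/resource/>
SELECT ?capital WHERE {
  dbr:Germany dbo:capital ?capital .
}
"""
```

A system may be evaluated either on the answers it returns or on the query itself, as long as the query retrieves the gold-standard answers when run against the reference DBpedia version.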
Benchmark
The QALD-3 to QALD-5 question sets for multilingual question answering over DBpedia have DOIs; please use these when citing the data.
QALD-3: doi:10.4119/unibi/citec.2013.6
- Reference: DBpedia 3.8
- Languages: English, German, Spanish, Italian, French, Dutch
- 200 questions (train: 1-100, test: 101-200)
QALD-4: doi:10.4119/unibi/2687439
- Reference: DBpedia 3.9
- Languages: English, German, Spanish, Italian, French, Dutch, Romanian
- 250 questions (train: 1-200, test: 201-250)
QALD-5: doi:10.4119/unibi/2900686
- Reference: DBpedia 2014
- Languages: English, German, Spanish, Italian, French, Dutch, Romanian
- 420 questions (multilingual QA - train: 1-340, test: 341-390, hybrid QA - train: 391-410, test: 411-420)
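The train/test splits listed above can be summarized in a small lookup table. This is a convenience sketch, not part of the official benchmark distribution; the inclusive ID ranges are taken directly from the listings on this page.

```python
# Inclusive question-ID ranges per dataset, as listed above.
# QALD-5 combines multilingual QA (1-390) and hybrid QA (391-420) splits.
SPLITS = {
    "QALD-3": {"train": [(1, 100)], "test": [(101, 200)]},
    "QALD-4": {"train": [(1, 200)], "test": [(201, 250)]},
    "QALD-5": {"train": [(1, 340), (391, 410)], "test": [(341, 390), (411, 420)]},
}

def split_of(dataset: str, qid: int) -> str:
    """Return 'train' or 'test' for a question ID in the given dataset."""
    for split, ranges in SPLITS[dataset].items():
        if any(lo <= qid <= hi for lo, hi in ranges):
            return split
    raise ValueError(f"question {qid} not in {dataset}")
```

For example, question 350 of QALD-5 falls in the multilingual test split, while question 400 falls in the hybrid training split.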