A New AI Evaluation Cosmos: Ready to Play the Game?

Authors

  • José Hernández-Orallo, Universitat Politècnica de València
  • Marco Baroni, Facebook
  • Jordi Bieger, Reykjavik University
  • Nader Chmait, Monash University
  • David L. Dowe, Monash University
  • Katja Hofmann, Microsoft Research
  • Fernando Martínez-Plumed, Universitat Politècnica de València
  • Claes Strannegård, Chalmers University of Technology
  • Kristinn R. Thórisson, Reykjavik University

DOI:

https://doi.org/10.1609/aimag.v38i3.2748

Abstract

We report on a series of new platforms and events dealing with AI evaluation that may change the way in which AI systems are compared and their progress is measured. The introduction of a more diverse and challenging set of tasks in these platforms can feed AI research in the years to come, shaping the notion of success and the directions of the field. However, without some meaningful structure and systematic guidelines for its organization and use, this playground of tasks and challenges may misdirect the field. Anticipating this issue, we also report on several initiatives and workshops that focus on analyzing the similarity and dependencies between tasks, their difficulty, and what capabilities they really measure, and, ultimately, on elaborating new concepts and tools that can arrange tasks and benchmarks into a meaningful taxonomy.

Author Biography

Marco Baroni, Facebook Artificial Intelligence Research Laboratory

Published

2017-10-02

How to Cite

Hernández-Orallo, J., Baroni, M., Bieger, J., Chmait, N., Dowe, D. L., Hofmann, K., Martínez-Plumed, F., Strannegård, C., & Thórisson, K. R. (2017). A New AI Evaluation Cosmos: Ready to Play the Game? AI Magazine, 38(3), 66–69. https://doi.org/10.1609/aimag.v38i3.2748

Issue

Vol. 38 No. 3 (2017)

Section

Workshop Reports