We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, which evaluates cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages. For this task, we adapted two large-scale cross-lingual open-retrieval QA datasets covering 14 of these languages and newly annotated open-retrieval QA data in two underrepresented languages: Tagalog and Tamil. Four teams submitted systems. The best constrained system uses entity-aware contextualized representations for document retrieval, achieving an average F1 score of 31.6, 4.1 F1 points above our strong baseline. This system also obtains a particularly large improvement in Tamil (20.8 F1), a language on which most other systems score near zero. The best unconstrained system achieves 32.2 F1, outperforming our baseline by 4.5 points.
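Systems in the shared task are compared by answer-level F1. As an illustration, here is a minimal sketch of the standard SQuAD-style token-overlap F1 commonly used for open-retrieval QA; it assumes whitespace tokenization and omits the answer normalization (lowercasing, punctuation and article stripping) and language-specific tokenization that an official scorer would apply:

```python
from collections import Counter

def token_f1(prediction: str, gold: str) -> float:
    """Token-overlap F1 between a predicted and a gold answer string.

    Whitespace tokenization only; a real scorer would also normalize
    the strings and handle languages without whitespace word boundaries.
    """
    pred_tokens = prediction.split()
    gold_tokens = gold.split()
    # Multiset intersection counts each shared token at most as often
    # as it appears in both strings.
    common = Counter(pred_tokens) & Counter(gold_tokens)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_tokens)
    recall = overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)
```

A system's score on a language is then the mean of this per-question F1 over the evaluation set.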