JOURNAL ARTICLE

MEWL: Few-shot multimodal word learning with referential uncertainty

Abstract

Dataset release for "MEWL: Few-shot multimodal word learning with referential uncertainty" (ICML 2023).
GitHub: https://github.com/jianggy/MEWL

Without explicit feedback, humans can rapidly learn the meaning of words. Children can acquire a new word after just a few passive exposures, a process known as fast mapping. This word-learning capability is believed to be the most fundamental building block of multimodal understanding and reasoning. Despite recent advancements in multimodal learning, a systematic and rigorous evaluation of human-like word learning in machines is still missing. To fill this gap, we introduce the MachinE Word Learning (MEWL) benchmark to assess how machines learn word meaning in grounded visual scenes. MEWL covers humans' core cognitive toolkit in word learning: cross-situational reasoning, bootstrapping, and pragmatic learning. Specifically, MEWL is a few-shot benchmark suite consisting of nine tasks that probe various word-learning capabilities. These tasks are carefully designed to align with children's core abilities in word learning and to echo theories in the developmental literature. By evaluating multimodal and unimodal agents' performance with a comparative analysis of human performance, we observe a sharp divergence between human and machine word learning. We further discuss these differences and call for human-like few-shot word learning in machines.
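As a rough illustration of the few-shot, multiple-choice evaluation setting the abstract describes, the sketch below computes a chance-level baseline over hypothetical episodes. The episode format, function name, and parameters are assumptions made for illustration only; they are not MEWL's actual evaluation API.

```python
import random

def random_baseline_accuracy(num_episodes, num_candidates, seed=0):
    """Accuracy of an agent that guesses uniformly among candidate words.

    In a multiple-choice few-shot episode with num_candidates answer
    options, a uniform guesser should score near 1 / num_candidates.
    (Hypothetical setup for illustration; not the MEWL evaluation code.)
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(num_episodes):
        target = rng.randrange(num_candidates)  # index of the true word
        guess = rng.randrange(num_candidates)   # agent's uniform guess
        correct += guess == target
    return correct / num_episodes
```

A chance-level figure like this gives the floor against which few-shot word learners, human or machine, are compared.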

Keywords:
Word learning, Benchmark, Meaning, Process, Word embedding, Suite, Chunking (psychology)

Metrics

Cited By: 0
FWCI (Field Weighted Citation Impact): 0.00
Refs: 0
Citation Normalized Percentile: 0.33

Topics

Multimodal Machine Learning Applications (Physical Sciences → Computer Science → Computer Vision and Pattern Recognition)
Child and Animal Learning Development (Social Sciences → Psychology → Developmental and Educational Psychology)
Domain Adaptation and Few-Shot Learning (Physical Sciences → Computer Science → Artificial Intelligence)

Related Documents

JOURNAL ARTICLE

Dynamic Uncertainty-Aware Fusion for Few-Shot Multimodal Learning (S)

Xu Zhiqianru, Cheng Zeng, Aoyu Wang, Chao Zeng

Journal: Proceedings of the ... International Conference on Software Engineering and Knowledge Engineering, Year: 2025, Vol: 2025, Pages: 281-284
JOURNAL ARTICLE

Multimodal Few-Shot Learning for Gait Recognition

Jucheol Moon, Nhat Anh Le, Nelson Hebert Minaya, Sang‐Il Choi

Journal: Applied Sciences, Year: 2020, Vol: 10 (21), Pages: 7619