The BabyView Project

The BabyView Project is a research initiative dedicated to capturing children’s everyday experiences via high-resolution videos taken from the infant perspective, and to using the resulting video data to better understand cognitive development. Our dataset is the largest open video dataset of children’s everyday experience to date, both in terms of the number of hours recorded and the diversity of the participating families; data collection is ongoing.

Here you can find information on our current publications, how to access releases of the dataset, and details of the BabyView camera build.

Data Snapshot


Funding

The BabyView Project acknowledges generous support from:

  • The Stanford Human-Centered AI Initiative (HAI) Hoffman-Yee grant program
  • Schmidt Futures
  • Meta, Inc.
  • The Stanford Center for the Study of Language and Information John Crosby Olney Fund
  • Amazon, Inc.
  • Compute support from the Microsoft Accelerating Foundation Models Research (AFMR) program
  • NIH K99HD108386 to BLL

Selected Publications

  1. Assessing the alignment between infants’ visual and linguistic experience using multimodal language models
    Alvin Wei Ming Tan, Jane Yang, Tarun Sepuri, Khai Loong Aw, Robert Z Sparks, Zi Yin, Virginia A Marchman, Michael C Frank, and Bria Long
    arXiv preprint arXiv:2511.18824, 2025
  2. Characterizing young children’s everyday activities using video question-answering models
    Tarun Sepuri, Khai Aw, Alvin Tan, Robert Sparks, Virginia Marchman, Michael C Frank, and Bria Long
    2025
  3. The BabyView dataset: High-resolution egocentric videos of infants’ and young children’s everyday experiences
    Bria Lorelle Long, Robert Z. Sparks, Violet Xiang, Stefan Stojanov, Zi Yin, Grace Keene, Alvin Wei Ming Tan, Steven Y. Feng, Auddithio Nag, Chengxu Zhuang, Virginia A. Marchman, Daniel LK Yamins, and Michael Frank
    In Proceedings of the Cognitive Computational Neuroscience Conference (8-page track), 2025