GuideData: Handler-Guide Dog Interaction Dataset

University of Massachusetts Amherst, Daegu Gyeongbuk Institute of Science and Technology, University of Maine, The University of Texas at Austin

We introduce the GuideData dataset, a collection of qualitative data focusing on the interactions between guide dog trainers, blind and low-vision (BLV) individuals, and their guide dogs. The dataset captures a variety of real-world scenarios, including navigating sidewalks, climbing stairs, crossing streets, and avoiding obstacles. By providing this comprehensive dataset, the project aims to advance research in areas such as assistive technologies, robotics, and human-robot interaction, ultimately improving the mobility and safety of BLV people.

Abstract

Research on mobile assistive systems for blind and low-vision (BLV) individuals spans nearly five decades. While commendable progress has been made in user-centric research, references that directly inform robot navigation design remain rare. To bridge this gap, we conducted a comprehensive human study involving interviews with 26 guide dog handlers, 4 white cane users, 9 guide dog trainers, and 1 O&M trainer, along with 15+ hours of observing guide dog–assisted walking. After de-identification, we open-sourced the dataset to promote human-centered development and informed decision-making for assistive systems for BLV people. Building on insights from this formative study, we developed GuideNav, a vision-only, teach-and-repeat navigation system. Inspired by how guide dogs are trained and assist their handlers, GuideNav autonomously repeats a path demonstrated by a sighted person using a robot. The system constructs a topological representation of the taught route, integrates visual place recognition with temporal filtering, and employs a learned relative pose estimator to compute navigation actions—all without relying on costly, heavy, power-hungry sensors such as LiDAR. In field tests, GuideNav consistently achieved kilometer-scale route following across five outdoor environments, maintaining reliability despite substantial scene variations between teach and repeat runs. A user study with 3 guide dog handlers and 1 guide dog trainer further confirmed the system's feasibility, marking (to our knowledge) the first demonstration of a quadruped mobile system retrieving a path in a manner comparable to guide dogs.
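The teach-and-repeat pipeline summarized above can be sketched roughly as follows. This is an illustrative approximation, not the GuideNav implementation: the embedding, relative pose estimator, interface functions, window size, and gains are all assumed stand-ins that a real system would replace with its own components.

# Rough sketch of a vision-only teach-and-repeat repeat phase (illustrative only,
# not the GuideNav code). The taught route is an ordered list of camera images;
# at run time we match the live frame against nearby route nodes, temporally
# filter the matched index, and steer toward the next node.
import numpy as np

def embed(image):
    """Stand-in for a visual place-recognition descriptor."""
    v = image.reshape(-1).astype(np.float32)
    return v / (np.linalg.norm(v) + 1e-8)

def relative_pose(live_frame, node_image):
    """Stand-in for a learned relative pose estimator: (heading error [rad], distance [m])."""
    return 0.0, 0.5

def repeat_route(taught_images, get_frame, send_cmd, window=3):
    nodes = [embed(img) for img in taught_images]   # topological map of the taught route
    current = 0                                     # temporally filtered node index
    while current < len(nodes) - 1:
        frame = get_frame()
        query = embed(frame)
        # Temporal filtering: only match against nodes just ahead of the last estimate.
        lo, hi = current, min(current + window, len(nodes) - 1)
        sims = [float(query @ nodes[i]) for i in range(lo, hi + 1)]
        current = lo + int(np.argmax(sims))
        # Convert the relative pose to the next node into a velocity command.
        next_idx = min(current + 1, len(nodes) - 1)
        heading_err, dist = relative_pose(frame, taught_images[next_idx])
        send_cmd(linear=min(0.5, dist), angular=-0.8 * heading_err)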

Dataset Description

The GuideData dataset captures how blind and low-vision (BLV) individuals work with guide dogs in real-world contexts.

The dataset is composed of five key elements:

  • Handler & Trainer Interviews: Transcripts from semi-structured interviews on the daily use, training, and deployment of guide dogs.
  • Navigation Observations: Videos of experienced handlers and their guide dogs navigating familiar, real-world environments.
  • Matching Training Observations: Videos documenting the initial familiarization process between a handler and a new guide dog.
  • Author Blindfold Walks: Video recordings of supervised, blindfolded walking sessions to directly experience and document handler-dog interactions and navigational cues.
  • Supplementary Data: Includes feedback on robotic guide dog systems and force measurements quantifying the pull from a guide dog's harness.

Participants & Processing

The study comprises 40 anonymized participants: blind and low-vision (BLV) individuals (guide dog and white cane users), professional guide dog trainers, and one O&M specialist. Participants had an average of over 24 years of experience using navigation aids.

To protect privacy, all videos were processed to blur faces and license plates, and all personally identifiable information was removed from transcripts. To our knowledge, this is the first comprehensive dataset of its kind. It is available on Kaggle under a CC0 license to support the development of future assistive navigation systems.
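The exact de-identification tooling is not specified here; as a minimal sketch, blurring detected face regions could be done with OpenCV's bundled Haar cascade (license plates would require a separate detector and are omitted):

# Minimal face-blurring sketch for video de-identification (assumed tooling,
# not necessarily the pipeline used for this dataset).
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def blur_faces(frame):
    # Detect faces on a grayscale copy, then blur each detected region in place.
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.1, 5):
        roi = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(roi, (51, 51), 0)
    return frame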

Dataset Skeleton

The dataset uploaded to Kaggle is organized in the folder structure visualized below:

              
guidedata-upload/
├── IRB-1(74)/
│   ├── Blindfold in Fidelco (5-6)/
│   │   ├── images/  - Photos from blindfolded walking sessions at Fidelco
│   │   └── videos/  - Video recordings from blindfolded walking sessions at Fidelco
│   ├── Blindfold in Rochester(21-18)/
│   │   ├── images/  - Photos from blindfolded walking sessions in Rochester
│   │   └── videos/  - Video recordings from blindfolded walking sessions in Rochester
│   ├── Transcript/
│   │   ├── Interview_w_GDT/     - Interview transcripts with guide dog trainers
│   │   └── Interview_w_subject/ - Interview transcripts with guide dog handlers
│   └── interview_media(24-7)/
│       ├── images/  - Photos captured during interview sessions
│       └── videos/  - Video recordings from interview sessions
└── IRB-5(76)/
    ├── S01-Day01-Library/         - Observation videos from library navigation
    ├── S01-Day02-Northampton/     - Observation videos from Northampton navigation
    ├── S01-Day03/                 - Observation videos from Day 3 sessions
    ├── S01-Day03-LibraryIndoors/  - Indoor library navigation observations
    └── S01-Day10-LibraryHouse/    - Library and house navigation observations
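A minimal way to enumerate the files after downloading from Kaggle is sketched below; the top-level folder names follow the tree above, while the file extensions are assumptions:

# Walk the downloaded folder tree (folder names from the structure above;
# *.mp4 is an assumed video extension).
from pathlib import Path

root = Path("guidedata-upload")

# Interview transcripts with trainers and handlers.
transcripts = sorted((root / "IRB-1(74)" / "Transcript").rglob("*.*"))

# Observation videos from the IRB-5 navigation sessions.
videos = sorted((root / "IRB-5(76)").rglob("*.mp4"))

for path in transcripts[:5]:
    print(path.relative_to(root))
print(f"{len(videos)} observation videos found")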
              
            

Guide Dog Trainer Interviews

Interviews with guide dog trainers are included to provide a comprehensive understanding of the guide dog training process and other handler- and dog-related information.

Guide Dog Handler Interviews

We also include semi-structured interview sessions with guide dog handlers to provide insights for the human-centered development of future guide dog robots.

BibTeX

@inproceedings{hwang2025guidenav,
  title={GuideNav: User-Informed Development of a Vision-Only Robotic Navigation Assistant For Blind Travelers},
  author={Hwang, Hochul and Yang, Soowan and Monon, Jahir Sadik and Giudice, Nicholas A and Lee, Sunghoon Ivan and Biswas, Joydeep and Kim, Donghyun},
  booktitle={2026 21st ACM/IEEE International Conference on Human-Robot Interaction (HRI)},
  year={2026}
}