CodeVoyager: Integrating Interactive Visual Aids with LLMs for Code Comprehension

[Cover image: a developer viewing code on a monitor in an office setting]
Image Credit: Photo by cottonbro studio on Pexels (Source, License)

AI Summary of Scholarly Research

This page presents an AI-generated summary of a published research paper. The original authors did not write or review this article. See the full disclosure below.

Publication Signals show what we were able to verify about where this research was published. Rating: MODERATE. Core publication signals for this source were verified. Publication Signals reflect the source's verifiable credentials, not the quality of the research.
  • ✔ Published in indexed journal
  • ✔ No retraction or integrity flags

Overview

CodeVoyager integrates large language models with interactive visual aids to address limitations in code comprehension. Existing visual representations such as call graphs and control flow graphs provide spatial context but can overload users cognitively and offer limited interactivity, while LLM-based code assistants give accessible natural language explanations yet lack the spatial reference needed for navigation. The tool combines these complementary modalities so that developers can build an understanding of unfamiliar codebases through coordinated textual and visual interaction.
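The summary does not include implementation details, but the coordination idea can be illustrated with a minimal sketch: detect which functions an LLM explanation mentions and map them onto nodes of a call graph so the visualization can highlight them. Everything below (the call_graph structure, link_explanation_to_graph, and the example text) is hypothetical and is not taken from the paper.

```python
import re

# Hypothetical call graph: function name -> functions it calls.
# CodeVoyager's actual graph model is not described in this summary.
call_graph = {
    "run_pipeline": ["load_config", "process_items"],
    "load_config": ["parse_yaml"],
    "parse_yaml": [],
    "process_items": [],
}

def link_explanation_to_graph(explanation, graph):
    """Return graph nodes mentioned in an LLM explanation, in order of first mention."""
    mentioned = []
    for match in re.finditer(r"\b\w+\b", explanation):
        name = match.group()
        if name in graph and name not in mentioned:
            mentioned.append(name)
    return mentioned

# Example explanation text referencing functions in the toy codebase.
explanation = (
    "run_pipeline first calls load_config, which delegates parsing to "
    "parse_yaml, before handing the results to process_items."
)

print(link_explanation_to_graph(explanation, call_graph))
# ['run_pipeline', 'load_config', 'parse_yaml', 'process_items']
```

In a real tool, the mentioned nodes would drive highlighting or panning in the graph view, which is one way the textual and visual channels could stay synchronized.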

Methods and approach

The research employed a two-phase empirical study design. An exploratory phase with 11 participants assessed the tool's feasibility and identified refinement opportunities. Following iterative development, a within-subjects evaluation with 16 participants compared CodeVoyager against an established chat-based code assistant baseline. The integration approach was designed to mirror natural code discussion patterns, enabling seamless transitions between textual explanations and visual code exploration.
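A within-subjects design pairs each participant's scores across the two conditions. The paper's actual measures and statistical tests are not specified in this summary; the sketch below only illustrates how such paired data might be analyzed, using randomly generated placeholder scores rather than study data.

```python
# Minimal sketch of a within-subjects (paired) analysis; scores are random
# placeholders, not data from the CodeVoyager study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 16

# Placeholder comprehension scores for each participant under both conditions.
baseline = rng.normal(loc=65, scale=8, size=n_participants)
codevoyager = baseline + rng.normal(loc=0, scale=5, size=n_participants)

# A paired test compares each participant with themselves across conditions.
t_stat, p_value = stats.ttest_rel(codevoyager, baseline)
print(f"paired t = {t_stat:.2f}, p = {p_value:.3f}")
```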

Key Findings

CodeVoyager demonstrated measurable improvements in code comprehension relative to the chat-based assistant baseline. Quantitative assessment of comprehension outcomes and qualitative measures of user trust both favored the integrated approach. The tool's effectiveness stemmed from its capacity to maintain synchronization between linguistic explanations and visual representations, supporting parallel cognitive processes involved in code understanding.

Implications

The work establishes empirical support for integrating visual and natural language modalities in code comprehension tools. Design principles emerging from the study indicate that effective multimodal systems should prioritize coherence between textual and visual information, letting users navigate code through discussion-oriented interaction rather than treating the modalities as independent. Future developer tools should consider how visual and linguistic representations can be orchestrated to reduce cognitive friction and strengthen spatial understanding.

Disclosure

  • Research title: CodeVoyager: Integrating Interactive Visual Aids with LLMs for Code Comprehension
  • Authors: Haneol Lee, Kyochul Jang, Hyungwoo Song, Minjeong Shin, Bongwon Suh
  • Institutions: Seoul National University
  • Publication date: 2026-03-03
  • DOI: https://doi.org/10.1145/3742413.3789057
  • OpenAlex record: View
  • Image credit: Photo by cottonbro studio on Pexels (Source, License)
  • Disclosure: This post was generated by Claude (Anthropic). The original authors did not write or review this post.
