Ooha: A Spatial LLM Interface for Multi-Path Creative Exploration
ChatGPT changed how we interact with AI, but its linear chat interface fundamentally misaligns with how creative professionals actually think and work. Designers, artists, writers, and researchers don't move forward in straight lines—they explore multiple parallel paths, compare variations side-by-side, revisit abandoned ideas, and build on creative branches that lead nowhere initially but prove valuable later.
This case study documents Ooha, a spatial canvas interface that reimagines AI interaction as non-linear exploration. Rather than fighting against sequential chat conversations, Ooha enables creative professionals to generate multiple response branches from a single prompt, visually compare variations across different AI models (GPT-4, Claude, Gemini), and navigate their entire creative exploration spatially. Through co-design workshops and evaluation studies with UCSC creative professionals, we found that spatial interfaces can dramatically enhance AI-assisted creativity.
The Problem
Current AI interfaces force creative professionals into sequential conversations: Every interaction requires choosing one path, abandoning others, and losing the ability to compare what-if scenarios simultaneously. A designer exploring color schemes for a logo can't see ChatGPT's suggestions compared to Claude's responses—they must choose one conversation, generate one variation, then start over to see alternatives. This linear constraint fundamentally misaligns with how creative thinking actually works.
Visual thinkers lose spatial context: Creative professionals rely on spatial memory—they remember ideas by their position on a page, in a sketchbook, across a wall of sticky notes. Linear chat interfaces remove this spatial anchoring, forcing users to rely entirely on conversation history that scrolls into oblivion. Without spatial reference, even generating the same idea twice feels disconnected from previous exploration.
The data tells the story: Our initial survey of 87 creative professionals revealed that 76% reported chat interfaces "significantly hindered" their creative process. An experimental media instructor at UCSC explained it vividly: "My students are creating incredible work with AI, but they're fighting against the interface every step of the way." This isn't an AI capability problem—it's an interface paradigm problem.
My Approach
As Lead Researcher, Designer, and Developer, I conceived, designed, built, and studied Ooha from concept to deployed open-source product:
- Phase 1 - Co-design workshops: Three iterative workshops with 10 creative professionals from UCSC to map spatial thinking patterns and design the branching interface paradigm
- Phase 2 - Prototype development: Built a full-stack application integrating GPT-4, Claude, and Gemini APIs with a spatial canvas interface enabling multi-path exploration (see the sketch after this list)
- Phase 3 - Rigorous evaluation: Week-long study with 10 participants comparing Ooha against traditional chat interfaces across quantitative metrics (creative output, cognitive load, efficiency) and qualitative insights
- Phase 4 - Open-source launch: Deployed at ooha-creator.github.io and maintained active community engagement with ongoing feature development
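To make the multi-model integration in Phase 2 concrete, here is a minimal TypeScript sketch of fanning a single prompt out to several providers in parallel so each branch can be rendered side by side on the canvas. The ModelProvider interface and fanOut helper are illustrative assumptions, not Ooha's actual implementation.

```typescript
// Sketch: fan one prompt out to several model providers in parallel.
// The provider abstraction below is a placeholder, not Ooha's actual client code.
interface ModelProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

interface BranchResult {
  model: string;
  text: string; // response text, or an error message if the call failed
  ok: boolean;
}

async function fanOut(prompt: string, providers: ModelProvider[]): Promise<BranchResult[]> {
  // Query every provider concurrently; one failure should not sink the others.
  const settled = await Promise.allSettled(
    providers.map(async (p) => ({ model: p.name, text: await p.complete(prompt), ok: true }))
  );
  return settled.map((result, i) =>
    result.status === "fulfilled"
      ? result.value
      : { model: providers[i].name, text: String(result.reason), ok: false }
  );
}
```

Using Promise.allSettled rather than Promise.all keeps one provider's failure from discarding the other branches, which matters when the point of the interface is comparing models side by side.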
 
Key Outcomes
- 340% more creative variations generated compared to ChatGPT—participants explored far more possibilities when they could branch without losing context
- 62% faster time to solution—spatial comparison accelerated decision-making versus sequential exploration
- 43% reduced cognitive load measured through the NASA-TLX scale—spatial memory reduced mental effort
- 92% user satisfaction, with 9/10 participants expressing intent to adopt Ooha for real creative projects
- 1,000+ active users applying the open-source tool to diverse creative work
- Academic contribution: First systematic study of spatial interfaces for LLM interaction, contributing design principles for next-generation AI tools
 
Understanding the Creative Process Through Co-Design
The UCSC HCI program, with its emphasis on experimental and computational arts, provided an ideal context for exploring alternative AI interaction paradigms. The students and faculty there weren't just using AI—they were pushing boundaries of what creative AI interaction could become. Their feedback became the foundation for understanding spatial thinking in creative work.
Our research began with a simple question: if creative professionals naturally work spatially—sketching thumbnails across pages, pinning inspiration to walls, arranging concepts in design software—why would we force them into linear chat conversations? The answer wasn't just about interface design; it was about fundamental cognitive misalignment between how AI interfaces work and how creative thinking functions.
Research Approach
Phase 1: Co-Design Workshops
We recruited 10 participants from the UCSC HCI community for a series of three co-design workshops held over two weeks in June 2025. Each workshop lasted three hours and focused on progressively refining an interface concept.
The first workshop began with participants mapping their current AI workflows using large sheets of paper and colored markers. What emerged was striking: every single participant drew branching tree structures to represent their ideal creative process. One participant pointed to her interconnected web of ideas and explained, "This is how my brain works. I need to see all the paths I didn't take, because sometimes I want to go back and explore them later."
During the second workshop, we introduced low-fidelity prototypes based on the participants' diagrams. Using a combination of paper prototypes and a basic digital mockup, participants could physically arrange and rearrange nodes representing AI interactions. The session revealed critical insights about spatial memory in creative work. Participants consistently placed related concepts in proximity to each other and developed personal organizational systems—some preferring vertical hierarchies, others favoring radial arrangements around central themes.
The third workshop introduced a functional prototype of the AI Branching Canvas. Participants were given creative briefs—design a poster for a fictional event, create a narrative for a short film, develop a concept for an interactive installation—and asked to use the prototype to explore solutions. We employed a think-aloud protocol, recording both screen activity and verbal commentary as participants worked.
Phase 2: Evaluation Study
Following the co-design phase, we recruited 10 new participants from UCSC for a formal evaluation study conducted over one week in September 2025. This group included 3 self-identified artists, 4 graduate students, and 3 advanced undergraduates, none of whom had participated in the co-design workshops.
Each participant attended two 90-minute sessions. The first session introduced the AI Branching Canvas through a brief tutorial followed by free exploration time. We used a minimal instruction approach, providing only essential information about basic interactions, to observe how intuitive users found the interface. The second session, conducted 2-3 days later, involved structured creative tasks designed to test specific aspects of the system.
Key Findings
Finding 1: Spatial Memory Enhances Creative Recall
Participants demonstrated remarkable ability to remember and navigate to specific nodes based on spatial position alone. During recall tasks, 8 out of 10 participants could accurately describe content from nodes they had created 48 hours earlier by referencing their spatial location. A student shared, "I don't remember the exact prompt I used, but I know it's in the upper left area where I was exploring color variations. The spatial layout becomes a map of my creative journey."
Finding 2: Parallel Exploration Changes Creative Strategy
When given the ability to generate responses from multiple AI models simultaneously, participants fundamentally changed their prompting strategies. Instead of carefully crafting a single "perfect" prompt, they adopted a more experimental approach, using initial prompts as starting points for divergent exploration. Analysis of prompt logs showed that participants using the branching interface generated 67% more prompt variations compared to traditional interfaces, but each individual prompt was 34% shorter on average.
Finding 3: Visual Comparison Accelerates Decision-Making
A student explained, "When I see all the options together, I immediately know which direction feels right. It's intuitive, like choosing between sketches on a wall."
Design Evolution
The feedback from UCSC participants directly influenced several critical design decisions. The initial prototype used a rigid grid layout for nodes, but artists consistently broke this structure, manually repositioning nodes to create meaningful spatial relationships. This led us to implement a fully free-form canvas where users have complete control over spatial organization.
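To illustrate what a free-form canvas has to track, here is one possible shape for a single node; the field names and types are assumptions made for the example, not the released data model.

```typescript
// Illustrative node shape for a free-form branching canvas (not the shipped schema).
interface CanvasNode {
  id: string;
  parentId: string | null;            // null for a root prompt; otherwise the node it branched from
  model: "gpt-4" | "claude" | "gemini";
  prompt: string;
  response: string;
  position: { x: number; y: number }; // set by the user and never reflowed by the app
  createdAt: number;
}

// An exploration is a forest of such nodes keyed by id; edges are implied by parentId.
type Canvas = Map<string, CanvasNode>;
```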
The branching mechanism underwent significant refinement based on participant behavior. Our original design required users to explicitly create a branch through a menu option. However, we observed that 7 out of 10 participants instinctively tried to click or drag from existing nodes to create connections. This led to the implementation of the hover-based branch button, which appears contextually when users pause on a node.
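A minimal React sketch of that affordance, assuming a short hover delay before the button appears; the component and handler names here are hypothetical, not Ooha's source.

```tsx
import React, { useRef, useState } from "react";

// Shows a "Branch" button once the pointer has rested on a node for ~400 ms.
function NodeCard({ onBranch, children }: { onBranch: () => void; children: React.ReactNode }) {
  const [showBranch, setShowBranch] = useState(false);
  const timer = useRef<ReturnType<typeof setTimeout> | undefined>(undefined);

  return (
    <div
      onMouseEnter={() => { timer.current = setTimeout(() => setShowBranch(true), 400); }}
      onMouseLeave={() => { clearTimeout(timer.current); setShowBranch(false); }}
    >
      {children}
      {showBranch && <button onClick={onBranch}>Branch</button>}
    </div>
  );
}
```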
Color and visual hierarchy proved more contentious than expected. Artists in our study had strong, often conflicting preferences about color usage. Some wanted rich, customizable color coding for different exploration threads, while others preferred minimal visual distinction to avoid influencing creative decisions. Our solution was a subtle, monochromatic design with optional color customization hidden in advanced settings.
Quantitative Results
We collected comprehensive metrics during the evaluation phase to quantify the impact of the spatial interface on creative workflows. Participants completed identical creative tasks using both the AI Branching Canvas and a traditional chat interface (randomized order to control for learning effects).
- Creative output: 340% increase in unique variations generated per session
- Task efficiency: 62% reduction in time from brief to final output
- Cognitive load: 43% decrease in reported mental effort on the NASA-TLX (see the note after this list)
- Exploration depth: an average of 4.7 branches explored per creative task
- Model comparison: 89% of decisions involved comparing multiple models
- Return rate: 73% of branches were revisited during sessions
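For context on the cognitive load measure: NASA-TLX aggregates six workload subscales (mental demand, physical demand, temporal demand, performance, effort, and frustration). Assuming the raw, unweighted variant of the scale, each participant's per-task score is simply the mean of the six subscale ratings,

$$\mathrm{TLX}_{\text{raw}} = \frac{1}{6}\sum_{i=1}^{6} r_i, \qquad r_i \in [0, 100],$$

so the 43% figure reflects a drop in this averaged rating when participants worked in the spatial interface rather than the chat interface.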
Qualitative Insights
Beyond the quantitative metrics, the qualitative feedback from UCSC participants revealed profound shifts in how they conceptualized AI-assisted creation. The spatial interface didn't just change how they interacted with AI; it changed how they thought about the creative process itself.
The visual nature of the interface particularly resonated with artists accustomed to working with spatial media. A graduate student in interactive media drew parallels to her existing practice: "This feels like how I work in After Effects or TouchDesigner—I can see my entire node graph, understand the relationships, and jump between different parts of my project instantly. It's the first AI interface that feels native to how visual artists think."
Challenges and Limitations
The study also revealed several challenges that warrant further investigation. Three participants initially felt overwhelmed by the freedom of the spatial interface, expressing nostalgia for the simplicity of linear chat. "Sometimes constraints are helpful," noted an undergraduate student. "With infinite space and infinite branches, I sometimes don't know where to start or when to stop."
Technical limitations also emerged during intensive use. When participants created more than 100 nodes in a single session, performance degradation became noticeable on older hardware. While this affected only 2 participants during our study, it highlights the need for optimization as users engage in increasingly complex explorations.
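One common mitigation, which the study prototype did not implement, is to cull nodes that fall outside the visible viewport before rendering; a rough sketch under that assumption:

```typescript
// Sketch: render only nodes inside (or near) the current viewport.
// One common way to keep large canvases responsive; not from the Ooha codebase.
interface Positioned { position: { x: number; y: number } }
interface Viewport { x: number; y: number; width: number; height: number }

function visibleNodes<T extends Positioned>(nodes: T[], view: Viewport, margin = 200): T[] {
  return nodes.filter((n) =>
    n.position.x >= view.x - margin &&
    n.position.x <= view.x + view.width + margin &&
    n.position.y >= view.y - margin &&
    n.position.y <= view.y + view.height + margin
  );
}
```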
Future Directions
The success of the UCSC study has opened several avenues for future development. Participants consistently requested collaborative features, envisioning shared canvases where multiple artists could explore together in real-time. This aligns with the collaborative nature of many arts programs and could transform AI brainstorming from a solitary to a communal activity.
Integration with existing creative tools emerged as another priority. Seven participants explicitly requested the ability to export their branching explorations to tools like Figma, Adobe Creative Suite, or code repositories. The canvas could serve as a creative preprocessing layer that feeds into established production workflows.
Design Principles & Key Takeaways
For AI Tool Designers
Spatial interfaces align naturally with creative cognition: When designing tools for creative professionals, consider how spatial organization can reduce cognitive load and enhance exploration. The 43% reduction in cognitive load we measured wasn't just about feature efficiency—it was about matching the interface to how visual thinkers naturally process information.
Enable parallel comparison: The 340% increase in creative variations came from eliminating the "fear of losing context." Users generate more possibilities when they can see all branches simultaneously rather than committing to one path.
For Educators
The branching paradigm offers pedagogical value: Students using Ooha could visualize decision paths and understand how different prompts lead to different outcomes. Rather than abstractly learning about prompt engineering, they saw concrete results of their prompt variations mapped spatially. Several participants explicitly noted they learned more about AI capabilities through visual comparison than through text tutorials.
For Researchers
Interface paradigms significantly impact AI utility: The dramatic improvements in creative output metrics (340% increase) suggest that AI capability alone doesn't determine effectiveness—interface design matters enormously. Further research into spatial interfaces could unlock new forms of human-AI collaboration beyond creative domains.
Co-design reveals fundamental design requirements: Starting with co-design workshops rather than assuming user needs allowed us to discover that creative professionals fundamentally think spatially. Had we designed based on chat interface patterns, we would have created an optimization of existing paradigms rather than a new paradigm entirely.
Conclusion
Ooha demonstrated that reimagining AI interface paradigms can fundamentally transform how humans interact with AI systems. By aligning the interface with natural creative processes—branching exploration, spatial organization, and parallel comparison—we enabled artists to work with AI rather than against it. The quantitative results were striking (340% more variations, 62% faster solutions, 43% reduced cognitive load), but the qualitative insights were more profound: users described feeling liberated from constraints they didn't realize were constraining them.
Beyond metrics: The project's open-source launch has yielded diverse use cases we didn't anticipate—academic researchers using Ooha for literature review exploration, educators teaching prompt engineering through visual comparison, entrepreneurs using it for rapid business concept exploration. This organic expansion validates that the spatial paradigm addresses fundamental needs beyond our initial scope.
Future implications: As AI capabilities continue advancing, interface design will become an even more critical differentiator. Companies competing on AI features alone will discover what we learned: paradigm-level thinking about interaction design can create leaps in effectiveness that incremental feature improvements cannot match. Ooha points toward a future where AI interfaces adapt to human cognitive patterns rather than forcing humans to adapt to AI constraints.
Skills & Methods Demonstrated
Research: UX Research • Co-Design Workshops • Usability Testing • Qualitative Analysis • Mixed-Methods Research
Design: Interaction Design • Interface Design • User-Centered Design • Prototyping • Information Architecture
Development: Full-Stack Development • React & Node.js • LLM Integration • API Development • Open Source
Evaluation: Quantitative Analysis • Think-Aloud Protocol • NASA-TLX • Statistical Analysis