CoCar

Connecting students for shared trips between campus and surrounding suburbs

Project overview

CoCar is a ride-sharing platform designed specifically for university students seeking more accessible, affordable, and social ways to commute. Our project centred on designing "Coco," a friendly and inclusive chatbot that helps users register, estimate ride costs, find compatible student drivers, and get support. The aim was to replace clunky form-based flows with natural, contextual, and responsive interactions. Coco was developed through a rigorous, user-centred, iterative design process that prioritised accessibility and conversational UX. Our work spanned Figma wireframes, Voiceflow prototypes, Wizard of Oz testing, accessibility research, A/B testing, and chatbot personality crafting.

My Role

As project coordinator and conversation design lead I:
- Directed and defined our design process, ideation, scope, testing and prototyping pipeline.
- Organised group meetings to keep us on track and ensure we delivered on time.
- Presented our work in progress to gather feedback.
- Broke down the project into smaller, simplified tasks to make it more digestible.
- Led user testing (incl. A/B testing, Wizard of Oz, desktop walkthroughs).
- Built out the first draft of our Voiceflow prototype and identified areas where JavaScript logic and API setup would be beneficial.
- Worked with the team to establish the chatbot tone and flow using conversation design principles.
- Designed our app's Figma wireframes with accessibility in mind.
- Created the first conversation flow draft and all custom components to categorise and organise flows.
- Conducted research and identified sources to inform our design decisions and justify them in our report.

Ideation

Using 10+10 sketching and HMW framing, we generated and refined dozens of ideas in response to our user challenges. We used priority matrices to decide what features made sense in a chatbot context and mapped them against the user journey. This ideation process helped us zero in on a chatbot that would assist with registration, fare estimates, driver matching, and FAQs; all in a friendly, guided manner.

Research

We began with secondary research into student transport behaviours, accessibility needs, and conversational design best practices. Using insights from real-world reports and data from the University of Melbourne and organisations like PWDA and NDIS, we identified critical gaps in mobility, affordability, and inclusion. Personas were used to represent a range of users, including students with vision impairments, mobility limitations, and sensory needs, which grounded our direction in real human stories.

Prototyping

We prototyped early with low-fidelity Figma flow diagrams, storyboards, and physical props (including a Lego-assisted walkthrough). As ideas solidified, we created high-fidelity conversation flows using Figma and built semi-functional versions in Voiceflow to test logic, tone, and usability. We explored happy paths and edge cases, paying particular attention to accessibility in error and recovery scenarios.

User testing

We tested frequently and deliberately. Wizard of Oz sessions gave us early insight into how users interacted with chatbot personas, while A/B tests allowed us to optimise flow clarity and tone. Some of our key tests compared open-ended vs. button inputs, friendly vs. directive tone, and auto-progression vs. confirmation prompts. Users preferred contextual, friendly messaging with progressive guidance, leading to higher CSAT scores and lower error rates.

Development

The final chatbot was built in Voiceflow using a modular approach. We implemented real-time APIs (e.g., Google Maps for cost estimation), JavaScript logic for fare calculations, and structured fallback paths for error handling. I handled the build logic, tested variable flow stability, and iterated the conversation model using real user input and feedback. Key development challenges included API integration, Voiceflow limitations, and making intent detection as seamless and flexible as possible.
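The fare logic itself is simple enough to sketch in a few lines. This is a minimal illustration of the kind of JavaScript we ran inside Voiceflow; the base fee and per-km/per-minute rates here are assumed for demonstration, not CoCar's actual pricing:

```javascript
// Hypothetical fare model: flat base fee plus per-kilometre and per-minute
// rates, split evenly among riders. All rates are illustrative assumptions.
function estimateFare({ distanceKm, durationMin, riders = 1 }) {
  const BASE_FEE = 2.50; // flat pickup fee (assumed)
  const PER_KM = 0.80;   // per-kilometre rate (assumed)
  const PER_MIN = 0.25;  // per-minute rate (assumed)
  const total = BASE_FEE + distanceKm * PER_KM + durationMin * PER_MIN;
  // Round each rider's share to the nearest cent.
  return Math.round((total / riders) * 100) / 100;
}
```

In the prototype, the distance and duration inputs came from the Google Maps API call, and the result was stored in a Voiceflow variable for Coco to read back to the user.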

Reflection

This was one of the most rewarding design projects I’ve worked on. It combined accessibility, conversational design, and system thinking in a way that pushed me technically and empathetically. I’ve gained confidence in designing with real constraints, advocating for universal design, and leading end-to-end testing. Looking ahead, I’d love to explore ethical frameworks for conversational AI and dive deeper into cross-cultural UX for multilingual systems. This project showed me how inclusive, conversational products can bridge real gaps in access and agency.

Deep dive

Designing Coco

Coco wasn’t just designed to function; she was designed to feel like a peer. That meant every detail of Coco’s tone, flow, and visual design had to be intentionally crafted. We wanted to avoid robotic or overly chirpy tones, aiming instead for a voice that was helpful and human without pretending to be one.

Her flows were modular but seamlessly connected, so users could move between FAQs, registration, cost estimates, or driver matching naturally, without needing to restart or feeling lost. For example, if a user suddenly asked about cost mid-registration, Coco would recognise the intent and route them accordingly, before returning them to where they left off. We also embedded confirmation prompts at key points, like checking a user’s address or ride preference, to reduce errors and help users feel in control. Microcopy mattered too. Every message was written with purpose: short, clear, and friendly. We conducted A/B testing to see how wording influenced user confidence and trust.
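The mid-flow detour behaviour described above can be sketched as a small return stack. The flow names and regex-based classifier below are illustrative stand-ins for Voiceflow's intent handling, not our actual implementation:

```javascript
// Remembers interrupted flows so a detour (e.g. a cost question mid-registration)
// can return the user to where they left off.
const flowStack = [];

function classifyIntent(message) {
  // Stand-in for Voiceflow's intent classifier (assumed keywords).
  if (/cost|fare|price/i.test(message)) return "costEstimate";
  if (/register|sign up/i.test(message)) return "registration";
  return null;
}

function handleMessage(currentFlow, message) {
  const intent = classifyIntent(message);
  if (intent && intent !== currentFlow) {
    flowStack.push(currentFlow); // remember where the user left off
    return intent;               // detour to the requested flow
  }
  return currentFlow;
}

function finishFlow() {
  // When the detour ends, resume the interrupted flow (if any).
  return flowStack.pop() ?? "mainMenu";
}
```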

Visually, the Figma prototype mirrored Coco’s personality. Spacious layouts, high-contrast buttons, and bright tones made the interface feel approachable and usable across devices. In Voiceflow, we layered in multiple response options to reduce repetition and make interactions feel more human. We also made sure Coco had boundaries: when something was out of her scope, she was honest, redirecting gently without overpromising. Coco grew and changed as we tested. Each round of feedback, whether from drop-off points, misunderstood inputs, or emotional tone mismatches, helped us sharpen her flows and expand her capabilities.

Wireframes & access points

Our Figma wireframes imagined Coco embedded in a broader student-facing ride-sharing interface, including mobile, tablet, and desktop views. We paid special attention to accessibility: maintaining strong visual hierarchy, spacing elements for touch accessibility, and adhering to WCAG AAA colour contrast ratios. Headings were structured for screen reader compatibility, and button size and spacing met tap target minimums for users with limited dexterity. Each version was responsive and designed to maintain layout and functionality across high magnification or low vision modes.

I also mocked up several access points to A/B test, to understand where users were most likely to notice and use the chatbot and which placements were most accessible. We found that the square button labelled "Ask Coco" and the circular button with CoCar's logo, both situated in the bottom right of the screen, captured the most user attention and were easier to reach, as they complemented the user's natural grip on a phone.

Access point mapping on wireframes

Conversation flows

After user testing and feedback, we iterated on our initial chatbot prototype and designed two high-fidelity conversation flows in Figma: one for user registration and account setup, and another for finding a suitable driver. These flows helped us clearly visualise the user journey and anticipate a wide range of user responses, including unexpected behaviours. We paid particular attention to crafting intuitive happy paths while building in support for ambiguity and error recovery.

We designed with our four personas in mind, especially Chloe, who has mobility impairments. This helped us realise that long text input fields could present a challenge, so we prioritised buttons and structured options wherever possible to reduce typing. We refined how questions were asked, how the chatbot confirmed user intent, and how it offered gentle re-prompts when input was unclear. Throughout, we adhered to accessibility best practices, including a legible typeface, accessible font sizes, and high colour contrast ratios (to WCAG AAA standards) in our Figma prototypes. This attention to both interaction logic and visual design ensured that our flows would be not only functional, but genuinely inclusive.

Conversation flow diagram

Voiceflow

In Voiceflow, we constructed a modular, intent-based system using a mix of capture blocks, choice blocks, conditions, and API calls to simulate real-world functionalities. We designed Coco’s flows to be flexible but contained, avoiding scope creep by clearly segmenting the chatbot’s four core capabilities: driver matching, registration, ride cost estimation, and FAQ handling. Each flow was built with intent classification in mind, so Coco could reroute users when they changed topics mid-flow. To simulate realistic responses, we integrated Google’s Distance Matrix and Address Validation APIs to estimate ride costs and ensure proper location formatting. Our design also relied on reusable components to minimise logic duplication, and fallback pathways were built into each flow to gracefully recover from user errors. We used randomised variants for key responses to reduce fatigue and improve perceived personality. Reflecting on this build, Voiceflow allowed us to strike a good balance between natural conversation and guided structure, though we were limited by some of its features, such as incomplete voice integration and the lack of deeper context memory.
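On the API side, the cost-estimate flow only needs two fields from the Distance Matrix response: distance and duration. A minimal parsing sketch, with the response shape following Google's documented Distance Matrix JSON format and error handling simplified to a single status check:

```javascript
// Pull distance (km) and duration (min) out of a Distance Matrix response.
// Assumes a single origin/destination pair, as in our cost-estimate flow.
function parseDistanceMatrix(response) {
  const element = response.rows[0].elements[0];
  if (element.status !== "OK") return null; // e.g. NOT_FOUND, ZERO_RESULTS
  return {
    distanceKm: element.distance.value / 1000, // metres → kilometres
    durationMin: element.duration.value / 60,  // seconds → minutes
  };
}
```

The returned values fed directly into the fare-calculation logic, and a `null` result triggered the flow's fallback path asking the user to re-enter their addresses.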

Screenshot of Voiceflow

Accessibility by Design

We designed for inclusivity from the start:
- Favouring button-based interactions over free text to support users with mobility impairments.
- Using readable fonts, WCAG AAA colour contrast, and responsive layouts.
- Ensuring compatibility with screen readers and keyboard navigation.
- Supporting voice-to-voice interaction in Voiceflow for blind/low-vision users.
- Designing fallback flows and progressive prompts to reduce cognitive load.

Each flow was tested against personas with diverse accessibility requirements, such as Alex (who experiences low vision) and Chloe (who has cerebral palsy). We treated accessibility not as a constraint but as a driver of better UX for all.
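The contrast checks we relied on are easy to automate. A sketch using the standard WCAG 2.x relative-luminance and contrast-ratio formulas (AAA requires 7:1 for normal body text); the helper names are ours:

```javascript
// Relative luminance of an sRGB colour, per the WCAG 2.x definition.
function luminance([r, g, b]) {
  const [R, G, B] = [r, g, b].map((c) => {
    const s = c / 255; // normalise to 0..1
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * R + 0.7152 * G + 0.0722 * B;
}

// Contrast ratio between two colours: (L_lighter + 0.05) / (L_darker + 0.05).
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AAA threshold for normal-size body text.
const meetsAAA = (fg, bg) => contrastRatio(fg, bg) >= 7;
```

Black on white scores the maximum ratio of 21:1, which is why our high-contrast button styles defaulted to near-black text on light backgrounds.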

Challenges and Limitations

One of the main challenges we faced was working within the technical constraints of the Voiceflow platform. While Voiceflow allowed for quick prototyping and intent mapping, its voice-calling feature was unstable and introduced conversational loops that we could not resolve. This required us to re-scope certain parts of the user experience and focus more heavily on text-based interaction. Integrating university login (Okta) directly into the chatbot was also outside the scope of the tools available to us. We opted to simulate the login process in our prototype to maintain flow realism. Another subtle but recurring issue was balancing fallback robustness with flow simplicity—if we added too many fallback paths, the design became hard to maintain; too few, and users could fall through the cracks. Designing for unpredictable user phrasing and intent-switching proved complex, requiring extensive user testing to iteratively fine-tune each edge case. Lastly, API responses (e.g., address verification, cost estimates) had to be mimicked or delayed, limiting real-time realism. These constraints pushed us to think more creatively and empathetically about what "good enough" looks like for prototyping while staying true to user needs.