Unimelb AI Agent
AI Agent for the University of Melbourne to streamline student services through proactive support

STOP 1 AI Assistant

Challenge

Designing Scalable, Trustworthy AI Support for University Student Services

The challenge was to design an AI-powered student services agent that could meaningfully reduce friction and response times for thousands of time-sensitive enquiries, while operating within the complexity of university systems, policies, and privacy constraints. The solution needed to deliver accurate, empathetic, and accessible support across a wide range of student needs, many of them high-stress or sensitive, without overstepping institutional boundaries or undermining trust in human services. At the same time, the agent had to responsibly recognise its limits, clearly escalate complex or high-risk situations to staff, and complement rather than replace existing support structures, ultimately reimagining how students access help in a scalable, equitable, and trustworthy way.

My Role

In this role, I worked closely with stakeholders, designers, and developers to understand user needs, workflows, and business requirements, translating insights into clear user stories and acceptance criteria in Jira. I analysed user enquiries and operational data to identify pain points and prioritisation gaps, and created user flows, journey maps, and access point maps using tools such as TheyDo and Miro to inform UX and onboarding improvements. I collaborated on wireframes and interface concepts in Figma, and built custom test bots using Gemini’s Gems to simulate real user scenarios and validate behaviour. I also supported backlog refinement and sprint planning, prepared test scenarios for QA, and documented workflows and system behaviour to enable consistent delivery and knowledge sharing.

Process

Design Process

The design process began with extensive user research to understand students’ needs, behaviours, language, and pain points. We conducted interviews and co-design sessions and developed detailed personas, capturing diverse perspectives, including those with varying accessibility requirements. Mapping journey flows, user flows, and access point maps in Miro allowed us to visualise how students discover and interact with support services, revealing friction points, seasonal pressures, and key opportunities for timely intervention. Complementary research, analysing support tickets, emails, and call logs in Excel, highlighted recurring gaps and high-volume enquiries, while design precedent studies, conversational design reviews, response time analysis, naming research, and minor iterative improvement research in Google Docs and Pinterest guided interaction patterns, terminology, and interface cues.

These insights informed the knowledge base and conversational design of the AI. Knowledge sources, drawn from structured systems and web-scraped content, were carefully fact-checked, simplified, and tagged with metadata to ensure clarity, consistency, and accessibility. Layered prompts, behavioural rules, and contextual grounding were developed to guide the AI’s responses, while guardrails, refusal patterns, and escalation triggers ensured sensitive topics were handled appropriately. Visual and interaction design was prototyped in Figma, TheyDo, and Miro, and a custom test bot allowed real-time simulation of conversations. Iterative testing, including Wizard of Oz sessions and A/B comparisons, refined tone, flow, and usability, reinforcing an empathetic, approachable experience that built trust with users while addressing real gaps in guidance and access.
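To make the guardrail and escalation behaviour above concrete, here is a minimal sketch of how refusal patterns and escalation triggers can route an incoming enquiry. The keyword lists, routing labels, and function name are illustrative assumptions for this sketch, not the actual rules or prompts used in the project.

```python
# Minimal sketch of guardrail routing for a student-services agent.
# Keyword lists and labels are illustrative, not production rules.

ESCALATION_KEYWORDS = {"urgent", "emergency", "crisis", "unsafe"}
SENSITIVE_TOPICS = {"mental health", "harassment", "financial hardship"}

def route_enquiry(message: str) -> str:
    """Return 'escalate', 'refuse', or 'answer' for a student message."""
    text = message.lower()
    if any(k in text for k in ESCALATION_KEYWORDS):
        return "escalate"   # hand over to a human advisor without delay
    if any(t in text for t in SENSITIVE_TOPICS):
        return "refuse"     # respond with a supportive referral, not advice
    return "answer"         # safe to answer from the knowledge base

print(route_enquiry("I have an urgent visa question"))  # escalate
```

In practice this logic sat behind layered prompts and contextual grounding rather than bare keyword matching, but the principle is the same: high-risk messages bypass the knowledge base entirely and go straight to human support.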

Results

Improved Student Support at Scale

The project resulted in a human-centred AI agent that significantly improved accessibility, efficiency, and trust in student services. Previously, students often fell through the cracks because support staff were overwhelmed with high volumes of non-urgent enquiries, leaving limited capacity to address critical or time-sensitive cases. Many students could not access guidance outside office hours, and urgent needs sometimes went unnoticed. The AI agent addressed this by providing 24/7 access to accurate, reliable support for common enquiries, simplifying complex institutional information into clear, actionable steps, and ensuring students with urgent needs were escalated to human advisors without delay. Staff were able to focus on high-priority cases, improving triage quality and reducing the risk of important enquiries being missed. The system also built trust through transparent behaviour, policy-aligned responses, and responsible handover to human support, demonstrating how AI can enhance rather than replace human services at scale.

Through this process, I was struck by how closely AI processing mirrors aspects of human cognition: interpreting context, recognising patterns, and navigating decisions in ways that resemble human problem solving. Conversational design, user research, and iterative prototyping reinforced the importance of empathetic language, recovery flows, and carefully designed guardrails. Working with institutional content highlighted the critical role of quality and structure in AI performance. Overall, the project strengthened my understanding of responsible, human-centred AI design as an iterative, multidisciplinary practice that addresses real student needs, institutional constraints, and the fascinating parallels between human and artificial cognition.

For more details about this project, such as Figma files, please contact me at miaicasey1971@gmail.com