Explainable
Explainable bridges the AI Literacy gap by offering a hands-on, visual learning experience for learners, builders, and curious minds seeking to explore generative AI with greater clarity.
Client
Personal Project | Renaiya
Season
Summer 2025
Timeline
2 weeks | ongoing
project overview
project summary
Explainable is an interactive educational Kaggle notebook designed to demystify generative AI capabilities and behaviors by presenting technical concepts as engaging, modular explanations. Its goal is to empower users to build trust in AI through understanding, rather than blind adoption.
This project was developed as the capstone submission for the Gen AI Intensive Course with Google, held from March 31 to April 4, 2025. The original Kaggle notebook was completed over two weeks, from April 4 to April 20, 2025. Explainable is now evolving into a web-based learning platform beyond the Kaggle environment.
Explainable breaks down foundational generative AI concepts into clear, digestible modules. It covers core capabilities like prompting, structured output, embeddings, retrieval-augmented generation (RAG), and GenAI evaluation. Each section includes interactive examples and visual explanations designed to make complex AI behaviors approachable for learners, builders, and curious minds alike.
team + role
I served as the sole designer and developer, supported by AI collaboration tools including Gemini, GPT, and Claude. These AI assistants helped me brainstorm ideas and troubleshoot technical challenges during development and integration phases.
This project also drew extensively on published studies, reports, whitepapers, and open-source tools. Special thanks to the Google course team and contributors of these invaluable resources. A full list of citations can be found here: <link>.
challenges
Complexity Barrier: Generative AI concepts are often steeped in jargon and technical detail, making them inaccessible to non-experts.
Trust Deficit: Users tend to either blindly trust AI or reject it due to misunderstanding how it works and its limitations.
Lack of Interactive Learning Tools: Few resources offer hands-on, modular, and visual approaches to learning AI concepts that adapt to different learning styles.
goals
Simplify & Clarify: Break down core GenAI capabilities into approachable, bite-sized modules that anyone can grasp.
Build Trust Through Understanding: Empower users to critically engage with AI by making its inner workings transparent and comprehensible.
Create Interactive Experiences: Develop engaging demos and examples that encourage exploration and experimentation rather than passive consumption.
outcomes
Modular Educational Toolkit: Delivered an interactive Kaggle notebook covering essential GenAI topics—prompting, embeddings, RAG, structured output, and evaluation.
Positive User Engagement: Early feedback (informal testing and peer review) highlighted increased confidence and clarity among learners.
Foundation for Growth: Established a scalable framework that is now evolving into a web-based learning platform to reach a broader audience.
discovery highlights
Understanding Scope
This project focuses on creating an accessible, hands-on educational tool to help non-technical users grasp core generative AI concepts. While it meets the requirements of the Google Gen AI Intensive Course Capstone, it intentionally limits scope to foundational capabilities and practical learning experiences, leaving out advanced technical content and production-level features. The project is evolving toward a more interactive web-based platform to broaden its impact and accessibility.
in the scope
competition requirements
Build a publicly viewable, fully functional Kaggle notebook
Demonstrate at least three generative AI capabilities
Provide clear documentation explaining the use case and AI implementations
audience requirements
Design for non-technical users seeking approachable GenAI education
Include interactive, modular demos to build foundational understanding
out of scope
resource constraints
Limited time and tooling restricted development to Kaggle notebook environment
No extensive user testing or video content included within the competition timeframe
not aligned
Not intended to replace formal AI education or serve advanced learners seeking deep technical mastery.
Does not provide exhaustive technical documentation or in-depth exploration of advanced GenAI capabilities.
Unsuitable for developers needing production-ready AI tools, real-time deployment, or MLOps workflows.
looking ahead
in progress
Expanding to a web-based learning platform to improve accessibility and interactivity
Enhancing UI/UX design to better serve non-technical audiences
future thoughts
Incorporate broader AI literacy topics and additional capabilities
Add personalized learning pathways and community engagement features
Generative AI has experienced explosive growth, rapidly embedding itself into healthcare, education, media, retail, and daily life. Despite this ubiquity, public understanding of how AI works, its capabilities, limitations, and risks lags significantly behind adoption rates. Researching the context and landscape around this problem validated my assumptions and confirmed the value of an accessible learning tool.
40%
of U.S. adults used generative AI in 2024, but only 30% could correctly identify six basic AI examples (Pew Research, 2023).
86%
of people say AI outputs need to be more transparent for them to trust the results (Keragon, 2025).
71%
of organizations report regular use of GenAI in at least one business function as of early 2025; up from 65% in early 2024 (McKinsey & Company, 2025).
genai literacy
Awareness Levels: A 2025 Pew Research Center study indicates that while awareness of AI is growing, understanding remains limited among the general public.
Trust and Adoption: Understanding AI processes enhances trust. Users are more likely to trust AI outputs when they comprehend how data is collected and processed (SAGE Journals).
Equity and Inclusion: Addressing the AI literacy gap is crucial to prevent exacerbating existing inequalities, as underrepresented groups may be disproportionately affected by AI-driven changes (UNESCO).
existing solutions
Current Resources: While some resources aim to demystify AI, there is a scarcity of materials specifically designed for non-technical audiences that combine plain language explanations with interactive learning.
Elements of AI: This free online course (a collaboration between the University of Helsinki and Reaktor, launched in 2018) was one of the pioneers in plain-language AI education. It requires no programming and uses everyday metaphors to explain AI concepts. It has seen massive uptake: over 1,000,000 people from 170+ countries have taken the Elements of AI course to learn the basics (fcai.fi).
outlining opportunities
Opportunities for Development: Address the material and resource gap for non-technical audiences by providing accessible, interactive content tailored to the general public, thereby fostering greater understanding and trust in GenAI technologies.
Informed Decision-Making and Participation: As AI influences everything from what news we see to what medical treatments we’re offered, people need knowledge to make informed choices. Public awareness is “a first step toward engagement in debates about AI’s appropriate role and boundaries,” notes Pew Research (pewresearch.org).
understanding
our audience
Explainable is designed to meet the needs of diverse users interested in generative AI, whether they are just starting to build foundational knowledge, exploring AI behavior without technical jargon, or simply curious about what AI can and cannot do. The platform guides learners through key concepts using interactive demos and clear explanations, encouraging hands-on experimentation and critical reflection.
process highlights
Ideation + Conceptualization
The ideation and conceptualization phase focused on transforming complex generative AI topics into clear, engaging learning experiences for non-technical users. This involved carefully mapping out a progressive learning journey, synthesizing accessible content through AI-assisted research, and designing hands-on interactive modules. By grounding abstract concepts in relatable examples and practical demos, the project aimed to foster deep understanding and curiosity.
01
Defining the Learning Journey
I started by breaking down complex generative AI concepts into manageable, digestible modules designed to build understanding progressively. The goal was to create approachable explanations with minimal jargon, supplemented by relatable, real-world examples that resonate with non-technical audiences.
02
Research & Content Generation
Using tools like NotebookLM and Google’s GenAI whitepapers, I synthesized accurate and accessible content. This research phase was iterative, involving AI-assisted drafts followed by manual refinement to ensure clarity and consistent tone for diverse learners.
03
Designing Interactive Experiences
To foster engagement, I developed practical demos such as puppy naming for few-shot prompting and calendar generation from meeting notes for structured output. Each module was designed as a mini learning journey encouraging exploration, experimentation, and reflection.
Development + Implementation
The development of Explainable involved integrating advanced GenAI technologies within the unique constraints of the Kaggle environment. This required a balance of robust technical implementation and creative problem-solving to overcome platform limitations. Throughout, the focus remained on delivering a seamless, interactive experience that enables users to explore complex AI capabilities with clarity and confidence.
01
Technical Integration
I built Explainable as an interactive Kaggle notebook, leveraging Python and powerful GenAI tools including LangChain, ChromaDB, Google Gemini API, and Hugging Face embeddings. This setup allowed me to showcase advanced capabilities like Retrieval-Augmented Generation and Structured Output within a single, unified experience.
02
Overcoming Platform Constraints
Developing on Kaggle presented unique challenges such as slow dependency loading, minimal error feedback, and limited UI flexibility. I overcame these with iterative debugging, AI-assisted troubleshooting, and creative UI enhancements like a toast message system for user feedback.
03
Enhancing User Experience
To guide learners through complex AI workflows, I implemented clear progress and error messages in plain language, scaffolded content flow, and visual cues to support diverse learning styles. This ensured the notebook remained accessible, transparent, and engaging despite technical complexity.
UX Design Enhancements
To make Explainable both accessible and engaging, I prioritized user experience enhancements that promote clarity, transparency, and personalization within the Kaggle notebook environment. These design improvements ensure learners receive timely feedback, navigate complex AI concepts with ease, and interact with content tailored to their inputs. By balancing technical sophistication with thoughtful UX design, the project supports a broad audience in building confidence and understanding around generative AI.
Real-Time Feedback + User Guidance
Toast Notification System:
Designed lightweight toast messages to provide users with immediate feedback on progress, success, or errors.
Plain Language Communication:
Messages use accessible, jargon-free language to keep users informed without confusion.
Process Transparency:
This feedback loop helps users stay aligned and reduces uncertainty when interacting with complex AI workflows.
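The toast pattern described above can be sketched as a small helper pair: one function builds the styled HTML, another renders it inline in the notebook. This is an illustrative reconstruction, not the notebook's exact code; the style values and function names are assumptions.

```python
def format_toast(message: str, kind: str = "info") -> str:
    """Build the HTML for a lightweight inline toast banner."""
    styles = {
        "success": "#2e7d32",  # green
        "error": "#c62828",    # red
        "info": "#1565c0",     # blue
    }
    color = styles.get(kind, styles["info"])
    return (
        f'<div style="padding:8px 12px;border-left:4px solid {color};'
        f'background:#f5f5f5;border-radius:4px;margin:4px 0;">'
        f"{message}</div>"
    )

def toast(message: str, kind: str = "info") -> None:
    """Render the toast in notebook output, with a plain-text fallback."""
    try:
        from IPython.display import display, HTML
        display(HTML(format_toast(message, kind)))
    except ImportError:
        print(f"[{kind.upper()}] {message}")  # fallback outside notebooks
```

Keeping the HTML construction separate from the rendering makes the messaging easy to restyle in one place, which matters when the same helper is reused across every module.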
Structured, Scaffolded Learning Experience
Modular Content Organization:
The notebook is divided into clear, progressive sections that build foundational knowledge step-by-step.
Interactive Code Blocks:
Users can experiment safely with live examples that reinforce each concept.
Support for Diverse Learning Styles:
Scaffolded flow encourages exploration and active engagement, making it easier for learners of varying backgrounds to progress at their own pace.
Personalization + Visual Clarity
Dynamic Content Based on User Input:
The notebook detects uploaded files (e.g., meeting notes) and adjusts demos accordingly, increasing relevance.
Visual Emphasis and Consistent Formatting:
Important code, outputs, and instructions are highlighted and clearly separated for easier navigation.
Use of Icons and Clear UI Elements:
Visual cues help users quickly identify content types and understand the learning path.
Delivery + Launch
Within a tight two-week capstone timeframe, I successfully delivered and publicly launched a fully interactive Kaggle notebook demonstrating core generative AI concepts through approachable, real-world examples.
What Was Delivered and Launched
Key accomplishments include:
Personalized onboarding with user name, timezone, and dataset uploads.
Integration of Gemini API with retry logic for robust model interaction.
Hugging Face embeddings combined with ChromaDB for effective document retrieval.
Polished, digestible modules covering Few-Shot Prompting, Structured Output, Embeddings, and Retrieval-Augmented Generation (RAG).
A live RAG assistant that accurately answers questions from user-provided study notes.
Friendly, accessible UI messaging providing clear progress updates and error handling.
Comprehensive instructional content grounded in real-world relevance.
What Didn’t Launch
Due to time and platform constraints, some planned features and content were not included in the initial release, such as:
Video tutorials and walkthroughs demonstrating key concepts visually.
Additional advanced modules covering GenAI evaluation and ethical considerations.
More extensive user testing and analytics instrumentation to gather detailed usage data.
Feedback and Future Iterations
To enable continuous improvement, I embedded multiple feedback channels within the notebook:
Clear, inviting prompts encouraging users to leave feedback on clarity, usability, and content via an external Google Form.
A detailed feedback form that captures user familiarity, difficulty, favorite sections, and suggestions for improvement.
Options for users to opt-in to updates on Explainable’s progress and future development.
This setup ensures that future versions can evolve responsively based on real user insights, driving enhancements in accessibility, interactivity, and content depth beyond the initial launch.
What’s next
The next phase of the project focuses on evolving the platform into a richer, more personalized, and scalable learning experience that meets the needs of a diverse audience. Moving beyond the Kaggle notebook environment, the project will be redeveloped as a dedicated web-based learning platform and source for demystifying generative AI, empowering users to critically understand, trust, and effectively apply AI technologies in their personal and professional lives.
Richer interactivity:
Enhanced UI elements, smoother navigation, and real-time responsiveness.
Broader accessibility:
Compatibility across devices and browsers with improved user experience.
Expanded content delivery:
Incorporation of multimedia elements such as video tutorials, animations, and guided walkthroughs.
Expanding Module Coverage
Additional advanced topics will be developed to deepen AI literacy, including expanded labs for GenAI Evaluation and Embeddings, Ethical AI, and Multimodal AI.
feature highlights
prompting
The Few-Shot Prompting module in Explainable demonstrates how providing just a few examples can significantly guide AI behavior without the need for extensive model training. Using a relatable, step-by-step example of naming a new puppy, this module walks users through zero-shot, one-shot, and few-shot prompting techniques. Users interact with live code blocks powered by Gemini models, observing firsthand how example prompts influence the quality, consistency, and relevance of AI-generated responses.
By comparing outputs generated with no examples versus those with carefully chosen examples, the module concretely shows how few-shot prompting helps the AI understand desired formats and categories, resulting in more accurate and reliable outputs.
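The zero-shot versus few-shot comparison can be sketched with a small prompt builder. The example descriptions, names, and the commented model call are illustrative assumptions; the real notebook sends these prompts to Gemini.

```python
# Hand-picked example pairs the prompt can include (assumed for illustration).
EXAMPLES = [
    ("energetic golden retriever who loves the beach", "Sunny"),
    ("calm, quiet gray puppy", "Misty"),
]

def build_prompt(description: str, shots: int = 0) -> str:
    """Assemble a zero-, one-, or few-shot puppy-naming prompt."""
    lines = ["Suggest one name for the puppy described."]
    for desc, name in EXAMPLES[:shots]:  # include 0..N worked examples
        lines.append(f"Puppy: {desc}\nName: {name}")
    lines.append(f"Puppy: {description}\nName:")
    return "\n\n".join(lines)

zero_shot = build_prompt("playful brown labrador")           # no examples
few_shot = build_prompt("playful brown labrador", shots=2)   # two examples
# response = model.generate_content(few_shot)  # Gemini call, not run here
```

Running both prompts side by side is what lets learners see that the few-shot version steers the model toward the format and tone of the examples.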
Challenges Addressed
Breaking Down Complexity:
Few-shot prompting can be abstract and technical. This module distills the concept into a fun, concrete use case—puppy naming—which makes the underlying mechanics easier to grasp for non-experts.
Building User Trust:
By allowing users to experiment with different numbers and types of examples, the module reveals how AI reasoning adapts, helping learners appreciate the “why” behind AI behavior rather than blindly trusting outputs.
Encouraging Active Learning:
The interactive code cells invite users to try their own prompts and examples, supporting hands-on experimentation rather than passive consumption.
goals achieved
Simplify & Clarify:
Use minimal jargon and relatable examples to explain the difference between zero-shot, one-shot, and few-shot prompting clearly.
Promote Transparency:
Make AI’s response variability understandable by linking prompt design choices to output differences users can observe directly.
Engage Through Interactivity:
Provide live, editable code examples that users can run and modify, deepening comprehension through exploration.
Outcomes + Impact
Effective Concept Demonstration:
The puppy naming example resonated with users during informal testing, making an abstract AI technique tangible and memorable.
Increased User Confidence:
Seeing how prompt examples affect AI outputs helped users feel more in control and less intimidated by generative AI.
Strong Foundation for Expansion:
This module’s clear structure and engaging interactivity serve as a blueprint for subsequent modules, reinforcing Explainable’s mission to democratize AI literacy.
structured output
The Structured Output module teaches users how to transform AI responses from free-form text into predictable, machine-readable formats such as JSON or calendar .ics files. Using a practical example of converting messy meeting notes into a clean, structured calendar schedule, the module guides learners through the importance and mechanics of controlled output formats.
Interactive code cells let users upload their own meeting notes or use sample data, then see how the AI processes unstructured input into a standardized format. This hands-on experience makes the abstract concept of structured output tangible and directly relevant.
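The validation step this module relies on can be sketched as follows: ask the model for JSON matching a schema, then parse and check it before building the calendar. The schema fields and the `raw` stand-in response are assumptions; in the notebook, `raw` would come from the model.

```python
import json

# Instruction appended to the prompt so the model returns parseable output
# (wording is an assumption, not the notebook's exact prompt).
SCHEMA_HINT = (
    "Return ONLY a JSON array of events, each with "
    '"title", "date" (YYYY-MM-DD), and "time" (HH:MM).'
)

def parse_events(raw: str) -> list:
    """Parse and validate the model's JSON so downstream code can trust it."""
    events = json.loads(raw)
    required = {"title", "date", "time"}
    for event in events:
        missing = required - event.keys()
        if missing:
            raise ValueError(f"event missing fields: {missing}")
    return events

# Stand-in for a model response to the meeting-notes prompt.
raw = '[{"title": "Design sync", "date": "2025-04-10", "time": "14:00"}]'
events = parse_events(raw)
```

Validating before use is the point of the module: once the output is guaranteed to have a fixed shape, generating an `.ics` file or any other artifact becomes straightforward string templating.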
challenges addressed
Demystifying Technical Complexity:
Structured output can seem technical and abstract. By grounding it in a real-world task—building a personalized calendar from raw notes—the module makes the concept concrete and accessible.
Bridging the Trust Gap:
Showing consistent, parseable AI output builds user confidence in AI reliability and applicability, reducing skepticism toward generative models.
Facilitating Interactive Learning:
Allowing users to upload their own notes and observe customized outputs fosters active participation and deeper understanding.
goals achieved
Simplify & Clarify:
Explain structured output in plain language, emphasizing how it turns “messy” AI responses into usable data formats.
Show Practical Impact:
Use a relatable use case—calendar generation from meeting notes—to demonstrate real-world value.
Encourage Exploration:
Provide editable, runnable code blocks so users can experiment with their own data and see immediate results.
Outcomes + Impact
Tangible Learning Experience:
Users gain direct insight into how structured output enhances AI usefulness beyond just text generation.
Positive Engagement:
The ability to upload personal notes and receive a formatted calendar schedule was praised during early feedback for its clarity and relevance.
Foundation for Growth:
This module’s success in translating structured output principles into an interactive demo reinforces Explainable’s approach to hands-on AI education.
Embeddings
The Embeddings module introduces users to the concept of converting text into numerical vectors that capture semantic meaning. While the original plan included an interactive grocery list clustering demo to visually illustrate semantic similarity, this was not launched. Instead, embeddings play a crucial foundational role in supporting the Study Notes Assistant interactive demo within the Retrieval-Augmented Generation (RAG) module.
In the notebook, embeddings enable the AI to understand the relationships and similarities between user-uploaded study notes and user queries, facilitating accurate and contextually relevant responses.
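The similarity idea behind this can be shown with tiny hand-made vectors standing in for real embeddings. The notebook itself uses Hugging Face sentence embeddings; the three-dimensional vectors and labels below are purely illustrative.

```python
import math

def cosine_similarity(a, b) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Pretend these are embeddings of three study-note snippets (toy values).
vectors = {
    "photosynthesis basics": [0.9, 0.1, 0.0],
    "chlorophyll and light": [0.8, 0.2, 0.1],
    "world war two dates": [0.0, 0.1, 0.9],
}

# Stand-in embedding for the query "how do plants make energy?"
query = [0.85, 0.15, 0.05]
best = max(vectors, key=lambda k: cosine_similarity(query, vectors[k]))
```

Even with made-up numbers, the ranking shows the key behavior: the query lands nearest the semantically related notes, not the unrelated one, which is exactly what the RAG assistant exploits at retrieval time.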
challenges addressed
Making Abstract Concepts Concrete:
Embeddings involve complex vector mathematics that can be intimidating. Although the grocery list visualization was not implemented, embeddings’ role in improving semantic search was demonstrated through the RAG assistant, making the concept accessible via practical application.
Building User Trust:
Demonstrating how embeddings underpin effective information retrieval helps users appreciate AI’s deeper understanding beyond keyword matching, increasing trust in AI outputs.
Supporting Active Exploration:
Through the RAG demo, users engage with embeddings indirectly by querying their own notes and seeing meaningful, semantically relevant answers.
goals achieved
Simplify & Clarify:
Explain embeddings as the mathematical representation of meaning that enables semantic search and similarity detection.
Demonstrate Real-World Utility:
Highlight embeddings’ role in enabling precise document retrieval and contextual AI responses within the Study Notes Assistant.
Engage Through Interaction:
Provide an experiential understanding of embeddings via the RAG demo’s question-answer workflow rather than a standalone clustering visualization.
Outcomes + Impact
Enhanced Conceptual Understanding:
Users gained insight into embeddings’ importance as the backbone of semantic retrieval, reinforcing the notebook’s layered learning approach.
Increased Confidence:
Seeing embeddings’ practical impact in delivering accurate study note answers improved users’ confidence in AI’s contextual reasoning.
Solid Foundation for Advanced Learning:
This module’s focus supports and strengthens subsequent understanding of RAG and AI evaluation techniques.
Retrieval-Augmented Generation (RAG)
The RAG module introduces users to a powerful technique that enhances AI responses by combining language generation with external knowledge retrieval. Rather than relying solely on the model’s training data, RAG allows AI to search relevant documents in real time, improving accuracy and contextual relevance.
In the notebook, users interact with a live study notes assistant that retrieves information from uploaded documents and generates precise, grounded answers. This hands-on experience helps users understand how RAG mitigates common AI issues like hallucination and outdated knowledge.
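The retrieve-then-generate loop can be sketched end to end. The notebook uses ChromaDB with embeddings for retrieval; a toy word-overlap retriever stands in here so the flow runs without external services, and the sample notes are invented for illustration.

```python
# Toy corpus standing in for user-uploaded study notes.
NOTES = [
    "Mitochondria are the powerhouse of the cell.",
    "The French Revolution began in 1789.",
    "RAG grounds model answers in retrieved documents.",
]

def retrieve(query: str, docs, k: int = 1):
    """Rank documents by word overlap with the query (toy retriever)."""
    q = set(query.lower().split())
    ranked = sorted(docs, key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(query: str, docs) -> str:
    """Augment the query with retrieved context before generation."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

prompt = build_rag_prompt("when did the french revolution begin", NOTES)
# answer = model.generate_content(prompt)  # generation step, not run here
```

Because the prompt explicitly restricts the model to retrieved context, the answer stays grounded in the user's own notes, which is how the live assistant curbs hallucination.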
challenges addressed
Demystifying Complex AI Processes:
RAG involves multiple components—retrieval, augmentation, generation—which can be overwhelming. This module breaks down each step clearly and demonstrates the workflow interactively.
Building Trust Through Transparency:
By showing the documents used to generate responses, users gain confidence in AI outputs, reducing blind trust and skepticism alike.
Encouraging Active Engagement:
Allowing users to upload their own notes and query them directly promotes exploration and personal relevance.
goals achieved
Simplify & Clarify:
Explain the multi-step RAG process in plain language with visual and interactive aids.
Show Real-World Value:
Use practical demos like a study notes assistant to highlight RAG’s application in knowledge-intensive tasks.
Promote User Experimentation:
Enable users to test queries and observe how retrieval improves AI answers in real time.
Outcomes + Impact
Concrete Understanding of RAG:
Users reported increased clarity on how AI leverages external data sources to enhance responses.
Improved User Confidence:
Transparency about data retrieval and grounding helped build trust in AI outputs.
Strengthened Educational Framework:
This module complements previous topics and paves the way for advanced discussions on AI evaluation and ethical use.
genai evaluation
Originally, this module was planned to include a gamified poker-style bluffing game designed to help users identify AI hallucinations in a fun and engaging way. However, due to time constraints, the evaluation section was simplified to focus on clear explanations and straightforward interactive examples.
This module teaches users how to critically assess AI outputs for quality, accuracy, bias, and reliability. Through practical descriptions and simple interactive exercises, users learn the importance of continuous AI evaluation to build trustworthy and safe generative AI systems.
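One simple automated check of the kind this module describes is flagging answer sentences with little support in the source text. The threshold, the naive sentence split, and the example strings are all illustrative assumptions, not the notebook's exact exercise.

```python
def unsupported_sentences(answer: str, source: str, threshold: float = 0.5):
    """Flag answer sentences whose words barely overlap the source text."""
    source_words = set(source.lower().split())
    flagged = []
    for sentence in answer.split("."):  # naive sentence split for the demo
        words = set(sentence.lower().split())
        if not words:
            continue
        support = len(words & source_words) / len(words)
        if support < threshold:
            flagged.append(sentence.strip())
    return flagged

source = "the eiffel tower is in paris and opened in 1889"
answer = "The Eiffel Tower is in Paris. It was designed by aliens."
flags = unsupported_sentences(answer, source)  # flags the invented claim
```

Crude as it is, a check like this makes the lesson concrete: grounded statements pass, while the fabricated sentence stands out, which is the habit of mind the module is trying to build.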
challenges addressed
Making Complex Evaluation Accessible:
AI evaluation often involves technical metrics and concepts. The module breaks these down into plain language with relatable examples, avoiding jargon to reach non-expert learners.
Building Critical Awareness and Trust:
Users learn why AI can produce errors (hallucinations), biased responses, or inappropriate content, and how evaluation helps detect and mitigate these issues.
Encouraging Active Reflection:
Clear guidance promotes thoughtful consideration of AI strengths and limitations, empowering users to engage responsibly with AI.
goals achieved
Simplify & Clarify:
Provide an easy-to-understand overview of key evaluation methods like human review, automated checks, and AI self-assessment.
Promote Critical Thinking:
Highlight common pitfalls such as bias, hallucinations, and toxicity, and teach users how to recognize and respond to them.
Deliver Practical Value:
Emphasize the real-world importance of evaluation for safe, reliable AI applications across domains like healthcare, cybersecurity, and content creation.
Outcomes + Impact
Increased User Awareness:
Learners gain a foundational understanding of why AI evaluation is essential and what challenges it addresses.
Improved Confidence:
Users feel more equipped to question and interpret AI outputs critically, rather than accepting them at face value.
Completion of Learning Journey:
This module caps the Explainable series by tying together technical knowledge and ethical considerations, fostering responsible AI use.
user feedback + error handling
Explainable incorporates a thoughtful system for managing user feedback and error handling that enhances both the learning experience and the long-term sustainability of the project. The design centers on clear communication, proactive issue detection, and iterative improvement, ensuring users feel supported and engaged throughout their AI exploration journey.
Systems Thinking Approach
Integrated Feedback Loops:
Embedded feedback prompts and a dedicated Google Form collect qualitative and quantitative user insights, allowing continuous refinement based on real-world experiences.
Proactive Error Detection:
Comprehensive try-except blocks around API calls and file operations capture issues early and prevent disruptive crashes.
Resilient Design:
Fallback mechanisms—like default sample data when user uploads fail—ensure uninterrupted learning even when errors occur.
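The retry and fallback patterns above can be sketched together. This is a minimal reconstruction of the approach, assuming a generic flaky callable; the notebook's actual wrapper around the Gemini API and file uploads may differ in names and details.

```python
import time

# Bundled default data used when an upload fails (content is illustrative).
SAMPLE_NOTES = "Team sync on Friday at 10:00 to review the launch plan."

def call_with_retry(fn, retries: int = 3, delay: float = 1.0):
    """Retry a flaky call with simple backoff, re-raising on final failure."""
    for attempt in range(retries):
        try:
            return fn()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: surface the real error
            time.sleep(delay * (attempt + 1))  # back off: 1x, 2x, ...

def load_notes(read_upload) -> str:
    """Fall back to bundled sample data when a user upload fails."""
    try:
        return read_upload()
    except Exception:
        return SAMPLE_NOTES  # keep the lesson going with default data
```

Wrapping the API call and the file read in these two helpers means a transient failure shows up as a friendly toast message rather than a stack trace, which is the resilience goal described above.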
User Experience Considerations
Clear, Contextual Messaging:
Real-time toast notifications provide non-technical, jargon-free updates on processing states, successes, and errors, minimizing confusion and user frustration.
Guided Recovery:
When errors occur, the system offers actionable feedback, empowering users to retry, correct inputs, or understand limitations without feeling lost.
Encouraging Engagement:
Feedback requests are framed positively to motivate users to share experiences, fostering a collaborative community around the tool.
Sustainability & Future-Proofing
Data-Driven Iteration:
The structured feedback collection enables evidence-based decisions for prioritizing enhancements, ensuring the tool evolves in alignment with user needs.
Maintainable Codebase:
Centralized error handling and messaging functions streamline updates and reduce technical debt, supporting long-term maintenance.
Scalable User Support:
Automated status updates and self-guided error responses reduce reliance on direct support, making the tool scalable for larger audiences.
status + reflection
project status
Explainable’s initial version—a fully interactive Kaggle notebook—was completed within a two-week capstone project timeframe. The notebook successfully demonstrates core generative AI concepts through modular, hands-on examples.
Currently, the project is transitioning from a notebook format to a dedicated web-based learning platform. This ongoing development phase focuses on enhancing user experience, accessibility, and scalability to reach a wider and more diverse audience.
Future plans include incorporating user feedback, expanding content coverage, and adding personalized learning pathways.
reflection
This project was my first deep dive into building an interactive GenAI educational tool. It challenged me to learn and integrate advanced AI models and APIs while keeping user clarity front and center. Though the Kaggle notebook format limits some interactivity for non-technical users, the ongoing web platform development aims to broaden accessibility and engagement.
I’m proud to have translated complex AI concepts into approachable, hands-on learning experiences, balancing technical rigor with design empathy.