Build Your Own EdTech: Why the Future is Local

[Image: Mac Studio computer and a large display on a white desk]

Think back to 2023, 2024, and 2025, when the EdTech market was flooded with what we now affectionately call “AI wrappers.” These were sleek, user-friendly web apps that essentially put a nice interface over a standard API call to generate lesson plans, differentiated reading passages, worksheets, and quizzes for the classroom. They felt magical at first, but they came with subscription fees, heavy enterprise costs for districts and schools, and lingering questions about student data privacy. Fast forward to today, and the script is flipping entirely. We are entering an era where the power to build, run, and maintain these tools is shifting directly into the hands of teachers and schools, and it is all happening locally.

The Rise of Desktop Power: Gemma 4 and Local LLMs

The catalyst for this shift is twofold: the explosive evolution of local language models and the rapid rise of agentic coding tools. We now have access to models like the new Gemma 4 family that are shockingly capable, yet lightweight enough to run smoothly on a wide range of standard consumer devices. Combine that hardware accessibility with intuitive hosting programs like Ollama or LM Studio, and suddenly, running a powerful, uncensored, and completely private AI on your own computer isn’t a pipe dream reserved for developers; it’s a simple download, and many of the available models run on a wide array of laptops and desktops.

Yet, the real magic happens when we introduce AI coding assistants like Claude Code, Codex, and Antigravity. You no longer need a computer science degree to build an application. These tools let you describe the exact app you want, say, a custom slide-deck generator tailored specifically to your state’s science standards, and the AI acts as your personal engineering team, writing and deploying the code for you. This is what allows you to build your own customized wrapper, which we will walk through now.

Building the “Wrapper” Yourself

What this means in practice is that the barrier to entry for creating specialized EdTech tools has effectively collapsed. We are rapidly approaching a reality, one I suspect will become completely mainstream within the next year or so, in which a teacher won’t need to ask their district administrators to approve a $20-a-month subscription for a proprietary quiz or instructional-content maker.

Instead, they will simply hop onto GitHub, download an open-source template for an education app, and point it directly to the local model running on their machine. Everything stays on the device. No student data is sent to the cloud, no ongoing subscriptions are required, and as a result, teachers retain total, uncompromising control over the output. It is a wild, exciting time. We can take the core concepts behind the hottest EdTech apps from just a few years ago and prototype them ourselves over a weekend. Below is a prompt you can customize to begin this work:

Prompt for Claude Code, Antigravity, or Codex

Project Overview: Build a full-stack EdTech web application designed for teachers. This app will act as a centralized dashboard offering multiple AI-powered instructional design tools (essentially a wrapper around a local LLM). The goal is to provide a clean, modern, and intuitive interface where educators can quickly generate classroom materials while keeping all data processing local for privacy.

Pedagogical Alignment (Cognitive Science Principles): To ensure the tools we are building actually improve educational outcomes, deeply integrate the following cognitive science principles into both the UI design and the system prompts sent to the LLM:

  • Retrieval Practice: The act of pulling information out of your brain is far more effective for long-term retention than putting information in (like rereading a text). Any app we build should prioritize active recall such as low-stakes self-testing, flashcards, or brain dumps rather than passive review. Instruct the LLM to design outputs accordingly.
  • Spaced Repetition: The human brain naturally forgets information over time. Apps should be designed to combat the “forgetting curve” by reintroducing concepts over gradually increasing intervals. Instead of “massing” all practice into one session, the AI should seamlessly weave older topics into new assignments.
  • Cognitive Load Management: A student’s working memory is strictly limited. When building custom interfaces, we must relentlessly eliminate “extraneous load”—confusing navigation, cluttered screens, or distracting animations. The UI should be invisible so the student’s cognitive energy is spent entirely on the learning task.
  • Dual Coding: People learn better from words and pictures than from words alone, provided they are aligned. When designing tools like slide generators, the AI shouldn’t just spit out bullet points and decorative clip art; it should purposefully pair text with suggested diagrams or visuals that specifically illustrate the concept being taught.
  • Actionable, Scaffolded Feedback: A simple “Correct” or “Incorrect” does very little for learning. Assessment tools must be prompted to provide immediate, explanatory feedback that gently guides the student toward the correct answer step-by-step, rather than just giving it away.
  • Worked Examples: Show, step by step, how to solve a problem, and illustrate examples versus non-examples in both the UI and the outputs produced.
Tech Stack:

  • Frontend: Next.js (App Router), React, Tailwind CSS for styling, and Framer Motion for simple micro-interactions.
  • Backend/Integration: Next.js API routes to handle requests and forward them to a local LLM endpoint (specifically, set it up to target a standard Ollama localhost port: http://localhost:11434/api/generate using the gemma:latest model).
  • Icons: Lucide React.

Core Application Features (The “Tools”): The main dashboard should feature a grid of clickable “Tool Cards.” When a teacher clicks a card, they are taken to a form specific to that tool. The tools to build are:

  1. The Lesson Planner:
    • Inputs: Grade level, Subject, Standard/Topic, and Duration.
    • Output: A structured table or formatted markdown document outlining the objective, direct instruction, guided practice, independent practice, and closure.
  2. The Quiz Generator:
    • Inputs: Topic, Number of Questions (dropdown: 5, 10, 15), Question Type (Multiple Choice, True/False, Short Answer).
    • Output: A formatted quiz with a separate, toggleable “Answer Key” section.
  3. The Differentiator (Reading Level Adjuster):
    • Inputs: Paste original text, Target Grade Level.
    • Output: The rewritten text tailored to the requested reading level.
  4. The Rubric Maker:
    • Inputs: Assignment Description, Grade Level, Grading Scale (e.g., 1-4).
    • Output: A formatted markdown table displaying grading criteria and performance levels.
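The four tools above differ mainly in the system prompt they send to the model. A minimal sketch of that mapping is below; the names `ToolId`, `SYSTEM_PROMPTS`, and `buildPrompt`, along with the exact prompt wording, are illustrative assumptions, not part of any existing codebase.

```typescript
// Hypothetical sketch: one system prompt per tool, combined with the
// teacher's form inputs before being sent to the local model.

type ToolId = "lesson-planner" | "quiz-generator" | "differentiator" | "rubric-maker";

const SYSTEM_PROMPTS: Record<ToolId, string> = {
  "lesson-planner":
    "You are an instructional designer. Produce a markdown lesson plan with sections: " +
    "Objective, Direct Instruction, Guided Practice, Independent Practice, Closure.",
  "quiz-generator":
    "You are an assessment writer. Produce a quiz in markdown, followed by a separate " +
    "'## Answer Key' section with a brief explanation for each answer.",
  "differentiator":
    "You rewrite text to a target reading level while preserving its meaning.",
  "rubric-maker":
    "You produce a markdown table rubric: one row per criterion, one column per performance level.",
};

// Append the teacher's form inputs (key: value pairs) under the tool's system prompt.
function buildPrompt(tool: ToolId, inputs: Record<string, string>): string {
  const details = Object.entries(inputs)
    .map(([key, value]) => `${key}: ${value}`)
    .join("\n");
  return `${SYSTEM_PROMPTS[tool]}\n\n${details}`;
}
```

Keeping the prompts in one table like this makes it easy to add a fifth tool later: one new entry, one new form page.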

UI/UX Requirements:

  • Dashboard: A clean sidebar navigation and a main content area. Use a professional, inviting color palette (think soft blues, whites, and slate grays).
  • Loading States: Implement a clear loading state (like a spinner or skeleton loader) when the app is fetching responses from the local LLM.
  • Output Display: The generated content must be displayed in a clean, readable format (parse markdown properly) with a “Copy to Clipboard” button and a “Download as PDF/Text” button.

Task List for the Agent:

  1. Implementation Plan: Before writing code, generate an Implementation Plan artifact detailing the component structure and API routing.
  2. Scaffold Project: Initialize the Next.js project with Tailwind CSS and install necessary dependencies (e.g., lucide-react, react-markdown).
  3. Build Layout & Navigation: Create the global layout, sidebar, and dashboard grid.
  4. Build Tool Pages: Implement the specific forms for each of the four tools listed above.
  5. Build API Logic: Create a reusable utility or API route to handle the fetch calls to the local Ollama instance, passing the appropriate system prompts based on which tool is being used. Ensure it handles timeout or “connection refused” errors gracefully (prompting the user to ensure their local model is running).
  6. Browser Verification: Use your Browser Subagent to open localhost:3000, navigate through each of the four tools, and verify the UI renders correctly. Provide a screenshot artifact of the dashboard.
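Step 5, the reusable Ollama call, is the piece the agent is most likely to get wrong, so it helps to know what a reasonable sketch looks like. The endpoint and model name come from the prompt above; the error wording, timeout value, and function names are my own illustrative assumptions.

```typescript
// Illustrative sketch of a reusable call to a local Ollama instance, with the
// "connection refused" case surfaced as an actionable message for the teacher.

const OLLAMA_URL = "http://localhost:11434/api/generate";

interface GenerateResult {
  ok: boolean;
  text: string; // model output on success, user-facing error message on failure
}

// Map a low-level fetch failure to a message the teacher can act on.
function friendlyError(err: unknown): string {
  const msg = err instanceof Error ? err.message : String(err);
  if (msg.includes("ECONNREFUSED") || msg.includes("fetch failed")) {
    return "Could not reach the local model. Is Ollama running? Try `ollama serve`.";
  }
  return `Unexpected error: ${msg}`;
}

async function generate(prompt: string): Promise<GenerateResult> {
  try {
    const res = await fetch(OLLAMA_URL, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // stream: false returns a single JSON object instead of newline-delimited chunks
      body: JSON.stringify({ model: "gemma:latest", prompt, stream: false }),
      signal: AbortSignal.timeout(60_000), // fail fast instead of hanging forever
    });
    if (!res.ok) return { ok: false, text: `Ollama returned HTTP ${res.status}` };
    const data = await res.json();
    return { ok: true, text: data.response };
  } catch (err) {
    return { ok: false, text: friendlyError(err) };
  }
}
```

Because every tool page goes through this one function, the loading states and error banners in the UI only need to be built once.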

A prompt like this, after some refining, can produce a prototype like the one I built below: a project I created called OpenSchool Local. The goal of the project was to get it performing the same simple tasks that commercial AI wrappers handled back in 2023 and 2024.

Grounding Tech in the Science of Learning

However, as we democratize the creation of these tools, we have to ground ourselves in a fundamental reality: technology is just the vehicle. Whether we are buying a slick enterprise application or cobbling together a local app using Claude Code, the design must be deeply rooted in pedagogy and the science of how we learn.

An AI that can instantly generate a 50-question multiple-choice quiz isn’t inherently valuable if that quiz doesn’t align with best practices for retrieval practice, spaced repetition, or formative assessment. As we gain the unprecedented power to build anything we want, we must take on the responsibility of building things that actually improve educational outcomes, rather than just finding faster ways to automate bad instructional habits. The end goal is student learning, not just technological novelty.

To ensure the tools we build or adopt actually move the needle, they must integrate these six core cognitive science strategies:

  • Retrieval Practice: The act of pulling information out of your brain is far more effective for long-term retention than putting information in (like rereading a text). Any app we build should prioritize active recall—such as low-stakes self-testing, flashcards, or brain dumps—rather than passive review.
  • Spaced Repetition: The human brain naturally forgets information over time. Apps should be designed to combat the “forgetting curve” by reintroducing concepts over gradually increasing intervals. Instead of “massing” all practice into one session, the AI should seamlessly weave older topics into new assignments.
  • Cognitive Load Management: A student’s working memory is strictly limited. When building custom interfaces, we must relentlessly eliminate “extraneous load”—confusing navigation, cluttered screens, or distracting animations. The UI should be invisible so the student’s cognitive energy is spent entirely on the learning task.
  • Dual Coding: People learn better from words and pictures than from words alone, provided they are aligned. When designing tools like slide generators, the AI shouldn’t just spit out bullet points and decorative clip art; it should purposefully pair text with diagrams or visuals that specifically illustrate the concept being taught.
  • Worked Examples: Show, step by step, how to solve a problem, and illustrate examples versus non-examples in both the UI and the outputs produced.
  • Actionable, Scaffolded Feedback: A simple “Correct” or “Incorrect” does very little for learning. One of the greatest strengths of local AI is the ability to act as a personalized tutor. Assessment tools must be prompted to provide immediate, explanatory feedback that gently guides the student toward the correct answer step-by-step, rather than just giving it away.
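Of these strategies, spaced repetition is the one an app can enforce mechanically. The core idea, reintroducing a topic at gradually expanding intervals, can be sketched in a few lines; the doubling schedule and one-day starting interval here are illustrative choices of mine, not a full algorithm like SM-2, and the function names are hypothetical.

```typescript
// Minimal sketch of the "expanding interval" idea behind spaced repetition.

// Given how many times a topic has been successfully reviewed, return how many
// days to wait before weaving it into an assignment again: 1, 2, 4, 8, ...
function nextIntervalDays(successfulReviews: number): number {
  return 2 ** successfulReviews;
}

// Pick the previously taught topics that are due for retrieval practice today,
// so the AI can mix them into new assignments instead of massing practice.
function dueTopics(
  topics: { name: string; lastReview: Date; reviews: number }[],
  today: Date,
): string[] {
  const msPerDay = 86_400_000;
  return topics
    .filter(
      (t) =>
        (today.getTime() - t.lastReview.getTime()) / msPerDay >=
        nextIntervalDays(t.reviews),
    )
    .map((t) => t.name);
}
```

A quiz generator wired to a scheduler like this would automatically pull one or two "due" topics into each new quiz, which is exactly the interleaving the forgetting-curve research calls for.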

The Future of the EdTech Enterprise

So, where does this leave traditional EdTech companies? I see a massive opportunity for districts to cut costs, which will likely force a significant market correction. Niche, single-purpose AI wrappers are going to struggle to justify their enterprise price tags when a teacher can build a functional equivalent for free. That said, heavy-duty infrastructure like Learning Management Systems (LMS), Student Information Systems (SIS), and identity management networks requires a level of security, compliance, and complex integration that local, DIY apps simply cannot handle. Those core systems will remain. But outside of that foundational stack and perhaps one primary curriculum platform, the need for third-party subscriptions will plummet.

Furthermore, as agentic AI becomes increasingly sophisticated, schools will likely lean toward building in-house solutions or consulting with small development teams to create bespoke, district-specific tools as well as manage them. Once these custom platforms are built, agentic AI will be able to handle the bulk of the necessary updates, bug fixes, and maintenance, drastically reducing overhead.

Looking Ahead

We are currently in a transitional phase. Yes, the prototypes we are spinning up today with Antigravity, Claude Code, Codex, and local models might not yet have the perfectly polished sheen of top-tier enterprise apps. But they are closing the gap every single day. The trajectory is undeniable. We are moving rapidly toward a future where enterprise-level, highly customized applications can be built easily, integrated seamlessly into classroom instruction within days or weeks, and run entirely on local LLMs to guarantee absolute privacy. The future of EdTech isn’t just about buying better tools; it’s about having the power to build exactly what our students need.

    Published by Matthew Rhoads, Ed.D.

    Innovator, EdTech Trainer and Leader, University Lecturer & Teacher Candidate Supervisor, Consultant, Author, and Podcaster
