$6 Million for a Chatbot That Lasted 3 Months. Then the FBI Showed Up.

Yesterday the FBI raided the home and office of Alberto Carvalho, superintendent of the nation's second-largest school district. They also hit LAUSD headquarters and a property in Florida. The investigation is reportedly tied to a $6 million AI chatbot that never worked.

I want to talk about what happened. Because this isn't just a story about one district or one superintendent. It's a story about what goes wrong when districts buy AI instead of building it.

Here's the timeline.

In 2023, LAUSD signed a $6.2 million contract with a Boston-based startup called AllHere to build "Ed," an AI chatbot for students and families. Ed was supposed to be a personal assistant. Track grades. Surface mental health resources. Nudge students who were falling behind. Wake them up in the morning. Connect to cafeteria menus. The promises were enormous. The superintendent gave a 20-minute speech about it at ASU+GSV.

Ed launched in March 2024. By June, AllHere had furloughed most of its staff. The CEO left. The chatbot was shut down. LAUSD had already paid $3 million.

Then it got worse. A whistleblower inside AllHere, its senior director of software engineering, reported that the chatbot was sending student data to servers in Japan, Sweden, the UK, France, Switzerland, Australia, and Canada. Student PII was included in chatbot prompts even when it wasn't relevant. Data was shared with third-party companies unnecessarily. The tool was violating LAUSD's own privacy policies from the inside.

The CEO, Joanna Smith-Griffin, was later arrested and charged with securities fraud, wire fraud, and identity theft. Prosecutors allege she took nearly $10 million from investors and used it for a house down payment and her wedding.

And here's a detail that should make every district leader pause: AllHere's actual product before this contract was a text messaging system for weather alerts and school announcements. LAUSD handed them $6 million and access to 540,000 students' data to build something they'd never built before.

Yesterday, the FBI showed up.

I've been thinking about this all day. Not because the fraud is surprising. Fraud happens. But because the conditions that made it possible are everywhere.

Most districts don't have a formal process for vetting AI vendors. Most don't have data governance roles. Most don't have a framework for evaluating whether a vendor can actually deliver what they're promising, or whether the data architecture protects students if something goes wrong. The procurement process that approved a $6 million AI contract looked, by all accounts, like the same process you'd use to approve a new textbook.

But AI isn't a textbook. When you hand a vendor access to student records, attendance data, grades, and behavioral information, and that vendor processes it through models on servers you don't control, you're accepting a fundamentally different kind of risk. And most districts don't have the internal expertise to evaluate that risk before the contract gets signed.
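
To make that concrete: the single highest-leverage control is deciding what's allowed to leave your infrastructure before it leaves. Here's a minimal sketch of the idea in Python. The patterns, the placeholder tags, and the sample prompt are all made up for illustration; this is the principle, not anyone's production code:

```python
import re

# A toy redaction pass: strip obvious student identifiers from a prompt
# before anything is sent to an externally hosted model. The patterns and
# the sample prompt below are illustrative assumptions, not real vendor code.

STUDENT_ID = re.compile(r"\b\d{7,10}\b")                   # bare district ID numbers
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")        # email addresses
PHONE = re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b")   # US-style phone numbers

def redact(text: str) -> str:
    """Replace common identifier patterns with placeholders."""
    text = STUDENT_ID.sub("[STUDENT_ID]", text)
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = "Remind student 123456789 (parent: jane.doe@example.com, 310-555-0147) about tutoring."
print(redact(prompt))
# -> Remind student [STUDENT_ID] (parent: [EMAIL], [PHONE]) about tutoring.
```

Real deployments need far more than a few regexes. But the architectural point stands: redaction has to happen on infrastructure the district controls, before anything reaches a vendor's servers. If the vendor is the one doing the scrubbing, you're back to trusting the party you were supposed to be protected from.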

LAUSD's own spokesperson said something revealing when the chatbot first launched. They said what they wanted to build "did not readily exist as an off-the-shelf product, and we needed to build this from the ground up." They knew they needed to build. So they outsourced the building to a startup that had never done it.

That's the gap.

The instinct was right. The execution was the opposite of what the instinct called for. Building from the ground up doesn't mean handing the blueprints to someone who's never built. It means getting in the room with the people who understand the problem, the teachers who know the workflows, the administrators who know the constraints, and building the thing together.

At Navigator, when we build AI tools, we start in the classroom. We identify the workflow that's eating teachers' time. We map the real constraints: what data is safe to use, what isn't, where it lives, who controls it. We prototype fast, test with real students and real teachers, and iterate based on what actually happens. The teachers aren't reviewing a vendor demo. They're shaping the tool. They're defining what good looks like.
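
If you're wondering what "mapping the real constraints" actually produces, here's a toy version in Python. The field names, sources, and safe/unsafe calls are hypothetical, not our actual schema; your district's map will look different:

```python
# A toy "constraints map" for one workflow. Every field name, data source,
# and safe/unsafe call here is a hypothetical example for illustration.
FEEDBACK_WORKFLOW = {
    "assignment_text":  {"safe_for_model": True,  "lives_in": "district LMS", "owner": "district"},
    "rubric":           {"safe_for_model": True,  "lives_in": "district LMS", "owner": "district"},
    "student_name":     {"safe_for_model": False, "lives_in": "SIS",          "owner": "district"},
    "iep_status":       {"safe_for_model": False, "lives_in": "SIS",          "owner": "district"},
    "counseling_notes": {"safe_for_model": False, "lives_in": "SIS",          "owner": "district"},
}

def fields_allowed_in_prompt(data_map: dict) -> list[str]:
    """Return only the fields cleared to leave district infrastructure."""
    return [field for field, rules in data_map.items() if rules["safe_for_model"]]

print(fields_allowed_in_prompt(FEEDBACK_WORKFLOW))
# -> ['assignment_text', 'rubric']
```

The format doesn't matter. What matters is that someone who actually knows the data wrote down, field by field, what's allowed to reach a model, before any tool got built.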

NaviGrade didn't cost $6 million. It cost a fraction of that. And 75% of students revised their work without being asked. The Restorative Practice Generator cut a 45-minute documentation process to 3 minutes. Both tools were built with teachers, inside classrooms, with data that never left the district's control.

I'm not saying this to pitch. I'm saying it because the LAUSD story isn't an anomaly. It's the logical outcome of a system where districts are expected to adopt AI quickly, vendors are incentivized to overpromise, and nobody in the room has the technical knowledge to ask the hard questions before the purchase order goes through.

If you're a superintendent or a tech director reading this, here's what I'd ask you to do this week.

Can you name every AI tool currently in use across your district? Do you know where the student data goes? Do you have a vetting process that's specific to AI, not just a repurposed procurement checklist? If a vendor collapsed tomorrow, do you know what happens to the data they're holding?

If you can't answer those questions, you're not behind. Most districts can't. But now you know where to start.

We built a free K-12 AI Readiness Checklist that covers six dimensions: policy and governance, data privacy, teacher readiness, student-facing AI, tool governance, and leadership vision. It's the same framework I use when I start working with a new district. It won't prevent fraud. But it will help you ask the right questions before the contract gets signed.

— Dan

P.S. The LAUSD board is meeting behind closed doors today to discuss the superintendent's future. This story isn't over. But the lesson is already clear: the districts that will get AI right are the ones building with their educators, not buying from the conference floor.
