Redesigning a data validation tool

A tool for humans to review and correct AI predictions — redesigned after two years of organic growth pushed it past its limits.

I initially designed a visualization tool for our image recognition predictions, a way for users to see them on our web app. We soon realized users also needed to correct those predictions: image recognition is never 100% accurate, and we wanted to know the real accuracy. So I designed a validation experience, essentially a lightweight annotation tool.
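
To give a sense of the idea, a reviewed prediction can be modeled as the model's output plus an optional human verdict, and the "real" accuracy falls out of the reviewed subset. This is a minimal sketch; the field names and shape are hypothetical, not our actual schema:

```ts
// Hypothetical shape of a reviewed prediction; field names are illustrative.
interface ReviewedPrediction {
  imageId: string;
  predictedLabel: string;        // what the model predicted
  correctedLabel: string | null; // what the reviewer chose, null if unreviewed
}

// "Real" accuracy, computed only over records a human has actually reviewed.
function measuredAccuracy(records: ReviewedPrediction[]): number | null {
  const reviewed = records.filter((r) => r.correctedLabel !== null);
  if (reviewed.length === 0) return null; // nothing reviewed yet
  const correct = reviewed.filter((r) => r.correctedLabel === r.predictedLabel);
  return correct.length / reviewed.length;
}
```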

After almost two years of organic growth, we started hitting the limits of what the product could do. I talked to users about what wasn't working, to engineers about the data structures, and to the founders about the goals. I also researched other annotation tools to understand what our data and users actually needed.

Then I built a vibe-coded prototype connected to our real API, both to test the new workflows with users and to show the interactions to the engineers. This de-risked the feature before we built the real thing: users tried it with real data, so we validated the approach early, and engineers had a clear path for development, which reduced misalignment at handoff. The redesign has just been released, so we're still gathering data on the results.
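
For a rough idea of how thin the prototype's API layer can stay, here is a hedged sketch of the client side. The base URL, endpoint paths, and payloads below are invented for illustration; they are not our actual API:

```ts
// Minimal API client for a review prototype. Endpoints, payloads, and the
// base URL are hypothetical stand-ins, not the real service.
const API_BASE = "https://api.example.com";

interface Prediction {
  id: string;
  label: string;
  confidence: number;
}

// Load the model's predictions for one image so the reviewer can see them.
async function fetchPredictions(imageId: string): Promise<Prediction[]> {
  const res = await fetch(`${API_BASE}/images/${imageId}/predictions`);
  if (!res.ok) throw new Error(`Failed to load predictions: ${res.status}`);
  return res.json();
}

// Persist the reviewer's correction for a single prediction.
async function submitCorrection(
  predictionId: string,
  correctedLabel: string
): Promise<void> {
  const res = await fetch(`${API_BASE}/predictions/${predictionId}/correction`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ correctedLabel }),
  });
  if (!res.ok) throw new Error(`Failed to save correction: ${res.status}`);
}
```

Keeping the prototype on real endpoints rather than mocked data is what made the user tests meaningful: people reviewed their own predictions, not staged examples.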