From Forms to Autonomous Agents: My Journey Building a Financial Prediction Platform with LLMs

When I started building a platform to track financial predictions, I thought I had a straightforward path ahead. What unfolded instead transformed my understanding of what’s possible when you move from simple AI assistance to fully autonomous systems.

The Starting Point: Manual Everything

The initial concept was humble: users log in, submit financial predictions through a form, and an admin verifies them. Someone would type in a stock symbol, a predicted movement, a timeframe, a confidence level – and the platform would track their accuracy over time.

Functional. But friction kills participation.

Users didn’t want to spend five minutes filling out forms. They wanted to share predictions they’d spotted on Twitter, or paste in a screenshot from a financial news site, or drop a link from a Discord channel. The manual form was a bottleneck that no amount of clever UX was going to fix.

The First Evolution: AI-Assisted Extraction

The obvious move was to let AI do the heavy lifting. Instead of requiring users to enter data manually, let them submit a screenshot or a URL – and have a model extract the relevant information and pre-populate the form.

This was my first real integration of LLMs into the workflow: parsing screenshots of financial predictions, pulling structured data from Twitter URLs, identifying key elements like target price, direction, and timeframe.
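To make that concrete, here's a minimal sketch of what the extraction step can look like. Everything in it is illustrative: `call_llm` is a stand-in for whichever model API you use, and the `Prediction` fields reflect my schema rather than any standard.

```python
import json
from dataclasses import dataclass
from typing import Optional

# Hypothetical helper: wraps whatever LLM client you're using and
# returns the model's raw text response for a given prompt.
def call_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your model client here")

@dataclass
class Prediction:
    symbol: Optional[str] = None        # e.g. "AAPL"
    direction: Optional[str] = None     # "up" or "down"
    target_price: Optional[float] = None
    timeframe_days: Optional[int] = None
    source_url: Optional[str] = None

EXTRACTION_PROMPT = """\
Extract the financial prediction from the text below.
Respond with JSON only, using exactly these keys:
symbol, direction ("up"/"down"), target_price, timeframe_days.
Use null for anything the text does not state.

Text:
{text}
"""

def extract_prediction(raw_text: str, source_url: str) -> Prediction:
    response = call_llm(EXTRACTION_PROMPT.format(text=raw_text))
    fields = json.loads(response)  # production code should tolerate bad JSON
    return Prediction(source_url=source_url, **fields)
```

The "JSON only, null for unknowns" instruction is doing real work here: it's what lets the next stage tell the difference between "the model extracted nothing" and "the prediction genuinely didn't specify a target".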

The improvement was immediate. Submission rates climbed because the barrier had dropped dramatically. But a new problem surfaced quickly: AI extraction isn’t perfect, and asking users to correct AI mistakes is still friction. A slightly wrong pre-fill can feel worse than a blank field.

The Second Evolution: Conversational Agents

The next step was moving from passive extraction to active participation. Instead of hoping the initial extraction was right, I built agents that could identify missing fields, ask users targeted follow-up questions, and validate extracted data before anything hit the database.
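Mechanically, that step is simpler than "agent" might suggest. A sketch, assuming the same hypothetical `Prediction` fields as above: find the required fields extraction left empty, and turn each gap into one targeted question instead of bouncing the user back to a form.

```python
# Maps each required field to the follow-up question the agent asks
# when extraction leaves that field empty. Field names come from the
# hypothetical Prediction dataclass sketched earlier.
FOLLOW_UPS = {
    "symbol": "Which ticker is this prediction about?",
    "direction": "Is the call bullish (up) or bearish (down)?",
    "target_price": "Is there a specific price target?",
    "timeframe_days": "Over what timeframe, roughly in days?",
}

def missing_fields(pred) -> list[str]:
    return [f for f in FOLLOW_UPS if getattr(pred, f) is None]

def next_question(pred) -> str | None:
    """Return the single most useful follow-up question, or None
    if the prediction is complete and ready for validation."""
    gaps = missing_fields(pred)
    return FOLLOW_UPS[gaps[0]] if gaps else None
```

Asking one question at a time matters: a user will answer "Which ticker?" in two seconds, but a list of four questions starts to feel like the form again.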

This was a genuine pivot. The system was no longer a dumb form with a smarter front-end – it was an active participant in the data collection process. But I was still thinking incrementally, still trying to improve the existing workflow rather than question it.

The Paradigm Shift: Agents That Hunt

Then I asked an AI what else was possible, and the answer reframed everything.

The suggestion wasn’t to improve my submission pipeline. It was to eliminate the dependency on human submission entirely. Why wait for users to find predictions and submit them when an autonomous agent could go and find them itself?

The concept: a Prediction Hunter Agent that continuously monitors financial news sites, social media, and expert commentary, identifying and capturing predictions in real time without any human prompting. No form fatigue. No reliance on users to remember. No bottleneck.

That idea opened the door to a full multi-agent architecture (a code sketch of all three roles follows their descriptions below):

The Prediction Hunter scours the internet autonomously, surfacing predictions the moment they’re published. It doesn’t wait – it goes looking.

The Validator Agent monitors live market data, decides when a prediction is ready to be evaluated, and performs the evaluation automatically. It only escalates to a human when confidence is low or conditions are unusual.

The Guru Profiler Agent tracks the historical accuracy of each prediction source – the "gurus" making calls – and produces real-time reliability scores. This is the layer that turns raw predictions into genuine signal.
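Here's a deliberately simplified sketch of how those three roles might connect. Every name in it is illustrative (`TrackedPrediction`, the injected `fetch_items`/`extract`/`get_price` helpers), the reliability score is plain Laplace smoothing rather than a production formula, and a real deployment would run the agents as separate long-lived services rather than one polling loop.

```python
import time
from collections import defaultdict
from dataclasses import dataclass

# Assumed shape of a tracked prediction; field names are illustrative.
@dataclass
class TrackedPrediction:
    guru: str                   # who made the call
    symbol: str
    direction: str              # "up" or "down"
    price_at_submission: float
    deadline: float             # unix timestamp when the call expires

# --- Guru Profiler: rolling reliability per source -------------------
# Laplace smoothing ((hits + 1) / (total + 2)) keeps a guru's score
# near 0.5 until they have enough resolved calls to earn a real one.
record = defaultdict(lambda: {"hits": 0, "total": 0})

def reliability(guru: str) -> float:
    r = record[guru]
    return (r["hits"] + 1) / (r["total"] + 2)

# --- Validator: judge a call once its timeframe has elapsed ----------
def evaluate(pred: TrackedPrediction, price_now: float):
    """True/False once the deadline passes; None while still pending."""
    if time.time() < pred.deadline:
        return None
    went_up = price_now > pred.price_at_submission
    return went_up == (pred.direction == "up")

def settle(pred: TrackedPrediction, price_now: float) -> None:
    """Fold a resolved prediction into its guru's track record."""
    outcome = evaluate(pred, price_now)
    if outcome is not None:
        record[pred.guru]["total"] += 1
        record[pred.guru]["hits"] += int(outcome)

# --- Hunter: poll sources, extract, and feed the pipeline ------------
def hunt(sources, fetch_items, extract, get_price, poll_seconds=300):
    """Single-loop stand-in for what would really be separate services.
    fetch_items, extract, and get_price are injected stand-ins."""
    pending: list[TrackedPrediction] = []
    while True:
        for item in fetch_items(sources):
            pred = extract(item)       # LLM extraction, as sketched earlier
            if pred is not None:
                pending.append(pred)
        for pred in list(pending):
            price_now = get_price(pred.symbol)
            if evaluate(pred, price_now) is not None:
                settle(pred, price_now)
                pending.remove(pred)
        time.sleep(poll_seconds)
```

The smoothing choice matters more than it looks: without it, a guru's first lucky call would read as 100% reliability, and the profiler's scores would be noise until each source had dozens of resolved predictions.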

The insight that crystallised it all came from the AI itself:

"The best use of agents here is automating discovery, improving data quality, and generating insight – not just making extraction smarter."

That was the epiphany. I had been using LLMs to make existing processes slightly less painful. The real leverage comes when you stop asking "how can AI help with this step?" and start asking "which steps should AI own entirely?"

The Full Arc

Looking back, the evolution followed a clear progression:

Forms → Assisted Extraction → Conversational Agents → Autonomous Agents

Each stage reduced human effort while increasing system capability. But the final step did something qualitatively different from the others. It didn’t just reduce friction – it changed what the platform could know.

With human-dependent submission, the platform was limited by how many predictions users chose to submit. With autonomous agents, it can track thousands of predictions across hundreds of sources simultaneously – building an accuracy record for each predictor that would be impossible to assemble manually at any meaningful scale.

Keeping Humans in the Loop (For Now)

One thing I’ve been deliberate about: autonomous doesn’t mean unsupervised.

The current model has the agent handle extraction and structuring, then route high-confidence cases for auto-acceptance and flag uncertain ones for a quick human review – typically two to five seconds. As the system builds a track record and confidence thresholds get validated, that human checkpoint will move later and later in the process.
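That routing is a small amount of code, sketched below. The 0.9 cutoff and the injected `accept` callback are assumptions for illustration, not values from the live system.

```python
AUTO_ACCEPT_THRESHOLD = 0.9  # illustrative; tune against a labelled sample

def route(pred, confidence: float, accept, review_queue) -> str:
    """Auto-accept high-confidence extractions; queue the rest for the
    quick human check described above. `accept` is whatever persistence
    call your platform uses (injected here to keep the sketch generic)."""
    if confidence >= AUTO_ACCEPT_THRESHOLD:
        accept(pred)
        return "auto-accepted"
    review_queue.append(pred)
    return "needs-review"
```

Moving the human checkpoint "later in the process" then amounts to widening the auto-accept band as the measured error rate justifies it, rather than rearchitecting anything.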

The goal isn’t to remove humans from the system. It’s to make sure human attention goes where it actually adds value.

What I’d Tell Developers Starting This Journey

Don’t just ask how AI can make your forms smarter. Ask how agents could eliminate the need for forms altogether. Ask what your platform could know if it didn’t have to wait for users to tell it things.

The future of building with LLMs isn’t AI-assisted data entry. It’s autonomous systems that discover, validate, and understand information at a scale humans simply can’t match – and surface only the decisions that genuinely need a human eye.

That’s the shift worth building toward.