Building Agentic AI for Personal Finance: Lessons from a Year of Shipping LLMs

January 30, 2026
3 min read

Over the past year, I’ve been trying to solve what most personal finance tools are missing: helping people prioritize and forecast future spending in a way that’s explainable, auditable, and context-aware. Doing that well meant learning how to design agentic AI systems that can guide decisions responsibly. The result is a side project my partner and I now use to plan the year ahead with more clarity and less guesswork.


The Deterministic Approach (And Why It Failed)

My first approach was a deterministic scoring model built around predefined inputs, weighted signals, and explicit rules. While it produced predictable results, it quickly revealed its limitations. It was easy to inject human bias and manipulate the outcome: if you told the system something was essential, it would confidently agree with you. The user experience was also rigid and input-heavy, requiring too much manual effort just to arrive at a priority.
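To make the failure mode concrete, here is a minimal sketch of what a scorer like that looks like. The types, weights, and the `essential` override are illustrative assumptions, not the project's actual code, but they show why a user-declared flag can trivially dominate the output:

```typescript
// Hypothetical rule-based scorer: weighted signals plus an explicit
// user-declared "essential" override.
interface SpendingItem {
  name: string;
  urgency: number;     // 0–1, how soon the spend is needed
  impact: number;      // 0–1, effect on household goals
  essential: boolean;  // user-declared flag
}

const WEIGHTS = { urgency: 0.6, impact: 0.4 };

function priorityScore(item: SpendingItem): number {
  // Explicit rule: anything flagged essential jumps straight to the top.
  // This is the bias-injection problem: the system just agrees with you.
  if (item.essential) return 1;
  return WEIGHTS.urgency * item.urgency + WEIGHTS.impact * item.impact;
}
```

A vacation flagged `essential: true` outranks a genuinely urgent roof repair, no matter what the signals say.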

That failure made the core challenge clear: prioritizing future spending is inherently contextual and difficult to encode deterministically. I wanted the system to feel human-friendly, allowing people to express intent in natural language and letting the system evaluate the reasoning behind a decision. That reframing made the problem a better fit for AI.


The Key Design Decision: Separation of Concerns

The key design decision was a strict separation of concerns: AI is responsible for interpretation and explanation, while deterministic systems handle enforcement, cost optimization, and observability.
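A minimal sketch of that boundary, with types and clamping rules that are my own illustrative assumptions: the model only produces a structured interpretation, and deterministic code decides what takes effect.

```typescript
// The AI's output: a structured interpretation, never a direct action.
interface AiInterpretation {
  category: string;
  suggestedPriority: number; // model's 0–1 estimate
  rationale: string;         // explanation surfaced to the user
}

interface Decision {
  priority: number;
  accepted: boolean;
  rationale: string;
}

const ALLOWED_CATEGORIES = ["essential", "goal", "discretionary"];

// Deterministic enforcement: reject unknown categories and clamp the
// score into range. The AI explains; it never writes directly to the plan.
function enforce(ai: AiInterpretation): Decision {
  const accepted = ALLOWED_CATEGORIES.includes(ai.category);
  const priority = Math.min(1, Math.max(0, ai.suggestedPriority));
  return { priority, accepted, rationale: ai.rationale };
}
```

Because enforcement is plain code, every accepted or rejected interpretation is auditable without re-running the model.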

To support that model, AI is treated as infrastructure rather than a feature. All AI execution runs asynchronously in the background, ensuring core user flows remain fast and unblocked. This made it possible to add guardrails around cost, latency, and reliability early, while keeping the AI logic auditable and evolvable over time.
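One of those cost guardrails can be sketched as a simple in-memory token budget, checked deterministically before any AI job is enqueued. A real setup would persist counters per user and per day; the class below is an illustrative stand-in, not the project's implementation:

```typescript
// Deterministic cost ceiling for background AI work. Because the check
// runs before enqueueing, a runaway prompt can't silently exceed budget.
class TokenBudget {
  private used = 0;

  constructor(private readonly dailyLimit: number) {}

  // Returns true and reserves the tokens if they fit within the limit.
  tryReserve(tokens: number): boolean {
    if (this.used + tokens > this.dailyLimit) return false;
    this.used += tokens;
    return true;
  }

  remaining(): number {
    return this.dailyLimit - this.used;
  }
}
```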


The Stack: Serverless, Simple, Scalable

From an architecture standpoint, I wanted a fully serverless setup with fast iteration, reliable CI/CD, and clear operational boundaries. That led to a deliberately simple, scalable stack:

  • SvelteKit + TypeScript on Vercel for a predictable request lifecycle and fast deployments
  • Supabase (Postgres + Row Level Security) for multi-tenant safety and explicit access control
  • BullMQ-backed job queues to handle all AI execution asynchronously, with retries and backpressure
  • Stripe Checkout + webhooks to manage subscription-driven entitlements
  • Sentry for end-to-end observability across API routes, background jobs, and AI failures
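BullMQ provides retries and backoff for real jobs; the standalone sketch below shows the retry policy in isolation so the guarantee is easy to see. `runWithRetries` and the flaky-job simulation are hypothetical stand-ins for an AI job handler, not code from the project or from BullMQ itself:

```typescript
// Re-run a job up to `attempts` times, surfacing the last error only
// after all attempts fail. (In BullMQ, a backoff delay would apply
// between attempts; omitted here to keep the sketch synchronous.)
function runWithRetries<T>(job: () => T, attempts: number): T {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return job();
    } catch (err) {
      lastError = err;
    }
  }
  throw lastError;
}
```

The point of pushing this into the queue layer is that transient model or network failures never surface as errors in a user-facing request.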

Early Results

It’s still early, but this has already changed how my partner and I plan ahead—turning vague ideas into clearer priorities instead of reactive decisions.

If you’re building agentic systems or shipping LLMs into production, I’d be interested in hearing what constraints or guardrails have mattered most in your experience.

Tanner Goins - Software Consultant

Software consultant helping businesses leverage technology for growth. Based in Western Kentucky.
