Alex LaGuardia

Software Engineer

AI systems  ·  production platforms  ·  game engines

[email protected]

01.About Me

I'm a self-taught software engineer who found my way into code through pure curiosity and a refusal to accept “that's not possible.” What started as tinkering became an obsession with building — not just websites, but complete systems that solve real problems.

Today I work across the full stack and then some: Python backends with FastAPI, React frontends with Next.js, trading systems that manage real capital, a game engine written from scratch in Rust, and AI systems that route thoughts to different language models based on complexity. I don't just use tools — I build them.

My philosophy is simple: ship things that work, then make them better. I believe in craftsman's pride — doing it right the first time, owning the full problem, and never hiding behind “that's not my layer.”

When I'm not building, I'm probably sketching out the next system in my head. I think in architectures, not features.

02.What I've Built

Cognitive Infrastructure for AI Agents

Vigil

An open-source Python library and hosted cloud platform that gives AI agents persistent awareness, coordinated signals, session handoff, and a knowledge base. Open-source core on PyPI, hosted tier at app.vigil-agent.com with GitHub OAuth, Stripe billing, and per-tenant isolation.

  • 8,400+ lines, 311 tests, 4 PyPI releases — open-source core + hosted cloud tier
  • v2.2 Cloud: multi-tenant API, GitHub OAuth, Stripe billing ($29/$79/$199), MCPWatch observability, dashboard
  • 3 transport modes, embedded dashboard, event triggers, signal compaction, session handoff chains
Python · FastAPI · SQLite · MCP Protocol · Stripe · GitHub OAuth · PyPI · MIT License

The problem

AI agents today are stateless. Every session starts from zero — no awareness of what happened last time, no memory of decisions, no understanding of what’s active right now. I built a production system that solved this across 6 different interfaces and 95+ tools. Vigil extracts those patterns into a standalone library anyone can use — and a hosted platform for teams that don’t want to run their own infrastructure.

Open-source core

Vigil ships as a complete cognitive layer. Signals let agents emit structured observations with type-based content budgets. An awareness daemon compiles signals into hot context every 90 seconds. Session handoff chains give agents structured continuity. A knowledge base stores persistent facts that survive signal compaction. Event triggers fire actions when patterns match incoming signals. Everything stores in a single SQLite file with zero external dependencies. pip install vigil-agent and you’re running in 30 seconds.

Hosted cloud tier

The v2.0 hosted platform at app.vigil-agent.com adds multi-tenancy on top of the open-source core. GitHub OAuth for login, per-tenant SQLite isolation with LRU-cached connections, API key auth with hashed storage, usage metering, and Stripe billing for Pro/Team/Enterprise tiers. Each tenant gets their own isolated Vigil instance — same awareness daemon, same signal protocol, zero infrastructure to manage. Built the entire hosted backend (1,295 lines across 11 files) in a single session.

Three ways to connect

The MCP server exposes 15 tools over stdio or SSE — connect from Claude Code, Claude Desktop, or Cursor with one line of config. The REST API adds 25 endpoints with Bearer auth and an SSE event stream for real-time signal feeds. The embedded dashboard gives a live web view of awareness state, agents, signals, handoffs, and frames. All three share the same database, so a signal emitted via MCP shows up in the dashboard instantly. A Python SDK (vigil-client) wraps the REST API with 20+ methods for programmatic access.

AI Code Security Scanner

Critik

An open-source, two-pass code security scanner built for the vibe-coding era. First pass uses regex and AST to catch patterns. Second pass runs an AI review with full file context to filter false positives and catch logic-level vulnerabilities. Zero config, one command.

  • Two-pass architecture: regex + AST first, then AI review with full file context
  • VS Code extension with inline diagnostics, GitHub Action for CI/CD, pre-commit hook
  • 4,400 lines, 138 tests, custom YAML rules, watch mode, baseline support
Python · Tree-sitter · Groq · Llama 3.3 70B · VS Code Extension · GitHub Action · PyPI · MIT License

The problem

53% of AI-generated code has security vulnerabilities. Copilot autocompletes SQL injections. Cursor pastes API keys into public files. 35 new CVEs in March 2026 alone from AI-assisted code. Snyk charges $25+/mo. GitHub CodeQL only works on public repos. The indie developer security gap is wide open.

Two-pass scanning

Pattern matching alone has too many false positives. AI alone hallucinates findings. Combining them gets accurate results on cheap infrastructure. Pass one runs regex patterns and Tree-sitter AST parsing to catch hardcoded secrets, SQL injection sinks, XSS vectors, and command injection patterns. Pass two sends flagged files to an LLM (Llama 3.3 70B via Groq) with full file context to confirm, reclassify, or dismiss each finding. The AI sees the whole file, not just the matching line.
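The two-pass flow can be sketched in miniature. The rule patterns, the Finding shape, and the stubbed review callback below are illustrative stand-ins, not Critik's actual internals:

```python
import re
from dataclasses import dataclass

@dataclass
class Finding:
    line_no: int
    rule: str
    snippet: str

# Pass one: cheap patterns flag candidates (rules here are illustrative).
PATTERNS = {
    "hardcoded-secret": re.compile(r"(?i)(api[_-]?key|secret)\s*=\s*['\"][^'\"]+['\"]"),
    "command-injection": re.compile(r"os\.system\(.*\+"),
}

def pass_one(source: str) -> list[Finding]:
    return [Finding(i, rule, line.strip())
            for i, line in enumerate(source.splitlines(), start=1)
            for rule, pat in PATTERNS.items() if pat.search(line)]

def pass_two(source: str, findings: list[Finding], review) -> list[Finding]:
    # Pass two: `review` stands in for the LLM call, which sees the whole
    # file plus each finding and confirms or dismisses it.
    return [f for f in findings if review(source, f)]

code = 'API_KEY = "sk-live-123"\nprint("hello")\n'
confirmed = pass_two(code, pass_one(code), review=lambda src, f: True)
```

The key property is that pass two receives the entire file, so a finding flagged on one line can be dismissed based on context elsewhere.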

Ship everywhere

pip install critik && critik scan. That is the entire setup. The VS Code extension shows findings as inline diagnostics with severity levels and fix suggestions. The GitHub Action runs on every PR. The pre-commit hook catches issues before they reach the repo. Custom YAML rules let teams add their own patterns. Watch mode re-scans on file save. Baseline support lets you mark existing findings as accepted and only flag new ones.

AI Freelancer Business Tool

Stampwerk

A $12/mo HoneyBook alternative built after three freelancer tools died in two months. AI writes proposals from 5 questions, contracts auto-generate from accepted proposals, invoices chase themselves with a 3-step follow-up daemon.

  • AI proposals powered by Llama 3.3 70B — answer 5 questions, get a structured proposal in 2 minutes
  • Connected flow: proposal → contract → invoice → automated follow-up, no manual steps
  • 3-step AI follow-up daemon sends escalating reminders so you never chase clients
Python · FastAPI · Next.js · Groq · Llama 3.3 70B · Stripe · SQLite · Resend

The displacement event

HoneyBook hiked prices 89% in 2025 and the loyalty discount expired Feb 2026. AND.CO shut down March 1, 2026. Bonsai got acquired by Zoom. Three displacement events in sixty days. I was paying $29/mo for HoneyBook and using two features: proposals and invoice reminders. Stampwerk does the 80% that matters at $12/mo flat.

AI that actually works

Most freelancer tools bolt on AI as template-fill. Stampwerk uses Groq with Llama 3.3 70B to generate real proposals from five inputs: client name, project type, scope, timeline, and budget. The output is a structured document with scope, deliverables, pricing, and timeline. Not a form letter. When a client accepts, a contract auto-generates with the proposal terms. One-click e-signature. Signed contracts trigger milestone invoices through Stripe.

The daemon

The follow-up daemon is the feature nobody else has. Most tools make you manually hit “send reminder.” Stampwerk sends a friendly nudge at 3 days overdue, a firmer reminder at 7, and a final notice at 14. Configurable per client. The daemon runs on an hourly cycle, checking all outstanding invoices and sending the appropriate escalation. You do the work. The software chases the money.

3D Game Engine in Rust

Supra Engine

A custom 3D FPS game engine built from scratch in Rust. Features a hand-rolled Entity Component System, wgpu-based rendering, rapier3d physics, and a full movement state machine inspired by Apex Legends. Designed for multiplayer from day one.

  • 85K+ lines of Rust across 8 crates (render, physics, ECS, input, assets, scripting)
  • Movement system: walk, sprint, slide, wall-run, wall-bounce, bunny hop, mantle
  • "Rust for the engine, Lua for the game" — clean separation of systems and gameplay
Rust · wgpu · rapier3d · Custom ECS · Lua Scripting · glam

Why build from scratch?

Every game engine makes tradeoffs that become your constraints. I wanted to understand the render pipeline at the metal level — how frames get to the screen, how physics ticks sync with render ticks, how an ECS actually works under the hood. Building from scratch means owning every decision and every line. When something breaks at 3am, there’s no mystery.

The movement system

Movement is the most important system in an FPS. If it doesn’t feel right in the first 10 seconds, players leave. Supra’s movement is a velocity-driven state machine: walk, sprint, slide, jump, air-strafe, bunny hop, wall-run, wall-bounce, and mantle. Each state defines its own physics — slide has friction decay, wall-run has gravity reduction and a timer, bunny hop preserves momentum on frame-perfect jumps. The goal was a parkour playground that’s fun with zero objectives.
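The per-state physics idea can be sketched in a few lines (shown in Python for consistency with the other examples on this page rather than the engine's Rust). The tuning numbers are invented; only the structure mirrors the description:

```python
from dataclasses import dataclass

@dataclass
class MoveState:
    friction: float       # per-tick horizontal velocity decay
    gravity_scale: float  # wall-run runs at reduced gravity

# Invented tuning values for illustration.
STATES = {
    "sprint":  MoveState(friction=0.10, gravity_scale=1.0),
    "slide":   MoveState(friction=0.03, gravity_scale=1.0),  # low friction decay
    "wallrun": MoveState(friction=0.05, gravity_scale=0.3),  # reduced gravity
}

def tick(state: str, vx: float, vy: float, dt: float) -> tuple[float, float]:
    s = STATES[state]
    vx *= 1.0 - s.friction            # friction decay defined by the state
    vy -= 9.8 * s.gravity_scale * dt  # gravity, scaled per state
    return vx, vy

# A slide preserves more horizontal speed per tick than a sprint:
slide_vx, _ = tick("slide", 10.0, 0.0, 1 / 60)
sprint_vx, _ = tick("sprint", 10.0, 0.0, 1 / 60)
```

Each state owning its own constants is what makes the feel tunable per mechanic instead of globally.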

Architecture

8 Rust crates organized as a workspace: core, window (winit), render (wgpu pipeline), input, ECS (custom archetype-based), assets (async loading), physics (rapier3d), and script (Lua). Every component is serializable and physics is deterministic — designed for multiplayer from day one. The philosophy: Rust for the engine, Lua for the game. A scripting layer lets gameplay logic iterate without recompiling the engine.

Multi-Strategy Trading System

Paradise

An autonomous trading intelligence platform running four independent strategies across forex, stocks, and prediction markets. Features institutional-grade risk management with a three-layer oversight system.

  • 4 strategies (position trading, scalping, prediction markets, funding rate arbitrage)
  • 3-layer risk system: signal quality gate, portfolio risk management, discipline enforcement
  • Thesis-driven investment pipeline with automated research cycles
Python · OANDA API · Alpaca API · Polymarket · SQLite · PM2

Four cats, four personalities

Each strategy operates independently with its own thesis, timeframe, and risk parameters. Lion is patient — weekly and daily charts, position trading, thesis-driven entries. Cheetah is fast — M5 timeframe, London session scalping. Tiger scans Polymarket for prediction market opportunities. Jaguar runs funding rate arbitrage: long spot, short perpetual, delta-neutral, collecting the spread across three exchanges. They don’t coordinate. They don’t need to.

The birds

Risk oversight runs in three layers. The signal quality gate filters entries before they reach execution — bad thesis, bad risk/reward, no trade. Hawk monitors portfolio-level risk in real time: position sizing, correlation, exposure limits. Eagle enforces discipline: no revenge trading, no overtrading, mandatory cooldowns after losses. The system protects capital from the most dangerous risk factor in trading — the trader.
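The three layers can be sketched as a chain of veto checks. The function names echo the description (signal gate, Hawk for portfolio risk, Eagle for discipline); the thresholds and record shapes are invented for illustration:

```python
# Hypothetical thresholds and record shapes; only the three-layer
# structure mirrors the system described above.
def signal_gate(trade: dict) -> bool:
    # Layer 1: bad thesis or bad risk/reward means no trade.
    return trade["thesis"] is not None and trade["risk_reward"] >= 2.0

def hawk(trade: dict, portfolio: dict) -> bool:
    # Layer 2: portfolio-level exposure cap (2% of equity, illustrative).
    return trade["size"] <= 0.02 * portfolio["equity"]

def eagle(journal: dict) -> bool:
    # Layer 3: discipline - a mandatory cooldown after losses blocks revenge trades.
    return not journal.get("cooldown_active", False)

def approve(trade: dict, portfolio: dict, journal: dict) -> bool:
    return signal_gate(trade) and hawk(trade, portfolio) and eagle(journal)

trade = {"thesis": "breakout above range", "risk_reward": 3.0, "size": 150.0}
approve(trade, {"equity": 10_000}, {})                         # passes all layers
approve(trade, {"equity": 10_000}, {"cooldown_active": True})  # vetoed by Eagle
```

Any single layer can veto, which is the point: a good signal still gets blocked if the portfolio is overexposed or the trader is in a cooldown.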

Paper to production

Everything runs in paper trading mode through OANDA and Alpaca. The discipline of treating paper money like real money is the point — same position sizes, same rules, same journaling. Every position has a documented thesis, automated research refreshes, and clear invalidation criteria. When the track record proves out across market conditions, real capital follows.

AI-Powered SaaS Platform

Guardia Content

A social media automation platform serving paying clients. Content flows through an AI pipeline — styling, caption generation, quality control, scheduling, and publishing — all orchestrated by named AI agents with isolated worker processes.

  • Full content automation: upload to published post with zero manual steps
  • Multi-agent architecture: Artemis (style), Mercury (captions), Argus (QC)
  • Production SaaS with Stripe billing, OAuth, and custom domain support
Python · FastAPI · Next.js · React · SQLite · Tailwind · Stripe · AI Pipeline

The pipeline

Content enters as a raw upload and flows through a chain of AI agents, each with a single responsibility. Artemis handles visual styling via Replicate SDXL, transforming images to match a client’s brand aesthetic. Mercury generates captions using Groq’s Llama 3.3 70B — fast, cheap, and surprisingly good at matching brand voice. Argus runs quality control, scoring each piece before it’s allowed to publish. Everything runs as isolated PM2 workers, so a failure in styling doesn’t block caption generation.

Real users, real constraints

This isn’t a side project — it processes content for paying clients on a recurring schedule. That changes every decision. Error recovery has to be graceful. The scheduling system handles timezone-aware posting windows. Stripe handles billing with tiered plans and add-on services. Custom domain support lets clients serve their content hub on their own domain via Cloudflare for SaaS. When you’re processing someone else’s content on a deadline, reliability isn’t optional.

Infrastructure

Python/FastAPI backend with 48 concurrent PM2 services on a single VPS, Next.js frontend, 7 SQLite databases, and Cloudflare tunnel for zero-port-exposure hosting. The whole thing runs on an 8GB Hetzner box. Resource discipline matters when you’re not throwing money at infrastructure.

Cognitive AI Architecture

Akatskii

A multi-LLM cognitive layer that routes thoughts to different language models based on complexity — fast pattern matching to Groq, deep reasoning to Claude, vision to Gemini. Features semantic memory with vector embeddings and an agentic tool loop.

  • Thought routing: complexity-based LLM selection optimizing cost and latency
  • Semantic memory with cosine similarity search and hybrid recall
  • Context compaction: extracts facts, drops noise, creates continuity across sessions
Python · FastAPI · Groq · Anthropic · Google AI · fastembed · ONNX

The routing problem

Different tasks need different LLMs. A quick status check shouldn’t cost the same as deep architectural reasoning. The thought router analyzes incoming requests and selects the optimal model: fast pattern matching to Groq (Llama 3.3 70B), complex reasoning to Claude, vision tasks to Gemini. The router considers complexity, required capabilities, cost, and latency. Most requests resolve on the cheapest model. The ones that need more get it automatically. The routing logic was mature enough to extract into a standalone open-source library — llm-route, published on PyPI.
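A toy version of the router makes the idea concrete. The model mapping comes from the text above; the complexity heuristic (prompt length plus reasoning keywords) is invented for illustration and is far simpler than a real router:

```python
# Illustrative routing sketch; the heuristic is a stand-in.
MODELS = {"fast": "llama-3.3-70b@groq", "deep": "claude", "vision": "gemini"}

def route(prompt: str, has_image: bool = False) -> str:
    if has_image:
        return MODELS["vision"]           # vision tasks go to Gemini
    score = len(prompt.split()) / 50      # long prompts lean complex
    if any(k in prompt.lower() for k in ("architecture", "design", "prove", "trade-off")):
        score += 1                        # reasoning keywords lean complex
    return MODELS["deep"] if score >= 1 else MODELS["fast"]

route("what's my current status?")                                # cheapest model
route("compare these architecture options, justify the trade-offs")  # deep model
route("what's in this screenshot?", has_image=True)               # vision model
```

The economics follow directly: since most requests score low, most tokens flow through the cheapest backend by default.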

Memory that persists

LLMs forget everything between sessions. Akatskii doesn’t. Semantic memory uses fastembed with all-MiniLM-L6-v2 — a 22MB embedding model running on ONNX Runtime, no PyTorch required. Recall is hybrid: keyword search plus cosine similarity, with a boost for memories found by both methods. Below a 0.25 similarity threshold, results are treated as noise. The result is genuine continuity across conversations.
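The hybrid recall described above can be sketched as follows. The 0.25 noise threshold comes from the text; the boost factor, toy two-dimensional vectors, and keyword test are illustrative (the real system embeds with all-MiniLM-L6-v2):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def recall(query_text, query_vec, memories, threshold=0.25, boost=1.5):
    """Rank memories by similarity, boosting ones that also match a keyword."""
    results = []
    for text, vec in memories:
        sim = cosine(query_vec, vec)
        if sim < threshold:
            continue  # below the threshold, treat the match as noise
        keyword_hit = any(w in text.lower() for w in query_text.lower().split())
        results.append((sim * boost if keyword_hit else sim, text))
    return [text for _, text in sorted(results, reverse=True)]

memories = [("deploy script lives in infra/", [0.9, 0.1]),
            ("likes espresso", [0.2, 0.98])]
recall("deploy", [1.0, 0.0], memories)  # only the relevant memory survives
```

The boost is what makes recall hybrid: a memory found by both keyword and vector search outranks one found by either alone.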

Context compaction

As conversations grow, context windows fill with noise. The compaction layer extracts structured facts — decisions made, code written, problems identified — and drops the filler. This compressed context carries forward across sessions, giving continuity without token waste. The system also runs an agentic tool loop: think, decide on a tool, execute, observe, repeat — until the task is complete or it decides to ask for help.
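The agentic loop reduces to a few lines of control flow. Here `llm_step` is a stand-in for the model call, which returns either a tool request or a final answer; the decision format is invented for illustration:

```python
def agent_loop(task, tools, llm_step, max_steps=5):
    """Think -> pick a tool -> execute -> observe -> repeat, until done."""
    observations = []
    for _ in range(max_steps):
        decision = llm_step(task, observations)
        if decision["type"] == "final":
            return decision["answer"]
        tool = tools[decision["tool"]]
        observations.append(tool(**decision["args"]))  # observe, then loop
    return "step budget exhausted - asking for help"

# Toy demonstration with a scripted "LLM":
tools = {"add": lambda a, b: a + b}

def fake_llm(task, obs):
    if not obs:
        return {"type": "tool", "tool": "add", "args": {"a": 2, "b": 3}}
    return {"type": "final", "answer": f"result is {obs[-1]}"}

agent_loop("add 2 and 3", tools, fake_llm)
```

The `max_steps` cap mirrors the "or it decides to ask for help" behavior: the loop always terminates, with or without an answer.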

Model Context Protocol Server

Guardia MCP

A custom MCP server exposing 95+ tools across business operations, trading, creative writing, and infrastructure. Features frame-based filtering — each interface sees only the tools relevant to its context.

  • 95+ tools organized by domain with decorator-based auto-registration
  • Frame filtering: core (14), serberus (23), paradise (25), luna (55), all (95+)
  • Bridges AI assistants to every system in the stack via a single protocol
Python · MCP Protocol · SSE Transport · OAuth · Tool Registry

The problem with 95 tools

When an AI assistant connects to a server with 95 tools, it drowns in schemas. Frame-based filtering solves this: each interface declares its context (trading, creative writing, system admin), and the server returns only the relevant tools. My trading interface sees trading tools. My fiction-writing interface sees lore tools. Frames only affect discovery — all tools remain callable regardless, so an interface can reach across domains when needed.
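The discovery/execution split can be sketched in a few lines. The frame and tool names are invented; the structure mirrors the behavior described above:

```python
# Hypothetical registry; frames gate discovery, never execution.
REGISTRY = {
    "place_order":     {"frames": {"paradise"}},
    "get_lore":        {"frames": {"luna"}},
    "restart_service": {"frames": {"core"}},
}

def list_tools(frame: str) -> list[str]:
    """Discovery: a frame sees only its own tools (or everything for 'all')."""
    if frame == "all":
        return sorted(REGISTRY)
    return sorted(name for name, meta in REGISTRY.items() if frame in meta["frames"])

def call_tool(name: str, frame: str):
    # Execution is NOT filtered by frame, so cross-domain calls still work.
    return f"executed {name}" if name in REGISTRY else None

list_tools("paradise")             # the trading frame sees only trading tools
call_tool("get_lore", "paradise")  # yet it can still call a lore tool by name
```

Filtering discovery but not execution keeps schemas small without walling domains off from each other.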

Auto-registration

Every tool is a decorated Python function. The decorator captures the function’s name, docstring, and type hints, then auto-generates the MCP schema. Adding a new tool means writing a function and dropping it in the right module. No manual schema files. No registration boilerplate. The registry handles discovery, filtering, and execution dispatch.
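A minimal version of that decorator looks like the sketch below. It captures the name, docstring, and parameter annotations via `inspect`; the real registry generates full MCP JSON schemas, which this simplifies away:

```python
import inspect

TOOLS: dict[str, dict] = {}  # tool name -> generated schema

def tool(fn):
    """Register a function as a tool; schema comes from its signature and docstring."""
    sig = inspect.signature(fn)
    TOOLS[fn.__name__] = {
        "description": inspect.getdoc(fn) or "",
        "params": {name: param.annotation.__name__
                   for name, param in sig.parameters.items()},
    }
    return fn  # the function itself is unchanged and still callable

@tool
def check_positions(account: str) -> str:
    """List open positions for an account."""
    return f"positions for {account}"
```

Adding a tool is now just writing a decorated function; the registry already knows its name, description, and parameter types.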

Bridging everything

Through a single SSE connection, an AI assistant can query databases, restart services, check trading positions, read creative lore, manage client content, and orchestrate background tasks. It turns any MCP-compatible client into an operator for the entire infrastructure. One protocol, one endpoint, every system.

Premium MCP Servers for Major Platforms

MCP Server Suite

Production-grade MCP servers for underserved SaaS platforms. Four servers exposing 53-73 tools each (262 total) with full CRUD, reports, and system diagnostics — filling a gap in an ecosystem of 12,000+ servers where existing coverage tops out at 3-5 tools.

  • mcp-mailchimp: 71 tools for 12M Mailchimp users (campaigns, audiences, e-commerce, analytics, webhooks, A/B testing)
  • mcp-woocommerce: 73 tools for 5M+ WooCommerce stores (products, orders, refunds, reports, shipping, tax, gateways)
  • mcp-activecampaign: 65 tools for 185K+ ActiveCampaign users (contacts, deals, campaigns, scoring, segments, forms, goals)
  • mcp-freshbooks: 53 tools for 30M FreshBooks users (invoices, recurring billing, 5 report types, workflow tools) with full OAuth2
Python · MCP Protocol · httpx · OAuth2 · REST APIs · PyPI

The gap

The MCP ecosystem has 12,000+ servers, but fewer than 5% are production-grade. Major platforms like FreshBooks (30M users), WooCommerce (5M stores), Mailchimp (12M users), and ActiveCampaign (185K businesses) had zero comprehensive MCP coverage. The best existing servers offered 3-5 tools — barely scratching the API surface. Each server in this suite covers 53-73 tools: full CRUD, reporting, and proper error handling.

Covering what competitors skip

Most MCP servers handle basic reads. These handle the full lifecycle: create invoices, process payments, manage campaigns, pull financial reports. The FreshBooks server implements full OAuth2 with automatic token refresh — a complexity barrier that keeps weekend builders out. The WooCommerce server covers 8 API categories including analytics. The Mailchimp server handles campaign creation through performance reporting. The ActiveCampaign server wraps the entire API v3 surface with built-in rate limiting and auto-retry. Every response is structured and predictable — not raw API dumps.

Distribution strategy

Each server ships simultaneously to PyPI (pip install), GitHub (MIT license, public), and MCP registries (Smithery, mcp.so). The stack is intentionally simple: Python, httpx, FastMCP. No heavy frameworks, no Docker required. Environment variable auth, stdio transport. Point it at your store or account and go.

MCP Server CLI Inspector

mcpcat

A CLI tool that connects to any MCP server and pretty-prints available tools, schemas, and lets you call them interactively. Like curl, but for the Model Context Protocol.

  • 4 commands: tools, inspect, call, ping — everything you need to debug an MCP server
  • Auto-detects transport mode (streamable HTTP vs SSE) — point it at a URL, it figures out the rest
  • ~250 lines across two files. Pip-installable. Fills a tooling gap in the MCP ecosystem.
Python · Typer · httpx · Rich · MCP Protocol

The gap

MCP is new enough that the tooling gap is wide open. When building my own 95-tool MCP server, every schema change meant reading source code or wiring up a test client to verify what was exposed. There was no curl equivalent — no way to just point at a server and see what’s there. That’s the gap mcpcat fills.

Transport detection

MCP has two transport modes: the original SSE-based flow and the newer streamable HTTP. The first version only handled SSE and hung against my own server, which uses streamable HTTP. The fix: try a plain GET first. If the server returns JSON with protocol info, it’s streamable HTTP. If not, fall back to SSE. Simple, but it took a real failure to discover the need.
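The probe can be reduced to a pure decision function. To keep it testable offline, the response is modeled as a status code and Content-Type rather than a live httpx call; the actual tool issues the GET over the network:

```python
def detect_transport(status: int, content_type: str) -> str:
    """Classify an MCP endpoint from its response to a plain GET."""
    # Streamable HTTP servers answer a bare GET with JSON protocol info;
    # anything else (an event stream, or a rejected GET) means fall back to SSE.
    if status == 200 and "application/json" in content_type:
        return "streamable-http"
    return "sse"

detect_transport(200, "application/json")   # streamable HTTP server
detect_transport(200, "text/event-stream")  # classic SSE server
```

Falling back to SSE on any non-JSON response keeps the probe safe against servers that reject the GET outright.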

Keep it small

The entire tool is about 250 lines across two files. Python, Typer for the CLI, httpx for HTTP, Rich for pretty tables. Four commands: tools, inspect, call, ping. Sometimes the most useful tools are the smallest ones.

03.Writing

Three freelancer tools died in two months. I built the replacement.↗

April 2026

HoneyBook hiked prices 89%. AND.CO shut down. Bonsai got acquired. Building Stampwerk — AI proposals, auto contracts, smart invoicing at $12/mo.

Python · FastAPI · AI · SaaS · Freelance

Give your AI agents a nervous system

March 2026

How I extracted cognitive infrastructure from a 95-tool production system into Vigil — from open-source library to hosted cloud platform with multi-tenant isolation, Stripe billing, and GitHub OAuth.

AI Agents · Python · MCP · SaaS · Open Source

I got tired of guessing what my MCP server was doing

March 2026

So I built mcpcat — a CLI inspector for MCP servers. The build, the transport detection bug, and why the tooling gap exists.

MCP · Python · CLI · Open Source

33 tools for Mailchimp in one MCP server↗

March 2026

Building a production-grade MCP server for Mailchimp with 33 tools covering campaigns, audiences, templates, and automations.

MCP · Python · Mailchimp

Managing a WooCommerce store from Claude — 34 MCP tools↗

March 2026

34 MCP tools for WooCommerce covering products, orders, customers, reports, and webhooks — with URL normalization and array response handling.

MCP · Python · WooCommerce

Zero to 33 tools: building the first MCP server for ActiveCampaign↗

March 2026

The first MCP server for ActiveCampaign — contacts, deals, automations, pipelines, and campaigns with client-side rate limiting.

MCP · Python · ActiveCampaign

OAuth2, two APIs, and soft deletes — building an MCP server for FreshBooks↗

March 2026

Navigating FreshBooks’ dual API surface, OAuth2 token refresh, and soft delete patterns while building a 25-tool MCP server.

MCP · Python · FreshBooks · OAuth2

How I built a cognitive AI layer that routes thoughts to the right brain↗

March 2026

Building Akatskii — a multi-LLM cognitive layer that routes thoughts based on complexity, manages semantic memory, and maintains continuity across sessions.

AI · LLMs · Python · Architecture

04.Skills & Tools

Languages

  • Python · Expert
  • Rust · Advanced
  • TypeScript · Proficient
  • SQL · Proficient
  • Lua · Intermediate

Backend

  • FastAPI
  • Async Workers
  • Queue Pipelines
  • Daemon Architecture
  • REST API Design
  • OAuth2

Frontend

  • Next.js 14
  • React 18
  • Tailwind CSS
  • Framer Motion
  • App Router
  • Responsive Design

AI & ML

  • Multi-LLM Routing
  • Agentic Tool Loops
  • Vector Embeddings
  • RAG Systems
  • MCP Protocol
  • Prompt Engineering

Infrastructure

  • PM2
  • Cloudflare Tunnels
  • SQLite
  • Linux/Ubuntu
  • SSH Hardening
  • Automated Backups

Game Engine

  • wgpu (WebGPU)
  • rapier3d Physics
  • Custom ECS
  • State Machines
  • Procedural Gen
  • Lua Scripting

05.What's Next

Let's Build Something

I'm currently open to full-stack and AI engineering roles, as well as interesting freelance projects. If you're building something ambitious and need an engineer who can own the full stack from infrastructure to interface, I'd love to hear about it.

[email protected]