AI in Enterprise — Where It Works, Where It Doesn't
Opinions formed from building AI into production systems, not watching demos.
The Good and the Bad
Where AI Works
- Decision support — surfacing patterns humans miss
- Pattern recognition across massive datasets
- Augmenting human judgment, not replacing it
- Automating repetitive compliance checks
Where AI Doesn't
- Regulatory enforcement with accountability requirements
- Black-box decisions on customer-facing processes
- Replacing domain expertise with statistical correlation
- Anywhere you can't explain the "why" to a regulator
What I've Seen Work
Compliance Screening Augmentation
AI that suggests screening matches for human review, not AI that auto-decides. The human stays in the loop. The AI makes them faster, not obsolete.
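A minimal sketch of that pattern, assuming a fuzzy name-match step: the watchlist, threshold, and similarity method here are all illustrative, not any real screening product. The key design point is that the function only proposes candidates; the decision stays with a compliance officer.

```python
from difflib import SequenceMatcher

# Hypothetical watchlist and threshold -- illustrative values only.
WATCHLIST = ["Acme Trading GmbH", "Global Widget Partners", "Oceanic Freight Ltd"]
REVIEW_THRESHOLD = 0.75

def suggest_matches(counterparty: str) -> list[tuple[str, float]]:
    """Score a counterparty name against the watchlist and return
    candidates for HUMAN review -- this function never auto-decides."""
    scores = [
        (entry, SequenceMatcher(None, counterparty.lower(), entry.lower()).ratio())
        for entry in WATCHLIST
    ]
    return sorted(
        [(e, s) for e, s in scores if s >= REVIEW_THRESHOLD],
        key=lambda pair: pair[1],
        reverse=True,
    )

# The output feeds a review queue; a compliance officer makes the call.
for name, score in suggest_matches("Acme Trading GMBH"):
    print(f"REVIEW: {name} (similarity {score:.2f})")
```

In production you'd swap the string similarity for a real matching model, but the shape stays the same: score, queue, human decision.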
Pattern Detection in Supply Chain
Spotting anomalies in trade flows that humans miss because the volume is inhuman. AI as a spotlight, not a judge.
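The spotlight-not-judge idea can be sketched with something as simple as a z-score filter — the numbers below are made up, and real trade-flow models are far richer, but the contract is the same: the code surfaces indices, an analyst interprets them.

```python
from statistics import mean, stdev

# Illustrative weekly shipment volumes for one trade lane -- made-up numbers.
volumes = [102, 98, 105, 99, 101, 97, 240, 103]

def flag_anomalies(series: list[float], z_cutoff: float = 3.0) -> list[int]:
    """Return indices whose z-score exceeds the cutoff.
    A spotlight for a human analyst, not a verdict."""
    mu, sigma = mean(series), stdev(series)
    return [i for i, v in enumerate(series) if abs(v - mu) / sigma > z_cutoff]

print(flag_anomalies(volumes, z_cutoff=2.0))  # flags the 240-unit spike
```

The cutoff is a tuning knob, and tuning it badly is exactly the kind of failure mode that needs monitoring rather than faith.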
Document Intelligence
Extracting structured data from unstructured trade documents. Boring but transformative at scale.
What I'm Experimenting With
Autonomous Agents
Building Aegis, a multi-agent personal AI OS. Running 10+ specialized agents orchestrating calendar, email, content, health, career, finances. Learning what works and what's theatre.
Voice-Calibrated Content
Teaching AI to write in my voice. 2 weeks of calibration, 160+ rewrite examples, still not quite there. Documenting the gap between "sounds like me" and "is me."
My Take
AI is a tool, not a strategy. The gap between a compelling demo and a production system that doesn't embarrass you at scale is enormous — and it's filled with guardrails, fallback logic, human-in-the-loop checkpoints, and domain expertise that no foundation model ships with. Enterprise AI that works is boring AI: well-scoped, well-monitored, and honest about its failure modes. The moment you treat AI as magic instead of machinery, you're building on sand.
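One concrete face of that machinery is a confidence gate with a deterministic fallback — sketched below with a hypothetical threshold and a stub model, not any particular system. The guardrail is a single rule: an unsure model never decides; it escalates.

```python
# A minimal sketch of the "machinery" layer: confidence gating with a
# human-review fallback. The threshold and model call are hypothetical.
CONFIDENCE_FLOOR = 0.85

def classify_with_guardrails(document: str, model) -> dict:
    """Route low-confidence model output to a human queue instead of acting on it."""
    label, confidence = model(document)
    if confidence >= CONFIDENCE_FLOOR:
        return {"decision": label, "source": "model", "confidence": confidence}
    # Fallback: never let an unsure model decide -- escalate to a human.
    return {"decision": "ESCALATE", "source": "human_review", "confidence": confidence}

# Stub model for illustration only.
def stub_model(doc: str):
    return ("compliant", 0.62) if "ambiguous" in doc else ("compliant", 0.97)

print(classify_with_guardrails("routine shipment", stub_model))
print(classify_with_guardrails("ambiguous routing", stub_model))
```

Boring, well-scoped, and easy to explain to a regulator — which is exactly the point.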