# I'm Joe
I've been working with computers long enough to have touched most of the evolutionary tree, from a Timex Sinclair 1000 to UNIVAC iron, through mainframes, networked systems, virtualization, cloud, and now what may be the last major inflection: AI. Over 24 years professionally, that's included infrastructure management, process automation, vendor governance, and compliance work across regulated public-sector environments.
Linux and NixOS are ongoing interests. I value reproducibility and declarative configuration, and I still test distributions and evaluate hardware for real-world compatibility. But the tools are in service of the work, not the other way around.
## What I Do Now
I build AI-enabled integrations, primarily in the government and legal space. My current focus is on custom RAG systems designed to make legal code actually navigable in plain language—starting with Indiana and expanding to other states. The first iteration taught me a lot about what breaks when you try to impose structure on legal text at scale, and the rebuild addresses those problems directly.
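To make that concrete, here's a minimal sketch of the retrieval half of such a pipeline, with statute chunks keyed to their citation hierarchy. Everything in it is illustrative: the citations are made up, and the keyword-overlap scorer is a toy stand-in for the embedding model and vector index a real system would use. This is not my production code, just the shape of the idea.

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    citation: str  # hierarchical cite, e.g. "IC <title>-<article>-<chapter>-<section>"
    text: str

def tokenize(s: str) -> set[str]:
    # Crude normalization: lowercase, strip punctuation, drop short words.
    return {w.strip(".,;:()").lower() for w in s.split() if len(w) > 2}

def score(query: str, chunk: Chunk) -> float:
    # Toy relevance score: fraction of query terms present in the chunk.
    # A real pipeline would use an embedding model and a vector index here.
    q = tokenize(query)
    return len(q & tokenize(chunk.text)) / (len(q) or 1)

def retrieve(query: str, chunks: list[Chunk], k: int = 3) -> list[Chunk]:
    # Rank all chunks by relevance and return the top k.
    return sorted(chunks, key=lambda c: score(query, c), reverse=True)[:k]

# Hypothetical chunks with made-up citations; keying each chunk to a
# citation lets every answer point back to an exact section.
corpus = [
    Chunk("IC 1-2-3-4", "A person who knowingly exerts unauthorized control "
                        "over property of another commits theft."),
    Chunk("IC 5-6-7-8", "A vehicle may not be operated in excess of the "
                        "posted maximum speed limit."),
]

for c in retrieve("what counts as theft", corpus):
    print(f"{c.citation}: {c.text[:60]}")
```

The one design point the sketch does carry over: every chunk stays bound to an exact citation, so a plain-language answer can always point back to the section it came from.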
I use AI tooling heavily in my own development workflow (GoLand, Cursor, Claude Code) and have for long enough to have strong opinions about where these tools help and where they just generate confident-looking noise.
Beyond my own projects, I build workflows for people in my network who want to test what these systems can actually do in practice. Less pitch deck, more pressure test.
## AI Safety, Disinformation, and Manipulation
I'm interested in AI safety not as an abstract alignment problem but as a practical concern rooted in what these systems already enable.
Generative tools make it easier to produce convincing narratives at scale, lower the cost of coordinated misinformation, and let individuals with manipulative tendencies project influence far beyond what was previously possible. This isn't speculative; it's the current landscape.
What concerns me isn't intelligence itself, but leverage: how automation reduces friction for bad actors while increasing the difficulty of verification and trust. Detection lags generation, and social systems adapt slower than technical ones.
## Practical Value of Skepticism
I have a strong bias toward substance over presentation. I'm attentive to gaps between claims and reality, whether that shows up in vendor pitches, architectural proposals, or organizational narratives. That perspective carries directly into evaluating AI systems—focusing on misuse potential, incentive alignment, and downstream effects rather than abstract optimism or fear.
## The Bottom Line
My work centers on keeping systems understandable, resilient, and grounded in reality—whether that's infrastructure, operating systems, legal data pipelines, or the AI tooling woven through all of it. The goal is the same it's always been: reduce unnecessary complexity, surface real risks early, and avoid being distracted by hype or theatrics.
My experience spans decades, and my knowledge of the landscape is fairly complete. When the machines take over, I won't be saved—I know where too many digital bodies are buried.