# I'm Joe

Long-standing Linux experience, starting with Slackware in the 1990s. Over the years I've worked across a wide range of systems and environments, dealing with the usual realities of dependencies, packaging, and long-term maintenance: infrastructure management, cloud services, CI/CD pipelines, monolithic and microservice architectures. Decades of it. Linux remains a constant interest. I regularly test distributions, explore different setups, and lately that's extended into evaluating laptops for real-world distro compatibility and reliability.

## What I Do Now

I build AI-enabled integrations, primarily in the government and legal space. My current focus is custom RAG systems designed to make legal code genuinely navigable in plain language, starting with Indiana and expanding to other states. The first iteration taught me a lot about what breaks when you try to impose structure on legal text at scale, and the rebuild addresses those problems directly.

I use AI tooling heavily in my own development workflow (GoLand, Cursor, Claude Code) and have for long enough to hold strong opinions about where these tools help and where they just generate confident-looking noise.

Beyond my own projects, I build workflows for people I've connected with who want to test what these systems can actually do in practice. Less pitch deck, more pressure test.

## AI Safety, Disinformation, and Manipulation

I'm interested in AI safety not as an abstract alignment problem but as a practical concern rooted in what these systems already enable. Generative tools make it easier to produce convincing narratives at scale, lower the cost of coordinated misinformation, and let individuals with manipulative tendencies project influence far beyond what was previously possible. This isn't speculative; it's the current landscape.

What concerns me isn't intelligence itself, but leverage: how automation reduces friction for bad actors while increasing the difficulty of verification and trust.
Detection lags generation, and social systems adapt more slowly than technical ones.

## Practical Value of Skepticism

I have a strong bias toward substance over presentation. I'm attentive to gaps between claims and reality, whether that shows up in vendor pitches, architectural proposals, or organizational narratives. That perspective carries directly into evaluating AI systems: focusing on misuse potential, incentive alignment, and downstream effects rather than abstract optimism or fear.

## The Bottom Line

My work centers on keeping systems understandable, resilient, and grounded in reality, whether that's infrastructure, operating systems, legal data pipelines, or the AI tooling woven through all of it. The goal is the same as it's always been: reduce unnecessary complexity, surface real risks early, and avoid being distracted by hype or theatrics.

My experience spans decades, and my knowledge of the landscape is fairly complete. When the machines take over, I won't be saved; I know where too many digital bodies are buried.