Anthony Cote
I’m the founder of Control OS and the researcher behind the Law of Instrumental Integrity. I’ve spent years studying how systems break—up close, under pressure, and usually the hard way. Now I’m watching technology reshape human life faster than we can collectively adapt. Instead of solving yesterday’s mistakes, the newest tools automate and amplify them at unprecedented scale. I’m on a mission to change that trajectory.
Diary
Theory note: lagging signals. A placeholder entry about why systems fail quietly first.
A placeholder entry about why boundaries create resilience.
A placeholder entry for a case study in system failure patterns.
The problem
The incentives shaping modern technology sit at the root of persistent failures around access, privacy, attention, and ownership. These dynamics increasingly influence how people work, learn, and participate in society. As AI and automation grow more capable, the same incentives amplify existing pressures and create new risks built on the same flawed foundations.
My work
The trajectory we're on genuinely scares me. In ten years, the foundational failures plaguing modern software will be amplified a hundredfold. Inequality of access, exploitation of attention, misaligned economic incentives, and centralized control structures are already accelerating as AI reshapes work, education, and economic participation. I'm building essential infrastructure to prevent that outcome by tackling these structural issues while there's still time to change course.
When AI reshapes entire industries, geography and wealth will determine who can adapt and who gets left behind. Essential infrastructure must reach everyone who needs it, or we build a future where capability remains concentrated among those who could already afford it.
The attention economy is already burning people out. As AI accelerates the pace of work and learning, tools that fragment focus and exploit cognitive limits will make sustainable participation impossible. Systems must preserve mental clarity, or we automate exhaustion at scale.
Software funded through surveillance and extraction optimizes for revenue over human outcomes. As these systems grow more powerful, misaligned incentives amplify. Funding models must reward user wellbeing, or the infrastructure shaping society will serve interests opposed to the people depending on it.
Centralized platforms already decide who can build, learn, and earn. As automation concentrates more power in fewer hands, people either retain sovereignty over their tools or inherit permanent dependency. Control must remain with users, transparent and local, or agency vanishes entirely.
My values
Here's the thing about building technology: every decision is a value judgment disguised as a technical choice. When you decide to make something free or charge for it, local or cloud based, transparent or black boxed, you're not just picking features. You're deciding who gets to participate and who doesn't. You're choosing whose interests matter and whose get ignored.
I've watched this industry optimize itself into a corner. We built tools that mine attention because that's what pays. We locked capability behind subscription walls because recurring revenue looks good on a pitch deck. We centralized control because it's easier to scale. And now we're surprised that people feel exhausted, surveilled, and trapped by the software they depend on.
I'm not interested in making those same trade-offs with a fresh coat of paint. The infrastructure I'm building starts from different assumptions: that essential capability should reach people regardless of their bank account, that tools should protect focus instead of fragmenting or exploiting it, that funding must remain aligned with the mission, and that control should stay with users instead of with misaligned corporate interests. These are structural commitments that make certain business models impossible and certain outcomes inevitable.
Some days that makes everything harder. Slower funding. Narrower paths. Fewer shortcuts. But the alternative is building another system that claims to help people while quietly optimizing against their interests. I've seen enough of those already.
My project
Principles mean nothing without execution. I'm building Control OS, an AI-native computing environment with the essential tools people need to adapt and keep pace in a post-AI world. Documents, databases, agentic AI, automation, and learning systems, unified in one coherent workspace you actually own. Free, local-first, designed to work offline, and funded by people who believe it should exist instead of investors who need an exit.
What does it cost?
Nothing. No subscriptions. No paywalls. No freemium bait-and-switch. Globally accessible, because capability should not require a credit card to access essential infrastructure.
Where does it run?
On your machine, macOS, Windows, or Linux, or cloud-hosted on a VPS or traditional server. Your data stays local. Your privacy stays intact. Cloud becomes optional, not mandatory.
Who controls it?
You do. Open source. Inspectable. Extendable. The system adapts to you instead of forcing you into someone else's workflow or business model.
Why build it?
Because the future should not be gated by wealth, fragmented across dozens of tools, or designed to exploit the people using it to generate quarterly earnings.
How is it funded?
Voluntary contribution from people who want this to succeed. No venture capital. No growth-at-all-costs pressure. Just aligned incentives and long-term thinking that prioritizes users.
What does it replace?
Dozens of subscriptions. Multiple logins. Scattered context. Fragmented workflows. Control OS consolidates what used to require an entire software stack into one unified environment you control.
Who is it for?
People navigating an automated future who need powerful tools to adapt, build, and thrive without surrendering control, privacy, or autonomy just to access capability.
What is it?
Not another productivity app. Not another SaaS platform. Essential infrastructure designed to remain accessible, transparent, and under user control as AI reshapes everything around it.
Why does it matter?
As automation reshapes work and learning, you either have tools that help you adapt independently or you're dependent on platforms answering to someone else's interests.
Connect
Markets shift when better options make old models obsolete. By offering essential infrastructure that's free, locally controlled, and designed around human needs, I'm betting I can force change across every domain Control OS enters: productivity tools, AI systems, data management, communications. The hypothesis is simple: if the offering is good enough, extraction-based models become both unnecessary and undesirable. As we move deeper into the AI era, we either carry forward systems designed to amplify inequality and exploitation, or we prove that different incentives create better outcomes. I'm building the proof. If that resonates, follow along. I share the build, the decisions, the progress, and the setbacks in real time.