
Anthony Cote

On a mission to unf#ck modern software

I’m the founder of Control OS and the researcher behind the Law of Instrumental Integrity. I’ve spent years studying how systems break—up close, under pressure, and usually the hard way. Now I’m watching technology reshape human life faster than we can collectively adapt. Instead of solving yesterday’s mistakes, the newest tools automate and amplify them at unprecedented scale. I’m on a mission to change that trajectory.

Diary

Recent posts

View all →

Theory note: lagging signals

A placeholder entry about why systems fail quietly first.

Read more →

Theory note: constraints shape freedom

A placeholder entry about why boundaries create resilience.

Read more →

Instrumental Integrity: early case study

A placeholder entry for a case study in system failure patterns.

Read more →

The problem

What modern software optimizes for

The incentives shaping modern technology sit at the root of persistent failures around access, privacy, attention, and ownership. These dynamics increasingly influence how people work, learn, and participate in society. As AI and automation grow more capable, those same incentives amplify existing pressures and introduce new risks built on the same flawed foundations.


My work

Building toward a different foundation

The trajectory we're on genuinely scares me. In ten years, the foundational failures plaguing modern software will be amplified a hundredfold. Inequality of access, exploitation of attention, misaligned economic incentives, and centralized control structures are already accelerating as AI reshapes work, education, and economic participation. I'm building essential infrastructure to prevent that outcome by tackling these structural issues while there's still time to change course.

Universal access

When AI reshapes entire industries, geography and wealth will determine who can adapt and who gets left behind. Essential infrastructure must reach everyone who needs it, or we build a future where capability remains concentrated among those who could already afford it.

Cognitive alignment

The attention economy is already burning people out. As AI accelerates the pace of work and learning, tools that fragment focus and exploit cognitive limits will make sustainable participation impossible. Systems must preserve mental clarity, or we automate exhaustion at scale.

Realigned incentives

Software funded through surveillance and extraction optimizes for revenue over human outcomes. As these systems grow more powerful, misaligned incentives amplify. Funding models must reward user wellbeing, or the infrastructure shaping society will serve interests opposed to the people depending on it.

Distributed control

Centralized platforms already decide who can build, learn, and earn. As automation concentrates more power in fewer hands, people either retain sovereignty over their tools or inherit permanent dependency. Control must remain with users, transparent and local, or agency vanishes entirely.

My values

What I won't compromise

Here's the thing about building technology: every decision is a value judgment disguised as a technical choice. When you decide to make something free or charge for it, local or cloud based, transparent or black boxed, you're not just picking features. You're deciding who gets to participate and who doesn't. You're choosing whose interests matter and whose get ignored.

I've watched this industry optimize itself into a corner. We built tools that mine attention because that's what pays. We locked capability behind subscription walls because recurring revenue looks good on a pitch deck. We centralized control because it's easier to scale. And now we're surprised that people feel exhausted, surveilled, and trapped by the software they depend on.

I'm not interested in making those same trade-offs with a fresh coat of paint. The infrastructure I'm building starts from different assumptions: that essential capability should reach people regardless of their bank account, that tools should protect focus instead of fragmenting or exploiting it, that funding must remain aligned with the mission, and that control should stay with users instead of misaligned corporate interests. These are structural commitments that make certain business models impossible and certain outcomes inevitable.

Some days that makes everything harder. Slower funding. Narrower paths. Fewer shortcuts. But the alternative is building another system that claims to help people while quietly optimizing against their interests. I've seen enough of those already.


My project

What this actually looks like

Principles mean nothing without execution. I'm building Control OS, an AI-native computing environment with the essential tools people need to adapt and keep pace in a post-AI world: documents, databases, agentic AI, automation, and learning systems, unified in one coherent workspace you actually own. It's free, local-first, designed to work offline, and funded by people who believe it should exist instead of investors who need an exit.

See what's possible →

Connect

Follow the transformation

Markets shift when better options make old models obsolete. By offering essential infrastructure that's free, locally controlled, and designed around human needs, I'm betting I can force change across every domain Control OS enters: productivity tools, AI systems, data management, communications. The hypothesis is simple: if the offering is good enough, extraction-based models become both unnecessary and undesirable. As we move deeper into the AI era, we either carry forward systems designed to amplify inequality and exploitation, or we prove that different incentives create better outcomes. I'm building the proof. If that resonates, follow along. I share the build, the decisions, the progress, and the setbacks in real time.