
Co-Founder Matching Recap
Last night, we hosted a co-founder matching session as part of the AI safety founders community. Eighty-three founders signed up for a two-hour session to mix and match, hopefully leading to many more exciting startups taking flight.

We started with a presentation on AI safety founding by Finn, Seldon's co-founder, along with introductions by Ashyae Singh and Esben Kran.
We had the amazing founders of Workshop Labs and Andon Labs come by to give short introductions to their companies and interact with the aspiring AI safety founders.

With backgrounds spanning senior FAANG engineers, web3 founders, and ML researchers, the room was full of energy and discussion, and we look forward to the next one.
Thanks to everyone who made it an amazing night!
Seldon Grand Finale: Celebrating a Successful Batch #1
At Seldon, we believe that for-profit companies can shape AI safety and we're here to prove it.
Last Friday, we closed off our pilot program with the Seldon Grand Finale, an event with a keynote from Esben Kran, a demo by each of our four startups, and a panel of leaders in AI safety entrepreneurship.
Our pilot program spanned three amazing months with founders who have now raised over $10m, sold security to xAI and Anthropic, and patented new inventions for verifiable compute.
All of this while converting to Public Benefit Corporations, discussing the future of humanity, sharing dinners with leaders in this new field, and inspiring each other.

During the event, we had interesting demos by all of the founders:
Kristian presented Lucid Computing's exciting new direction and the importance of understanding where GPUs are and what they are processing.
Luke presented Workshop Labs' plan for addressing the disparity facing white-collar workers in a post-AGI economy.
Lukas presented Andon Labs' most recent experiments at xAI, Y Combinator, and Anthropic, letting chatbots run real-world businesses such as vending machines.
DeepResponse is still in stealth mode.
After the fantastic demos, we hosted a panel with the amazing Eric Ho, CEO of Goodfire, Buck Shlegeris, CEO of Redwood Research, and Nick Fitz, co-founder of Juniper Ventures, discussing some of the most important questions as we enter the AGI endgame.
The questions included when people expect to see mass destruction due to AGI, how startups can influence frontier AI development, and how to shape companies for large-scale, blitzscaling impact on the world.

In general, the diversity of the investors, founders, and researchers who came to our event is a testament to the new path we're shaping at Seldon in for-profit AI safety.
We had ample time to mingle, in addition to our late-night afterparty, and many came up and asked where they could apply, when we'd have our next program, and how they could contribute to this field.

At Seldon, it's our commitment to make this field a success. We're excited to continue nurturing this growing network of startups and collaborators.
Building the infrastructure that secures humanity's future is everyone's responsibility, and we've learnt that if we don't do it, no one will.
Become a part of Seldon's ambitious network by applying to our next batch later this year!
Aug 18, 2025
Why Technical AI Safety Is The Only Path to AGI Governance
What does the Big Red Button do?
In the last year, technical AI safety work has seen a dip in interest in favor of governance, demonstrations, and evaluations. While these efforts matter, they're putting the cart before the horse.
Today, I'll share the strong argument for technical AI safety: Without the technology to secure AI systems, governance will be impossible.
The Inconvenient Truth About AI Governance
Imagine tomorrow, the US President and Chinese Premier shake hands on a historic AI safety treaty. They agree to halt all dangerous AI development. They even install a "Big Red Button" to enforce it.
But what exactly would that button do?
By default, it would have to bomb every data center outside treaty zones. Because without verifiable auditing mechanisms, we can't verify compliance. And if we lack verified compute, the button might need to destroy treaty-compliant data centers too, just to be sure.
It's mutually assured destruction with extra steps.
Why Technical Solutions Create Their Own Adoption
History shows us that when technical solutions work, adoption follows naturally:
The Climate Parallel
Look at Chinese solar panel installations over the last decade. Once the technology became cost-effective, deployment exploded—no global treaty required. Technical progress created economic incentives that aligned with climate goals.

The Security Precedent
On our side of the fence, Cloudflare now protects over 20% of the internet. Not because of regulations mandating DDoS protection, but because the technology is so effective and accessible that using it became the obvious choice. And if the Big Red Button needs technology that Cloudflare provides (such as AI labyrinths), Cloudflare can deliver it at scale precisely because of its for-profit business.
The Path Forward: Infrastructure Before Governance
At Seldon, we're building the technical infrastructure that makes AI safety not just possible, but default. Here's what real governance-enabling technology looks like:
Hardware-level guarantees (like our portfolio company Lucid Computing's remote attestation)
Automated real-world incident systems (Andon Labs' physical agent evaluations for Anthropic and xAI)
Real-time monitoring and response (DeepResponse's incredible speed-up of detection and response)
Cryptographic verification (Workshop Labs' verifiably private data and model pipeline)
The Strong Case
The strong case for technical AI safety isn't that governance doesn't matter. It's that governance without technical solutions is merely theater.
When we have:
Reliable auditing mechanisms deployed in every data center
Hardware-level security that can't be bypassed
Automated systems that detect and prevent dangerous capabilities
Secure proofs of model behavior
Then, and only then, can governance actually govern.
The alternative is a world where our only enforcement mechanism is destruction, where verification requires devastation, and where the Big Red Button is just another name for giving up.
Technical AI safety isn't just about building aligned AGI or adding another layer of security.
It's the foundation that makes all governance possible.
And at Seldon, we're not going to wait for the perfect policies. We want you to build the technologies that make these policies enforceable. Because when the leaders of the world finally agree on AI safety, we want their Big Red Button to do something more sophisticated than "destroy everything and hope for the best".
The future of AI governance runs through technical innovation. Everything else is just politics.
Aug 14, 2025
Announcing the Seldon Summer 2025 Batch
We're thrilled to introduce Seldon’s Act 1: Accelerated Technology Companies, our trail-blazing pilot batch bringing together world-class AI safety and assurance startups. This milestone accelerates our vision of building the infrastructure that ensures humanity's long-term safety as AGI reshapes our world.
Seldon's First Act
What is Act 1?
It’s our inaugural cohort—founding teams dedicated to advancing AI safety. These startups benefit from intimate sessions with elite guests (e.g., Geoff Ralston, former YC president), hands-on support across all aspects of company building, and access to a global talent network.
Why it matters.
As AGI grows more pervasive, we face an increasing number of catastrophic risks. Through Act 1, we're scaling technological safety in tandem—because solving tomorrow’s threats requires building a robust sociotechnical infrastructure today. If there are a hundred existential challenges ahead, we need a hundred decacorn startups to address them.
Meet The First Four Companies
Andon Labs
Building safe autonomous organizations: AI-run systems operating independently of human intervention, while ensuring alignment and safety in real-world deployments.
Lucid Computing
Offering a hardware-rooted, zero-trust platform that cryptographically verifies AI chip usage and data processing location, ensuring regulatory compliance and reducing overhead in sensitive environments.
Workshop Labs
Providing personal sovereignty over AI models by training billions of models, one for each person, and ensuring ownership stays in the hands of the professionals who provide the training data.
DeepResponse
Deploying autonomous cyber defense operations for detection and response in the AGI era.
Meet Them This Friday
This Friday, August 15th, we are hosting an exclusive event to meet our founders, along with the CEO of Goodfire, Eric Ho, and the CEO of Redwood Research, Buck Shlegeris, among many other invited guests.
We're excited to see you there!
Godspeed
Aug 13, 2025
AGI Security: Rethinking Society's Infrastructure
AGI security is the proactive security-first infrastructure for a world where AI systems can reason, adapt, and act autonomously across digital and physical domains.
Specifically, it posits that:
Powerful AI will by default destroy our social, technological, and civilizational infrastructure due to the speed at which it is being developed.
Attacks become the default, not an infrequent occurrence, due to the fall in the price of intelligence and the proliferation of powerful AI.
Attack surfaces expand from the digital to the physical, cognitive, and institutional, through, for example, drones, mass media, and information control.
As a result, we need to rethink how we construct our societal infrastructure. We need to go beyond ‘adding a security layer’ to embedding security into the very foundations of anything we build.
Without this, the threats of powerful AI (such as Sentware, loss-of-control, societal disruption, and civilization-scale manipulation) will easily penetrate our weak security posture, handing leverage to whoever wishes to destabilize or gain control.
This post was originally published on Esben's blog.
Call For Technologies (CFT)
However, if we act today, we can be the ones to shape this field. These are but a few ideas for what we need to create today to lay the foundation of tomorrow:
Model Security and Control
Runtime containment: Hardware/software boundaries preventing model breakout and access control
Capability capping: Cryptographic limits requiring multi-party authorization vs single-party company control (see the sketch after this list)
Behavioral attestation: Continuous (inference-time) alignment verification vs current pre-deployment ad-hoc evaluation
Example: NVIDIA H100 Confidential Computing (<5% overhead)
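As a rough illustration of the multi-party authorization idea behind capability capping, here is a minimal Python sketch. Everything in it (the CapabilityGate class, shared-secret MACs standing in for real threshold signatures or HSM-backed keys) is a hypothetical simplification, not a description of any existing system.

```python
# Minimal sketch of "capability capping" with multi-party authorization.
# Hypothetical names throughout; a real system would use threshold signatures
# or hardware-backed keys rather than in-memory shared-secret MACs.
import hmac, hashlib
from dataclasses import dataclass, field

@dataclass
class CapabilityGate:
    capability: str            # e.g. "long-horizon autonomous browsing"
    required_approvals: int    # k in a k-of-n scheme
    authorized_keys: dict      # party name -> shared secret (stand-in for public keys)
    approvals: dict = field(default_factory=dict)

    def approve(self, party: str, signature: str) -> None:
        """Record a party's approval if its MAC over the capability name checks out."""
        expected = hmac.new(self.authorized_keys[party], self.capability.encode(),
                            hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            self.approvals[party] = signature

    def unlocked(self) -> bool:
        """The capability is only enabled once k distinct parties have approved."""
        return len(self.approvals) >= self.required_approvals

# Usage: the lab, an external auditor, and a regulator each hold a key;
# no single party can enable the capability alone.
keys = {p: hashlib.sha256(p.encode()).digest() for p in ["lab", "auditor", "regulator"]}
gate = CapabilityGate("long-horizon autonomous browsing", required_approvals=2, authorized_keys=keys)
gate.approve("lab", hmac.new(keys["lab"], gate.capability.encode(), hashlib.sha256).hexdigest())
print(gate.unlocked())  # False: still needs a second, independent approval
```

The point of requiring k distinct approvals is that no single company, lab, or government can quietly re-enable a capped capability on its own.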
Infrastructure Hardening
Supply chain verification: Privacy-preserving component tracking from manufacture to deployment
Air-gap capabilities: Millisecond physical isolation and shutdown
Redundant sovereignty: Critical systems across geopolitical boundaries
Attack-resilient design: Run-time monitoring and forensics assuming constant breaches
Societal Autonomy Protection
Manipulation detection: Identifying influence vs information
Authenticity verification: Human vs AI-generated content (see the signing sketch after this list)
Cognitive firewalls: Filtering AI access to human attention
Autonomy audits: Measuring for reductions of human autonomy
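For the authenticity-verification bullet above, a provenance-signing sketch shows the basic mechanic. It assumes the third-party Python cryptography package and hypothetical helper names; real deployments would attach signatures as content-provenance metadata (in the spirit of C2PA) rather than verify raw bytes by hand.

```python
# Minimal sketch of authenticity verification via publisher signatures.
# Assumes the third-party `cryptography` package; helper names are hypothetical.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# A human publisher (newsroom, camera, individual) holds a signing key.
publisher_key = Ed25519PrivateKey.generate()
public_key = publisher_key.public_key()

article = b"Eyewitness report: ..."       # content whose origin we want to attest
signature = publisher_key.sign(article)   # attached to the content as provenance metadata

def is_authentic(content: bytes, sig: bytes) -> bool:
    """Anyone holding the publisher's public key can check the content was not forged."""
    try:
        public_key.verify(sig, content)
        return True
    except InvalidSignature:
        return False

print(is_authentic(article, signature))                        # True
print(is_authentic(b"AI-generated imitation ...", signature))  # False: signature does not match
```

A signature attests to who published the content, not whether a human wrote it; in practice, that publisher-level provenance is the tractable handle on human vs AI-generated content.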
Deployment Infrastructure
Silicon-level security: Compute use attestation and auditing
Multilayer identification systems: Human / AI / hybrid
Capability attestation and capability-based permissions: Enforcement of tool-AI level control
Rollback capabilities: Logging at every step with enforcement of zero destructive changes (a minimal sketch follows this list)
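One way to read the rollback bullet is as a hash-chained, append-only audit log: every step is recorded, and any attempt to rewrite history is detectable. The sketch below is a toy version with hypothetical names; a production system would anchor the chain in tamper-evident hardware or an external transparency log.

```python
# Minimal sketch of rollback support via a hash-chained, append-only log.
import hashlib, json, time

class AuditLog:
    def __init__(self):
        self.entries = []  # each entry commits to the previous one

    def append(self, action: dict) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {"ts": time.time(), "action": action, "prev": prev_hash}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute the chain; any edited or deleted step breaks every later hash."""
        prev = "0" * 64
        for rec in self.entries:
            body = {k: rec[k] for k in ("ts", "action", "prev")}
            if rec["prev"] != prev or rec["hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest():
                return False
            prev = rec["hash"]
        return True

log = AuditLog()
log.append({"step": "model_call", "tool": "file_write", "path": "/tmp/report.md"})
print(log.verify())  # True; tampering with any entry makes this False
```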
Examples
Hardware and compute infrastructure: Location verification
AI network protocol defenses: Agent labyrinths
Secure AGI deployment systems: Evaluation
Verification-as-a-service: Content moderation
Tool-AI for security: Digital forensics
Provably secure systems: Safeguarded AI
Scale, Scale, Scale
One of the most important factors in AGI security is whether we can reach the scale necessary within the allotted time.
I definitely believe we can, and the evidence is in. We’re already at over $50M in value across Seldon, our AGI security lab program, and we’ve 10x’d our investment in Goodfire with Juniper Ventures.
For founders, this means aiming for unicorn growth within 12 months. For investors, it means radical levels of investment in the earliest stages of an industry that will transform our society over the next few years.
AGI Security
This is the most exciting opportunity in our lifetime. We stand at the precipice of a new world and we can be the ones to shape it.
The core of what makes us human will need guardians and there’s no one else but you to step up.
Pioneering startups are already running full speed ahead.
Join them!
Jun 16, 2025
AGI Privacy
Yesterday, I was discussing AGI governance with a founder working on AI safety infrastructure when they mentioned something concerning: the same systems we’re building to monitor and reduce AGI misuse could easily become the perfect tools for controlling humans.
This isn’t some distant sci-fi concern. Right now, today, major AI companies are implementing monitoring systems for their most capable models. Governments are drafting comprehensive AI oversight legislation. We’re literally watching the early infrastructure for AGI control take shape in real-time.
The question isn’t whether we need to monitor powerful AI systems. Given their potential for catastrophic misuse, from bioweapons to financial manipulation, oversight is inevitable. The question is whether we can build this infrastructure without accidentally creating a surveillance state that would make Big Brother jealous.
AGI Privacy is the socio-technical question of how we maintain personal autonomy while ensuring AGI remains a secure technology.
This post was originally published on Esben's blog.
The Infrastructure is Already Here

Having grown up in Denmark, one of the world’s most surveilled democracies, I’ve seen firsthand how digital infrastructure built for convenience becomes a foundation for control. Our MitID system, digital government services, and comprehensive data integration create remarkable public services.
Personally, I’m fine with Danish mass surveillance because I trust Denmark as a government. They do an extreme amount to protect data, to a degree that I know researchers in the US are not subject to. But a state still has a monopoly on violence, and that changes the calculation.
Now scale this globally. The Five Eyes alliance already maintains fiber-optic cable taps that can store every communication for days and metadata for months. China operates over 200 million AI-powered cameras. The NSA’s XKEYSCORE system provides real-time access to internet traffic from 150 global collection sites.
This existing surveillance infrastructure provides the perfect foundation for AGI control. The networks designed to monitor human communications can easily be repurposed to monitor AI-human interactions.
Current legal frameworks for human surveillance already provide precedent for monitoring human use of AI systems. We’re not building AGI governance systems from scratch. We’re building them on top of an extensive surveillance apparatus.
Why This Should Terrify Anyone Working in AI
The effects of social media on mass manipulation should be the first indication of the default path we are taking. It is not just the threat of violence from authoritarian states against dissidents that should worry you. It is the effect of monitoring itself.
When people know they’re being monitored, they self-censor, avoid creative risks, and conform to perceived expectations. Following the Snowden revelations, Wikipedia saw a 20% decline in views of terrorism-related articles. People stopped researching legal topics simply because they knew someone might be watching.
Additionally, academic researchers report avoiding certain topics and international collaborations out of surveillance concerns. Creative professionals demonstrate measurably lower creativity when they know they’re being observed.
This isn’t new. Orwell wrote about it in “Politics and the English Language” (with “1984” as a narrative framing) - how the fear of being judged leads to intellectual conformity and the death of original thought. The chilling effect isn’t just about avoiding illegal activity. It’s about the slow strangulation of curiosity itself.
As an AI researcher, you’ve probably experienced self-censoring because of your employer or your social circle’s perspectives on AGI.
And this is even worse when we research sensitive areas like AGI security, consciousness, human cognition, and social dynamics. It is crucial that we avoid self-censoring our way into compliance with established systems, leaving ourselves unable to escape the status quo.
This worry extends even to the state AI race level. If you do not allow freedom of thought and individuality, your government may not be able to do the creative work necessary to make AGI.
As I discussed in my post on making AI agents safe, we need robust testing and oversight of all AI systems. But if we’re not careful, the infrastructure we build for safety could become the infrastructure for oppression.
The Technical Path Forward

The good news? We already know how to solve this. The internet’s own security evolution provides the perfect model.
When Netscape introduced SSL in the mid-‘90s, it created client-server encryption that secured trillions of transactions while remaining completely transparent to users. This is the model we need for AGI governance: verification openly built into infrastructure, not surveillance bolted on afterward.
Here’s what privacy-preserving AGI control can look like:
Zero-knowledge safety verification: AI systems prove they passed safety evaluations without revealing evaluation criteria or internal reasoning. You can prove you hold a driver’s license without ever sharing the personal information printed on it.
Independent auditing before activation: Before an AI system executes potentially dangerous tasks, run it through outcome verification systems that check safety constraints without accessing user data or system internals. Delete the results or maintain privacy algorithmically. Auditors can confirm safety without seeing content.
Granular permission control: Instead of giving AI systems broad access that requires constant monitoring, we build fine-grained permission systems. Like smartphone app permissions, but designed for AI systems that could potentially cause catastrophic damage (a minimal sketch follows this list).
Distributed verification networks: Rather than centralized monitoring by single entities, we create networks of independent verifiers that provide oversight through consensus without any single party having surveillance capabilities.
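To make the granular-permission idea concrete, here is a minimal Python sketch of a default-deny policy that the tool-calling runtime could consult before executing anything the model requests. The class, permission strings, and matching rules are all hypothetical illustrations, not an existing API.

```python
# Minimal sketch of granular, smartphone-style permissions for an AI agent.
from dataclasses import dataclass
from fnmatch import fnmatch

@dataclass(frozen=True)
class PermissionPolicy:
    granted: frozenset  # e.g. {"fs.read:/data/public/*", "net.fetch:*.example.com"}

    def is_allowed(self, action: str, target: str) -> bool:
        """Check a requested (action, target) pair against explicit grants only."""
        return any(
            perm.split(":", 1)[0] == action and fnmatch(target, perm.split(":", 1)[1])
            for perm in self.granted
        )

policy = PermissionPolicy(frozenset({
    "fs.read:/data/public/*",    # read-only access to a public directory
    "net.fetch:*.example.com",   # outbound requests only to one domain
}))

# The runtime consults the policy before executing any tool call the model requests.
print(policy.is_allowed("fs.read", "/data/public/report.csv"))   # True
print(policy.is_allowed("fs.write", "/data/public/report.csv"))  # False: never granted
print(policy.is_allowed("net.fetch", "api.internal.corp"))       # False: default deny
```

The key property is default deny: anything not explicitly granted is refused, so oversight does not require watching every interaction after the fact.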
Where Privacy and Safety Actually Align
The fascinating thing about privacy-preserving AGI control is that privacy and safety are not opposing forces. They’re complementary. When you force surveillance systems to be cryptographically verifiable and distributed, you actually get better safety properties, similar to the modern internet:
No single point of failure: Distributed verification is more robust than centralized monitoring.
Tamper resistance: Cryptographic verification is harder to game than human oversight.
Reduced insider threats: Centralized surveillance systems are incredibly vulnerable to abuse from within. Distributed systems eliminate single points of control that can be compromised.
Transparency without surveillance: Open-source verification tools provide accountability without creating surveillance infrastructure.
Innovation protection: Researchers can work freely while still enabling necessary safety oversight.
The best security systems protect both users and society without creating new attack vectors.
Next Steps for Research to Avoid the AGI Surveillance Trap
If you’re working in AI safety, here are concrete research directions that could help us avoid the surveillance state trap:
Privacy-preserving verification protocols: Develop production-ready zero-knowledge proof systems for AI safety compliance. “HTTPS for AI safety” - protocols that enable input/output verification without surveillance (see the toy sketch after this list).
Decentralized AI governance infrastructure: Build networks for distributed AI oversight that prevent any single entity from gaining surveillance capabilities. This includes consensus mechanisms for safety decisions and cryptographic voting systems for governance.
Privacy-preserving capability evaluation: Extend current evaluation frameworks (like Inspect from AISI) to work with encrypted model interactions. We need to test AI capabilities without exposing user data, reasoning traces, or proprietary systems.
Open-source verification tools: Create auditing tools that anyone can use to verify AI safety claims. Transparency through open algorithms instead of hidden surveillance systems.
Regulatory technology for AI: Develop “RegTech” that enables compliance with AI safety regulations through cryptographic means rather than surveillance. Something along the lines of smart contracts for AI governance.
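As a toy illustration of the “verify without revealing” direction, the sketch below uses a salted hash commitment as a stand-in for a real zero-knowledge proof: the lab publishes only a commitment and a pass/fail claim, and can later open the commitment to a designated auditor without ever posting the evaluation transcript publicly. A genuine ZK system would let the auditor verify the claim without seeing the transcript at all; all names here are hypothetical.

```python
# Toy commit-and-open sketch (a simplification of zero-knowledge safety verification).
import hashlib, json, secrets

def commit(transcript: dict) -> tuple[str, str]:
    """Return (commitment, salt); the commitment alone reveals nothing about the transcript."""
    salt = secrets.token_hex(16)
    digest = hashlib.sha256((salt + json.dumps(transcript, sort_keys=True)).encode()).hexdigest()
    return digest, salt

def open_commitment(commitment: str, salt: str, transcript: dict) -> bool:
    """An auditor checks that the disclosed transcript matches the public commitment."""
    digest = hashlib.sha256((salt + json.dumps(transcript, sort_keys=True)).encode()).hexdigest()
    return secrets.compare_digest(digest, commitment)

# The lab runs its private safety evaluation...
transcript = {"suite": "dangerous-capability-evals-v1", "passed": True, "scores": [0.02, 0.01]}
commitment, salt = commit(transcript)

# ...and publishes only the commitment plus the claim, keeping the transcript private.
public_record = {"commitment": commitment, "claim": "passed"}

# Later, a designated auditor receives (salt, transcript) out of band and verifies the claim.
assert open_commitment(public_record["commitment"], salt, transcript)
assert transcript["passed"]
```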
The Window is Closing
The infrastructure decisions we make in the next 2-3 years will determine whether AGI transforms human freedom or enables unprecedented control. Once surveillance systems are built and normalized, historical precedent shows they’re nearly impossible to dismantle.
But here’s what gives me hope: the technical community can still shape this outcome. Just as cryptographers developed SSL/TLS standards before widespread e-commerce, we can develop privacy-preserving AGI governance before surveillance-heavy alternatives become entrenched.
The best time to build privacy-preserving infrastructure is before you need it. The second-best time is now.
What You Can Do

If this sounds interesting to you, here are a few concrete next steps so you can stop sitting around and actually do something:
Start building, start contributing: Don’t just implement privacy-preserving verification in your current AI projects. Contribute to making existing open source protocols like Model Context Protocol (MCP) safer. Build better evaluation frameworks. Start new foundations around new projects. Share your ideas for new open source consortiums.
Get involved with standards bodies: IEEE, ISO, EU, AISI, and IETF are all working on AI governance standards right now. Here’s the thing - anyone with a decent academic or industry affiliation can contribute to these standards. It’s not some exclusive club. Technical input from people who understand both AI and privacy is desperately needed.
Join the conversation and build community: Whether through Apart Research, AI safety conferences, or technical working groups, we need more people advocating for privacy-preserving approaches. Make sure you write about it if you begin working on it. Share this post or its ideas. Make it the topic of a lecture.
Get involved in $1B+ scale solutions: Open source projects are necessary for trustworthy protocols everyone can use, but we also need large-scale hardware and AGI security companies. If you’re interested, check out Seldon to be part of building tomorrow’s infrastructure.
Support and build open alternatives: When you see privacy-preserving AI tools, use them, contribute to them, help them succeed. When you don’t see them, build them. The market needs to see that privacy-preserving approaches can win.
Think like a cryptographer: Before building any AI oversight system, ask: “How could this be done with zero-knowledge proofs?” Often, the answer leads to better security properties anyway.
The future of AGI governance doesn’t have to be a choice between safety and privacy. With the right technical approaches, we can have both, and create systems that are more robust, more trustworthy, and more aligned with human values than pure surveillance could ever be.
But only if we start building immediately.
If your work is pushing this idea forward, I’d love to hear from you. Reach out on X.
Contribute
Open source:
Model Context Protocol (MCP) - Open standard for connecting AI systems to data sources
Inspect evaluation framework - Open-source AI testing tools
OpenMined privacy-preserving ML - Community building privacy-preserving AI
Differential Privacy Library - Production-ready privacy tools
The SOLID Web Decentralization Project - Tim Berners-Lee’s personal data repository decentralization project
Interesting historical context on the effects of surveillance:
Chilling Effects: Online Surveillance and Wikipedia Use - Measurable impacts of surveillance on intellectual freedom
The Economic Costs of Mass Surveillance: Insights from Stasi Spying in East Germany - Quantitative evidence on surveillance’s innovation costs
Politics and the English Language - Orwell on intellectual conformity and self-censorship
Current AI governance developments where, if you follow their work, you can contribute expert input directly:
EU AI Act technical standards - Current regulatory framework
NIST AI Risk Management Framework - US government approach to AI oversight
UK AI Safety Institute evaluation methodology - Technical approaches to AI testing
Jun 3, 2025