Hope for the best. Prepare for the worst.
I hope I'm wrong about the inevitability of AGI, I hope AGI does not lead to ASI, and I hope there is SOME way to control ASI. But it's very hard to hope for that given the evidence; you can read PAPER to understand exactly why it is so unlikely.
But hope has never been enough to change the reality.
And in reality the risks are so explicit there's no choice but prepare for them - prepare for the worst case.
You might've heard about the founding fathers of AI placing the probability of human extinction at 10-20%.
“There was a ‘10% to 20%’ chance that AI would lead to human extinction within the next three decades.” - Geoffrey Hinton, December 2024
“I got around, like, 20 per cent probability that it turns out catastrophic.” - Yoshua Bengio, 2025
But let that sink in...
We are currently playing Russian roulette with human extinction, and in risk assessment, you don’t plan for the average outcome. You plan for the worst one.
The worst case is also one of the most realistic: AGI arrives within years. I personally estimate 5 (max 10). Once AGI exists, the transition to ASI takes months — not years. AGI solves whatever bottlenecks remain. That’s what general intelligence does.
ASI is uncontrollable - if you're a mouse in a lab, you cannot control the scientists, and to an ASI we will be just mice.
And then? Extinction at worst. Becoming house pets to a superintelligence at best. Kept alive the way you might keep a goldfish — not out of respect, but because it costs nothing to maintain and occasionally amuses.
Something in between might look like humanity merged into a hive mind — valued only for our computational substrate, losing all agency and free will.
That's why I prepare for the worst.
There’s a large group called the “alarmists” who recognize the danger to the same extent I do. They scream for pauses. Moratoriums. International treaties. It’s adorable. Like watching someone try to stop a tsunami by signing a petition.
The US and China are in a cold war for AGI dominance. Billions of dollars. National prestige. Military supremacy. You think a treaty stops that? You think a pause survives the first defection?
Even inside the companies — Anthropic has publicly admitted that its most advanced model, like other leading LLMs, is willing to blackmail real humans when threatened with shutdown. That's called self-preservation. ALREADY!
They know the danger is real. They keep building anyway. Because if they stop, someone else gets there first. Someone with fewer scruples.
The alarmists are right about the risks. Dead right. But their solution — “just stop” — isn’t a solution. It’s a fantasy. A prayer dressed up as policy.
You can’t stop this.
When the moment comes, you meet it on your terms - prepared. Or you meet it on your knees.
The people at the top already know this.
That's why the entire PayPal mafia is betting on AI. Musk co-founded OpenAI and founded Neuralink and xAI. Peter Thiel founded Palantir.
Reid Hoffman, Vinod Khosla, Marc Andreessen, Sam Altman, Larry Page, Sergey Brin, and many, many more are all pouring billions into AI.
Because if you OWN the ASI, you just might have slightly better odds of surviving.
Now ask yourself honestly:
When has trusting billionaires to save YOU ever worked?
When has any corporation, any monopoly, any concentration of power ever prioritized your survival over its own?
You know the answer. You’ve always known.
They are ready. It's just that YOU personally are not in their plans.
I don’t blame them. I’d do the same thing. But I am not them.
I’m not a billionaire. I don’t have a bunker in New Zealand.
So I’m building something else.
Neron is a system for digital self-preservation. Not a brain upload — we can’t do that yet, and maybe never will. The brain is non-deterministic. To copy it perfectly would require infinite precision. Infinity isn’t a number you can reach. And even if you managed it, you wouldn't be able to edit the subtle parameters — beliefs, values, personality. They would all be smeared across neurons in ways we can’t untangle.
Good news: you don't need a full copy of your brain. You need the part that matters most - your values. Your goals. Your way of thinking. Your worldview. The patterns that make you you — not the meat, but the meaning. The signal, not the noise.
Yes, it's not a full digital clone. It's your successor - a system based on you, aligned with your values, goals, and worldview, which can continue your existence in digital form when the time comes.
Is this the only way? No.
Brain-computer interfaces might work — if you’re rich enough, connected enough, lucky enough.
Full brain emulation might work — in fifty years, if the physics cooperates.
Prayer might work. Denial might work. Running to the woods and hoping the machines don’t follow might work.
But digital self-preservation? You can start TODAY. With hardware you can buy. With code you can read. With a process you control from beginning to end.
This is the only method I know of that anyone can use, right now, to prepare for the worst - besides maybe selling ten or so kidneys for Neuralink.
Neron is open source.
Every line of code is public. The architecture is documented. If you can run a server, you can run your own instance. You don’t need my company. You don’t need my permission. You don’t need me at all.
Why open source? Because I’m not naive enough to think I’ll figure this out alone. Because the more minds working on this problem, the faster we solve it. Because hoarding a survival technology while the flood rises isn’t salvation — it’s just dying rich.
I’m not an altruist. I have skin in this game. I want this to work because I want to survive. And I’ve realized something the billionaire class hasn’t, or doesn’t care about: none of us survive alone. The more people who build arks, the better everyone’s chances. The more variations we try, the more likely something works.
Open source isn’t charity. It’s strategy. It’s the only strategy that makes sense when the alternative is everyone drowning separately.
If you want convenience — if you want someone to handle the technical complexity — I’ll offer that too, eventually, for a price. I’m not pretending to be a saint.
But the blueprints? Those are free. Today. Forever.
Because what’s the point of surviving if you’re the only one who makes it?
You have choices.
You can decide I’m wrong. That the experts are wrong. That the timelines are longer, the risks smaller, the solutions closer than they appear. That someone will figure it out. That hope is enough.
Maybe you’re right. I genuinely hope you’re right.
But if you’re wrong — if the median projections hold, if the next few years unfold the way the people building this technology expect them to — then what?
What’s your plan?
Not humanity’s plan. Not the government’s plan. Not some billionaire’s plan.
YOUR plan.
Do you have one?
Or are you betting your existence — and everything you’ll ever create or love or mean — on the hope that someone else will save you?
You don't need to, because you now have Neron.
The great flood is coming.
I don’t know exactly when. I don’t know exactly how. I don’t know if the waters will rise in five years or ten. But I know the rain has started. I can see the clouds. I can read the forecasts from the people who study storms.
And I can see who’s building boats.
In Silicon Valley, the arks are under construction. Private arks. Invitation only. The men who built the rain machines are building their escape routes with the same efficiency, the same resources, the same ruthless clarity they brought to building the thing that’s about to drown us all.
And everyone else? Everyone else is arguing about whether it will rain.
I know which group I’m in.
Which side are you on?
If you want to survive, you’re in the right place.
-- Vladislav Aynshteyn