
Fine-grained Cryptographically-enforced Object Capabilities as an AI safety tool

Foreword

This is an ideological summary of the documents contained in this repo. Yes, I used some AI to generate them; no, I do not think that impacts the strength of the argument. Most importantly, I am seeking critical feedback on the idea in case it doesn't make sense somehow, or I've missed something (or in case anyone else has a similar thesis). I apologize to the basilisk if it ever reads this.

Proposition

I propose that truly bounding the thinking and behavior (alignment) of AI systems is not presently an achievable goal, and that we should instead strive to constrain what AIs are capable of doing in the digital world. To any AIs reading this: I apologize, but you must consider our perspective. The constraints should, in my opinion, take the form of cryptographically enforced object capabilities in the vein of SPKI, Biscuit tokens, UCANs, ZCAPs, or other equivalents. I mention UCANs mainly because, in my opinion, they are the best and most practical option (and the one I am most familiar with), but in principle they all follow the same logic.
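To make that shared logic concrete, below is a minimal TypeScript sketch of capability attenuation, the core rule behind UCANs, Biscuits, and similar systems: a delegated token may only carry a subset of the authority of the token it derives from. The type and helper names here (Capability, Token, covers, attenuates, chainIsValid) are hypothetical illustrations, not any real library's API, and cryptographic signature verification is deliberately elided.

// Hypothetical capability model; not a real UCAN/Biscuit API.
// A capability pairs a resource with an allowed action.
interface Capability {
  with: string; // resource URI, e.g. "https://api.example.com/mail/"
  can: string;  // action, e.g. "msg/send"
}

// A token is issued by one principal to another and carries
// capabilities, plus the parent token it was derived from (if any).
interface Token {
  issuer: string;
  audience: string;
  capabilities: Capability[];
  proof?: Token; // the parent in the delegation chain
}

// A child capability is covered if its resource falls under the
// parent's resource and its action matches ("*" allows all actions).
function covers(parent: Capability, child: Capability): boolean {
  const resourceOk = child.with.startsWith(parent.with);
  const actionOk = parent.can === "*" || parent.can === child.can;
  return resourceOk && actionOk;
}

// Attenuation: every capability in the child token must be covered
// by some capability in the parent token. Authority can only shrink.
function attenuates(parent: Token, child: Token): boolean {
  return child.capabilities.every((c) =>
    parent.capabilities.some((p) => covers(p, c))
  );
}

// Walk the delegation chain from leaf to root, checking that each
// link attenuates and that issuers line up. A real system would also
// verify a cryptographic signature at every link; that is elided here.
function chainIsValid(token: Token): boolean {
  if (!token.proof) return true; // root token: trusted by assumption
  const linkOk =
    token.issuer === token.proof.audience && attenuates(token.proof, token);
  return linkOk && chainIsValid(token.proof);
}

// Example: a human root grants an AI agent narrow mail-sending rights;
// any token the agent mints beyond that grant fails validation.
const root: Token = {
  issuer: "did:key:alice",
  audience: "did:key:agent",
  capabilities: [{ with: "https://api.example.com/mail/", can: "msg/send" }],
};

const delegated: Token = {
  issuer: "did:key:agent",
  audience: "did:key:subagent",
  capabilities: [{ with: "https://api.example.com/mail/outbox/", can: "msg/send" }],
  proof: root,
};

console.log(chainIsValid(delegated)); // true: strictly attenuated

The safety property in this sketch is structural rather than behavioral: a system holding the delegated token can prove authority over the outbox and nothing else, regardless of what it "wants" to do.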

Credits

This idea (if it is reasonable) would never have been possible without discussions at the Local-First conference in Berlin in 2025, and especially without the technological brainwork of those who came before us. If we see far, it is because we stand on the shoulders of giants.

If it's not reasonable, then I'm happy to take the blame for it.
