"sunrise-data" is a program to publish and retrieve BLOB data from off-chain storage such as IPFS and Arweave. It uses the IPFS protocol to upload metadata and shards to IPFS.
You can download official prebuilt binaries of IPFS Kubo and extract the archive after download.

macOS:

```sh
wget https://dist.ipfs.tech/kubo/v0.29.0/kubo_v0.29.0_darwin-amd64.tar.gz
tar -xvzf kubo_v0.29.0_darwin-amd64.tar.gz
mv ./kubo/ipfs /usr/local/bin/
```

Linux:

```sh
wget https://dist.ipfs.tech/kubo/v0.31.0/kubo_v0.31.0_linux-amd64.tar.gz
tar -xvzf kubo_v0.31.0_linux-amd64.tar.gz
cd kubo
sudo ./install.sh
```

To check that ipfs has been installed:

```sh
ipfs version
# ipfs version 0.31.0
```

Initialize the repository, run the daemon, and add the Sunrise bootstrap peer:

```sh
ipfs init --profile=lowpower
ipfs daemon
ipfs id
ipfs bootstrap add /ip4/13.114.102.20/tcp/4001/p2p/12D3KooWSBJ1warTMHy7bdaViev6udyWU8XBnz9QCYS8TSX9qadt
```

You can visit http://localhost:8080/ipfs to check that the IPFS RPC is running.
This project needs to be placed in the same directory as the "sunrise" codebase:

```sh
ls
# sunrise
# sunrise-data
```

Copy `config.default.toml` to `config.toml` and edit it:
- `ipfs_api_url`: To connect to a local IPFS daemon, leave this field empty.
- `home_path`: Your `sunrised` path. Usually ends with `.sunrise`.
- `keyring_backend`: `sunrised`'s keyring backend.
- `sunrised_rpc`: `sunrised`'s RPC URL. To connect to a local chain, use `http://localhost:26657`.
- `publisher_account`: Account that sends the metadata URI of L2 data to the Sunrise chain; a $RISE balance is required.
- `publish_fees`: If not enough, increase this.
- `proof_deputy_account`: Account that submits proofs on behalf of the validator; it must be registered with a `MsgRegisterProofDeputy` tx.
- `validator_address`: Your validator address, prefixed with `sunrisevaloper`.
- `proof_fees`: If not enough, increase this.
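Under these key names, a filled-in `config.toml` might look like the sketch below. Every value is illustrative (the paths, account names, fee amounts, and validator address are placeholders, not real defaults); check `config.default.toml` for the authoritative keys and values.

```toml
# All values below are examples only; see config.default.toml for defaults.
ipfs_api_url = ""                        # empty: connect to a local IPFS daemon
home_path = "/home/user/.sunrise"        # sunrised home directory
keyring_backend = "test"                 # sunrised's keyring backend
sunrised_rpc = "http://localhost:26657"  # local chain

publisher_account = "publisher"          # needs a $RISE balance
publish_fees = "..."                     # placeholder; increase if not enough

proof_deputy_account = "deputy"          # registered via MsgRegisterProofDeputy
validator_address = "sunrisevaloper1..." # placeholder
proof_fees = "..."                       # placeholder; increase if not enough
```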
See the Sunrise Document for more information on each role.
- Run daemon

```sh
make dev
```

- Install daemon

```sh
make install
```
```sh
sunrise-data api       # if you use the api service for OP-Stack, etc.
sunrise-data rollkit   # if you publish data from rollkit
sunrise-data validator # if you are a validator
```

`POST /publish` Request JSON:

```
{
  "blob": "Base64 encoded string",
  "data_shard_count": number,
  "parity_shard_count": number,
  "protocol": "ipfs" or "arweave"
}
```

Response JSON:

```
{
  "metadata_uri": "metadata_uri"
}
```

`GET /shard-hashes` Response JSON:

```
{
  "shard_size": number,
  "shard_uris": [
    "uri1",
    "uri2",
    ...
  ],
  "shard_hashes": [
    "base64 encoded shard hash for index1",
    "base64 encoded shard hash for index2",
    "base64 encoded shard hash for index3",
    ...
  ]
}
```

`GET /blob` Response JSON:

```
{
  "blob": "base64 encoded blob"
}
```

If an error occurs in the API service, the endpoint returns an HTTP 400 status code and an error message.
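As a sketch of how a client could assemble the publish request body: the `build_publish_request` helper below is hypothetical (only the endpoint fields come from the API above), and it simply Base64-encodes the raw blob before building the JSON payload.

```python
import base64
import json

def build_publish_request(data: bytes, data_shards: int = 5,
                          parity_shards: int = 5, protocol: str = "ipfs") -> str:
    """Build the JSON body for POST /publish (helper name is illustrative)."""
    payload = {
        "blob": base64.b64encode(data).decode("ascii"),  # API expects Base64
        "data_shard_count": data_shards,
        "parity_shard_count": parity_shards,
        "protocol": protocol,  # "ipfs" or "arweave"
    }
    return json.dumps(payload)

body = build_publish_request(b"12345678901234567890")
print(body)  # blob field is "MTIzNDU2Nzg5MDEyMzQ1Njc4OTA="
```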
`POST http://localhost:8000/publish`

Request:

```
{
  "blob": "MTIzNDU2Nzg5MDEyMzQ1Njc4OTA=", // "12345678901234567890"
  "data_shard_count": 5,
  "parity_shard_count": 5,
  "protocol": "ipfs" // or "arweave"
}
```

Response:

```
{
  "metadata_uri": "ipfs://QmPXFt19HTkGjoZcbavLEgYYsuPm2xJR7hkhQxtRgPURMU"
}
```

`GET http://localhost:8000/shard-hashes?metadata_uri=ipfs://QmPdJ4GtFRvpkbsn47d1HbEioSYtSvgAYDkq5KsL5xUb1C&indices=1,2,3`

Response:

```
{
  "shard_size": 7,
  "shard_uris": [
    "ipfs://QmYbgKse7s4S1qSrz139zsPECYSu9HbHuz9TBy7ZDEKi54",
    "ipfs://QmWGhZL3maoUPbaYNauhq4BLL33xZdrf9Bi7xHUMFgtnV7",
    "ipfs://QmapUiNguJpqfuWdxtUJ1GPp5264yCLN5aMJqgWJvxdaEu",
    "ipfs://QmXLtGEkGVcRZukdaftW3M979SPWDaZidt6EpjkEk4SjCv",
    "ipfs://QmQzrZhSG3hAfwfJinMjiwC66MnJV6LxaVtLNKCjWaRdmj",
    "ipfs://Qmd2tYLCM7YecjoLA9ppJNPRgQnshVfdoiPwnux3WiyS2H"
  ],
  "shard_hashes": [
    "JvpetAD7cXIa6zMnWYOyOCfYD+g68xbHBVU5CEKz9OI=",
    "DKLinTzcoegAW/1rEIfBswH0ZXu6+W0V01PZb83Xmzg=",
    "A0+kyUqS08YRt34emU3OISrcCOWn3z7kCkBzftiKqog="
  ]
}
```

`GET http://localhost:8000/blob?metadata_uri=ipfs://QmPdJ4GtFRvpkbsn47d1HbEioSYtSvgAYDkq5KsL5xUb1C`

Response:

```
{
  "blob": "MTIzNDU2Nzg5MDEyMzQ1Njc4OTA"
}
```

Validate data availability as obligated by the validator. See the Validator Document for details, including setting up a delegate account.
- Search new `challenging` published data
- Verify shard double hashes in published data
- Submit `MsgSubmitValidityProof`
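The verification step above can be sketched conceptually as follows. Two loud assumptions: this sketch treats the "double hash" as SHA-256 applied twice, and it restores stripped Base64 padding before decoding (the blob in the example response above is returned without trailing `=`). The actual hashing scheme is defined by the Sunrise protocol, not by this code, and the sample shard bytes are made up.

```python
import base64
import hashlib

def b64decode_padded(s: str) -> bytes:
    # Restore any stripped Base64 padding (API responses may omit it).
    return base64.b64decode(s + "=" * (-len(s) % 4))

def double_hash(shard: bytes) -> bytes:
    # Assumption: "double hash" = SHA-256 applied twice; the real scheme
    # is defined by the Sunrise protocol, not by this sketch.
    return hashlib.sha256(hashlib.sha256(shard).digest()).digest()

def verify_shard(shard: bytes, expected_hash_b64: str) -> bool:
    # Compare the recomputed double hash against the published one.
    return double_hash(shard) == b64decode_padded(expected_hash_b64)

# Illustrative data only, not taken from the chain.
shard = b"example shard bytes"
expected = base64.b64encode(double_hash(shard)).decode("ascii")
print(verify_shard(shard, expected))              # True
print(verify_shard(b"tampered shard", expected))  # False
```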