Polar
Decentralized object storage network for resilient services
Polar is a binary you can run on any Linux distribution. It exposes an RPC API for local communication and a REST API on an open port for communicating with other nodes. Its purpose is to create a kind of decentralized, resilient database that stores file metadata, entities such as user accounts, and file chunks. This protocol lets anyone run their own instance of any of our services (like bloc) without worrying about losing data after a hard drive failure or a shutdown forced by a government or any other hostile third party. For the Coldwire project this is a very important piece: it makes us more resilient against any government willing to shut down our services and delete the resources we publish.
How does it work?
All of this is theoretical and was written in an hour, so there are probably errors; it will be edited as everything becomes more accurate.
Basic functions
There are 4 main functions you can call over RPC:
- push: cuts the data into chunks, creates an object for each chunk on a swarm of nodes, and generates a signed UID for each of these objects using the owner's private key.
- pull: pulls data from a swarm of nodes using the object/entity UID, verifying ownership with its cryptographic signature.
- find: finds the data owned by an entity by looking for data signed with the entity's private key (and so verified using the public key).
- erase: erases data from a swarm of nodes using the UID, after verifying ownership.
What are UIDs?
UIDs (Unique Identifiers) are 64-byte ids generated by signing the data in an object with an entity's private key.
What are objects?
An object is just a chunk of data of at most 8 MB, identified by a UID that is used both to locate it and to prove who owns it.
What are entities?
Entities are a kind of object that is a bit different: they aim to store user-related data, like a username, a password hashed with argon2id, etc., and of course everything is encrypted using the user's private key. Storing this kind of data on the network makes accounts resilient and also allows the same account to be used on every instance: if I make an account on coldwire.org, I'll be able to also use it on the services hosted by another org/person.
How does pushing data work?
In your program you call the "push" function and pass it the buffer stream (encrypting data is not managed by Polar!). Polar cuts the buffer into several 8 MB chunks, finds in its local node database the 3 nodes with the most storage available, generates three UIDs, and pushes each object to the nodes. At the end it pushes a last "metadata" object with the file size, name, etc., and an array of arrays containing the UIDs of each chunk:
{
  "name": "file.zip",
  "size": 74957045711,
  "checksum": "61be55a8e2f6b4e172338bddf184d6dbee29c98853e0a0485ecee7f27b9af0b4",
  "chunks": [
    [
      "6dd355667fae4eb43c6e0ab92e870edb2de0a88cae12dbd8591507f584fe4912babff497f1b8edf9567d2483d54ddc6459bea7855281b7a246a609e3001a4e08",
      "6dd355667fae4eb43c6e0ab92e870edb2de0a88cae12dbd8591507f584fe4912babff497f1b8edf9567d2483d54ddc6459bea7855281b7a246a609e3001a4e08",
      "6dd355667fae4eb43c6e0ab92e870edb2de0a88cae12dbd8591507f584fe4912babff497f1b8edf9567d2483d54ddc6459bea7855281b7a246a609e3001a4e08"
    ],
    [...],
    [...]
  ]
}
// To finish/make better/less wrong/improve
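The chunking and metadata steps above can be sketched as follows. The `Metadata` struct mirrors the JSON example; the nesting is one reading of the "array of arrays" (one inner array per chunk, one UID per replica), and the checksum algorithm (SHA-256 here, matching the 32-byte hex checksum in the example) is an assumption.

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

const chunkSize = 8 * 1024 * 1024 // 8 MB per object

// Metadata mirrors the example metadata object above.
type Metadata struct {
	Name     string     `json:"name"`
	Size     int64      `json:"size"`
	Checksum string     `json:"checksum"`
	Chunks   [][]string `json:"chunks"` // per chunk, the UID on each replica node
}

// splitChunks cuts a buffer into chunkSize pieces; the last piece
// may be shorter.
func splitChunks(buf []byte) [][]byte {
	var chunks [][]byte
	for len(buf) > 0 {
		n := chunkSize
		if len(buf) < n {
			n = len(buf)
		}
		chunks = append(chunks, buf[:n])
		buf = buf[n:]
	}
	return chunks
}

func main() {
	buf := make([]byte, 20*1024*1024) // 20 MB -> chunks of 8, 8 and 4 MB
	chunks := splitChunks(buf)
	sum := sha256.Sum256(buf)
	meta := Metadata{
		Name:     "file.zip",
		Size:     int64(len(buf)),
		Checksum: hex.EncodeToString(sum[:]),
	}
	fmt.Println(len(chunks), meta.Size) // 3 20971520
}
```

Each chunk would then be pushed as its own object, and the signed UIDs collected into `meta.Chunks` before the metadata object itself is pushed.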