s3git: git for Cloud Storage
(or Version Control for Data)

Join the chat at https://gitter.im/s3git/s3git

s3git applies the git philosophy to Cloud Storage. If you know git, you will know how to use s3git!

s3git is a simple CLI tool that allows you to create a distributed, decentralized and versioned repository. It scales limitlessly to 100s of millions of files and PBs of storage and stores your data safely in S3. Yet huge repos can be cloned on the SSD of your laptop for making local changes, committing and pushing back.

Exactly like git, s3git does not require any server-side components: just download and run the executable. It is built on the golang package s3git-go, which can be used from other applications as well. Or see the Python module or Ruby gem.

Use cases for s3git

  • Build and Release Management (see example with all Kubernetes releases).
  • DevOps Scenarios
  • Data Consolidation
  • Analytics
  • Photo and Video storage

See use cases for a detailed description.

Download binaries

DISCLAIMER: These are PRE-RELEASE binaries -- use at your own peril for now

OSX

Download s3git from https://github.com/s3git/s3git/releases/download/v0.9.2/s3git-darwin-amd64

$ mkdir s3git && cd s3git
$ wget -q -O s3git https://github.com/s3git/s3git/releases/download/v0.9.2/s3git-darwin-amd64
$ chmod +x s3git
$ export PATH=$PATH:${PWD}   # Add current dir where s3git has been downloaded to
$ s3git

Linux

Download s3git from https://github.com/s3git/s3git/releases/download/v0.9.2/s3git-linux-amd64

$ mkdir s3git && cd s3git
$ wget -q -O s3git https://github.com/s3git/s3git/releases/download/v0.9.2/s3git-linux-amd64
$ chmod +x s3git
$ export PATH=$PATH:${PWD}   # Add current dir where s3git has been downloaded to
$ s3git

Windows

Download s3git.exe from https://github.com/s3git/s3git/releases/download/v0.9.1/s3git.exe

C:\Users\Username\Downloads> s3git.exe

Building from source

Build instructions are as follows (see install golang for setting up a working golang environment):

$ go get -d github.com/s3git/s3git
$ cd $GOPATH/src/github.com/s3git/s3git 
$ go install
$ s3git

BLAKE2 Tree Hashing and Storage Format

Read here how s3git uses the BLAKE2 Tree hashing mode for both deduplicated and hydrated storage (and here for info on BLAKE2 at scale).
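As a quick sanity check (an observation on the format, not from the linked docs): the keys s3git prints, such as the one returned by s3git add in the workflow below, are 128 hexadecimal characters, i.e. 512-bit BLAKE2b digests. They will not match a plain sequential-mode BLAKE2b hash of the same content (e.g. from GNU coreutils' b2sum), because s3git hashes in BLAKE2's tree mode:

$ # 128 hex characters = 64 bytes = a 512-bit digest
$ echo -n 18e622875a89cede0d7019b2c8afecf8928c21eac18ec51e38a8e6b829b82c3ef306dec34227929fa77b1c7c329b3d4e50ed9e72dc4dc885be0932d3f28d7053 | wc -c
128
$ # A sequential-mode BLAKE2b digest of the same content comes out different
$ echo "hello s3git" | b2sum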

Example workflow

Here is a simple workflow to create a new repository and populate it with some data:

$ mkdir s3git-repo && cd s3git-repo
$ s3git init
Initialized empty s3git repository in ...
$ # Just stream in some text
$ echo "hello s3git" | s3git add
Added: 18e622875a89cede0d7019b2c8afecf8928c21eac18ec51e38a8e6b829b82c3ef306dec34227929fa77b1c7c329b3d4e50ed9e72dc4dc885be0932d3f28d7053
$ # Add some more files
$ s3git add "*.mp4"
$ # Commit and log
$ s3git commit -m "My first commit"
$ s3git log --pretty

Push to cloud storage

$ # Add remote back end and push to it
$ s3git remote add "primary" -r s3://s3git-playground -a "AKIAJYNT4FCBFWDQPERQ" -s "OVcWH7ZREUGhZJJAqMq4GVaKDKGW6XyKl80qYvkW"
$ s3git push
$ # Read back content
$ s3git cat 18e6
hello s3git

Note: Do not store any important info in the s3git-playground bucket. It will be auto-deleted within 24 hours.

Directory versioning

You can also use s3git for directory versioning. This allows you to 'capture' changes coherently all the way down a directory tree and subsequently go back to previous versions of the full state of the directory (and not just of a single file). Think of it as a Time Machine for directories instead of individual files.

So instead of 'saving' a directory by making a full copy into 'MyFolder-v2' (and 'MyFolder-v3', etc.), you capture the state of the directory and give the version a meaningful message ("Changed color to red"), which makes it easy to go back to the exact version you are looking for.

In addition you can discard any uncommitted changes that you made and go back to the last version that you captured, which basically means you can (after committing) mess around in a directory and rest assured that you can always return it to its original state.

If you push your repository into the cloud then you will have an automatic backup and additionally you can easily collaborate with other people.

Lastly, it of course works with huge binary data too, not just with the small text files used in the following 'demo' example:

$ mkdir dir-versioning && cd dir-versioning
$ s3git init .
$ # Just create a single file
$ echo "First line" > text.txt && ls -l
-rw-rw-r-- 1 ec2-user ec2-user 11 May 25 09:06 text.txt
$ #
$ # Create initial snapshot
$ s3git snapshot create -m "Initial snapshot" .
$ # Add new line to initial file and create another file
$ echo "Second line" >> text.txt && echo "Another file" > text2.txt && ls -l
-rw-rw-r-- 1 ec2-user ec2-user 23 May 25 09:08 text.txt
-rw-rw-r-- 1 ec2-user ec2-user 13 May 25 09:08 text2.txt
$ s3git snapshot status .
     New: /home/ec2-user/dir-versioning/text2.txt
Modified: /home/ec2-user/dir-versioning/text.txt
$ #
$ # Create second snapshot
$ s3git snapshot create -m "Second snapshot" .
$ s3git log --pretty
3a4c3466264904fed3d52a1744fb1865b21beae1a79e374660aa231e889de41191009afb4795b61fdba9c156 Second snapshot
77a8e169853a7480c9a738c293478c9923532f56fcd02e3276142a1a29ac7f0006b5dff65d5ca245255f09fa Initial snapshot
$ more text.txt
First line
Second line
$ more text2.txt
Another file
$ #
$ # Go back one version in time
$ s3git snapshot checkout . HEAD^
$ more text.txt
First line
$ more text2.txt
text2.txt: No such file or directory
$ #
$ # Switch back to latest revision
$ s3git snapshot checkout .
$ more text2.txt
Another file

Note that snapshotting works for all files in the directory including any subdirectories. Click the following link for a more elaborate repository that includes all releases of the Kubernetes project.
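As an illustrative continuation of the session above (paths and output are hypothetical, following the format shown earlier), a file added inside a subdirectory shows up in the snapshot status and gets captured just the same:

$ mkdir subdir && echo "Nested file" > subdir/text3.txt
$ s3git snapshot status .
     New: /home/ec2-user/dir-versioning/subdir/text3.txt
$ s3git snapshot create -m "Third snapshot, now with a subdirectory" .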

Clone the YFCC100M dataset

Clone a large repo with 100 million files totaling 11.5 TB in size (Multimedia Commons), yet requiring only 7 GB local disk space.

(Note that this takes about 7 minutes on an SSD-equipped MacBook Pro with a 500 Mbit/s download connection, so on less powerful hardware you may want to skip to the next section (or, if you are short on the 7 GB of local disk space, run df -h . first). Then again, it is quite a few files...)

$ s3git clone s3://s3git-100m -a "AKIAI26TSIF6JIMMDSPQ" -s "5NvshAhI0KMz5Gbqkp7WNqXYlnjBjkf9IaJD75x7"
Cloning into ...
Done. Totaling 97,974,749 objects.
$ cd s3git-100m
$ # List all files starting with '123456'
$ s3git ls 123456
12345649755b9f489df2470838a76c9df1d4ee85e864b15cf328441bd12fdfc23d5b95f8abffb9406f4cdf05306b082d3773f0f05090766272e2e8c8b8df5997
123456629a711c83c28dc63f0bc77ca597c695a19e498334a68e4236db18df84a2cdd964180ab2fcf04cbacd0f26eb345e09e6f9c6957a8fb069d558cadf287e
123456675eaecb4a2984f2849d3b8c53e55dd76102a2093cbca3e61668a3dd4e8f148a32c41235ab01e70003d4262ead484d9158803a1f8d74e6acad37a7a296
123456e6c21c054744742d482960353f586e16d33384f7c42373b908f7a7bd08b18768d429e01a0070fadc2c037ef83eef27453fc96d1625e704dd62931be2d1
$ s3git cat cafebad > olympic.jpg
$ # List and count total nr of files
$ s3git ls | wc -l
97974749

Fork that repo

Below is an example for alice and bob working together on a repository.

$ mkdir alice && cd alice
alice $ s3git clone s3://s3git-spoon-knife -a "AKIAJYNT4FCBFWDQPERQ" -s "OVcWH7ZREUGhZJJAqMq4GVaKDKGW6XyKl80qYvkW"
Cloning into .../alice/s3git-spoon-knife
Done. Totaling 0 objects.
alice $ cd s3git-spoon-knife
alice $ # add a file filled with zeros
alice $ dd if=/dev/zero count=1 | s3git add
Added: 3ad6df690177a56092cb1ac7e9690dcabcac23cf10fee594030c7075ccd9c5e38adbaf58103cf573b156d114452b94aa79b980d9413331e22a8c95aa6fb60f4e
alice $ # add 9 more files (with random content)
alice $ for n in {1..9}; do dd if=/dev/urandom count=1 | s3git add; done
alice $ # commit
alice $ s3git commit -m "Commit from alice"
alice $ # and push
alice $ s3git push

Clone it again as bob on a different computer/different directory/different universe:

$ mkdir bob && cd bob
bob $ s3git clone s3://s3git-spoon-knife -a "AKIAJYNT4FCBFWDQPERQ" -s "OVcWH7ZREUGhZJJAqMq4GVaKDKGW6XyKl80qYvkW"
Cloning into .../bob/s3git-spoon-knife
Done. Totaling 10 objects.
bob $ cd s3git-spoon-knife
bob $ # Check if we can access our empty file
bob $ s3git cat 3ad6 | hexdump
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00
*
00000200
bob $ # add another 10 files
bob $ for n in {1..10}; do dd if=/dev/urandom count=1 | s3git add; done
bob $ # commit
bob $ s3git commit -m "Commit from bob"
bob $ # and push back
bob $ s3git push

Switch back to alice again to pull the new content:

alice $ s3git pull
Done. Totaling 20 objects.
alice $ s3git log --pretty
3f67a4789e2a820546745c6fa40307aa490b7167f7de770f118900a28e6afe8d3c3ec8d170a19977cf415d6b6c5acb78d7595c825b39f7c8b20b471a84cfbee0 Commit from bob
a48cf36af2211e350ec2b05c98e9e3e63439acd1e9e01a8cb2b46e0e0d65f1625239bd1f89ab33771c485f3e6f1d67f119566523a1034e06adc89408a74c4bb3 Commit from alice

Note: Do not store any important info in the s3git-spoon-knife bucket. It will be auto-deleted within 24 hours.

Here is a nice screen recording:

asciicast

Happy forking!

You may be wondering about concurrent behaviour from multiple clients working against the same remote; see the alice and bob example above.

Integration with Minio

Instead of S3 you can happily use the Minio server, for example the public server at https://play.minio.io:9000. Just make sure you have a bucket created using mc (the examples below use s3git-test).
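Creating that bucket with mc might look like this (a sketch assuming mc's preconfigured play alias; adjust for your own server):

$ mc mb play/s3git-test

With the bucket in place, the s3git side works exactly as before: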

$ mkdir minio-test && cd minio-test
$ s3git init 
$ s3git remote add "primary" -r s3://s3git-test -a "Q3AM3UQ867SPQQA43P2F" -s "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG" -e "https://play.minio.io:9000"
$ echo "hello minio" | s3git add
Added: c7bb516db796df8dcc824aec05db911031ab3ac1e5ff847838065eeeb52d4410b4d57f8df2e55d14af0b7b1d28362de1176cd51892d7cbcaaefb2cd3f616342f
$ s3git commit -m "Commit for minio test"
$ s3git push
Pushing 1 / 1 [==============================================================================================================================] 100.00 % 0

and clone it

$ s3git clone s3://s3git-test -a "Q3AM3UQ867SPQQA43P2F" -s "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG" -e "https://play.minio.io:9000"
Cloning into .../s3git-test
Done. Totaling 1 object.
$ cd s3git-test/
$ s3git ls
c7bb516db796df8dcc824aec05db911031ab3ac1e5ff847838065eeeb52d4410b4d57f8df2e55d14af0b7b1d28362de1176cd51892d7cbcaaefb2cd3f616342f
$ s3git cat c7bb
hello minio
$ s3git log --pretty
6eb708ec7dfd75d9d6a063e2febf16bab3c7a163e203fc677c8a9178889bac012d6b3fcda56b1eb160b1be7fa56eb08985422ed879f220d42a0e6ec80c5735ea Commit for minio test

Contributions

Contributions are welcome! Please see CONTRIBUTING.md.

Key features

  • Easy: Use a workflow and syntax that you already know and love

  • Fast: Lightning fast operation, especially on large files and huge repositories

  • Infinite scalability: Stop worrying about maximum repository sizes and have the ability to grow indefinitely

  • Work from local SSD: Make a huge cloud disk appear like a local drive

  • Instant sync: Push local changes and pull down instantly on other clones

  • Versioning: Keep previous versions safe and have the ability to undo or go back in time

  • Forking: Ability to make many variants by forking

  • Verifiable: Be sure that you have everything and be tamper-proof (“data has not been messed with”)

  • Deduplication: Do not store the same data twice

  • Simplicity: Simple by design and provide one way to accomplish tasks

Command Line Help

$ s3git help
s3git applies the git philosophy to Cloud Storage. If you know git, you will know how to use s3git.

s3git is a simple CLI tool that allows you to create a distributed, decentralized and versioned repository.
It scales limitlessly to 100s of millions of files and PBs of storage and stores your data safely in S3.
Yet huge repos can be cloned on the SSD of your laptop for making local changes, committing and pushing back.

Usage:
  s3git [command]

Available Commands:
  add         Add stream or file(s) to the repository
  cat         Read a file from the repository
  clone       Clone a repository into a new directory
  commit      Commit the changes in the repository
  init        Create an empty repository
  log         Show commit log
  ls          List files in the repository
  pull        Update local repository
  push        Update remote repositories
  remote      Manage remote repositories
  snapshot    Manage snapshots
  status      Show changes in repository

Flags:
  -h, --help[=false]: help for s3git

Use "s3git [command] --help" for more information about a command.

License

s3git is released under the Apache License v2.0. You can find the complete text in the file LICENSE.

FAQ

Q Is s3git compatible with git at the binary level?
A No. git is optimized for text content, with very nice and powerful diffing and compressed storage, whereas s3git focuses on large repos with primarily non-text blobs backed by cloud storage like S3.
Q Do you support encryption?
A No. However it is trivial to encrypt data before streaming it into s3git add, e.g. pipe it through openssl enc or similar (see the sketch below).
Q Do you support zipping?
A No. Again it is trivial to compress data before streaming it into s3git add, e.g. pipe it through zip -r - . or similar (see the sketch below).
Q Why don't you provide a FUSE interface?
A Supporting FUSE would mean introducing a lot of complexity related to POSIX, which we would rather avoid.
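For example, minimal sketches of both workarounds (the <key> placeholder stands for whatever hash s3git add prints; file names, password and cipher choice are illustrative only):

$ # Encrypt before adding, decrypt when reading back
$ openssl enc -aes-256-cbc -pbkdf2 -pass pass:mysecret < report.pdf | s3git add
Added: <key>
$ s3git cat <key> | openssl enc -d -aes-256-cbc -pbkdf2 -pass pass:mysecret > report.pdf
$ # Compress a directory before adding (tar shown here; zip -r - . works similarly)
$ tar -czf - mydir | s3git add
Added: <key>
$ s3git cat <key> | tar -xzf -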

Comments
  • Put Access Key and Secret Key in a dot file

    A suggestion. It could be a good idea to put the Access Key and Secret Key in a configuration file, such as a dot file (e.g. .s3git.cfg), so we would not need to expose this information whenever we execute a command.

  • add viper config and read from configuration files

    This is my first attempt. Please note I'm just starting to use Go.

    • use https://github.com/spf13/viper in root.go
    • move access, secret, and endpoint arguments to global and shadow them with viper.BindPFlags (which resolves some TODOs already in place)
    • accept any variable in the environment, e.g. S3GIT_ACCESS is then viper.GetString("access")
    • add a global profile argument, also can be set with AWS_PROFILE from the environment
    • try to load from paths /etc/s3git, ~/.s3git with filenames config.json, config.yaml, and config.yml
    • try to load ~/.aws/config and ~/.aws/credentials as YAML (to be INI when Viper supports it)

    Not working: the Viper binding to the Cobra flags. Even though viper.GetString("access") returns foobar (showing that the configuration is loaded from echo '{ "access": "foobar" }' > ~/.s3git/config.json), the accessKey variable doesn't get foobar. This may be something about the order in which things are loaded, or I may have missed something.

  • Don't import code in our own repository from GitHub.

    It does not seem logical to me to import local code from GitHub. This only complicates development, because some code in the local repository is fetched from GitHub.

  • Is s3git project still active? - s3git future

    I'm wondering if this project is still active? The last commit was 2 years ago. I really like s3git and appreciate your effort, but I'm just curious about the future of this project.

  • Check if the repository directory is empty upon creation

    Check if the repository directory doesn't already exist and, if it does exist, that it is empty. This avoids overwriting or cluttering existing directories.

  • Usage question

    Is the s3git-go namespace in another repo?

    This is great. I have been using minio to do exactly the same thing, funnily enough.

    Does this support merging though?

  • Add a Gitter chat badge to README.md

    s3git/s3git now has a Chat Room on Gitter

    @fwessels has just created a chat room. You can visit it here: https://gitter.im/s3git/s3git.

    This pull-request adds this badge to your README.md:

    Gitter

    If my aim is a little off, please let me know.

    Happy chatting.

    PS: Click here if you would prefer not to receive automatic pull-requests from Gitter in future.

  • cannot find package

    The following error occurred when attempting to build.

    takuya@takuya-MacBookPro2012mid ~ % go get -d github.com/s3git/s3git
    cannot find package "github.com/hashicorp/hcl/hcl/printer" in any of:
    	/usr/local/Cellar/go/1.15.8/libexec/src/github.com/hashicorp/hcl/hcl/printer (from $GOROOT)
    	/Users/takuya/go/src/github.com/hashicorp/hcl/hcl/printer (from $GOPATH)
    takuya@takuya-MacBookPro2012mid ~ % go version
    go version go1.15.8 darwin/amd64
    
  • project abandoned?!?

    Hi.

    no updates in ~4 years?!? so:

    • is the project abandoned?
    • is there any maintainer fork known / available?
    • any alternative known / used by someone?
      • maybe git-annex?!?
  • doesn't compile on Windows 10

    Hi.

    • go version go1.14.2 windows/amd64
    • Windows 10, 1909
    C:\Users\me\go\src\github.com\s3git\s3git>go install
    # github.com/bmatsuo/lmdb-go/lmdb
    mdb.c: In function 'mdb_env_setup_locks':
    mdb.c:4853:17: warning: implicit declaration of function 'pthread_mutexattr_setrobust'; did you mean 'pthread_mutexattr_settype'? [-Wimplicit-function-declaration]
       if (!rc) rc = pthread_mutexattr_setrobust(&mattr, PTHREAD_MUTEX_ROBUST);
                     ^~~~~~~~~~~~~~~~~~~~~~~~~~~
                     pthread_mutexattr_settype
    mdb.c:4853:53: error: 'PTHREAD_MUTEX_ROBUST' undeclared (first use in this function); did you mean 'PTHREAD_MUTEX_DEFAULT'?
       if (!rc) rc = pthread_mutexattr_setrobust(&mattr, PTHREAD_MUTEX_ROBUST);
                                                         ^~~~~~~~~~~~~~~~~~~~
                                                         PTHREAD_MUTEX_DEFAULT
    mdb.c:4853:53: note: each undeclared identifier is reported only once for each function it appears in
    mdb.c: In function 'mdb_mutex_failed':
    mdb.c:362:37: warning: implicit declaration of function 'pthread_mutex_consistent'; did you mean 'pthread_mutex_init'? [-Wimplicit-function-declaration]
     #define mdb_mutex_consistent(mutex) pthread_mutex_consistent(mutex)
                                         ^
    mdb.c:10193:10: note: in expansion of macro 'mdb_mutex_consistent'
        rc2 = mdb_mutex_consistent(mutex);
              ^~~~~~~~~~~~~~~~~~~~
    mdb.c: In function 'mdb_cursor_put':
    mdb.c:6725:9: warning: this statement may fall through [-Wimplicit-fallthrough=]
          if (SIZELEFT(fp) < offset) {
             ^
    mdb.c:6730:5: note: here
         case MDB_CURRENT:
         ^~~~
    go: failed to remove work dir: GetFileInformationByHandle C:\Users\me\AppData\Local\Temp\go-build863785342\NUL: Incorrect function.
    
  • Push progress

    Is there a way to show push progress? It seems at the moment it just sits there showing 100% while the file is being uploaded.

    i0x71@debian:~/gitty$ s3git push
    Pushing 1 / 1 [====================================================================================================================================] 100.00%0

  • Not returning error on invalid auth

    Is it expected behavior to not return an error on invalid auth credentials? I have just spent 10 minutes trying to figure out how come clone makes an empty directory after my commit :smile:

    i0x71@debian:~$ s3git clone s3://gitty -a 'blah' -s 'blah' -e "http://xxxxxxx:9000"
    Cloning into /home/i0x71/gitty
    Done. Totaling 0 objects.

"rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, One Drive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

Website | Documentation | Download | Contributing | Changelog | Installation | Forum Rclone Rclone ("rsync for cloud storage") is a command-line progr

Jan 9, 2023
Cloud-Native distributed storage built on and for Kubernetes
Cloud-Native distributed storage built on and for Kubernetes

Longhorn Build Status Engine: Manager: Instance Manager: Share Manager: Backing Image Manager: UI: Test: Release Status Release Version Type 1.1 1.1.2

Jan 1, 2023
QingStor Object Storage service support for go-storage

go-services-qingstor QingStor Object Storage service support for go-storage. Install go get github.com/minhjh/go-service-qingstor/v3 Usage import ( "

Dec 13, 2021
Storj is building a decentralized cloud storage network
Storj is building a decentralized cloud storage network

Ongoing Storj v3 development. Decentralized cloud object storage that is affordable, easy to use, private, and secure.

Jan 8, 2023
SFTPGo - Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support - S3, Google Cloud Storage, Azure Blob

SFTPGo - Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support - S3, Google Cloud Storage, Azure Blob

Jan 4, 2023
Rook is an open source cloud-native storage orchestrator for Kubernetes

Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.

Oct 25, 2022
tstorage is a lightweight local on-disk storage engine for time-series data
tstorage is a lightweight local on-disk storage engine for time-series data

tstorage is a lightweight local on-disk storage engine for time-series data with a straightforward API. Especially ingestion is massively opt

Jan 1, 2023
An encrypted object storage system with unlimited space backed by Telegram.

TGStore An encrypted object storage system with unlimited space backed by Telegram. Please only upload what you really need to upload, don't abuse any

Nov 28, 2022
storage interface for local disk or AWS S3 (or Minio) platform

storage interface for local disk or AWS S3 (or Minio) platform

Apr 19, 2022
Terraform provider for the Minio object storage.

terraform-provider-minio A Terraform provider for Minio, a self-hosted object storage server that is compatible with S3. Check out the documenation on

Dec 1, 2022
A Redis-compatible server with PostgreSQL storage backend

postgredis A wild idea of having Redis-compatible server with PostgreSQL backend. Getting started As a binary: ./postgredis -addr=:6380 -db=postgres:/

Nov 8, 2021
CSI for S3 compatible SberCloud Object Storage Service

sbercloud-csi-obs CSI for S3 compatible SberCloud Object Storage Service This is a Container Storage Interface (CSI) for S3 (or S3 compatible) storage

Feb 17, 2022
Void is a zero storage cost large file sharing system.

void void is a zero storage cost large file sharing system. License Copyright © 2021 Changkun Ou. All rights reserved. Unauthorized using, copying, mo

Nov 22, 2021
This is a simple file storage server. User can upload file, delete file and list file on the server.
This is a simple file storage server.  User can upload file,  delete file and list file on the server.

Simple File Storage Server This is a simple file storage server. User can upload file, delete file and list file on the server. If you want to build a

Jan 19, 2022
High Performance, Kubernetes Native Object Storage
High Performance, Kubernetes Native Object Storage

MinIO Quickstart Guide MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Ama

Jan 2, 2023
Perkeep (née Camlistore) is your personal storage system for life: a way of storing, syncing, sharing, modelling and backing up content.

Perkeep is your personal storage system. It's a way to store, sync, share, import, model, and back up content. Keep your stuff for life. For more, see

Dec 26, 2022
Storage Orchestration for Kubernetes

What is Rook? Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse se

Dec 29, 2022
A High Performance Object Storage released under Apache License
A High Performance Object Storage released under Apache License

MinIO Quickstart Guide MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storag

Sep 30, 2021
Akutan is a distributed knowledge graph store, sometimes called an RDF store or a triple store.

Akutan is a distributed knowledge graph store, sometimes called an RDF store or a triple store. Knowledge graphs are suitable for modeling data that is highly interconnected by many types of relationships, like encyclopedic information about the world. A knowledge graph store enables rich queries on its data, which can be used to power real-time interfaces, to complement machine learning applications, and to make sense of new, unstructured information in the context of the existing knowledge.

Jan 7, 2023