High Performance, Kubernetes Native Object Storage

MinIO Quickstart Guide



MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.

This README provides quickstart instructions on running MinIO on baremetal hardware, including Docker-based installations. For Kubernetes environments, use the MinIO Kubernetes Operator.

Docker Installation

Use the following commands to run a standalone MinIO server on a Docker container.

Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Stable

Run the following command to run the latest stable image of MinIO on a Docker container using an ephemeral data volume:

docker run -p 9000:9000 minio/minio server /data

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: To deploy MinIO on Docker with persistent storage, you must map local persistent directories from the host OS to the container using the docker -v option. For example, -v /mnt/data:/data maps the host OS drive at /mnt/data to /data on the Docker container.
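A persistent standalone deployment typically combines the -v mapping with explicit root credentials. The following sketch uses placeholder values - the host path and credentials are examples, and the MINIO_ROOT_USER / MINIO_ROOT_PASSWORD variables override the default minioadmin:minioadmin credentials in recent MinIO releases (older releases used MINIO_ACCESS_KEY and MINIO_SECRET_KEY):

```shell
# Persistent standalone MinIO on Docker.
# /mnt/data and the credential values below are examples - replace with your own.
docker run -p 9000:9000 \
  -e "MINIO_ROOT_USER=myadmin" \
  -e "MINIO_ROOT_PASSWORD=change-me-long-password" \
  -v /mnt/data:/data \
  minio/minio server /data
```

With this invocation, objects survive container restarts because they live on the host at /mnt/data rather than in the container's ephemeral filesystem.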

Edge

Run the following command to run the bleeding-edge image of MinIO on a Docker container using an ephemeral data volume:

docker run -p 9000:9000 minio/minio:edge server /data

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: To deploy MinIO on Docker with persistent storage, you must map local persistent directories from the host OS to the container using the docker -v option. For example, -v /mnt/data:/data maps the host OS drive at /mnt/data to /data on the Docker container.

macOS

Use the following commands to run a standalone MinIO server on macOS.

Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Homebrew (recommended)

Run the following command to install the latest stable MinIO package using Homebrew. Replace /data with the path to the drive or directory in which you want MinIO to store data.

brew install minio/stable/minio
minio server /data

NOTE: If you previously installed minio using brew install minio, it is recommended that you reinstall it from the official minio/stable/minio tap instead:

brew uninstall minio
brew install minio/stable/minio

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

Binary Download

Use the following command to download and run a standalone MinIO server on macOS. Replace /data with the path to the drive or directory in which you want MinIO to store data.

wget https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x minio
./minio server /data

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

GNU/Linux

Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace /data with the path to the drive or directory in which you want MinIO to store data.

wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data


The following table lists the supported architectures. Replace the wget URL with the one matching your Linux host's architecture.

Architecture                 URL
64-bit Intel/AMD             https://dl.min.io/server/minio/release/linux-amd64/minio
64-bit ARM                   https://dl.min.io/server/minio/release/linux-arm64/minio
64-bit PowerPC LE (ppc64le)  https://dl.min.io/server/minio/release/linux-ppc64le/minio
IBM Z-Series (S390X)         https://dl.min.io/server/minio/release/linux-s390x/minio
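The table above can be collapsed into a small script that derives the correct download URL from uname -m. The mapping below (x86_64 -> amd64, aarch64 -> arm64, ppc64le, s390x) is an assumption based on the table and common uname output, so verify it on your host:

```shell
# Sketch: pick the MinIO download URL matching this host's architecture.
case "$(uname -m)" in
  x86_64)  MINIO_ARCH=amd64 ;;
  aarch64) MINIO_ARCH=arm64 ;;
  ppc64le) MINIO_ARCH=ppc64le ;;
  s390x)   MINIO_ARCH=s390x ;;
  *) echo "unsupported architecture: $(uname -m)" >&2; exit 1 ;;
esac
MINIO_URL="https://dl.min.io/server/minio/release/linux-${MINIO_ARCH}/minio"
echo "$MINIO_URL"
```

You can then feed $MINIO_URL to wget as in the commands above.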

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Microsoft Windows

To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:

https://dl.min.io/server/minio/release/windows-amd64/minio.exe

Use the following command to run a standalone MinIO server on the Windows host. Replace D:\ with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or PowerShell directory to the location of the minio.exe executable, or add that directory to the system PATH:

minio.exe server D:\

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

FreeBSD

MinIO does not provide an official FreeBSD binary. However, FreeBSD maintains an upstream release that you can install using pkg:

pkg install minio
sysrc minio_enable=yes
sysrc minio_disks=/home/user/Photos
service minio start

Install from Source

Use the following command to compile and run a standalone MinIO server from source. Source installation is intended only for developers and advanced users. If you do not have a working Golang environment, please follow How to install Golang. The minimum required version is go1.16:

GO111MODULE=on go get github.com/minio/minio
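Note that on Go 1.17 and newer, plain go get no longer builds and installs binaries; the go install form below is the usual replacement (assumed equivalent for this repository - it installs whatever the latest tagged release is into $GOPATH/bin):

```shell
# Go 1.17+ replacement for the `go get` invocation above.
# Installs the latest tagged minio release into $GOPATH/bin (or $HOME/go/bin).
go install github.com/minio/minio@latest
```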

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Browser, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MINIO SDKS in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features such as versioning, object locking, and bucket replication require deploying MinIO in distributed mode with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

MinIO strongly recommends against using compiled-from-source MinIO servers for production environments.

Deployment Recommendations

Allow port access for Firewalls

By default MinIO uses port 9000 to listen for incoming connections. If your platform blocks this port by default, you may need to enable access to it.
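If opening port 9000 is not an option, MinIO can instead listen on a different port via its --address flag. Port 9001 below is an arbitrary example; check minio server --help on your release to confirm the flag:

```shell
# Bind the MinIO server to an alternate port instead of the default 9000.
minio server --address ":9001" /data
```

Remember to adjust any firewall rules and client endpoints to match the new port.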

ufw

For hosts with ufw enabled (Debian-based distros), you can use the ufw command to allow traffic to specific ports. Use the command below to allow access to port 9000:

ufw allow 9000

The command below allows all incoming traffic to ports 9000 through 9010:

ufw allow 9000:9010/tcp

firewall-cmd

For hosts with firewall-cmd enabled (CentOS), you can use the firewall-cmd command to allow traffic to specific ports. Use the commands below to allow access to port 9000:

firewall-cmd --get-active-zones

This command lists the active zone(s). Now apply port rules to the relevant zones returned above. For example, if the zone is public, use:

firewall-cmd --zone=public --add-port=9000/tcp --permanent

Note that --permanent makes the rule persistent across firewall start, restart, and reload. Finally, reload the firewall for the changes to take effect:

firewall-cmd --reload

iptables

For hosts with iptables enabled (RHEL, CentOS, etc.), you can use the iptables command to allow traffic to specific ports. Use the commands below to allow access to port 9000:

iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart

The commands below allow all incoming traffic to ports 9000 through 9010:

iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart

Pre-existing data

When deployed on a single drive, the MinIO server lets clients access any pre-existing data in the data directory. For example, if MinIO is started with the command minio server /mnt/data, any pre-existing data in /mnt/data is accessible to clients.

The above statement is also valid for all gateway backends.
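A quick way to see this behavior is to stage files under the data directory before starting the server. In single-drive (filesystem) mode, each top-level directory under the data path is exposed as a bucket; the paths and names below are examples:

```shell
# Stage pre-existing data, then serve it (example paths).
mkdir -p /tmp/minio-data/mybucket
echo "hello" > /tmp/minio-data/mybucket/greeting.txt

# `mybucket` and greeting.txt are now visible to S3 clients:
minio server /tmp/minio-data
```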

Test MinIO Connectivity

Test using MinIO Browser

MinIO Server comes with an embedded web based object browser. Point your web browser to http://127.0.0.1:9000 to ensure your server has started successfully.

Screenshot

Test using MinIO Client mc

mc provides a modern alternative to UNIX commands like ls, cat, cp, mirror, and diff. It supports both filesystems and Amazon S3-compatible cloud storage services. Follow the MinIO Client Quickstart Guide for further instructions.
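A minimal smoke test against the local server might look like the following. The alias name local and the bucket name are arbitrary, and the credentials assume the quickstart defaults; note that older mc releases spell the first command mc config host add instead of mc alias set:

```shell
# Register the local server under an alias, create a bucket, copy a file in, list it.
mc alias set local http://127.0.0.1:9000 minioadmin minioadmin
mc mb local/test-bucket
echo "hello" > /tmp/hello.txt
mc cp /tmp/hello.txt local/test-bucket/
mc ls local/test-bucket
```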

Upgrading MinIO

MinIO server supports rolling upgrades: you can update one MinIO instance at a time in a distributed cluster, which allows upgrades with no downtime. Upgrades can be done manually by replacing the binary with the latest release and restarting all servers in a rolling fashion. However, we recommend using mc admin update from the client. This updates all the nodes in the cluster simultaneously and restarts them, as shown in the following command from the MinIO client (mc):

mc admin update <minio alias, e.g., myminio>

NOTE: Some releases might not allow rolling upgrades. This is always called out in the release notes, so it is generally advised to read them before upgrading. In such cases, mc admin update remains the recommended mechanism to upgrade all servers at once.
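Put together, a typical upgrade session from the client might look like this; the alias name, endpoint, and credential placeholders are examples:

```shell
# Point mc at the deployment, then update and restart all servers at once.
mc alias set myminio http://minio.example.net:9000 ACCESS_KEY SECRET_KEY
mc admin update myminio
```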

Important things to remember during MinIO upgrades

  • mc admin update only works if the user running MinIO has write access to the parent directory of the binary. For example, if the current binary is at /usr/local/bin/minio, you need write access to /usr/local/bin.
  • mc admin update updates and restarts all servers simultaneously; applications will retry and continue their respective operations after the upgrade.
  • mc admin update is disabled in Kubernetes/container environments; container environments provide their own mechanisms for rolling out updates.
  • In federated setups, run mc admin update against each cluster individually. Avoid updating mc to any new release until all clusters have been successfully updated.
  • If using kes as KMS with MinIO, just replace the kes binary and restart kes. More information about kes can be found here.
  • If using Vault as KMS with MinIO, ensure you have followed the Vault upgrade procedure outlined here: https://www.vaultproject.io/docs/upgrading/index.html
  • If using etcd with MinIO for federation, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md

Explore Further

Contribute to MinIO Project

Please follow the MinIO Contributor's Guide.

License

Use of MinIO is governed by the Apache 2.0 License found at LICENSE.

Comments
  • I/O Timeout

    Howdy folks,

    Whether running the latest version or not, on the latest Debian (9) or not, on RAID 10, with 30 transfers and full verbose output, we get:

    ERRO[0124] Unable to create object part. cause=read tcp 46.4..:443->89.114.13*.*:49244: i/o timeout source=[object-handlers.go:817:objectAPIHandlers.PutObjectPartHandler()] stack=fs-v1-helpers.go:272:fsCreateFile fs-v1-multipart.go:523:fsObjects.PutObjectPart :339:(*fsObjects).PutObjectPart object-handlers.go:814:objectAPIHandlers.PutObjectPartHandler api-router.go:46:(objectAPIHandlers).PutObjectPartHandler-fm

    Any hint?

    Disks are fine, plenty of IO available. And these are mostly EPS files, jpg, etc, only 63GB total. I got this very same error in two servers.

  • Error: hash does not match

    After updating MinIO to 2021-05-27T22:06:31Z, we get a message like the following in the console for a lot of files:

    minio04 API: SYSTEM()
    minio04 Time: 10:27:54 UTC 06/05/2021
    minio04 DeploymentID: 7c507082-fe65-439a-9391-3b44b1b1d2ed
    minio04 Error: Disk: http://minio01:9000/drive6 -> alwaqiyah/Cong_Sweden_Fiter1441SD.mp4/aed31203-8a34-4a28-92dc-063bfe0dc34c/part.1 - content hash does not match - expected c5e2c21880064196b4c8c21bad1b08452ba8662ae32408d92343086a34412322, got 73462c13ceab3d7d4e1849af26b94fa716ad868d9080344999a61e6ab9bc5510 (*errors.errorString)
    minio04 2: cmd/bitrot-streaming.go:179:cmd.(*streamingBitrotReader).ReadAt()
    minio04 1: cmd/erasure-decode.go:163:cmd.(*parallelReader).Read.func1()

    minio04 API: SYSTEM()
    minio04 Time: 10:27:54 UTC 06/05/2021
    minio04 DeploymentID: 7c507082-fe65-439a-9391-3b44b1b1d2ed
    minio04 Error: Disk: http://minio03:9000/drive6 -> alwaqiyah/Cong_Sweden_Fiter1441SD.mp4/aed31203-8a34-4a28-92dc-063bfe0dc34c/part.1 - content hash does not match - expected 6319f76775b13b6f23495b4e53374f7629ce37ce27ccb971ae42bed55efe5771, got e271da0633bf2c805eb576bf3b7379013b27d8621ad066c015de7477cde0ced0 (*errors.errorString)
    minio04 2: cmd/bitrot-streaming.go:179:cmd.(*streamingBitrotReader).ReadAt()
    minio04 1: cmd/erasure-decode.go:163:cmd.(*parallelReader).Read.func1()

    Current Behavior

    in the browser we got something like <Error><Code>SlowDown</Code><Message>Resource requested is unreadable, please reduce your request rate</Message><Key>Cong_Sweden_Fiter1441SD.mp4</Key><BucketName>alwaqiyah</BucketName><Resource>/Cong_Sweden_Fiter1441SD.mp4</Resource><RequestId>1685A99F3532F870</RequestId><HostId>7c507082-fe65-439a-9391-3b44b1b1d2ed</HostId></Error>

    Server environment

    • Version 2021-05-27T22:06:31Z
    • Server setup and configuration: 4 nodes CPU : Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz 8C /16T MEMORY: 32G Disk: 6 disk 11T per disk
    • Operating System and version (uname -a): Linux minio04 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Minio server stops every few hours

    Expected Behavior

    Expected behavior is for the server to remain up until it is stopped.

    Current Behavior

    The server stops (process dies) every few hours.

    Steps to Reproduce (for bugs)

    1. docker pull minio/minio:edge
    2. docker run -p 9000:9000 minio/minio:edge server /export
    3. Wait a few hours and the server goes down

    Context

    Because of this issue, I need to restart the server every few hours. When I restart it, a new public and private key is generated, so I need to change the keys in my code and redeploy it. A new server instance is created every time I restart, which essentially means that all my files, stored in the previous server instance, get lost.

    Your Environment

    • Server type and version: Latest from Master branch
    • Operating System and version: centos-release-7-3.1611.el7.centos.x86_64
  • Minio cluster fails to start/sync

    All nodes hang with "Waiting for a minimum of 2 disks to come online".

    Expected Behavior

    I expect it to start or at least provide log data as to what is wrong.

    Current Behavior

    All 4 nodes show the same data in the log.

    Waiting for a minimum of 2 disks to come online (elapsed 16m31s)
    Waiting for a minimum of 2 disks to come online (elapsed 16m32s)
    Waiting for a minimum of 2 disks to come online (elapsed 16m33s)
    Waiting for a minimum of 2 disks to come online (elapsed 16m34s)
    Waiting for a minimum of 2 disks to come online (elapsed 16m35s)

    If I log into a container I can ping the other containers, and a curl to http://minio#:9000/data produces the following:

    <Error><Code>XMinioServerNotInitialized</Code><Message>Server not initialized, please try again.</Message><BucketName>data</BucketName><Resource>/data</Resource><RequestId>15F7158280B18599</RequestId><HostId></HostId></Error>/

    Steps to Reproduce (for bugs)

    Not sure how to reproduce, but it happened when my docker swarm failed and I had to recreate it. While digging around in the MinIO config stored in the volume, I noticed that recreating the stack updated to a newer MinIO: the old version was RELEASE.2019-09-26T19-42-35Z, while the new one is RELEASE.2020-02-20T22-51-23Z.

    Note: This had run for months without issue.

    Your Environment

    • Version used (minio version): RELEASE.2020-02-20T22-51-23Z
    • Environment name and version (e.g. nginx 1.9.1): Docker 19.03.6
    • Server type and version: 20 VM Docker Swarm
    • Operating System and version (uname -a): (docker nodes are) CentOS Linux release 7.7.1908 (Core)

    I've been working on this for about a week, digging through posts to see if I can find a way to get it back up and have not found anything. So I thought I would ask here.

    For Reference, this is my stack.yaml

    version: "3.2"
    services:
      minio1:
        image: minio/minio
        volumes:
          - data2:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server  http://minio1:9000/data http://minio4:9000/data http://minio2:9000/data http://minio3:9000/data
    
      minio2:
        image: minio/minio
        volumes:
          - data3:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server http://minio2:9000/data http://minio4:9000/data http://minio1:9000/data http://minio3:9000/data
    
      minio3:
        image: minio/minio
        volumes:
          - data4:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server http://minio3:9000/data http://minio4:9000/data http://minio1:9000/data http://minio2:9000/data 
    
      minio4:
        image: minio/minio
        volumes:
          - data5:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server http://minio4:9000/data http://minio1:9000/data http://minio2:9000/data http://minio3:9000/data
        
    <Removed Volumes>
    <Removed Networks>
    
    
  • Implement S3 Gateway to third party cloud storage providers.

    Description

    Currently supported backend is Azure Blob Storage.

    export MINIO_ACCESS_KEY=azureaccountname
    export MINIO_SECRET_KEY=azureaccountkey
    minio gateway azure
    

    Motivation and Context

    Minio gateway adds Amazon S3 compatibility to third party cloud storage providers.

    How Has This Been Tested?

    Manually and by the rest of the @minio/core team

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [x] I have updated the documentation accordingly.
    • [x] I have added tests to cover my changes.
    • [x] All new and existing tests passed.
  • Hadoop 3.3 Compatibility issue with single drive mode (minio server /data)

    MinIO works with Hadoop 3.2, but not with Hadoop 3.3.

    Current Behavior

    Repro code:

    val conf = new Configuration()
    conf.set("fs.s3a.endpoint", "http://127.0.0.1:9000")
    conf.set("fs.s3a.path.style.access", "true")
    conf.set("fs.s3a.access.key", "user_access_key")
    conf.set("fs.s3a.secret.key", "password")
    
    val path = new Path("s3a://comcast-test")
    val fs = path.getFileSystem(conf)
    
    fs.mkdirs(new Path("/testdelta/_delta_log"))
    fs.getFileStatus(new Path("/testdelta/_delta_log"))
    

    Fails with FileNotFoundException. The same code works against real S3. It also works with Hadoop 3.2; it only fails on 3.3 and newer Hadoop branches.

    Possible Solution

    This works in Hadoop 3.2 because of this infamous "Is this necessary?" block of code: https://github.com/apache/hadoop/blob/branch-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2204-L2223

    That block was removed in Hadoop 3.3 - https://github.com/apache/hadoop/blob/branch-3.3.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2179

    and that removal causes the regression.

    Steps to Reproduce (for bugs)

    See code above.

    Context

    Some applications need to create subdirectories before they can write to Minio, so this affects

    Regression

    Yes, this is a regression.

    Your Environment

    • Subnet ticket 3439
  • AWS iOS SDK working with minio

    Error message

    SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method. /images/test.jpg 3L1373L137 Upload failed with error: (The operation couldn't be completed. (com.amazonaws.AWSServiceErrorDomain error 3.))

    Here is how I configure

    let accessKey = "xxxxxxx"
    let secretKey = "xxxxxxx"       
    let credentialsProvider = AWSStaticCredentialsProvider(accessKey: accessKey, secretKey: secretKey)
    let configuration = AWSServiceConfiguration(region: AWSRegionType.USEast1, endpoint: AWSEndpoint(region: .USEast1, service: .APIGateway, url: URL( string:"http://xxxx.com:9000")),credentialsProvider: credentialsProvider)
    AWSServiceManager.default().defaultServiceConfiguration = configuration
    

    Possible Solution

    Refer to this issue: https://github.com/minio/mc/issues/1707 - maybe it is caused by a region issue?

    Your Environment

    • Version used:

      • official docker latest
      • awss3 lib used: pod 'AWSS3', '~> 2.5'
  • ecosystem: Validate all spark and hadoop supported s3 connectors

    Expected behaviour

    I want to use spark-shell (v2.0.1) to connect to minio local to read input data and write back output data.

    Actual behaviour

    Attempting a simple read of a text file (json in this case) results in the error:

    com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: null, AWS Error Code: null, AWS Error Message: Bad Request
    

    (full stack will be below)

    Steps to reproduce the behaviour

    Install Oracle Java 8 (1.8.0_71), Scala 2.11.8, Minio (via Homebrew), and Spark 2.0.1.

    Start the minio server locally, log into the browser UI, create a bucket named data, and upload a small json file to it (e.g. bacon_tiny.json).

    { "bacon": {
        "isCrispy": true,
        "isDelicious": true,
        "isVegan": false
        }
    }
    

    Relevant bits of spark-env.sh (all other bits left to default):

    HADOOP_CONF_DIR=/Users/bkarels/hadoop/hadoop-2.7.3/etc/hadoop
    SPARK_LOCAL_IP="127.0.0.1"
    SPARK_MASTER_HOST="localhost"
    SPARK_MASTER_WEBUI_PORT=8080
    SPARK_MASTER_PORT=7077
    SPARK_DAEMON_JAVA_OPTS="-Djava.net.preferIPv4Stack=true -Dcom.amazonaws.services.s3.enableV4=true"
    

    core-site.xml (from ~/hadoop/hadoop-2.7.3/etc/hadoop):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <configuration>
      <property>
        <name>fs.s3a.endpoint</name>
        <value>http://127.0.0.1:9000</value>
      </property>
    
      <property>
        <name>fs.s3a.access.key</name>
        <description>AWS access key ID.</description>
        <value>*************************</value>
      </property>
    
      <property>
        <name>fs.s3a.secret.key</name>
        <description>AWS secret key.</description>
        <value>*********************************************</value>
      </property>
    
    </configuration>
    

    Starting the spark-shell

    $ ./bin/spark-shell --master local[4] --jars "./bin/hadoop-aws-2.7.1.jar,./bin/aws-java-sdk-1.7.4.jar"
    scala> val bacon = sc.textFile("s3a://data/bacon_tiny.json").first
    
    com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: null, AWS Error Code: null, AWS Error Message: Bad Request
      at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
      at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
      at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
      at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
      at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
      at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
      at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
      at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      at scala.Option.getOrElse(Option.scala:121)
      at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      at scala.Option.getOrElse(Option.scala:121)
      at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1303)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
      at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
      at org.apache.spark.rdd.RDD.take(RDD.scala:1298)
      at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1338)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
      at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
      at org.apache.spark.rdd.RDD.first(RDD.scala:1337)
      ... 48 elided
    

    All else being equal, I can adjust core-site.xml to point to an actual S3 instance and things work, which is what has me thinking this might be a Minio issue.

    Minio version

    Version: 2016-09-11T17:42:18Z
    Release-Tag: RELEASE.2016-09-11T17-42-18Z
    Commit-ID: 85e2d886bcb005d49f3876d6849a2b5a55e03cd3
    

    System information

    Running on OSX 10.11.6
    MacBook Pro (Retina, 15-inch, Mid 2015)
    2.5 GHz Intel Core i7
    16 GB 1600 MHz DDR3
    
  • ARM64 : Minio server crashes randomly

    ARM64 : Minio server crashes randomly

    The Minio server terminates randomly on the ARM64 platform. We built the latest code on ARM64 and used it for testing. The deployment type is "distributed mode". In a week of testing on 6 nodes, 3 servers crashed. The load was moderate when these crashes happened.

    Log messages:

    goroutine 338370336 [IO wait]:
    runtime.gopark(0x1821768, 0xfffe25850a98, 0x4001841b02, 0x5)
        /usr/local/go/src/runtime/proc.go:301 +0xf0 fp=0x40042fc9c0 sp=0x40042fc9a0 pc=0x43cf50
    runtime.netpollblock(0xfffe25850a70, 0x72, 0x409266a000)
        /usr/local/go/src/runtime/netpoll.go:389 +0xa4 fp=0x40042fca00 sp=0x40042fc9c0 pc=0x4385f4
    internal/poll.runtime_pollWait(0xfffe25850a70, 0x72, 0xffffffffffffffff)
        /usr/local/go/src/runtime/netpoll.go:182 +0x48 fp=0x40042fca30 sp=0x40042fca00 pc=0x437b48
    internal/poll.(*pollDesc).wait(0x408b030818, 0x72, 0x1000, 0x1000, 0xffffffffffffffff)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0xa0 fp=0x40042fca60 sp=0x40042fca30 pc=0x493fc0
    internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:92
    internal/poll.(*FD).Read(0x408b030800, 0x409266a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/internal/poll/fd_unix.go:169 +0x170 fp=0x40042fcac0 sp=0x40042fca60 pc=0x494e40
    net.(*netFD).Read(0x408b030800, 0x409266a000, 0x1000, 0x1000, 0x40042fcb78, 0x413008, 0x40042fcb60)
        /usr/local/go/src/net/fd_unix.go:202 +0x44 fp=0x40042fcb20 sp=0x40042fcac0 pc=0x598c14
    net.(*conn).Read(0x4031f50158, 0x409266a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:177 +0x5c fp=0x40042fcb80 sp=0x40042fcb20 pc=0x5ab42c
    net/http.(*persistConn).Read(0x408dc807e0, 0x409266a000, 0x1000, 0x1000, 0x40042fcc68, 0x6c2f30, 0x405fa8c840)
        /usr/local/go/src/net/http/transport.go:1524 +0x60 fp=0x40042fcc00 sp=0x40042fcb80 pc=0x6c1da0
    bufio.(*Reader).fill(0x4004105a40)
        /usr/local/go/src/bufio/bufio.go:100 +0x100 fp=0x40042fcc50 sp=0x40042fcc00 pc=0x4b0000
    bufio.(*Reader).Peek(0x4004105a40, 0x1, 0x0, 0x0, 0x1, 0x400d6c5600, 0x0)
        /usr/local/go/src/bufio/bufio.go:138 +0x40 fp=0x40042fcc70 sp=0x40042fcc50 pc=0x4b0180
    net/http.(*persistConn).readLoop(0x408dc807e0)
        /usr/local/go/src/net/http/transport.go:1677 +0x16c fp=0x40042fcfd0 sp=0x40042fcc70 pc=0x6c282c
    runtime.goexit()
        /usr/local/go/src/runtime/asm_arm64.s:1128 +0x4 fp=0x40042fcfd0 sp=0x40042fcfd0 pc=0x46aa74
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport

    Expected Behavior

    Minio server shouldn't terminate.

    Current Behavior

    Minio server terminates.

    Your Environment

    • Version used (minio version): Minio (built from source on ARM64 platform) Version: DEVELOPMENT.GOGET Release-Tag: DEVELOPMENT.GOGET Commit-ID: DEVELOPMENT.GOGET

    • Operating System and version (uname -a): aarch64 aarch64 aarch64 GNU/Linux

  • Scheduled removal of MinIO Gateway for GCS, Azure, HDFS

    Scheduled removal of MinIO Gateway for GCS, Azure, HDFS

    MinIO Gateway will be removed by June 1st, 2022 from the MinIO repository:

    Community Users

    • Please migrate your MinIO Gateway deployments from Azure, GCS, HDFS to MinIO Distributed Setups

    • MinIO S3 Gateway will be renamed "minio edge" and will only support MinIO backends, extending functionality such as using remote credentials locally as "read-only" for authentication and policy management.

    • Newer MinIO NAS/single drive setups will move to a single-data, zero-parity mode (re-purposing the erasure-coded backend used for distributed setups, but with 0 parity). This makes distributed-setup features available for single drive deployments as well, such as:

      • Versioning
      • ILM
      • Replication and more...
    • Existing NAS/single drive setups will work as-is; nothing changes.

    Paid Users

    All existing paid customers will be supported as per their LTS support contract. If there are bugs, they will be fixed and backported fixes will be provided. No new features will be implemented for the Gateway implementations.

  • cannot delete object with minio client

    cannot delete object with minio client

    I tried to delete an object with the minio client using the following command:

     mc rm exanic/newhome-dev/public/exanic/data/resources/newhome.ch/release.4.0/AGKB/BEN404357/OBJ3248251/MD68787625.jpg
    

    The result is: Removing exanic/newhome-dev/public/exanic/data/resources/newhome.ch/release.4.0/AGKB/BEN404357/OBJ3248251/MD68787625.jpg.

    But the file is still listed in the directory:

     mc ls exanic/newhome-dev/public/exanic/data/resources/newhome.ch/release.4.0/AGKB/BEN404357/OBJ3248251/MD68787625.jpg
    [2020-01-17 13:43:36 CET]   13KiB MD68787625.jpg
    

    When I try to download or view the file, I get an error:

    mc head exanic/newhome-dev/public/exanic/data/resources/newhome.ch/release.4.0/AGKB/BEN404357/OBJ3248251/MD68787625.jpg
    mc: <ERROR> Unable to read from `exanic/newhome-dev/public/exanic/data/resources/newhome.ch/release.4.0/AGKB/BEN404357/OBJ3248251/MD68787625.jpg`. Object does not exist.
    

    When I try to download the file with the web UI, it returns the error "The specified key does not exist."

    Expected Behavior

    The file should no longer be listed by the mc ls command.

    Current Behavior

    The file is still listed after the delete. It also still exists on disk on the server.

    Possible Solution

    There could be a problem in the delete operation.

    Steps to Reproduce (for bugs)

    1. Delete a file with mc rm
    2. List files in the bucket with mc tree
    3. The file is not actually deleted and is still listed

    Context

    Files which no longer exist are still listed in the bucket.

    Your Environment

    • Version used (minio version): RELEASE.2020-01-16T22-40-29Z
    • Environment name and version (e.g. nginx 1.9.1): minio is setup as cluster with 6 nodes.
    • Server type and version: ubuntu 18.04 lts
    • Operating System and version (uname -a): Linux vstorage01 4.15.0-62-generic #69-Ubuntu SMP Wed Sep 4 20:55:53 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
  • Fix Invalid setting

    Fix Invalid setting

    Description

    The assignment to r.ID on line 73 has no effect: validateID is declared with a value receiver, so it mutates a copy of the Rule rather than the caller's value.

    // validateID - checks if ID is valid or not.
    func (r Rule) validateID() error { // BUG: value receiver; r is a copy of the caller's Rule
    	IDLen := len(r.ID)
    	// generate new ID when not provided
    	// cannot be longer than 255 characters
    	if IDLen == 0 {
    		if newID, err := getNewUUID(); err == nil {
    			r.ID = newID // this assignment is lost when the method returns
    		} else {
    			return err
    		}
    	} else if IDLen > 255 {
    		return errInvalidRuleID
    	}
    	return nil
    }
    

    Example:

    package main
    
    import "fmt"
    
    func main() {
    	a := A{}
    	a.SetB("Test")
    	fmt.Println("output:", a.B)
    	a.SetBWithPoint("Test")
    	fmt.Println("output:", a.B)
    }
    
    type A struct {
    	B string
    }
    
    func (a A) SetB(str string) {
    	a.B = str
    }
    
    func (a *A) SetBWithPoint(str string) {
    	a.B = str
    }
    
    $ go run main.go  
    output: 
    output: Test
    
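The fix amounts to giving validateID a pointer receiver, so the generated ID survives the call. A minimal, self-contained sketch; getNewUUID is stubbed here for illustration and is not MinIO's actual helper:

```go
package main

import (
	"errors"
	"fmt"
)

var errInvalidRuleID = errors.New("rule ID must be at most 255 characters")

// getNewUUID is a stand-in for MinIO's UUID helper, stubbed so the
// example is self-contained.
func getNewUUID() (string, error) {
	return "generated-uuid", nil
}

type Rule struct {
	ID string
}

// validateID uses a pointer receiver, so assigning r.ID mutates the
// caller's Rule instead of a discarded copy.
func (r *Rule) validateID() error {
	IDLen := len(r.ID)
	if IDLen == 0 {
		newID, err := getNewUUID()
		if err != nil {
			return err
		}
		r.ID = newID
	} else if IDLen > 255 {
		return errInvalidRuleID
	}
	return nil
}

func main() {
	r := Rule{}
	if err := r.validateID(); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("ID after validateID:", r.ID) // prints "ID after validateID: generated-uuid"
}
```

With the value receiver from the PR description, the same call would leave r.ID empty, exactly as the SetB/SetBWithPoint example above demonstrates.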

    Motivation and Context

    How to test this PR?

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • persist the non-default creds in config

    persist the non-default creds in config

    Description

    persist the non-default creds in config

    Motivation and Context

    non-default creds should persist to disk and we must detect that they are not set to "default" credentials by mistake.

    Print a warning indicating that the deployment is now using default credentials.

    NOTE: we cannot stop the service; instead we simply warn the users at this point in time.

    How to test this PR?

    Start a fresh setup with

    #!/bin/bash
    
    set -x
    
    export MINIO_PROMETHEUS_AUTH_TYPE="public"
    export MINIO_API_DELETE_CLEANUP_INTERVAL="5s"
    export MINIO_CI_CD=1
    export MINIO_SCANNER_CYCLE=10s
    export GOMAXPROCS=2
    export MINIO_ROOT_USER=minio
    export MINIO_ROOT_PASSWORD=minio123
    export MINIO_SUBNET_PROXY=http://localhost:4389
    
    killall -9 minio
    # rm -rf ${HOME}/tmp/dist
    
    scheme="http"
    nr_servers=4
    
    addr="localhost"
    args=""
    for ((i = 0; i < nr_servers; i++)); do
        args="$args $scheme://$addr:$((9100 + i))/${HOME}/tmp/dist/path1/$i"
    done
    
    echo $args
    
    for ((i = 0; i < nr_servers; i++)); do
        # capture both stdout and stderr in the per-server log file
        (minio server --address ":$((9100 + i))" $args > /tmp/log$i.txt 2>&1) &
    done
    
    ~ ./run-dist.sh
    

    Then change the script by removing MINIO_ROOT_USER and MINIO_ROOT_PASSWORD, and run it again:

    ~ ./run-dist.sh
    WARNING: Detected default credentials 'minioadmin:minioadmin', we recommend that you change these values with 'MINIO_ROOT_USER' and 'MINIO_ROOT_PASSWORD' environment variables
    WARNING: Detected credentials changed to 'minioadmin:minioadmin', please set them back to previously set values
    ...
    ...
    
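The default-credentials check behind that warning can be sketched as a comparison against the well-known defaults. This is an illustrative helper, not MinIO's actual implementation; the function name is hypothetical:

```go
package main

import "fmt"

const (
	defaultRootUser     = "minioadmin"
	defaultRootPassword = "minioadmin"
)

// usingDefaultCreds is a hypothetical helper: it reports whether the
// configured root credentials are still the well-known defaults.
func usingDefaultCreds(user, password string) bool {
	return user == defaultRootUser && password == defaultRootPassword
}

func main() {
	if usingDefaultCreds("minioadmin", "minioadmin") {
		fmt.Println("WARNING: Detected default credentials 'minioadmin:minioadmin'," +
			" we recommend that you change these values with 'MINIO_ROOT_USER'" +
			" and 'MINIO_ROOT_PASSWORD' environment variables")
	}
}
```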

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • Freeze before exit when _MINIO_DEBUG_BLOCK_ON_FATAL_CRASH is defined

    Freeze before exit when _MINIO_DEBUG_BLOCK_ON_FATAL_CRASH is defined

    Description

    In some cases, it is a good idea to intercept fatalIf() or a panic crash and block to make it easier to fix issues.

    For example, a k8s pod crashing in a loop is hard to debug because it is difficult to execute commands inside that pod.

    Motivation and Context

    Make it easier to debug MinIO crashing pods in k8s for internal or external reasons

    How to test this PR?

    Add a panic("error") in main() of server-main.go, then:

    export _MINIO_SAFE=1
    export CI=1 
    ./minio server /tmp/fs/
    

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • GitHub Workflows security hardening

    GitHub Workflows security hardening

    This PR adds an explicit permissions section to workflows. This is a security best practice because, by default, workflows run with an extended set of permissions (except for on: pull_request from external forks). Specifying any permission explicitly sets all others to none. Applying the principle of least privilege restricts the damage a compromised workflow can do (because of an injection or a compromised third-party tool or action). It is recommended to set the most restrictive permissions at the top level and grant write permissions at the job level case by case.

  • helm: modify the job create order

    helm: modify the job create order

    Description

    Merge the post jobs into a single pod to control their execution order.

    Motivation and Context

    fix #15695

    How to test this PR?

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • Helm Chart Job Ordering

    Helm Chart Job Ordering

    NOTE

    I use the Helm chart to deploy minio and I specify the following keys:

    • buckets:
    • customCommands

    A command in the customCommands key references a bucket that is created via the buckets job.

    Since the customCommands job is executed first, it cannot resolve a bucket that has not been created, and fails.

    Is it possible for the chart user to order these jobs, or to run the "primitives" like the buckets job first?

    Expected Behavior

    customCommands runs dead last

    Current Behavior

    customCommands runs first

    Possible Solution

    Implement ordering?

    Context

    This really kills the automation aspect of the buckets key, since I will have to create the buckets manually in customCommands.
