High Performance, Kubernetes Native Object Storage

MinIO Quickstart Guide

MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Amazon S3 cloud storage service. Use MinIO to build high performance infrastructure for machine learning, analytics and application data workloads.

This README provides quickstart instructions on running MinIO on baremetal hardware, including container-based installations. For Kubernetes environments, use the MinIO Kubernetes Operator.

Container Installation

Use the following commands to run a standalone MinIO server as a container.

Standalone MinIO servers are best suited for early development and evaluation. Certain features, such as versioning, object locking, and bucket replication, require deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.
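
As a minimal sketch of such a deployment in a container, four host directories could be mounted as separate drives; with four or more drives MinIO enables Erasure Coding automatically. The /mnt/drive1 through /mnt/drive4 host paths below are assumptions for illustration only.

```shell
# Example only: run MinIO in a container with four mounted drives so
# Erasure Coding is enabled. Host paths are placeholders.
podman run -p 9000:9000 -p 9001:9001 \
  -v /mnt/drive1:/data1 -v /mnt/drive2:/data2 \
  -v /mnt/drive3:/data3 -v /mnt/drive4:/data4 \
  quay.io/minio/minio server /data{1...4} --console-address ":9001"
```

The `{1...4}` expansion (note the three dots) is MinIO's drive-range syntax for passing multiple drives in one argument.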

Stable

Run the following command to run the latest stable image of MinIO as a container using an ephemeral data volume:

podman run -p 9000:9000 -p 9001:9001 \
  quay.io/minio/minio server /data --console-address ":9001"

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Console, an embedded object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MinIO SDKs in the navigation to view MinIO SDKs for supported languages.
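
As a sketch of that mc workflow against the container above, the alias name myminio, the bucket name mybucket, and the hello.txt file are assumptions; the credentials are the defaults mentioned earlier.

```shell
# Register the local deployment under an alias, then create a bucket,
# copy a file into it, and list the bucket contents.
mc alias set myminio http://127.0.0.1:9000 minioadmin minioadmin
mc mb myminio/mybucket
mc cp ./hello.txt myminio/mybucket/
mc ls myminio/mybucket
```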

NOTE: To deploy MinIO with persistent storage, you must map local persistent directories from the host OS to the container using the podman -v option. For example, -v /mnt/data:/data maps the host OS drive at /mnt/data to /data on the container.
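
For example, the earlier podman command with a persistent host directory might look like the following sketch (the /mnt/data path is an assumption):

```shell
# Map host directory /mnt/data into the container at /data so objects
# survive container restarts.
podman run -p 9000:9000 -p 9001:9001 \
  -v /mnt/data:/data \
  quay.io/minio/minio server /data --console-address ":9001"
```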

macOS

Use the following commands to run a standalone MinIO server on macOS.

Standalone MinIO servers are best suited for early development and evaluation. Certain features, such as versioning, object locking, and bucket replication, require deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Homebrew (recommended)

Run the following command to install the latest stable MinIO package using Homebrew. Replace /data with the path to the drive or directory in which you want MinIO to store data.

brew install minio/stable/minio
minio server /data

NOTE: If you previously installed minio using brew install minio, it is recommended that you uninstall it and reinstall from the official minio/stable/minio tap instead:

brew uninstall minio
brew install minio/stable/minio

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MinIO SDKs in the navigation to view MinIO SDKs for supported languages.

Binary Download

Use the following command to download and run a standalone MinIO server on macOS. Replace /data with the path to the drive or directory in which you want MinIO to store data.

wget https://dl.min.io/server/minio/release/darwin-amd64/minio
chmod +x minio
./minio server /data

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MinIO SDKs in the navigation to view MinIO SDKs for supported languages.

GNU/Linux

Use the following command to run a standalone MinIO server on Linux hosts running 64-bit Intel/AMD architectures. Replace /data with the path to the drive or directory in which you want MinIO to store data.

wget https://dl.min.io/server/minio/release/linux-amd64/minio
chmod +x minio
./minio server /data

The following table lists supported architectures. Replace the wget URL with the architecture for your Linux host.

Architecture URL
64-bit Intel/AMD https://dl.min.io/server/minio/release/linux-amd64/minio
64-bit ARM https://dl.min.io/server/minio/release/linux-arm64/minio
64-bit PowerPC LE (ppc64le) https://dl.min.io/server/minio/release/linux-ppc64le/minio
IBM Z-Series (S390X) https://dl.min.io/server/minio/release/linux-s390x/minio

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MinIO SDKs in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features, such as versioning, object locking, and bucket replication, require deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Microsoft Windows

To run MinIO on 64-bit Windows hosts, download the MinIO executable from the following URL:

https://dl.min.io/server/minio/release/windows-amd64/minio.exe

Use the following command to run a standalone MinIO server on the Windows host. Replace D:\ with the path to the drive or directory in which you want MinIO to store data. You must change the terminal or PowerShell directory to the location of the minio.exe executable, or add that directory to the system PATH:

minio.exe server D:\

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MinIO SDKs in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features, such as versioning, object locking, and bucket replication, require deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

Install from Source

Use the following commands to compile and run a standalone MinIO server from source. Source installation is intended only for developers and advanced users. If you do not have a working Golang environment, please follow How to install Golang. The minimum required version is go1.17.

GO111MODULE=on go install github.com/minio/minio@latest
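
Assuming a default Go environment, go install places the compiled binary under $(go env GOPATH)/bin, so a sketch of running it is (the /data path is an assumption):

```shell
# Run the freshly compiled server against a local data directory.
$(go env GOPATH)/bin/minio server /data --console-address ":9001"
```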

The MinIO deployment starts using default root credentials minioadmin:minioadmin. You can test the deployment using the MinIO Console, an embedded web-based object browser built into MinIO Server. Point a web browser running on the host machine to http://127.0.0.1:9000 and log in with the root credentials. You can use the Browser to create buckets, upload objects, and browse the contents of the MinIO server.

You can also connect using any S3-compatible tool, such as the MinIO Client mc commandline tool. See Test using MinIO Client mc for more information on using the mc commandline tool. For application developers, see https://docs.min.io/docs/ and click MinIO SDKs in the navigation to view MinIO SDKs for supported languages.

NOTE: Standalone MinIO servers are best suited for early development and evaluation. Certain features, such as versioning, object locking, and bucket replication, require deploying MinIO with Erasure Coding. For extended development and production, deploy MinIO with Erasure Coding enabled - specifically, with a minimum of 4 drives per MinIO server. See the MinIO Erasure Code Quickstart Guide for more complete documentation.

MinIO strongly recommends against using compiled-from-source MinIO servers for production environments.

Deployment Recommendations

Allow port access for Firewalls

By default, MinIO uses port 9000 to listen for incoming connections. If your platform blocks this port by default, you may need to enable access to it.

ufw

For hosts with ufw enabled (Debian-based distros), you can use the ufw command to allow traffic to specific ports. Use the following command to allow access to port 9000:

ufw allow 9000

The following command enables all incoming traffic to ports 9000 through 9010:

ufw allow 9000:9010/tcp

firewall-cmd

For hosts with firewall-cmd enabled (CentOS), you can use the firewall-cmd command to allow traffic to specific ports. Use the following commands to allow access to port 9000:

firewall-cmd --get-active-zones

This command gets the active zone(s). Now, apply port rules to the relevant zones returned above. For example, if the zone is public, use:

firewall-cmd --zone=public --add-port=9000/tcp --permanent

Note that --permanent makes sure the rules are persistent across firewall start, restart, or reload. Finally, reload the firewall for the changes to take effect:

firewall-cmd --reload

iptables

For hosts with iptables enabled (RHEL, CentOS, etc.), you can use the iptables command to enable all traffic coming to specific ports. Use the following commands to allow access to port 9000:

iptables -A INPUT -p tcp --dport 9000 -j ACCEPT
service iptables restart

The following commands enable all incoming traffic to ports 9000 through 9010:

iptables -A INPUT -p tcp --dport 9000:9010 -j ACCEPT
service iptables restart

Pre-existing data

When deployed on a single drive, MinIO server lets clients access any pre-existing data in the data directory. For example, if MinIO is started with the command minio server /mnt/data, any pre-existing data in the /mnt/data directory would be accessible to the clients.

The above statement is also valid for all gateway backends.

Test MinIO Connectivity

Test using MinIO Console

MinIO Server comes with an embedded web based object browser. Point your web browser to http://127.0.0.1:9000 to ensure your server has started successfully.

NOTE: MinIO runs the Console on a random port by default. If you wish to choose a specific port, use --console-address to pick a specific interface and port.

Things to consider

MinIO redirects browser access requests to the configured server port (i.e. 127.0.0.1:9000) to the configured Console port. MinIO uses the hostname or IP address specified in the request when building the redirect URL. The URL and port must be accessible by the client for the redirection to work.

For deployments behind a load balancer, proxy, or ingress rule where the MinIO host IP address or port is not public, use the MINIO_BROWSER_REDIRECT_URL environment variable to specify the external hostname for the redirect. The LB/Proxy must have rules for directing traffic to the Console port specifically.

For example, consider a MinIO deployment behind a proxy https://minio.example.net, https://console.minio.example.net with rules for forwarding traffic on port :9000 and :9001 to MinIO and the MinIO Console respectively on the internal network. Set MINIO_BROWSER_REDIRECT_URL to https://console.minio.example.net to ensure the browser receives a valid reachable URL.

Similarly, if your TLS certificates do not have the IP SAN for the MinIO server host, the MinIO Console may fail to validate the connection to the server. Use the MINIO_SERVER_URL environment variable and specify the proxy-accessible hostname of the MinIO server to allow the Console to use the MinIO server API using the TLS certificate.

For example: export MINIO_SERVER_URL="https://minio.example.net"
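
Putting both variables together for the example deployment above, a sketch of the server startup might look like this (the hostnames are the same assumed examples, and the /data path is a placeholder):

```shell
# Tell browsers to use the external Console hostname for redirects,
# and tell the Console which hostname reaches the server API over TLS.
export MINIO_BROWSER_REDIRECT_URL="https://console.minio.example.net"
export MINIO_SERVER_URL="https://minio.example.net"
minio server /data --console-address ":9001"
```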

(Screenshots: creating a bucket in the Console, and the Console dashboard.)

Test using MinIO Client mc

mc provides a modern alternative to UNIX commands like ls, cat, cp, mirror, diff etc. It supports filesystems and Amazon S3 compatible cloud storage services. Follow the MinIO Client Quickstart Guide for further instructions.

Upgrading MinIO

MinIO server supports rolling upgrades; that is, you can update one MinIO instance at a time in a distributed cluster. This allows upgrades with no downtime. Upgrades can be done manually by replacing the binary with the latest release and restarting all servers in a rolling fashion. However, we recommend that all users use mc admin update from the client. This updates all the nodes in the cluster simultaneously and restarts them, as shown in the following command from the MinIO client (mc):

mc admin update <minio alias, e.g., myminio>

NOTE: Some releases might not allow rolling upgrades; this is always called out in the release notes, so it is generally advised to read them before upgrading. In such a situation, mc admin update is the recommended mechanism to upgrade all servers at once.

Important things to remember during MinIO upgrades

  • mc admin update will only work if the user running MinIO has write access to the parent directory where the binary is located. For example, if the current binary is at /usr/local/bin/minio, you would need write access to /usr/local/bin.
  • mc admin update updates and restarts all servers simultaneously; applications retry and continue their respective operations once the upgrade completes.
  • mc admin update is disabled in Kubernetes/container environments; container environments provide their own mechanisms for rolling out updates.
  • In the case of federated setups, mc admin update should be run against each cluster individually. Avoid updating mc to any new release until all clusters have been successfully updated.
  • If using kes as KMS with MinIO, just replace the binary and restart kes. More information about kes can be found here.
  • If using Vault as KMS with MinIO, ensure you have followed the Vault upgrade procedure outlined here: https://www.vaultproject.io/docs/upgrading/index.html
  • If using etcd with MinIO for federation, ensure you have followed the etcd upgrade procedure outlined here: https://github.com/etcd-io/etcd/blob/master/Documentation/upgrades/upgrading-etcd.md

Explore Further

Contribute to MinIO Project

Please follow the MinIO Contributor's Guide.

License

MinIO source is licensed under the GNU AGPLv3 license, which can be found in the LICENSE file. MinIO Documentation © 2021 by MinIO, Inc. is licensed under CC BY 4.0.

Comments
  • I/O Timeout

    I/O Timeout

    Howdy folks,

    No matter running the latest version or not, or latest Debian (9) or not, on RAID 10, with 30 transfers and full verbose, we get:

    ERRO[0124] Unable to create object part. cause=read tcp 46.4..:443->89.114.13*.*:49244: i/o timeout source=[object-handlers.go:817:objectAPIHandlers.PutObjectPartHandler()] stack=fs-v1-helpers.go:272:fsCreateFile fs-v1-multipart.go:523:fsObjects.PutObjectPart :339:(*fsObjects).PutObjectPart object-handlers.go:814:objectAPIHandlers.PutObjectPartHandler api-router.go:46:(objectAPIHandlers).PutObjectPartHandler-fm

    Any hint?

    Disks are fine, plenty of IO available. And these are mostly EPS files, jpg, etc, only 63GB total. I got this very same error in two servers.

  • Error : hash does not match

    Error : hash does not match

After updating minio to 2021-05-27T22:06:31Z, we got messages like the following for a lot of files in the console:

    minio04 API: SYSTEM() minio04 Time: 10:27:54 UTC 06/05/2021 minio04 DeploymentID: 7c507082-fe65-439a-9391-3b44b1b1d2ed minio04 Error: Disk: http://minio01:9000/drive6 -> alwaqiyah/Cong_Sweden_Fiter1441SD.mp4/aed31203-8a34-4a28-92dc-063bfe0dc34c/part.1 - content hash does not match - expected c5e2c21880064196b4c8c21bad1b08452ba8662ae32408d92343086a34412322, got 73462c13ceab3d7d4e1849af26b94fa716ad868d9080344999a61e6ab9bc5510 (*errors.errorString) minio04 2: cmd/bitrot-streaming.go:179:cmd.(*streamingBitrotReader).ReadAt() minio04 1: cmd/erasure-decode.go:163:cmd.(*parallelReader).Read.func1()

    minio04 API: SYSTEM() minio04 Time: 10:27:54 UTC 06/05/2021 minio04 DeploymentID: 7c507082-fe65-439a-9391-3b44b1b1d2ed minio04 Error: Disk: http://minio03:9000/drive6 -> alwaqiyah/Cong_Sweden_Fiter1441SD.mp4/aed31203-8a34-4a28-92dc-063bfe0dc34c/part.1 - content hash does not match - expected 6319f76775b13b6f23495b4e53374f7629ce37ce27ccb971ae42bed55efe5771, got e271da0633bf2c805eb576bf3b7379013b27d8621ad066c015de7477cde0ced0 (*errors.errorString) minio04 2: cmd/bitrot-streaming.go:179:cmd.(*streamingBitrotReader).ReadAt() minio04 1: cmd/erasure-decode.go:163:cmd.(*parallelReader).Read.func1()

    Current Behavior

    in the browser we got something like <Error><Code>SlowDown</Code><Message>Resource requested is unreadable, please reduce your request rate</Message><Key>Cong_Sweden_Fiter1441SD.mp4</Key><BucketName>alwaqiyah</BucketName><Resource>/Cong_Sweden_Fiter1441SD.mp4</Resource><RequestId>1685A99F3532F870</RequestId><HostId>7c507082-fe65-439a-9391-3b44b1b1d2ed</HostId></Error>

Server environment

    • Version 2021-05-27T22:06:31Z
    • Server setup and configuration: 4 nodes CPU : Intel(R) Xeon(R) CPU D-1541 @ 2.10GHz 8C /16T MEMORY: 32G Disk: 6 disk 11T per disk
    • Operating System and version (uname -a): Linux minio04 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Minio server stops in every few hours

    Minio server stops in every few hours

    Expected Behavior

Expected behavior is for the server to remain up until it is stopped.

    Current Behavior

The server stops (the process dies) every few hours.

    Steps to Reproduce (for bugs)

    1. docker pull minio/minio:edge
    2. docker run -p 9000:9000 minio/minio:edge server /export
    3. Wait a few hours and the server goes down

    Context

    Because of this issue, I need to restart the server every few hours. When I restart it, a new public and private key is generated, so I need to change the keys in my code and redeploy it. A new server instance is created every time I restart it, which essentially means that all my files stored in the previous server instance are lost.

    Your Environment

    • Server type and version: Latest from Master branch
    • Operating System and version: centos-release-7-3.1611.el7.centos.x86_64
  • Minio cluster fails to start/sync

    Minio cluster fails to start/sync "Waiting for a minimum of 2 disks to come online"

    Expected Behavior

    I expect it to start or at least provide log data as to what is wrong.

    Current Behavior

    All 4 nodes show the same data in the log.

    Waiting for a minimum of 2 disks to come online (elapsed 16m31s) Waiting for a minimum of 2 disks to come online (elapsed 16m32s) Waiting for a minimum of 2 disks to come online (elapsed 16m33s) Waiting for a minimum of 2 disks to come online (elapsed 16m34s) Waiting for a minimum of 2 disks to come online (elapsed 16m35s)

    If I log into a container I can ping the other containers and a curl to the http://minio#:9000/data produces the following

    <Error><Code>XMinioServerNotInitialized</Code><Message>Server not initialized, please try again.</Message><BucketName>data</BucketName><Resource>/data</Resource><RequestId>15F7158280B18599</RequestId><HostId></HostId></Error>/

    Steps to Reproduce (for bugs)

    Not sure how to reproduce, but it happened when my docker swarm failed and I had to recreate it. While digging around in the config for minio, stored in the volume, I noticed that recreating the stack updated to a newer minio: the old version was RELEASE.2019-09-26T19-42-35Z, while the new one is RELEASE.2020-02-20T22-51-23Z.

    Note: This had run for months without issue.

    Your Environment

    • Version used (minio version): RELEASE.2020-02-20T22-51-23Z
    • Environment name and version (e.g. nginx 1.9.1): Docker 19.03.6
    • Server type and version: 20 VM Docker Swarm
    • Operating System and version (uname -a): (docker nodes are) CentOS Linux release 7.7.1908 (Core)

    I've been working on this for about a week, digging through posts to see if I can find a way to get it back up and have not found anything. So I thought I would ask here.

    For Reference, this is my stack.yaml

    version: "3.2"
    services:
      minio1:
        image: minio/minio
        volumes:
          - data2:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server  http://minio1:9000/data http://minio4:9000/data http://minio2:9000/data http://minio3:9000/data
    
      minio2:
        image: minio/minio
        volumes:
          - data3:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server http://minio2:9000/data http://minio4:9000/data http://minio1:9000/data http://minio3:9000/data
    
      minio3:
        image: minio/minio
        volumes:
          - data4:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server http://minio3:9000/data http://minio4:9000/data http://minio1:9000/data http://minio2:9000/data 
    
      minio4:
        image: minio/minio
        volumes:
          - data5:/data
        networks:
          - traefik_default
          - minio
        deploy:
           labels:
              - "traefik.frontend.rule=Host:minio-c.docker.mydomain.com"
              - "traefik.port=9000"
              - "traefik.enable=true"     
        environment:
          MINIO_ACCESS_KEY: <MY ACCESS KEY>
          MINIO_SECRET_KEY: <MY SECRET KEY>
        command: server http://minio4:9000/data http://minio1:9000/data http://minio2:9000/data http://minio3:9000/data
        
    <Removed Volumes>
    <Removed Networks>
    
    
  • Implement S3 Gateway to third party cloud storage providers.

    Implement S3 Gateway to third party cloud storage providers.

    Description

    Currently supported backend is Azure Blob Storage.

    export MINIO_ACCESS_KEY=azureaccountname
    export MINIO_SECRET_KEY=azureaccountkey
    minio gateway azure
    

    Motivation and Context

    Minio gateway adds Amazon S3 compatibility to third party cloud storage providers.

    How Has This Been Tested?

    Manually and by the rest of @minio/core

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [x] I have updated the documentation accordingly.
    • [x] I have added tests to cover my changes.
    • [x] All new and existing tests passed.
  • WebUI/listObjects() show/return object. Download/getObject() return empty file / cause error

    WebUI/listObjects() show/return object. Download/getObject() return empty file / cause error "NoSuchKey"

    The WebUI displays a file with a plausible size of 740B. Screenshot 2022-06-05 144902

    Downloading this file via WebUI results in an empty file: Screenshot 2022-06-05 145130

    API access by way of listObjects() returns an Iterator<Result> results, which includes the above mentioned objectName.

    Calling getObject() with the objectName from above causes an ErrorResponseException with an errorResponseCode "NoSuchKey".

    Expected Behavior

    Parts of Minio seem to think the file exists, others don't. A displayed / listed objectName should be retrievable.

    Current Behavior

    See description.

    Trace of a download attempt by WebUI: Screenshot 2022-06-05 150522

    Context

    Usecase is a regular low intensity consistency check with a cassandra db. We traverse all Minio objects and check if cassandra is up-to-date.

    Regression

    na

    Your Environment

    MinIO VERSION 2022-04-16T04:26:02Z 4 dedicated Rasp-4B with 8GB each with dedicated 2TB SSD. Linux rasp-3 5.15.32-v8+ #1538 SMP PREEMPT Thu Mar 31 19:40:39 BST 2022 aarch64 GNU/Linux

    This has been working like a charm for about a year with > 16 million objects and 2 TB of data.

  • Hadoop 3.3 Compatibility issue with single drive mode (minio server /data)

    Hadoop 3.3 Compatibility issue with single drive mode (minio server /data)

    MinIO works with Hadoop 3.2, but not with Hadoop 3.3.

    Current Behavior

    Repro code:

    val conf = new Configuration()
    conf.set("fs.s3a.endpoint", "http://127.0.0.1:9000")
    conf.set("fs.s3a.path.style.access", "true")
    conf.set("fs.s3a.access.key", "user_access_key")
    conf.set("fs.s3a.secret.key", "password")
    
    val path = new Path("s3a://comcast-test")
    val fs = path.getFileSystem(conf)
    
    fs.mkdirs(new Path("/testdelta/_delta_log"))
    fs.getFileStatus(new Path("/testdelta/_delta_log"))
    

    Fails with FileNotFoundException. The same code works in real S3. It also works in Hadoop 3.2. Only fails on 3.3 and newer Hadoop branches.

    Possible Solution

    This works in Hadoop 3.2 because of this infamous "Is this necessary?" block of code https://github.com/apache/hadoop/blob/branch-3.2.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2204-L2223

    that was removed in Hadoop 3.3 - https://github.com/apache/hadoop/blob/branch-3.3.0/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L2179

    and this causes the regression

    Steps to Reproduce (for bugs)

    See code above.

    Context

    Some applications need to create subdirectories before they can write to Minio, so this affects

    Regression

    Yes, this is a regression.

    Your Environment

    • Subnet ticket 3439
  • AWS iOS SDK working with minio

    AWS iOS SDK working with minio

    Error message

    SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method./images/test.jpg3L1373L137 Upload failed with error: (The operation couldn’t be completed. (com.amazonaws.AWSServiceErrorDomain error 3.))

    Here is how I configure

    let accessKey = "xxxxxxx"
    let secretKey = "xxxxxxx"       
    let credentialsProvider = AWSStaticCredentialsProvider(accessKey: accessKey, secretKey: secretKey)
    let configuration = AWSServiceConfiguration(region: AWSRegionType.USEast1, endpoint: AWSEndpoint(region: .USEast1, service: .APIGateway, url: URL( string:"http://xxxx.com:9000")),credentialsProvider: credentialsProvider)
    AWSServiceManager.default().defaultServiceConfiguration = configuration
    

    Possible Solution

    Refer to this issue: https://github.com/minio/mc/issues/1707. Maybe it is caused by a region issue?

    Your Environment

    • Version used:

      • official docker latest
      • awss3 lib used: pod 'AWSS3', '~> 2.5'
  • ecosystem: Validate all spark and hadoop supported s3 connectors

    ecosystem: Validate all spark and hadoop supported s3 connectors

    Expected behaviour

    I want to use spark-shell (v2.0.1) to connect to minio local to read input data and write back output data.

    Actual behaviour

    Attempting a simple read of a text file (json in this case) results in the error:

    com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: null, AWS Error Code: null, AWS Error Message: Bad Request
    

    (full stack will be below)

    Steps to reproduce the behaviour

    Install Oracle Java 8 (1.8.0_71). Install Scala 2.11.8. Install MinIO via Homebrew. Install Spark 2.0.1.

    Start the minio server locally, log into the browser UI, create a bucket named data, and upload a small json file to it (e.g. bacon_tiny.json).

    { "bacon": {
        "isCrispy": true,
        "isDelicious": true,
        "isVegan": false
        }
    }
    

    Relevant bits of spark-env.sh (all other bits left to default):

    HADOOP_CONF_DIR=/Users/bkarels/hadoop/hadoop-2.7.3/etc/hadoop
    SPARK_LOCAL_IP="127.0.0.1"
    SPARK_MASTER_HOST="localhost"
    SPARK_MASTER_WEBUI_PORT=8080
    SPARK_MASTER_PORT=7077
    SPARK_DAEMON_JAVA_OPTS="-Djava.net.preferIPv4Stack=true -Dcom.amazonaws.services.s3.enableV4=true"
    

    core-site.xml (from ~/hadoop/hadoop-2.7.3/etc/hadoop):

    <?xml version="1.0" encoding="UTF-8"?>
    <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
    
    <configuration>
      <property>
        <name>fs.s3a.endpoint</name>
        <value>http://127.0.0.1:9000</value>
      </property>
    
      <property>
        <name>fs.s3a.access.key</name>
        <description>AWS access key ID.</description>
        <value>*************************</value>
      </property>
    
      <property>
        <name>fs.s3a.secret.key</name>
        <description>AWS secret key.</description>
        <value>*********************************************</value>
      </property>
    
    </configuration>
    

    Starting the spark-shell

    $ ./bin/spark-shell --master local[4] --jars "./bin/hadoop-aws-2.7.1.jar,./bin/aws-java-sdk-1.7.4.jar"
    scala> val bacon = sc.textFile("s3a://data/bacon_tiny.json").first
    
    com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 400, AWS Service: Amazon S3, AWS Request ID: null, AWS Error Code: null, AWS Error Message: Bad Request
      at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
      at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
      at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
      at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
      at com.amazonaws.services.s3.AmazonS3Client.headBucket(AmazonS3Client.java:1031)
      at com.amazonaws.services.s3.AmazonS3Client.doesBucketExist(AmazonS3Client.java:994)
      at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:297)
      at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2669)
      at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
      at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
      at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
      at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
      at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
      at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:258)
      at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:229)
      at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:315)
      at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:199)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      at scala.Option.getOrElse(Option.scala:121)
      at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:35)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:248)
      at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:246)
      at scala.Option.getOrElse(Option.scala:121)
      at org.apache.spark.rdd.RDD.partitions(RDD.scala:246)
      at org.apache.spark.rdd.RDD$$anonfun$take$1.apply(RDD.scala:1303)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
      at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
      at org.apache.spark.rdd.RDD.take(RDD.scala:1298)
      at org.apache.spark.rdd.RDD$$anonfun$first$1.apply(RDD.scala:1338)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
      at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
      at org.apache.spark.rdd.RDD.withScope(RDD.scala:358)
      at org.apache.spark.rdd.RDD.first(RDD.scala:1337)
      ... 48 elided
    

    All else being equal, I can adjust core-site.xml to point to an actual S3 instance and things work, which is what has me thinking this might be a Minio issue.

    Minio version

    Version: 2016-09-11T17:42:18Z
    Release-Tag: RELEASE.2016-09-11T17-42-18Z
    Commit-ID: 85e2d886bcb005d49f3876d6849a2b5a55e03cd3
    

    System information

    Running on OSX 10.11.6
    MacBook Pro (Retina, 15-inch, Mid 2015)
    2.5 GHz Intel Core i7
    16 GB 1600 MHz DDR3
    
  • ARM64 : Minio server crashes randomly

    The Minio server is terminating randomly on the ARM64 platform. Built the latest code on the ARM64 platform and used it for testing. The deployment type is "distributed mode". In a week of testing on 6 nodes, 3 servers crashed. The load was moderate when these crashes happened.

    Log messages ::

    goroutine 338370336 [IO wait]:
    runtime.gopark(0x1821768, 0xfffe25850a98, 0x4001841b02, 0x5)
        /usr/local/go/src/runtime/proc.go:301 +0xf0 fp=0x40042fc9c0 sp=0x40042fc9a0 pc=0x43cf50
    runtime.netpollblock(0xfffe25850a70, 0x72, 0x409266a000)
        /usr/local/go/src/runtime/netpoll.go:389 +0xa4 fp=0x40042fca00 sp=0x40042fc9c0 pc=0x4385f4
    internal/poll.runtime_pollWait(0xfffe25850a70, 0x72, 0xffffffffffffffff)
        /usr/local/go/src/runtime/netpoll.go:182 +0x48 fp=0x40042fca30 sp=0x40042fca00 pc=0x437b48
    internal/poll.(*pollDesc).wait(0x408b030818, 0x72, 0x1000, 0x1000, 0xffffffffffffffff)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0xa0 fp=0x40042fca60 sp=0x40042fca30 pc=0x493fc0
    internal/poll.(*pollDesc).waitRead(...)
        /usr/local/go/src/internal/poll/fd_poll_runtime.go:92
    internal/poll.(*FD).Read(0x408b030800, 0x409266a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/internal/poll/fd_unix.go:169 +0x170 fp=0x40042fcac0 sp=0x40042fca60 pc=0x494e40
    net.(*netFD).Read(0x408b030800, 0x409266a000, 0x1000, 0x1000, 0x40042fcb78, 0x413008, 0x40042fcb60)
        /usr/local/go/src/net/fd_unix.go:202 +0x44 fp=0x40042fcb20 sp=0x40042fcac0 pc=0x598c14
    net.(*conn).Read(0x4031f50158, 0x409266a000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
        /usr/local/go/src/net/net.go:177 +0x5c fp=0x40042fcb80 sp=0x40042fcb20 pc=0x5ab42c
    net/http.(*persistConn).Read(0x408dc807e0, 0x409266a000, 0x1000, 0x1000, 0x40042fcc68, 0x6c2f30, 0x405fa8c840)
        /usr/local/go/src/net/http/transport.go:1524 +0x60 fp=0x40042fcc00 sp=0x40042fcb80 pc=0x6c1da0
    bufio.(*Reader).fill(0x4004105a40)
        /usr/local/go/src/bufio/bufio.go:100 +0x100 fp=0x40042fcc50 sp=0x40042fcc00 pc=0x4b0000
    bufio.(*Reader).Peek(0x4004105a40, 0x1, 0x0, 0x0, 0x1, 0x400d6c5600, 0x0)
        /usr/local/go/src/bufio/bufio.go:138 +0x40 fp=0x40042fcc70 sp=0x40042fcc50 pc=0x4b0180
    net/http.(*persistConn).readLoop(0x408dc807e0)
        /usr/local/go/src/net/http/transport.go:1677 +0x16c fp=0x40042fcfd0 sp=0x40042fcc70 pc=0x6c282c
    runtime.goexit()
        /usr/local/go/src/runtime/asm_arm64.s:1128 +0x4 fp=0x40042fcfd0 sp=0x40042fcfd0 pc=0x46aa74
    created by net/http.(*Transport).dialConn
        /usr/local/go/src/net/http/transport

    Expected Behavior

    The Minio server shouldn't terminate.

    Current Behavior

    The Minio server terminates.

    Your Environment

    • Version used (minio version): Minio (built from source on ARM64 platform) Version: DEVELOPMENT.GOGET Release-Tag: DEVELOPMENT.GOGET Commit-ID: DEVELOPMENT.GOGET

    • Operating System and version (uname -a): aarch64 aarch64 aarch64 GNU/Linux

  • Scheduled removal of MinIO Gateway for GCS, Azure, HDFS

    MinIO Gateway will be removed from the MinIO repository by June 1st, 2022:

    Community Users

    • Please migrate your MinIO Gateway deployments from Azure, GCS, HDFS to MinIO Distributed Setups

    • MinIO S3 Gateway will be renamed as "minio edge" and will only support MinIO Backends to extend the functionality of supporting remote credentials etc locally as "read-only" for authentication and policy management.

    • Newer MinIO NAS/single drive setups will move to a single-data, zero-parity mode (re-purposing the erasure-coded backend used for distributed setups, but with 0 parity). This allows distributed-setup features to be available for single drive deployments as well, such as:

      • Versioning
      • ILM
      • Replication and more...
    • Existing NAS/single drive setups will work as-is; nothing changes.

    Paid Users

    All existing paid customers will be supported as per their LTS support contract. If there are bugs they will be fixed and backported fixes will be provided. No new features will be implemented for Gateway implementations.

  • Fix bandwidth monitoring to be per remote target

    Since more than one remote target can exist, each with its own bandwidth throttle setting, bandwidth throttling needs to apply at the remote target level rather than per bucket.

    This PR also removes the bandwidth monitoring API and introduces bandwidth as part of the replication metrics reported by mc replicate status.

    Description

    Motivation and Context

    Bring all relevant replication metrics into mc replicate status.

    How to test this PR?

    This PR depends on https://github.com/minio/mc/pull/4430. Set up replication and bandwidth throttling with:

     mc replicate add sitea/bucket --remote-bucket http://minio:minio123@localhost:9004/bucket --bandwidth 1GiB 
    

    After uploading some objects, run mc replicate status - it should show bandwidth info.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • IPv6 is attempted to use even if not supported

    Expected Behavior

    IPv6 shouldn't be used if the system doesn't support it. This is especially true if I hardcode addresses on the command line. Why should it even attempt to bind ::1?

    Current Behavior

    > ./minio server --address "127.0.0.1:9000" --console-address "127.0.0.1:9090" /miniodata
    
    API: SYSTEM()
    Time: 09:30:19 UTC 12/23/2022
    Error: listen tcp [::1]:9000: bind: address family not supported by protocol (*net.OpError)
           2: internal/logger/logger.go:258:logger.LogIf()
           1: cmd/signals.go:79:cmd.handleSignals()
    >
    

    Your Environment

    > ./minio --version
    minio version RELEASE.2022-12-12T19-27-27Z (commit-id=a469e6768df4d5d2cb340749fa58e4721a7dee96)
    Runtime: go1.19.4 linux/amd64
    License: GNU AGPLv3 <https://www.gnu.org/licenses/agpl-3.0.html>
    Copyright: 2015-2022 MinIO, Inc.
    > uname -sr
    Linux 5.15.72
    >
    
  • Deduplicate erasure sets to remove uploads from

    Description

    Untested.

    Motivation and Context

    How to test this PR?

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • Add loong64 support

    Description

    Add support for loong64. Support for loong64 has already been merged into the upstream Go toolchain.

    Motivation and Context

    The LoongArch architecture is a RISC-style Instruction Set Architecture (ISA).

    Documentation:

    • ISA: https://loongson.github.io/LoongArch-Documentation/LoongArch-Vol1-EN.html
    • ABI: https://loongson.github.io/LoongArch-Documentation/LoongArch-ELF-ABI-EN.html
    • More docs can be found at: https://loongson.github.io/LoongArch-Documentation/README-EN.html

    How to test this PR?

    ./buildscripts/checkdeps.sh
    ./buildscripts/cross-compile.sh

    Cross-compilation relies on 2 PRs: https://github.com/minio/madmin-go/pull/163 https://github.com/shirou/gopsutil/pull/1228

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • decom: List pool after decom and error out if an object is found

    Description

    As a sanity check, add a listing of the pool after decommissioning and check whether any object can still be found; mark the decommissioning status as failed if that is the case.

    Motivation and Context

    How to test this PR?

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Optimization (provides speedup with no functional changes)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist:

    • [ ] Fixes a regression (If yes, please add commit-id or PR # here)
    • [ ] Unit tests added/updated
    • [ ] Internal documentation updated
    • [ ] Create a documentation update request here
  • Helm: provide the ability to run bucket/user/etc jobs during install

    Is your feature request related to a problem? Please describe. I've created a Helm chart that installs Minio as a subchart. The application in my Helm chart depends on Minio having certain users and buckets available, but these are currently only created when the chart finishes installing. Since the installation waits for my application to become ready, this never happens and the installation times out in this catch-22 situation.

    Describe the solution you'd like I would like there to be an option to run the initialization jobs during the installation itself, for example as part of Minio's startup logic. Many of the Bitnami charts do their initialization this way.

    Describe alternatives you've considered The only alternative I can think of is to deploy Minio as a standalone chart before installing my own chart, but I prefer to install them together since they are tightly coupled, and in fact the main chart generates some resources (e.g. secrets) that the Minio chart depends on.

    Additional context N/A
