MinIO Client SDK for Go

The MinIO Go Client SDK provides simple APIs to access any Amazon S3 compatible object storage.

This quickstart guide will show you how to install the MinIO client SDK, connect to MinIO, and provide a walkthrough for a simple file uploader. For a complete list of APIs and examples, please take a look at the Go Client API Reference.

This document assumes that you have a working Go development environment.

Download from GitHub

GO111MODULE=on go get github.com/minio/minio-go/v7

Initialize MinIO Client

The MinIO client requires the following parameters to connect to an Amazon S3 compatible object storage:

Parameter      Description
endpoint       URL of the object storage service.
minio.Options  All the options, such as credentials and a custom transport.
package main

import (
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	endpoint := "play.min.io"
	accessKeyID := "Q3AM3UQ867SPQQA43P2F"
	secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
	useSSL := true

	// Initialize minio client object.
	minioClient, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
		Secure: useSSL,
	})
	if err != nil {
		log.Fatalln(err)
	}

	log.Printf("%#v\n", minioClient) // minioClient is now set up
}

Quick Start Example - File Uploader

This example program connects to an object storage server, creates a bucket and uploads a file to the bucket.

We will use the MinIO server running at https://play.min.io in this example. Feel free to use this service for testing and development. Access credentials shown in this example are open to the public.

FileUploader.go

package main

import (
	"context"
	"log"

	"github.com/minio/minio-go/v7"
	"github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
	ctx := context.Background()
	endpoint := "play.min.io"
	accessKeyID := "Q3AM3UQ867SPQQA43P2F"
	secretAccessKey := "zuf+tfteSlswRu7BJ86wekitnifILbZam1KYY3TG"
	useSSL := true

	// Initialize minio client object.
	minioClient, err := minio.New(endpoint, &minio.Options{
		Creds:  credentials.NewStaticV4(accessKeyID, secretAccessKey, ""),
		Secure: useSSL,
	})
	if err != nil {
		log.Fatalln(err)
	}

	// Make a new bucket called mymusic.
	bucketName := "mymusic"
	location := "us-east-1"

	err = minioClient.MakeBucket(ctx, bucketName, minio.MakeBucketOptions{Region: location})
	if err != nil {
		// Check to see if we already own this bucket (which happens if you run this twice)
		exists, errBucketExists := minioClient.BucketExists(ctx, bucketName)
		if errBucketExists == nil && exists {
			log.Printf("We already own %s\n", bucketName)
		} else {
			log.Fatalln(err)
		}
	} else {
		log.Printf("Successfully created %s\n", bucketName)
	}

	// Upload the zip file
	objectName := "golden-oldies.zip"
	filePath := "/tmp/golden-oldies.zip"
	contentType := "application/zip"

	// Upload the zip file with FPutObject
	n, err := minioClient.FPutObject(ctx, bucketName, objectName, filePath, minio.PutObjectOptions{ContentType: contentType})
	if err != nil {
		log.Fatalln(err)
	}

	log.Printf("Successfully uploaded %s of size %d\n", objectName, n)
}

Run FileUploader

export GO111MODULE=on
go run FileUploader.go
2016/08/13 17:03:28 Successfully created mymusic
2016/08/13 17:03:40 Successfully uploaded golden-oldies.zip of size 16253413

mc ls play/mymusic/
[2016-05-27 16:02:16 PDT]  17MiB golden-oldies.zip

API Reference

The full API Reference is available here.

API Reference : Bucket Operations

API Reference : Bucket policy Operations

API Reference : Bucket notification Operations

API Reference : File Object Operations

API Reference : Object Operations

API Reference : Presigned Operations

API Reference : Client custom settings

Full Examples

Full Examples : Bucket Operations

Full Examples : Bucket policy Operations

Full Examples : Bucket lifecycle Operations

Full Examples : Bucket encryption Operations

Full Examples : Bucket replication Operations

Full Examples : Bucket notification Operations

Full Examples : File Object Operations

Full Examples : Object Operations

Full Examples : Encrypted Object Operations

Full Examples : Presigned Operations

Explore Further

Contribute

Contributors Guide

License

This SDK is distributed under the Apache License, Version 2.0, see LICENSE and NOTICE for more information.

Comments
  • Can no longer put 0 sized objects

    In commit f8e360d4dc446022b46f1be1e8bdda5f736b9195 and previous, this worked, where source is the path to a 0 byte file (one that had just gotten truncated):

    _, err := a.client.FPutObject(a.bucket, dest, source, contentType)
    if err != nil {
    	info, serr := os.Stat(source)
    	if serr != nil {
    		fmt.Printf("source was bad: %s\n", serr)
    	} else {
    		fmt.Printf("did FPutObject(%s, %s, %s, %s) (where source has size %d) and got %s\n", a.bucket, dest, source, contentType, info.Size(), err)
    	}
    }
    

    But since commit 8d69ba85cf79a5134321e7b10a7dbf3860511001 it fails with this error:

    did FPutObject(user, wr_tests/write.test, /tmp/[...]/wr_tests/write.test, text/plain; charset=utf-8) (where source has size 0) and got You must provide the Content-Length HTTP header.

    Is this a bug, or should I now be doing something different?

  • Unable to access bucket with restricted account in eu-central-1

    Hi,

    I have a somewhat peculiar problem. A user contributed a tutorial to set up a restricted account with S3 that can access just one bucket (and e.g. not create new buckets or access the console). Since I updated minio-go in restic for https://github.com/minio/minio/issues/4275, I'm unable to create new repos via this restricted account (it worked before).

    The error is (including the trace, built with minio-go 5d7ee332f62e83d36beba669be671801180abb89):

    $ restic -r s3:s3.amazonaws.com/restic-test-travis/x2 init
    
    ---------START-HTTP---------                                   
    GET /restic-test-travis/?location= HTTP/1.1
    Host: s3.amazonaws.com
    User-Agent: Minio (linux; amd64) minio-go/2.1.0
    Authorization: AWS4-HMAC-SHA256 Credential=AKIAJSQEUXZSP56YBL6Q/20170512/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=19ac77e663ecaee24ab7832479152865fc71ac5a06bf33914dcca617b59c6fc7
    X-Amz-Content-Sha256: UNSIGNED-PAYLOAD
    X-Amz-Date: 20170512T203950Z
    Accept-Encoding: gzip
    
    HTTP/1.1 403 Forbidden
    Transfer-Encoding: chunked
    Content-Type: application/xml
    Date: Fri, 12 May 2017 20:39:51 GMT
    Server: AmazonS3
    X-Amz-Id-2: 5jNddocga32YohOP6BpvdE2/OxKFm1gOSiznLvdhXz6/ux5R3dAq1qoNQLTGswFzNZkafUtSMNI=
    X-Amz-Request-Id: 31AD0B0559C5E892
    
    f3
    <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>31AD0B0559C5E892</RequestId><HostId>5jNddocga32YohOP6BpvdE2/OxKFm1gOSiznLvdhXz6/ux5R3dAq1qoNQLTGswFzNZkafUtSMNI=</HostId></Error>
    0
    ---------END-HTTP---------
    ---------START-HTTP---------
    HEAD / HTTP/1.1
    Host: restic-test-travis.s3.amazonaws.com
    User-Agent: Minio (linux; amd64) minio-go/2.1.0
    Authorization: AWS4-HMAC-SHA256 Credential=AKIAJSQEUXZSP56YBL6Q/20170512/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=d87e750f8de5e6d21e8ceffca75b095eed69ff43744a1ac3f3e3a12029b70855
    X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
    X-Amz-Date: 20170512T203951Z
    
    HTTP/1.1 400 Bad Request
    Connection: close
    Transfer-Encoding: chunked
    Content-Type: application/xml
    Date: Fri, 12 May 2017 20:39:51 GMT
    Server: AmazonS3
    X-Amz-Bucket-Region: eu-central-1
    X-Amz-Id-2: 0qQw45YtBWvgSUm5HFoQ8UJ2w/XWn/KRpZyg6xj5DDMC7kPHhOFxVnIfM6OozkqvHOrVojoU49k=
    X-Amz-Request-Id: AE1F2FC4AA4843B1
    
    ---------END-HTTP---------
    create backend at s3:s3.amazonaws.com/restic-test-travis/x2 failed: client.BucketExists: 400 Bad Request
    

    I've done a git bisect from b1674741d196d5d79486d7c1645ed6ded902b712 (good) to 5d7ee332f62e83d36beba669be671801180abb89 (bad), and the commit that caused it was:

    fd942284fe190615d098af0b63e1698ef0969df1 is the first bad commit                                          
    commit fd942284fe190615d098af0b63e1698ef0969df1                                                           
    Author: Harshavardhana <[email protected]>                                                                  
    Date:   Tue Apr 4 11:35:20 2017 -0700                                                                     
                                                                                                              
        api: Check for Code 'InvalidRegion' for retrying with server Region. (#639)                           
                                                                                                              
    diff --git a/api.go b/api.go
    index 2beba77..e971721 100644
    --- a/api.go
    +++ b/api.go
    @@ -543,9 +543,9 @@ func (c Client) executeMethod(method string, metadata requestMetadata) (res *htt
     
            // For errors verify if its retryable otherwise fail quickly.
            errResponse := ToErrorResponse(httpRespToErrorResponse(res, metadata.bucketName, metadata.objectNa
    -       // Bucket region if set in error response, we can retry the
    -       // request with the new region.
    -       if errResponse.Region != "" {
    +       // Bucket region if set in error response and the error code dictates invalid region,
    +       // we can retry the request with the new region.
    +       if errResponse.Code == "InvalidRegion" && errResponse.Region != "" {
                c.bucketLocCache.Set(metadata.bucketName, errResponse.Region)
                continue // Retry.
            }
    

    On the other hand, a successful trace (with the commit right before that one) is:

    ---------START-HTTP---------                               
    GET /restic-test-travis/?location= HTTP/1.1
    Host: s3.amazonaws.com
    User-Agent: Minio (linux; amd64) minio-go/2.0.4
    Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20170512/us-east-1/s3/aws4_request, SignedHeaders=host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
    X-Amz-Content-Sha256: UNSIGNED-PAYLOAD
    X-Amz-Date: 20170512T203041Z
    Accept-Encoding: gzip
    
    HTTP/1.1 403 Forbidden
    Transfer-Encoding: chunked
    Content-Type: application/xml
    Date: Fri, 12 May 2017 20:30:41 GMT
    Server: AmazonS3
    X-Amz-Id-2: wFRgmf4iQpCw/spDBjf3kh6pZPCQ91bnrvBZGMs29ZqD3fLJfaHnoh0qtB1LphAv09Z67UKy0F8=
    X-Amz-Request-Id: 9D1AEFF328416280
    
    f3
    <?xml version="1.0" encoding="UTF-8"?>
    <Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>9D1AEFF328416280</RequestId><HostId>wFRgmf4iQpCw/spDBjf3kh6pZPCQ91bnrvBZGMs29ZqD3fLJfaHnoh0qtB1LphAv09Z67UKy0F8=</HostId></Error>
    0
    ---------END-HTTP---------
    ---------START-HTTP---------
    HEAD / HTTP/1.1
    Host: restic-test-travis.s3.amazonaws.com
    User-Agent: Minio (linux; amd64) minio-go/2.0.4
    Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20170512/us-east-1/s3/aws4_request, SignedHeaders=expect;host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
    Expect: 100-continue
    X-Amz-Content-Sha256: UNSIGNED-PAYLOAD
    X-Amz-Date: 20170512T203041Z
    
    HTTP/1.1 400 Bad Request
    Connection: close
    Transfer-Encoding: chunked
    Content-Type: application/xml
    Date: Fri, 12 May 2017 20:30:41 GMT
    Server: AmazonS3
    X-Amz-Bucket-Region: eu-central-1
    X-Amz-Id-2: yixxZF7DxK/3bJL9qlkrgHzjG7qDha2VrAtnhNLjezsoywZcjoxs9IsGhMN/wepfxOg++ekxou4=
    X-Amz-Request-Id: 1C93A9104ED73A7D
    
    ---------END-HTTP---------
    ---------START-HTTP---------
    HEAD / HTTP/1.1
    Host: restic-test-travis.s3-eu-central-1.amazonaws.com
    User-Agent: Minio (linux; amd64) minio-go/2.0.4
    Authorization: AWS4-HMAC-SHA256 Credential=**REDACTED**/20170512/eu-central-1/s3/aws4_request, SignedHeaders=expect;host;x-amz-content-sha256;x-amz-date, Signature=**REDACTED**
    Expect: 100-continue
    X-Amz-Content-Sha256: UNSIGNED-PAYLOAD
    X-Amz-Date: 20170512T203041Z
    
    HTTP/1.1 200 OK
    Connection: close
    Transfer-Encoding: chunked
    Content-Type: application/xml
    Date: Fri, 12 May 2017 20:30:43 GMT
    Server: AmazonS3
    X-Amz-Bucket-Region: eu-central-1
    X-Amz-Id-2: NT+/zXZ+hTi+iAKapHhlHByQ4a1I+ifFo3vbAdQBZ4Q3NEoyxgDxBIxqbHgmC/oWHPESv8S8mDc=
    X-Amz-Request-Id: AB5769CE0F7D8CA6
    
    ---------END-HTTP---------
    

    It seems to me that testing for errResponse.Code == "InvalidRegion" is not enough: it should also retry when the response code is 400 Bad Request and a region is present. I've verified that the following patch works:

    diff --git a/api.go b/api.go
    index 9cd2b6d..6beb445 100644
    --- a/api.go
    +++ b/api.go
    @@ -556,7 +556,7 @@ func (c Client) executeMethod(method string, metadata requestMetadata) (res *htt
     		// Bucket region if set in error response and the error
     		// code dictates invalid region, we can retry the request
     		// with the new region.
    -		if errResponse.Code == "InvalidRegion" && errResponse.Region != "" {
    +		if errResponse.Region != "" && (errResponse.Code == "InvalidRegion" || errResponse.Code == "400 Bad Request") {
     			c.bucketLocCache.Set(metadata.bucketName, errResponse.Region)
     			continue // Retry.
     		}
    

    Maybe this also needs to be added to api-put-bucket.go?

    FYI: I'm not sure if errResponse.Code will ever be equal to InvalidRegion, since in all the cases I observed, the string always started with the numeric HTTP response code and also contained spaces...

  • Iterating more than 1k objects failed with ListObjectsV2 but works with ListObjects

    Given this simple (possibly not very Go-like) function, where I iterate through all objects looking for a BACKUP file (which indicates a finished backup) and then, among other things, return the s3://bucket/prefix for a given one:

        doneCh := make(chan struct{})
        defer close(doneCh)
        backups := make([]Backup, 0)
        // iterate through all objects looking for .../BACKUP which indicates a finished backup
        objectCh := s.client.ListObjectsV2(s.config.Bucket, "", true, doneCh)
        for object := range objectCh {
            if object.Err != nil {
                return nil, fmt.Errorf("unable to get backups: %s", object.Err)
            }
            if path.Base(object.Key) == "BACKUP" {
                var backupType types.BackupType
                if strings.Contains(object.Key, types.Incremental.String()) {
                    backupType = types.Incremental
                } else if strings.Contains(object.Key, types.Full.String()) {
                    backupType = types.Full
                }
                u, _ := url.Parse("")
                u.Scheme = "s3"
                u.Host = s.config.Bucket
                u.Path = path.Dir(object.Key)
                b := Backup{
                    ID:       u.Path,
                    Location: s.addAuthorization(u.String()),
                    Typ:      backupType,
                }
                backups = append(backups, b)
            }
        }
        sort.SliceStable(backups, func(i int, j int) bool {
            return backups[i].Location < backups[j].Location
        })
    

    When using ListObjectsV2 in this manner I kept getting the following error:

    Truncated response should have continuation token set

    But I get no such error and my application works using ListObjects.

  • GetObject not returning error for non-existent file

    I'm using minio-go to access files in a Ceph Object Gateway with an S3 frontend. It seems to work perfectly for all read, write and listing operations I've tried, except for this behaviour:

    package main
    
    import (
    	"bufio"
    	"fmt"
    	"github.com/minio/minio-go"
    	"io"
    	"log"
    	"os"
    )
    
    func main() {
    	bucket := "sb10"
    	remotePath := "non-existent.file"
    	localPath := "/tmp/non-existent.file"
    
    	s3Client, err := minio.New(os.Getenv("AWS_S3_ENDPOINT"), os.Getenv("AWS_ACCESS_KEY_ID"), os.Getenv("AWS_SECRET_ACCESS_KEY"), true)
    	if err != nil {
    		log.Fatalln(err)
    	}
    
    	err = s3Client.FGetObject(bucket, remotePath, localPath)
    	fmt.Printf("FGetObject err = %s\n", err)
    
    	object, err := s3Client.GetObject(bucket, remotePath)
    	fmt.Printf("GetObject err = %s\n", err)
    	if err == nil {
    		err = Stream(object)
    		fmt.Printf("Stream err = %s\n", err)
    	}
    }
    
    func Stream(r io.Reader) error {
    	br := bufio.NewReader(r)
    	b := make([]byte, 10000, 10000)
    	for {
    		_, err := br.Read(b)
    		if err != nil {
    			if err == io.EOF {
    				break
    			}
    			fmt.Println("Stream will return a non-EOF error")
    			return err
    		}
    	}
    	return nil
    }
    

    Running this code (where $AWS_S3_ENDPOINT == cog.mydomain.tld) gives this output:

    FGetObject err = The specified key does not exist.
    GetObject err = %!s(<nil>)
    Stream will return a non-EOF error
    Stream err =
    

    I expected GetObject to return the same error FGetObject did, but it doesn't. Any thoughts on what I'm doing wrong, or on how I can know the object didn't exist without making an additional call (like StatObject)?

  • File corruption on S3 upload

    Hi,

    I'm working for Exoscale, a cloud company that offers an S3-compatible Object Storage service named SOS. We've noticed that content corruption occurs when uploading large files to SOS with the minio-go package, which doesn't happen with the same file on the same service endpoint using s3cmd:

    $ md5sum largefile
    e3fcf94aad137e4d8da9ef04c5647d57  largefile
    
    $ s3cmd put largefile s3://marc-templates/largefile-s3cmd
    upload: 'largefile' -> 's3://marc-templates/largefile-s3cmd'  [part 1 of 83, 15MB] [1 of 1]
     15728640 of 15728640   100% in    2s     7.22 MB/s  done
    ...
    upload: 'largefile' -> 's3://marc-templates/largefile-s3cmd'  [part 83 of 83, 14MB] [1 of 1]
     14811296 of 14811296   100% in    1s     7.09 MB/s  done
    
    $ exo sos upload -p largefile-exo marc-templates largefile
    done! 100 % [==============================================================] largefile
    
    $ s3cmd ls --list-md5 s3://marc-templates/
    2019-06-19 14:52 1304559776   ca211e79fdd8c34a1231093eaa0dfa78-20  s3://marc-templates/largefile-exo
    2019-06-19 14:41 1304559776   e3fcf94aad137e4d8da9ef04c5647d57  s3://marc-templates/largefile-s3cmd
    

    It is possible that we may have messed up in our CLI implementation (available here), but it may also be a bug in this package. Note that we don't see this issue with smaller files.

    Could you please have a look?

  • Object.Read doesn't return error on truncated response

    Thanos and Cortex use the Minio client as their S3 client. From time to time we get bug reports that look like we received a partial/truncated response from the server, which is not treated as an error by our application.

    I've tried to simulate a scenario (I don't know how realistic it is) where the server returns a body smaller than the Content-Length. Minio's Object.Read() returns no error once the whole response has been consumed while, as a comparison, the Google GCS client does return io.ErrUnexpectedEOF.

    I've added a test in Thanos to show it: https://github.com/thanos-io/thanos/pull/3795

    Questions:

    • Why doesn't the Minio client check whether the received response size matches the Content-Length?
    • Shouldn't the described scenario be reported as an error by the Minio client?
  • Implement Versioning support

    1. GetObject

    GetObject can take a version ID to download a particular object version:

    GetObject(bucketName, objectName, GetObjectOptions{VersionID: versionID})

    2. StatObject

    StatObject can take a particular version ID:

    StatObject(bucketName, objectName, StatObjectOptions{GetObjectOptions{VersionID: versionId}})

    (In a breaking change, we need to define StatObjectOptions as a type alias of GetObjectOptions.)

    3. RemoveObject

    RemoveObject can remove a particular object version:

    RemoveObject(bucketName, objectName, RemoveObjectOptions{VersionID: versionId})

    4. ListObjectVersions

    A new API function is introduced to list object versions:

    type ObjectVersionInfo struct {
           ETag           string
           Key            string
           LastModified   time.Time
           Size           int64
           Owner          Owner
           StorageClass   string
           IsLatest       bool
           IsDeleteMarker bool
           VersionID      string
           Err            error
    }
    
    ListObjectVersions(bucketName, prefix string, recursive bool, doneCh <-chan struct{}) <-chan ObjectVersionInfo
    

    5. CopyObject

    Setting the version ID of the source object during copy

    NewSourceInfo(bucket, object).SetVersionID(versionID) to specify a version ID of the source object.

    6. ComposeObject

    UploadPartCopy with a version ID works the same as a regular copy: NewSourceInfo(bucket, object).SetVersionID(versionID)

    7. Object Tagging

    type PutObjectTaggingOptions struct {
        VersionID string
    }
    PutObjectTaggingWithOptions(ctx context.Context, bucketName, objectName string, objectTags map[string]string, opts PutObjectTaggingOptions) error
    
    type GetObjectTaggingOptions struct {
        VersionID string
    }
    
    GetObjectTaggingWithOptions(ctx context.Context, bucketName, objectName string, opts GetObjectTaggingOptions) (map[string]string, error)
    
    type RemoveObjectTaggingOptions struct {
        VersionID string
    }
    
    RemoveObjectTaggingWithOptions(ctx context.Context, bucketName, objectName string, opts RemoveObjectTaggingOptions) error
    
  • Functional tests not passing when Minio server has a new default region

    When I set Minio region to "us-west-2" in ~/.minio/config.json, functional tests won't pass anymore:

    $ go run functional_tests.go
    ...
    INFO[0015]                                               file=functional_tests.go function:=main.testGetObjectClosedTwice line#=665
    FATA[0041] Error:Put http://localhost:9000/minio-go-testiuzn23cfq9jrzlcnz/1yzgz903xtgj9tceycazirws0cmlla: Connection closed by foreign host http://localhost:9000/minio-go-testiuzn23cfq9jrzlcnz/1yzgz903xtgj9tceycazirws0cmlla. Retry again.minio-go-testiuzn23cfq9jrzlcnz1yzgz903xtgj9tceycazirws0cmlla  file=functional_tests.go function:=main.testGetObjectClosedTwice line#=705
    exit status 1
    
  • Fix #730 by using a guaranteed reuse buffer pool.

    Always return buffers to the pool.

    Current code fails to call bufPool.Put(bufp) for every bufPool.Get(), which is fixed here. But even with this fix, the code in #730 still results in signal: killed.

    Reimplementing the buffer pool with github.com/oxtoacart/bpool instead of sync.Pool solves the problem. Now we always reuse buffers if possible.

  • Add a new switch to explicitly control whether or not to use the virtual-hosted style

    The s3utils.IsVirtualHostSupported check is currently only compatible with Amazon S3 and Google Cloud Storage. That's not fair. In fact, there are other cloud service providers using the virtual-hosted style, such as DigitalOcean Spaces, Alibaba Cloud OSS and Qiniu Cloud Kodo. I would really like to use minio-go to connect to all the Amazon S3 compatible object storage services. So, please consider adding a switch that explicitly controls whether or not to use the virtual-hosted style. :-)

  • fix: use virtual hosted style if server supports it.

    This partially fixes https://github.com/minio/mc/issues/2335

    Auto-probe whether virtual-hosted-style URLs are supported by the server; if not, default to path-style URLs. Will also be submitting a PR for mc.

  • PutObject() to return a wrapped error for io.Reader errors

    PutObject() does not differentiate between internal errors and errors coming from its io.Reader argument, which makes troubleshooting harder.

    Add a prefix to the error returned while reading from the io.Reader argument.

    Example: Before this commit:

    mc: <ERROR> Failed to copy `http://192.168.1.113:9000/testbucket/testobject.2`. Put "http://localhost:9001/testbucket/testobject.2": Resource requested is unreadable, please reduce your request rate
    

    After this commit:

    mc: <ERROR> Failed to copy `http://192.168.1.113:9000/testbucket/testobject.2`. Put "http://localhost:9001/testbucket/testobject.2": read: Resource requested is unreadable, please reduce your request rate
    
  • PutObject for s3 csv extract fails.

    minio-go version used: github.com/minio/minio-go/v7 v7.0.24
    Minio bitnami chart version used: bitnami/minio:2022.8.13-debian-11-r0

    So we call GetObject to retrieve a CSV file from a zip:

    func (ms minioService) GetObject(
    	ctx context.Context,
    	bucketName string,
    	path string,
    ) (*minio.Object, error) {
    	var opts minio.GetObjectOptions

    	// Add extract zip header to request:
    	opts.Set("x-minio-extract", "true")

    	// Download file from the archive
    	reader, err := ms.minioClient.GetObject(ctx, bucketName, path, opts)
    	if err != nil {
    		return nil, err
    	}

    	return reader, nil
    }

    Then we call PutObject to put the object into another bucket:

    func (ms minioService) PutObject(
    	ctx context.Context,
    	bucketName string,
    	object *minio.Object,
    	newFileName string,
    	fileSize int64,
    ) error {
    	log.Info("minio#PutObject Start Progress of uploading file.")
    	// progressBarLog := pb.New64(fileSize)
    	// defer progressBarLog.FinishPrint("minio#PutObject File Uploaded.")
    	// progressBarLog.Start()
    	info, err := ms.minioClient.PutObject(ctx, bucketName, newFileName, object, fileSize, minio.PutObjectOptions{})
    	if err != nil {
    		return err
    	}
    	log.Info("minio#PutObject info on the upload", info)

    	return nil
    }
    Some files pass PutObject, but this particular file keeps erroring out.

    This results in an error: 0.00%ERROR[2022-10-18T18:56:08Z] Failed to put file into bucket bucketName=data-sources error="Put \"http://minio:9000/data-sources/data-source-1.csv?partNumber=1&uploadId=41381d95-27cb-4bad-b4ed-e64a43539dc6\": net/http: HTTP/1.x transport connection broken: http: ContentLength=16777216 with Body length 0" method=PutObject newFileName=data-source-1.csv test2.zip

  • Object is not traced when the request returns an error

    In this line https://github.com/minio/minio-go/blob/ff482a18933aa30769bfaeb7d34fa680fa51bcee/api.go#L491, we can see that if there is an error while making the request, the code returns and therefore no trace logging happens. This is unexpected behavior and makes things hard to debug. Even if the response doesn't come, the tracing logs both request and response. So it's always expected to log the request regardless of the response.

    I'd suggest splitting the method into two parts: first log the request unconditionally, then log the response along with the && resp.StatusCode == http.StatusOK condition.

    Due to this, we are unable to debug connection failures from object storage: https://github.com/mattermost/mattermost-server/issues/19584
