Fast, concurrent, streaming access to Amazon S3, including gof3r, a CLI. http://godoc.org/github.com/rlmcpherson/s3gof3r

s3gof3r

s3gof3r provides fast, parallelized, pipelined streaming access to Amazon S3. It includes a command-line interface: gof3r.

It is optimized for high speed transfer of large objects into and out of Amazon S3. Streaming support allows for usage like:

  $ tar -czf - <my_dir/> | gof3r put -b <s3_bucket> -k <s3_object>    
  $ gof3r get -b <s3_bucket> -k <s3_object> | tar -zx

Speed Benchmarks

On an EC2 instance, gof3r can exceed 1 Gbps for both puts and gets:

  $ gof3r get -b test-bucket -k 8_GB_tar | pv -a | tar -x
  Duration: 53.201632211s
  [ 167MB/s]
  

  $ tar -cf - test_dir/ | pv -a | gof3r put -b test-bucket -k 8_GB_tar
  Duration: 1m16.080800315s
  [ 119MB/s]

These tests were performed on an m1.xlarge EC2 instance with a virtualized 1 Gigabit ethernet interface. See Amazon EC2 Instance Details for more information.

Features

  • Speed: Especially for larger S3 objects, where parallelism can be exploited, s3gof3r will saturate the bandwidth of an EC2 instance. See the Benchmarks above.

  • Streaming Uploads and Downloads: As the examples above illustrate, streaming allows the gof3r command-line tool to be used with Linux/Unix pipes. This allows the data to be transformed in parallel as it is uploaded to or downloaded from S3.

  • End-to-end Integrity Checking: s3gof3r calculates the MD5 hash of the stream in parallel while uploading and downloading. On upload, a file containing the MD5 hash is saved in S3; on download, it is checked against the newly calculated MD5. On upload, the content-md5 of each part is also calculated and sent with the part's headers to be checked by AWS. s3gof3r additionally checks the 'hash of hashes' returned by S3 in the ETag field on completion of a multipart upload (a conceptual sketch of the streaming-hash idea appears after this list). See the S3 API Reference for details.

  • Retry Everything: Every HTTP request and every part is retried on both uploads and downloads. Requests to S3 frequently time out, especially under high load, so retrying is essential to completing large uploads or downloads.

  • Memory Efficiency: Memory used to upload and download parts is recycled. For an upload or download with the default concurrency of 10 and part size of 20 MB, the maximum memory usage is less than 300 MB. Memory footprint can be further reduced by reducing part size or concurrency.
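
As a rough illustration of the streaming-hash idea behind the integrity checking described above (a conceptual sketch only, not the library's internal code), an io.TeeReader can feed an MD5 hash while the bytes stream onward:

  package main

  import (
      "crypto/md5"
      "encoding/hex"
      "fmt"
      "io"
      "os"
  )

  func main() {
      h := md5.New()
      // Everything read from stdin is also written into the hash as it streams.
      r := io.TeeReader(os.Stdin, h)

      // In s3gof3r the destination would be the parallel part uploader; here the
      // bytes are discarded to keep the sketch self-contained.
      n, err := io.Copy(io.Discard, r)
      if err != nil {
          fmt.Fprintln(os.Stderr, err)
          os.Exit(1)
      }
      fmt.Printf("streamed %d bytes, MD5 %s\n", n, hex.EncodeToString(h.Sum(nil)))
  }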

Installation

s3gof3r is written in Go and requires Go 1.5 or later. It can be installed with go get, which downloads and compiles it from source. To install the command-line tool, gof3r, set GO15VENDOREXPERIMENT=1 in your environment:

$ go get github.com/rlmcpherson/s3gof3r/gof3r

To install just the package for use in other Go programs:

$ go get github.com/rlmcpherson/s3gof3r
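
Using the package from Go follows roughly the pattern below. This is a minimal sketch modeled on the godoc example; the bucket, key, and file names are placeholders, and the exact call signatures should be confirmed against the godoc.

  package main

  import (
      "io"
      "log"
      "os"

      "github.com/rlmcpherson/s3gof3r"
  )

  func main() {
      k, err := s3gof3r.EnvKeys() // AWS keys from the environment
      if err != nil {
          log.Fatal(err)
      }
      s3 := s3gof3r.New("", k)    // "" selects the default S3 domain
      b := s3.Bucket("my-bucket") // placeholder bucket name

      f, err := os.Open("my-file.tar") // placeholder local file
      if err != nil {
          log.Fatal(err)
      }
      defer f.Close()

      // PutWriter streams the upload as parallel parts; Close completes it.
      w, err := b.PutWriter("my-prefix/my-file.tar", nil, nil)
      if err != nil {
          log.Fatal(err)
      }
      if _, err := io.Copy(w, f); err != nil {
          log.Fatal(err)
      }
      if err := w.Close(); err != nil {
          log.Fatal(err)
      }
  }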

Release Binaries

To try the latest release of the gof3r command-line interface without installing Go, download the statically linked binary for your architecture from GitHub Releases.

gof3r (command-line interface) usage:

  To stream up to S3:
     $  <input_stream> | gof3r put -b <bucket> -k <s3_path>
  To stream down from S3:
     $ gof3r get -b <bucket> -k <s3_path> | <output_stream>
  To upload a file to S3:
     $ gof3r cp <local_path> s3://<bucket>/<s3_path>
  To download a file from S3:
     $ gof3r cp s3://<bucket>/<s3_path> <local_path>

Set AWS keys as environment variables:

  $ export AWS_ACCESS_KEY_ID=<access_key>
  $ export AWS_SECRET_ACCESS_KEY=<secret_key>

gof3r also supports IAM role-based keys from EC2 instance metadata. If they are available and environment variables are not set, these keys are used automatically.
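
In the package API, that fallback can be written roughly as follows (EnvKeys and InstKeys are the credential helpers documented in the godoc; treat the exact behavior here as an assumption and check the godoc):

  package main

  import (
      "log"

      "github.com/rlmcpherson/s3gof3r"
  )

  func main() {
      // Prefer keys from the environment; fall back to IAM role keys from
      // EC2 instance metadata when running on an instance with a role attached.
      k, err := s3gof3r.EnvKeys()
      if err != nil {
          if k, err = s3gof3r.InstKeys(); err != nil {
              log.Fatal("no AWS credentials found: ", err)
          }
      }
      _ = s3gof3r.New("", k) // ready to open buckets
  }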

Examples:

$ tar -cf - /foo_dir/ | gof3r put -b my_s3_bucket -k bar_dir/s3_object -m x-amz-meta-custom-metadata:abc123 -m x-amz-server-side-encryption:AES256
$ gof3r get -b my_s3_bucket -k bar_dir/s3_object | tar -x    

See the gof3r man page for complete usage.

Documentation

s3gof3r package: See the godocs for API documentation.

gof3r CLI: godoc and gof3r man page

Have a question? Ask it on the s3gof3r Mailing List

Comments
  • memory leak?

    Hi,

    Can something be done about memory consumption? I ran "gof3r put" with default options on what turned out to be a 671 GB object. It took 416m56.254s, and gof3r memory utilization grew to 21 GB virtual and 14 GB resident by the end.

    gof3r version 0.4.5

  • Sign request using AWS v4 signature

    Use the AWS v4 signature for signing requests. This is a port of the AWS Go signing logic, restricted to S3 and adapted to work against http.Request. It also detects the region based on the S3 domain or the AWS_REGION environment variable.

    P.S.: I can move the whitespace and other small changes out of this PR if preferred.

  • Support for AWS signature version 4

    All regions created after January 30, 2014 (e.g. Frankfurt) support only signature version 4. http://docs.aws.amazon.com/AmazonS3/latest/API/sig-v4-authenticating-requests.html

  • streaming anomaly

    I am streaming to S3 like this: gof3r put -b trice-app-files -k bucketPath -m x-amz-server-side-encryption:AES256

    When bucketPath is just a file name (going straight into the top-level bucket), it all works fine.

    However, when bucketPath is a path (e.g., 2014/09/01/fileName), it fails with the following error: gof3r error: 403: "The request signature we calculated does not match the signature you provided. Check your key and signing method."

    Any help you can provide is greatly appreciated. I love the tool, btw.

  • Version signing broken on master

    It looks to me like versioning is currently broken on master:

    vagrant@vagrant-ubuntu-trusty-64:/vagrant$ eval "$(gimme 1.5.1)"
    go version go1.5.1 linux/amd64
    vagrant@vagrant-ubuntu-trusty-64:/vagrant$ git branch
    * master
    vagrant@vagrant-ubuntu-trusty-64:/vagrant$ go test -run Version
    --- FAIL: TestGetVersion (1.00s)
        s3gof3r_test.go:419: version id: rNBjRePSIbXqSyIA6tZ3J6yXLpO2bAPR
        s3gof3r_test.go:423: 403: "The request signature we calculated does not match the signature you provided. Check your key and signing method."
    FAIL
    exit status 1
    FAIL    _/vagrant   4.167s
    vagrant@vagrant-ubuntu-trusty-64:/vagrant$ git checkout v0.5.0
    Note: checking out 'v0.5.0'.
    HEAD is now at 31603a0... version to 0.5.0
    vagrant@vagrant-ubuntu-trusty-64:/vagrant$ go test -run Version
    PASS
    ok      _/vagrant   2.023s
    vagrant@vagrant-ubuntu-trusty-64:/vagrant$ 
    

    Looks like the bucket Travis is using is not versioned, so that test is currently being skipped.

  • gof3r: poor error message on failure

    I ran into another logging-related issue. I haven't changed anything in my application, but all of a sudden, I'm getting

    Error:  400: "The XML you provided was not well-formed or did not validate against our published schema"
    

    That in itself is not necessarily a problem (I haven't changed the gof3r binary I'm using, so I suspect I'm doing something wrong), but the error message here is pretty opaque: as the user, I am certainly not providing any XML.

  • Log formatting

    It looks like gof3r logging is pretty sloppy in a number of places. I was testing it recently for large-ish uploads and it does great, but it's logging raw structs in many different places instead of friendlier messages. Would you accept a patch to clean up logging?

  • Support for EC2 instance profiles

    Looking at the code, I don't see support for pulling down credentials from the EC2 metadata service when they are available. Seeding AWS access keys onto instances is not really a recommended way of authorizing these applications inside the cloud, although environment-variable support should still be available for the user's workstation.

  • '#' in S3 key/"path"

    In an S3 bucket, I have a folder like so: #foo

    I am trying to upload a file to it, e.g. put a file with local pathname /tmp/bar into the key #foo/bar

    $my_bucket_name contains only alpha-numerics and the - character.

    [notroot@aio14 ~]$ touch /tmp/bar
    [notroot@aio14 ~]$ cat /tmp/bar | gof3r put -b "$my_bucket_name" -k \#foo/bar
    gof3r error: 412: "At least one of the pre-conditions you specified did not hold"
    [notroot@aio14 ~]$ gof3r cp /tmp/bar s3://"$my_bucket_name"/#foo/bar
    gof3r error: 400: "A key must be specified"
    [notroot@aio14 ~]$ 
    [notroot@aio14 ~]$ gof3r cp /tmp/bar s3://"$my_bucket_name"/foo/bar
    duration: 770.022417ms
    [notroot@aio14 ~]$ cat /tmp/bar | gof3r put -b "$my_bucket_name" -k foo/bar
    duration: 760.235401ms
    
  • HTTP HEAD request should be used instead of GET for initial request

    To determine the number of chunks, the content-length of the object is needed. Currently an HTTP GET is performed and the body is never read from the response, just to obtain this header. An HTTP HEAD would return the same information without including the payload, which would also eliminate unnecessary TCP packets that carry segments of the payload that are never read.

    I think there is an opportunity to optimize further by making the initial request an HTTP GET with the byte range for the first chunk. The HTTP spec says that if the byte range exceeds the content-length, the content-length is used instead. The initial response used to get the content-length for calculating the chunk total could then be reused, and a second HTTP GET would not be needed. It could also determine that if no additional chunks are needed, no workers are started and processing proceeds with the current response. This is bigger work and should probably be moved into its own feature/issue, or this issue can stay open to resolve it.
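
    A rough sketch of both options using net/http (illustrative only; the URL is a placeholder and this is not s3gof3r's code):

      package main

      import (
          "fmt"
          "log"
          "net/http"
      )

      func main() {
          url := "https://example-bucket.s3.amazonaws.com/example-key" // placeholder

          // Option 1: HEAD returns Content-Length without transferring the body.
          head, err := http.Head(url)
          if err != nil {
              log.Fatal(err)
          }
          head.Body.Close()
          fmt.Println("object size:", head.ContentLength)

          // Option 2: a ranged GET for the first part. The Content-Range header
          // ("bytes 0-20971519/<total>") carries the total size, so the first
          // chunk's response can be reused instead of issuing a separate request.
          req, _ := http.NewRequest("GET", url, nil)
          req.Header.Set("Range", "bytes=0-20971519") // first 20 MiB part
          resp, err := http.DefaultClient.Do(req)
          if err != nil {
              log.Fatal(err)
          }
          defer resp.Body.Close()
          fmt.Println("Content-Range:", resp.Header.Get("Content-Range"))
      }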

  • OBOE (off by one error) when determining if all chunks have been read

    The code for determining whether all chunks have been read has an off-by-one error (OBOE) that causes not all bytes to be read when the number of bytes in the chunk is 1 byte more than the number of bytes read after the copy:

            if g.cIdx >= g.rChunk.size-1 { // chunk complete
                g.sp.give <- g.rChunk.b
                g.chunkID++
                g.rChunk = nil
            }
    

    This occurs when the chunk size is a multiple of the default byte buffer size that io.Copy uses (32 KiB, 32 * 1024) plus 1 byte. When the copy exits, g.cIdx is updated with the number of bytes read, which makes the variable a count (1-based), as opposed to a 0-based index into the buffer. The chunk size is a length, which is also 1-based, so there is no need to offset by 1.

    When this is hit, the goroutines end up blocked in a select forever: the next iteration of the loop checks whether g.bytesRead == g.contentLen, which it does not (1 byte remains in the chunk), and it proceeds to call g.nextChunk() since g.rChunk was cleared out earlier. The goroutine then blocks on the select in nextChunk(), and there are no more chunks.

    goroutine trace from pprof

    goroutine 43 [select, 83 minutes]:
    github.com/rlmcpherson/s3gof3r.(*getter).nextChunk(0xc2083705a0, 0x44c2d4, 0x0, 0x0)
        /go/src/github.com/rlmcpherson/s3gof3r/getter.go:244 +0x2d9
    github.com/rlmcpherson/s3gof3r.(*getter).Read(0xc2083705a0, 0xc21b1e0000, 0x8000, 0x8000, 0x8000, 0x0, 0x0)
        /go/src/github.com/rlmcpherson/s3gof3r/getter.go:207 +0x182
    io.Copy(0x7feef9a5e5a8, 0xc208064800, 0x7feef9a6e320, 0xc2083705a0, 0x180000, 0x0, 0x0)
        /usr/local/go/src/pkg/io/io.go:353 +0x1f3
    project/dbimport.(*bucketUtils).concat(0xc21b2071c0, 0xc208242000, 0x5a, 0x80, 0xc21b64ab40, 0x40, 0x0, 0x0)
        /go/src/project/main.go:217 +0x7ef
    project/dbimport.processS3Files(0xc2080ec1e0, 0xc2081ec660, 0xc2083ce2c0, 0x7fffc6ee6e1f, 0xf, 0xc20821ff50, 0x2a, 0xc208269200, 0x5a, 0x5a, ...)
        /go/src/project/main.go:792 +0x17a3
    project/dbimport.func·007(0xc2083cc2a0)
        /go/src/project/main.go:622 +0x13b6
    
    
    goroutine 93481 [select]:
    github.com/rlmcpherson/s3gof3r.func·002()
        /go/src/github.com/rlmcpherson/s3gof3r/pool.go:42 +0x6cd
    created by github.com/rlmcpherson/s3gof3r.bufferPool
        /go/src/github.com/rlmcpherson/s3gof3r/pool.go:68 +0x15a
    

    This can be reproduced with any file that is 1 byte larger than a multiple of 32 KiB, such as (in my case) 1081345 bytes (32 * 1024 * 33 + 1), with the default PartSize of 20 MiB.

    The last couple of iterations of the loop, with some debug output:

    ...
    s3gof3r2015/03/06 23:37:16 getter.go:203: -----loop start------
    s3gof3r2015/03/06 23:37:16 getter.go:204: g.chunkTotal: 1
    s3gof3r2015/03/06 23:37:16 getter.go:205: g.chunkID: 0
    s3gof3r2015/03/06 23:37:16 getter.go:207: g.rChunk.id: 0
    s3gof3r2015/03/06 23:37:16 getter.go:211: nw: 0
    s3gof3r2015/03/06 23:37:16 getter.go:212: len(p): 32768
    s3gof3r2015/03/06 23:37:16 getter.go:213: g.cIdx: 1015808
    s3gof3r2015/03/06 23:37:16 getter.go:214: g.bytesRead: 1015808
    s3gof3r2015/03/06 23:37:16 getter.go:215: g.contentLen: 1081345
    s3gof3r2015/03/06 23:37:16 getter.go:228: g.cIdx: 1015808
    s3gof3r2015/03/06 23:37:16 getter.go:229: g.rChunk.size: 1081345
    s3gof3r2015/03/06 23:37:16 getter.go:230: len(p): 32768
    s3gof3r2015/03/06 23:37:16 getter.go:232: len(p): 32768
    s3gof3r2015/03/06 23:37:16 getter.go:233: n: 32768
    s3gof3r2015/03/06 23:37:16 getter.go:234: bytesRead: 1015808
    s3gof3r2015/03/06 23:37:16 getter.go:238: bytesRead: 1048576
    s3gof3r2015/03/06 23:37:16 getter.go:244: g.chunkID: 0
    s3gof3r2015/03/06 23:37:16 getter.go:246: g.rChunk.id: 0
    s3gof3r2015/03/06 23:37:16 getter.go:250: -----loop end------
    s3gof3r2015/03/06 23:37:16 getter.go:203: -----loop start------
    s3gof3r2015/03/06 23:37:16 getter.go:204: g.chunkTotal: 1
    s3gof3r2015/03/06 23:37:16 getter.go:205: g.chunkID: 0
    s3gof3r2015/03/06 23:37:16 getter.go:207: g.rChunk.id: 0
    s3gof3r2015/03/06 23:37:16 getter.go:211: nw: 0
    s3gof3r2015/03/06 23:37:16 getter.go:212: len(p): 32768
    s3gof3r2015/03/06 23:37:16 getter.go:213: g.cIdx: 1048576
    s3gof3r2015/03/06 23:37:16 getter.go:214: g.bytesRead: 1048576
    s3gof3r2015/03/06 23:37:16 getter.go:215: g.contentLen: 1081345
    s3gof3r2015/03/06 23:37:16 getter.go:228: g.cIdx: 1048576
    s3gof3r2015/03/06 23:37:16 getter.go:229: g.rChunk.size: 1081345
    s3gof3r2015/03/06 23:37:16 getter.go:230: len(p): 32768
    s3gof3r2015/03/06 23:37:16 getter.go:232: len(p): 32768
    s3gof3r2015/03/06 23:37:16 getter.go:233: n: 32768
    s3gof3r2015/03/06 23:37:16 getter.go:234: bytesRead: 1048576
    s3gof3r2015/03/06 23:37:16 getter.go:238: bytesRead: 1081344
    s3gof3r2015/03/06 23:37:16 getter.go:244: g.chunkID: 1
    s3gof3r2015/03/06 23:37:16 getter.go:248: g.rChunk.id: nil
    s3gof3r2015/03/06 23:37:16 getter.go:250: -----loop end------
    s3gof3r2015/03/06 23:37:16 getter.go:203: -----loop start------
    s3gof3r2015/03/06 23:37:16 getter.go:204: g.chunkTotal: 1
    s3gof3r2015/03/06 23:37:16 getter.go:205: g.chunkID: 1
    s3gof3r2015/03/06 23:37:16 getter.go:209: g.rChunk.id: nil
    s3gof3r2015/03/06 23:37:16 getter.go:211: nw: 0
    s3gof3r2015/03/06 23:37:16 getter.go:212: len(p): 32768
    s3gof3r2015/03/06 23:37:16 getter.go:213: g.cIdx: 1081344
    s3gof3r2015/03/06 23:37:16 getter.go:214: g.bytesRead: 1081344
    s3gof3r2015/03/06 23:37:16 getter.go:215: g.contentLen: 1081345
    s3gof3r2015/03/06 23:37:16 getter.go:271: ------nextChunk select------
    

    This results in an infinite select loop. The file on disk is missing 1 byte (the last byte of the file).

    It can also be reproduced with any PartSize that is a multiple of 32 KiB plus 1. The example below is a file that succeeded with the default 20 MiB PartSize but fails with a PartSize of 131073 (32 * 1024 * 4 + 1); it also shows a multipart download, which the example above did not.

    ...
    s3gof3r2015/03/06 23:55:15 getter.go:203: -----loop start------
    s3gof3r2015/03/06 23:55:15 getter.go:204: g.chunkTotal: 10
    s3gof3r2015/03/06 23:55:15 getter.go:205: g.chunkID: 9
    s3gof3r2015/03/06 23:55:15 getter.go:207: g.rChunk.id: 9
    s3gof3r2015/03/06 23:55:15 getter.go:211: nw: 0
    s3gof3r2015/03/06 23:55:15 getter.go:212: len(p): 32768
    s3gof3r2015/03/06 23:55:15 getter.go:213: g.cIdx: 32768
    s3gof3r2015/03/06 23:55:15 getter.go:214: g.bytesRead: 1212416
    s3gof3r2015/03/06 23:55:15 getter.go:215: g.contentLen: 1275093
    s3gof3r2015/03/06 23:55:15 getter.go:228: g.cIdx: 32768
    s3gof3r2015/03/06 23:55:15 getter.go:229: g.rChunk.size: 95436
    s3gof3r2015/03/06 23:55:15 getter.go:230: len(p): 32768
    s3gof3r2015/03/06 23:55:15 getter.go:232: len(p): 32768
    s3gof3r2015/03/06 23:55:15 getter.go:233: n: 32768
    s3gof3r2015/03/06 23:55:15 getter.go:234: bytesRead: 1212416
    s3gof3r2015/03/06 23:55:15 getter.go:238: bytesRead: 1245184
    s3gof3r2015/03/06 23:55:15 getter.go:244: g.chunkID: 9
    s3gof3r2015/03/06 23:55:15 getter.go:246: g.rChunk.id: 9
    s3gof3r2015/03/06 23:55:15 getter.go:250: -----loop end------
    s3gof3r2015/03/06 23:55:15 getter.go:203: -----loop start------
    s3gof3r2015/03/06 23:55:15 getter.go:204: g.chunkTotal: 10
    s3gof3r2015/03/06 23:55:15 getter.go:205: g.chunkID: 9
    s3gof3r2015/03/06 23:55:15 getter.go:207: g.rChunk.id: 9
    s3gof3r2015/03/06 23:55:15 getter.go:211: nw: 0
    s3gof3r2015/03/06 23:55:15 getter.go:212: len(p): 32768
    s3gof3r2015/03/06 23:55:15 getter.go:213: g.cIdx: 65536
    s3gof3r2015/03/06 23:55:15 getter.go:214: g.bytesRead: 1245184
    s3gof3r2015/03/06 23:55:15 getter.go:215: g.contentLen: 1275093
    s3gof3r2015/03/06 23:55:15 getter.go:228: g.cIdx: 65536
    s3gof3r2015/03/06 23:55:15 getter.go:229: g.rChunk.size: 95436
    s3gof3r2015/03/06 23:55:15 getter.go:230: len(p): 32768
    s3gof3r2015/03/06 23:55:15 getter.go:232: len(p): 32768
    s3gof3r2015/03/06 23:55:15 getter.go:233: n: 29900
    s3gof3r2015/03/06 23:55:15 getter.go:234: bytesRead: 1245184
    s3gof3r2015/03/06 23:55:15 getter.go:238: bytesRead: 1275084
    s3gof3r2015/03/06 23:55:15 getter.go:244: g.chunkID: 10
    s3gof3r2015/03/06 23:55:15 getter.go:248: g.rChunk.id: nil
    s3gof3r2015/03/06 23:55:15 getter.go:250: -----loop end------
    s3gof3r2015/03/06 23:55:15 getter.go:203: -----loop start------
    s3gof3r2015/03/06 23:55:15 getter.go:204: g.chunkTotal: 10
    s3gof3r2015/03/06 23:55:15 getter.go:205: g.chunkID: 10
    s3gof3r2015/03/06 23:55:15 getter.go:209: g.rChunk.id: nil
    s3gof3r2015/03/06 23:55:15 getter.go:211: nw: 29900
    s3gof3r2015/03/06 23:55:15 getter.go:212: len(p): 32768
    s3gof3r2015/03/06 23:55:15 getter.go:213: g.cIdx: 95436
    s3gof3r2015/03/06 23:55:15 getter.go:214: g.bytesRead: 1275084
    s3gof3r2015/03/06 23:55:15 getter.go:215: g.contentLen: 1275093
    s3gof3r2015/03/06 23:55:15 getter.go:271: ------nextChunk select------
    

    This results in an infinite select loop. The file on disk will be missing 1 byte for every chunk.

    A pull request is incoming which addresses the issue and adds a couple of guards:

    • If bytes read does not equal content length and all chunks have been processed, error
    • If bytes read is greater than content length, error
      • This should not occur, as Go uses a LimitedReader up to the content length, but it is included for completeness

    For more robustness, there could be a timeout on the select in nextChunk: if for any reason the routine gets in there and there are no more chunks, it will block forever. We cannot rely on the underlying HTTP connection going away and triggering a close, because the data has already been read into memory. I did consider having the worker() function close g.readCh, but that would not break out of the select (it would if the code ranged over the channel). I have not thought this fully through, but I feel something can be done to signal that no more chunks will arrive on the channel because the workers are gone.
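
    A self-contained sketch of the off-by-one (based on the description above, not the actual patch): cIdx is a running count of bytes copied into the chunk and size is the chunk length, so the complete-check should compare them directly rather than against size-1.

      package main

      import "fmt"

      func main() {
          const size = 32*1024 + 1 // chunk one byte larger than io.Copy's 32 KiB buffer
          cIdx := 0
          for cIdx < size {
              n := 32 * 1024 // bytes handed over per copy iteration
              if size-cIdx < n {
                  n = size - cIdx
              }
              cIdx += n
              if cIdx >= size-1 && cIdx < size {
                  fmt.Println("buggy test fires early at", cIdx, "of", size) // last byte dropped
              }
              if cIdx >= size {
                  fmt.Println("fixed test completes at", cIdx, "of", size)
              }
          }
      }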

  • Cannot download yfcc100m

    Hi, I am using s3gof3r to download yfcc100m following their instructions: gof3r get -b multimedia-commons -k images/ -p ./

     gof3r get -b multimedia-commons -k images/ -p ./
     gof3r error: open ./: is a directory

     gof3r get -b multimedia-commons -k images/
     gof3r error: 400: "The authorization header is malformed; the region 'us-east-1' is wrong; expecting 'us-west-2'"

     gof3r get -b multimedia-commons -k images/ --endpoint s3-website-us-west-2.amazonaws.com
     gof3r error: Get https://multimedia-commons.s3-website-us-west-2.amazonaws.com/images: dial tcp 52.218.253.146:443: i/o timeout

  • Allow plus in key names

    Keys containing the plus character ("+") fail the request signature check. For example, a file with the key 5-ID+Rub+IgG+II+30221_01_2002_FR.docx gives:

    gof3r error: 403: "The request signature we calculated does not match the signature you provided. Check your key and signing method."
    

    Related to the linked issue which fixed the colon (":") and at ("@") characters.

    Originally posted by @ewold in https://github.com/rlmcpherson/s3gof3r/issues/110#issuecomment-346761148

  • Binaries for many platforms

    Hi, for the Julia wrapper for s3gof3r, I set up Julia's binary builder, which builds for many platforms automatically: https://github.com/JuliaBinaryWrappers/s3gof3r_jll.jl/releases Feel free to use them / point to them :)

    Best, Simon

  • Loosen endpoint regex to allow s3-compatible services

    The current regex prohibits using third-party (non-AWS) services that implement the S3 API:

    https://github.com/rlmcpherson/s3gof3r/blob/864ae0bf7cf2e20c0002b7ea17f4d84fec1abc14/s3gof3r.go#L21

    Is there a reason to limit gof3r strictly to Amazon services? Or could the regex be loosened to accept any valid URL?
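
    As an illustration of the looser alternative (a sketch only, not a proposed patch), the check could accept any syntactically valid host rather than only AWS domains:

      package main

      import (
          "errors"
          "fmt"
          "net/url"
      )

      // validEndpoint accepts any syntactically valid host[:port] rather than
      // only *.amazonaws.com domains.
      func validEndpoint(endpoint string) error {
          u, err := url.Parse("https://" + endpoint)
          if err != nil {
              return err
          }
          if u.Host != endpoint {
              return errors.New("endpoint must be a bare host[:port]")
          }
          return nil
      }

      func main() {
          fmt.Println(validEndpoint("s3.amazonaws.com"))                // <nil>
          fmt.Println(validEndpoint("minio.internal.example.com:9000")) // <nil>
      }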
