Parallel S3 and local filesystem execution tool.

s5cmd

Overview

s5cmd is a very fast S3 and local filesystem execution tool. It comes with support for a multitude of operations, including tab completion and wildcard support for files, which can be very handy in your object storage workflow when working with a large number of files.

There are already other utilities for working with S3 and similar object storage services, so it is natural to wonder what s5cmd has to offer that others don't.

In short, s5cmd is very fast. Thanks to Joshua Robinson for his study and experimentation on s5cmd; to quote his Medium post:

For uploads, s5cmd is 32x faster than s3cmd and 12x faster than aws-cli. For downloads, s5cmd can saturate a 40Gbps link (~4.3 GB/s), whereas s3cmd and aws-cli can only reach 85 MB/s and 375 MB/s respectively.

If you would like to know more about the performance of s5cmd and the reasons for its speed, refer to the Benchmarks section.

Features

s5cmd supports a wide range of object management tasks, both for cloud storage services and local filesystems.

  • List buckets and objects
  • Upload, download or delete objects
  • Move, copy or rename objects
  • Set Server Side Encryption using AWS Key Management Service (KMS)
  • Set Access Control List (ACL) for objects/files on upload, copy, and move
  • Print object contents to stdout
  • Create buckets
  • Summarize object sizes, grouping by storage class
  • Wildcard support for all operations
  • Multiple arguments support for delete operation
  • Command file support to run commands in batches at very high execution speeds
  • Dry run support
  • S3 Transfer Acceleration support
  • Google Cloud Storage (and any other S3 API compatible service) support
  • Structured logging for querying command outputs
  • Shell auto-completion

Installation

Binaries

The Releases page provides pre-built binaries for Linux and macOS.
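
For example, a released archive can be fetched and unpacked directly from the command line. This is only a sketch: the exact archive name depends on the version and platform, and it assumes the s5cmd binary sits at the top level of the tarball (v2.0.0 is used purely as an illustration).

curl -sL https://github.com/peak/s5cmd/releases/download/v2.0.0/s5cmd_2.0.0_Linux-64bit.tar.gz | tar xzf - s5cmd
sudo mv s5cmd /usr/local/bin/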

Homebrew

For macOS, a Homebrew tap is provided:

brew tap peak/s5cmd https://github.com/peak/s5cmd
brew install s5cmd

Build from source

You can build s5cmd from source if you have Go 1.13+ installed.

go get github.com/peak/s5cmd

⚠️ Please note that building from master is not guaranteed to be stable, since development happens on the master branch.

Docker

Hub

$ docker pull peakcom/s5cmd
$ docker run --rm -v ~/.aws:/root/.aws peakcom/s5cmd <S3 operation>
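
To work with local files inside the container, you will typically also mount a host directory; a sketch, where /data is just an arbitrary mount point chosen for this example:

$ docker run --rm -v ~/.aws:/root/.aws -v "$(pwd)":/data peakcom/s5cmd cp s3://bucket/object.gz /data/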

Build

$ git clone https://github.com/peak/s5cmd && cd s5cmd
$ docker build -t s5cmd .
$ docker run --rm -v ~/.aws:/root/.aws s5cmd <S3 operation>

Usage

s5cmd supports multi-level wildcards for all S3 operations. This is achieved by listing all S3 objects with the prefix up to the first wildcard, then filtering the results in memory. For example, for the following command:

s5cmd cp 's3://bucket/logs/2020/03/*' .

first a ListObjects request is sent, then the copy operation is executed against each matching object, in parallel.
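
For instance, in the hypothetical command below only the s3://bucket/logs/ prefix is listed; the remainder of the pattern is matched in memory before the parallel copies start:

s5cmd cp 's3://bucket/logs/*/03/*.gz' .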

Examples

Download a single S3 object

s5cmd cp s3://bucket/object.gz .

Download multiple S3 objects

Suppose we have the following objects:

s3://bucket/logs/2020/03/18/file1.gz
s3://bucket/logs/2020/03/19/file2.gz
s3://bucket/logs/2020/03/19/originals/file3.gz

s5cmd cp 's3://bucket/logs/2020/03/*' logs/

s5cmd will match the given wildcards and arguments by doing an efficient search against the given prefixes. All matching objects will be downloaded in parallel. s5cmd will create the destination directory if it is missing.

logs/ directory content will look like:

$ tree
.
└── logs
    ├── 18
    │   └── file1.gz
    └── 19
        ├── file2.gz
        └── originals
            └── file3.gz

4 directories, 3 files

ℹ️ s5cmd preserves the source directory structure by default. If you want to flatten the source directory structure, use the --flatten flag.

s5cmd cp --flatten 's3://bucket/logs/2020/03/*' logs/

logs/ directory content will look like:

$ tree
.
└── logs
    ├── file1.gz
    ├── file2.gz
    └── file3.gz

1 directory, 3 files

Upload a file to S3

s5cmd cp object.gz s3://bucket/

by setting server side encryption (AWS KMS) of the file:

s5cmd cp -sse aws:kms -sse-kms-key-id <your-kms-key-id> object.gz s3://bucket/

by setting the Access Control List (ACL) policy of the object:

s5cmd cp -acl bucket-owner-full-control object.gz s3://bucket/

Upload multiple files to S3

s5cmd cp directory/ s3://bucket/

will upload all files in the given directory to S3, keeping the folder hierarchy of the source.

Delete an S3 object

s5cmd rm s3://bucket/logs/2020/03/18/file1.gz

Delete multiple S3 objects

s5cmd rm s3://bucket/logs/2020/03/19/*

Will remove all matching objects:

s3://bucket/logs/2020/03/19/file2.gz
s3://bucket/logs/2020/03/19/originals/file3.gz

s5cmd utilizes the S3 delete batch API: up to 1000 matching objects are deleted in a single request. However, it should be noted that commands such as

s5cmd rm s3://bucket-foo/object s3://bucket-bar/object

are not supported by s5cmd and result in an error (since the objects live in two different buckets), as that is at odds with the benefit of performing batch delete requests. If needed, s5cmd's run mode can be used for this case, i.e.,

$ s5cmd run
rm s3://bucket-foo/object
rm s3://bucket-bar/object

More details and examples on s5cmd run are presented in a later section.

Copy objects from S3 to S3

s5cmd supports copying objects on the server side as well.

s5cmd cp 's3://bucket/logs/2020/*' s3://bucket/logs/backup/

Will copy all the matching objects to the given S3 prefix, respecting the source folder hierarchy.

⚠️ Copying objects (from S3 to S3) larger than 5GB is not supported yet. We have an open ticket to track the issue.

Count objects and determine total size

$ s5cmd du --humanize 's3://bucket/2020/*'

30.8M bytes in 3 objects: s3://bucket/2020/*

Run multiple commands in parallel

The most powerful feature of s5cmd is the commands file. Thousands of S3 and filesystem commands are declared in a file (or simply piped in from another process) and they are executed using multiple parallel workers. Since only one program is launched, thousands of unnecessary fork-exec calls are avoided. This way S3 execution times can reach a few thousand operations per second.

s5cmd run commands.txt

or

cat commands.txt | s5cmd run

commands.txt content could look like:

cp s3://bucket/2020/03/* logs/2020/03/

# line comments are supported
rm s3://bucket/2020/03/19/file2.gz

# empty lines are OK too like above

# rename an S3 object
mv s3://bucket/2020/03/18/file1.gz s3://bucket/2020/03/18/original/file.gz

# list all buckets
ls # inline comments are OK too

Dry run

The --dry-run flag outputs which operations would be performed, without actually carrying them out. Suppose we have the following objects:

s3://bucket/pre/file1.gz
...
s3://bucket/last.txt

running

s5cmd --dry-run cp s3://bucket/pre/* s3://another-bucket/

will output

cp s3://bucket/pre/file1.gz s3://another-bucket/file1.gz
...
cp s3://bucket/pre/last.txt s3://another-bucket/last.txt

However, those copy operations will not be performed; the output only shows what s5cmd would do when run without --dry-run.

Note that --dry-run can be used with any operation that has a side effect, e.g., cp, mv, rm, mb ...
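
For instance, a destructive command can be previewed the same way before running it for real:

s5cmd --dry-run rm 's3://bucket/pre/*'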

Specifying credentials

s5cmd uses the official AWS SDK to access S3. The SDK requires credentials to sign requests to AWS. Credentials can be provided in a variety of ways:

  • Environment variables
  • AWS credentials file
  • If s5cmd runs on an Amazon EC2 instance, the EC2 IAM role
  • If s5cmd runs on Amazon EKS, the Kubernetes IAM role

The SDK detects and uses the built-in providers automatically, without requiring manual configuration.
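
As a minimal sketch, credentials can be supplied via the standard AWS SDK environment variables, or by selecting a named profile from the AWS credentials file (the profile name below is illustrative):

export AWS_ACCESS_KEY_ID=<access-key-id>
export AWS_SECRET_ACCESS_KEY=<secret-access-key>
export AWS_REGION=eu-west-1
s5cmd ls

AWS_PROFILE=myprofile s5cmd ls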

Shell auto-completion

Shell completion is supported for bash, zsh and fish.

To enable auto-completion, run:

s5cmd --install-completion

This will add a few lines to your shell configuration file. After installation, restart your shell to activate the changes.

Google Cloud Storage support

s5cmd supports S3 API compatible services, such as Google Cloud Storage (GCS), MinIO, or your favorite object storage.

s5cmd --endpoint-url https://storage.googleapis.com ls

will return your GCS buckets.

s5cmd uses virtual-host style bucket resolving for S3, S3 Transfer Acceleration, and GCS. If a custom endpoint is provided, it falls back to path-style.
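
Other S3-compatible endpoints work the same way; for example, assuming a MinIO server listening on localhost:9000 with credentials already configured:

s5cmd --endpoint-url http://localhost:9000 ls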

Retry logic

s5cmd uses an exponential backoff retry mechanism for transient errors or potential server-side throttling. Non-retriable errors, such as invalid credentials or authorization errors, are not retried. By default, s5cmd retries 10 times, for up to a minute. The number of retries is adjustable via the --retry-count flag.

ℹ️ Enable debug-level logging to display retryable errors.
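
For example, if an endpoint throttles aggressively, the retry count can be raised and retryable errors surfaced with debug logging (both flags appear elsewhere in this document; the values here are illustrative):

s5cmd --log debug --retry-count 20 cp 's3://bucket/logs/*' .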

Using wildcards

Most shells attempt to expand wildcards before passing the arguments to s5cmd, which can result in surprising "no matches found" errors.

To avoid this problem, surround the wildcarded expression with single quotes.
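
For example, the two invocations below differ only in quoting; the quoted form is the safe one:

# unquoted: the shell may expand (or fail to match) the wildcard itself
s5cmd cp s3://bucket/logs/2020/03/* .

# quoted: the wildcard reaches s5cmd unchanged and is matched against S3
s5cmd cp 's3://bucket/logs/2020/03/*' .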

Output

s5cmd supports both structured and unstructured outputs.

  • unstructured output

$ s5cmd cp s3://bucket/testfile .
cp s3://bucket/testfile testfile

$ s5cmd cp --no-clobber s3://somebucket/file.txt file.txt
ERROR "cp s3://somebucket/file.txt file.txt": object already exists

  • If the --json flag is provided:
{
    "operation": "cp",
    "success": true,
    "source": "s3://bucket/testfile",
    "destination": "testfile",
    "object": "[object]"
}
{
    "operation": "cp",
    "job": "cp s3://somebucket/file.txt file.txt",
    "error": "'cp s3://somebucket/file.txt file.txt': object already exists"
}
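
The structured output above corresponds to running the same commands with the global --json flag, e.g. (flag placement before the subcommand is assumed here, matching the other global flags in this document):

$ s5cmd --json cp s3://bucket/testfile .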

Benchmarks

Some benchmarks regarding the performance of s5cmd are presented below. For more details, refer to this post, which is the source of the benchmarks.

Upload/download of single large file

get/put performance graph

Uploading large number of small-sized files

multi-object upload performance graph

Performance comparison on different hardware

s3 upload speed graph

So, where does all this speed come from?

There are mainly two reasons for this:

  • It is written in Go, a statically compiled language designed to make development of concurrent systems easy and to make full use of multi-core processors.
  • Parallelization. s5cmd starts out with concurrent worker pools and parallelizes workloads as much as possible while trying to achieve maximum throughput (see the sketch below).
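
The degree of parallelism is tunable from the command line; a sketch, assuming the worker-count flag is spelled --numworkers (an older single-dash form of the same flag appears in an issue quoted later on this page):

s5cmd --numworkers 512 cp 's3://bucket/logs/*' .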

Advanced Usage

Some of the advanced usage patterns provided below are inspired by the following article (thank you, @joshuarobinson!).

Integrate s5cmd operations with Unix commands

Assume we have a set of objects on S3, and we would like to list them sorted by object name.

$ s5cmd ls s3://bucket/reports/ | sort -k 4
2020/08/17 09:34:33              1364 antalya.csv
2020/08/17 09:34:33                 0 batman.csv
2020/08/17 09:34:33             23114 istanbul.csv
2020/08/17 09:34:33             26154 izmir.csv
2020/08/17 09:34:33               112 samsun.csv
2020/08/17 09:34:33             12552 van.csv

For a more practical scenario, let's say we have an avocado prices dataset, and we would like to take a peek at a few lines of the data by fetching only the necessary bytes.

$ s5cmd cat s3://bucket/avocado.csv.gz | gunzip | xsv slice --len 5 | xsv table
    Date        AveragePrice  Total Volume  4046     4225       4770   Total Bags  Small Bags  Large Bags  XLarge Bags  type          year  region
0   2015-12-27  1.33          64236.62      1036.74  54454.85   48.16  8696.87     8603.62     93.25       0.0          conventional  2015  Albany
1   2015-12-20  1.35          54876.98      674.28   44638.81   58.33  9505.56     9408.07     97.49       0.0          conventional  2015  Albany
2   2015-12-13  0.93          118220.22     794.7    109149.67  130.5  8145.35     8042.21     103.14      0.0          conventional  2015  Albany
3   2015-12-06  1.08          78992.15      1132.0   71976.41   72.58  5811.16     5677.4      133.76      0.0          conventional  2015  Albany
4   2015-11-29  1.28          51039.6       941.48   43838.39   75.78  6183.95     5986.26     197.69      0.0          conventional  2015  Albany

Beast Mode s5cmd

s5cmd allows passing a file containing a list of operations to the run command, as illustrated in the example above. Alternatively, commands can be piped into run:

BUCKET=s5cmd-test; s5cmd ls s3://$BUCKET/*test | grep -v DIR | awk '{print $NF}' \
  | xargs -I {} echo "cp s3://$BUCKET/{} /local/directory/" | s5cmd run

The above pipeline performs two s5cmd invocations: the first lists objects whose names end with test, then a cp-to-local-directory command is generated for each matching object, and finally those commands are piped into s5cmd run.

Let's examine another usage instance, where we migrate files older than 30 days to cloud object storage:

find /mnt/joshua/nachos/ -type f -mtime +30 | xargs -I{} echo "mv {} s3://joshuarobinson/backup/{}" \
  | s5cmd run

It is worth mentioning that the run command should not be considered a silver bullet for all operations. For example, assume we want to remove the following objects:

s3://bucket/prefix/2020/03/object1.gz
s3://bucket/prefix/2020/04/object1.gz
...
s3://bucket/prefix/2020/09/object77.gz

Rather than executing

rm s3://bucket/prefix/2020/03/object1.gz
rm s3://bucket/prefix/2020/04/object1.gz
...
rm s3://bucket/prefix/2020/09/object77.gz

with the run command, it is better to just use

rm s3://bucket/prefix/2020/0*/object*.gz

The latter sends a single delete request per thousand objects, whereas the former sends a separate delete request for each command given to run. Thus, there can be a significant runtime difference between the two approaches.

LICENSE

MIT. See LICENSE.

Comments
  • Unexpected subdirectory structure when running cp

    I'm running:

    AWS_REGION=my-region /home/ubuntu/go/bin/s5cmd cp -u -s --parents s3://my-bucket/my-subdirectory/.local/* /home/ubuntu/.local/

    I'm expecting the contents of .local inside my subdirectory to be copied into /home/ubuntu/.local/; instead, they are getting copied to /home/ubuntu/.local/my-subdirectory/.local.

    Is this expected behaviour? As per the command option documentation, the directory structure is created from the first wildcard onwards, and I recall it working like that in previous versions of s5cmd.

    Please advise

  • Add "sync" command

    It would be nice to have a smart sync command like s3cmd sync or aws s3 sync, which would upload the files that changed and remove the deleted ones.

  • sync command not found

    Is sync currently disabled? I see the commit to main but I'm not having any luck...

    ❯ s5cmd version
    v1.4.0-d7a0dda
    ❯ s5cmd sync
    ERROR "sync": command not found
    
  • Backblaze b2 doesn't work

    I attempted to use the ls command and got this error:

    ERROR "ls ": InvalidAccessKeyId: The key '[redacted]' is not valid status code: 403, request id: [redacted], host id: [redacted]
    

    But I've used other CLI applications, including backblaze-b2, goofys, and s3fs, and didn't get an InvalidAccessKeyId error.

  • Support AWS SSO profiles

    I set up my environment variables and check that the session is valid using the AWS CLI.

    $ export AWS_PROFILE=sandbox-logging
    $ export AWS_DEFAULT_REGION=eu-west-1
    $ aws sts get-caller-identity
    {
        "UserId": "AROAXXXXXXXXXXXXXXXXX:iain",
        "Account": "111111111111",
        "Arn": "arn:aws:sts::111111111111:assumed-role/AWSReservedSSO_AdministratorAccess_aaaaaaaaaaaaaaaa/iain"
    }
    

    s5cmd seems to ignore my environment variables and instead tries to query the EC2 metadata service.

    $ s5cmd --log debug ls
    DEBUG retryable error: RequestError: send request failed
    caused by: Put "http://169.254.169.254/latest/api/token": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    DEBUG retryable error: RequestError: send request failed
    caused by: Put "http://169.254.169.254/latest/api/token": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
    DEBUG retryable error: RequestError: send request failed
    caused by: Put "http://169.254.169.254/latest/api/token": dial tcp 169.254.169.254:80: connect: no route to host
    

    I was expecting output like this:

    $ aws s3 ls
    2021-11-23 15:29:46 aws-sam-cli-managed-default-samclisourcebucket-xxxxxxxxxxxx
    2021-11-16 17:47:47 cf-templates-xxxxxxxxxxxxx-eu-west-1
    2021-11-30 20:11:58 org-trail-xxxxxxxx
    2021-12-01 12:36:42 org-trail-yyyyyyyy
    ...
    

    I'm using an SSO profile. Does that matter?

    [profile sandbox-logging]
    sso_start_url = https://d-1111111111.awsapps.com/start
    sso_region = eu-west-1
    sso_account_id = 111111111111
    sso_role_name = AdministratorAccess
    region = eu-west-1
    
  • Improved URL Parsing

    • Use shellquote to split lines in commands.txt files so filenames with spaces can be used
    • Escape regex characters in prefixes when initializing ObjectURLs. These may be a part of valid S3 prefixes.
  • Add --force-path-style flag to use path-style addressing.

    Many/most non-AWS object stores only support path-style addressing, but recent changes switched to virtual-host addressing, which broke access to those object stores. This adds a flag to switch back to path-style addressing.

    For more info, a virtual-host address looks like "http://bucketname.endpoint", resulting in a DNS lookup. Path-style addressing looks like "http://endpoint/bucketname" instead. The benefit of path-style addressing is simplicity; you do not need to interact with DNS servers.

  • Workers get stuck after processing a single command when processing a list of commands

    I have a file "commands.txt" with a list of commands like this:

    cp -n s3://bucket1/file1 s3://bucket2/file1
    cp -n s3://bucket1/file2 s3://bucket2/file2
    cp -n s3://bucket1/file3 s3://bucket2/file3
    ...

    When I call s5cmd like

    s5cmd -numworkers 2 -f commands.txt

    I see output like

    2020/02/17 23:24:47 # Using 2 workers
    2020/02/17 23:24:47 +OK "cp s3://bucket1/file1 s3://bucket2/file1"
    2020/02/17 23:24:47 +OK "cp s3://bucket1/file2 s3://bucket2/file2"

    and then it gets stuck, until I hit Ctrl-C and see the following output

    2020/02/17 23:23:31 # Got signal, cleaning up...
    2020/02/17 23:23:31 # Exiting with code 0
    2020/02/17 23:23:31 # Stats: S3 3 0 ops/sec
    2020/02/17 23:23:31 # Stats: Total 3 0 ops/sec 1m19.084946532s

    The first two files are actually copied correctly; the rest of the commands from the file are not executed. The same happens when taking the commands from standard input.

    I just installed the latest version of s5cmd - v0.7.0.

    Deleting all the objects from the second bucket works fine with a command like:

    s5cmd rm s3://bucket2/*

    Any suggestions for workaround?

    Thanks.

  • Warning: Calling bottle :unneeded is deprecated! There is no replacement.

    Just got a warning from brew outdated:

    Warning: Calling bottle :unneeded is deprecated! There is no replacement.
    Please report this issue to the peak/s5cmd tap (not Homebrew/brew or Homebrew/core):
      /home/linuxbrew/.linuxbrew/Homebrew/Library/Taps/peak/homebrew-s5cmd/Formula/s5cmd.rb:9
    
  • Got "read: connection reset by peer" error when using version 1.0.0+

    Tested on OpenStack Swift using s5cmd version 1.0.0 and 1.2.1 with the following command:

    s5cmd --endpoint-url http://objstor.testserver.com --no-verify-ssl  cp -n -s -u  aaaatest2 s3://aaaatest01/aaaajfg006/
    

    And got errors like this:

    ERROR "cp aaaatest2/2ms/2m_file_1 s3://aaaatest01/aaaajfg006/aaaatest2/2ms/2m_file_1": RequestError: send request failed caused by: Head "http://objstor.testserver.com/aaaatest01/aaaajfg006/aaaatest2/2ms/2m_file_1": read tcp xxx.xxx.xxx.xxx:38924->xxx.xxx.xxx.xxx:80: read: connection reset by peer
    

    However, it's all good using s5cmd v0.7.0 with the same command. No error returned at all.

    I also found an issue in the aws-sdk-go repo (see below) complaining about the same error.

    https://github.com/aws/aws-sdk-go/issues/3027

    Is it possible that the new version of aws-sdk-go causes the problem?

  • Preserve the full path with cp

    Hi

    I'm wondering if cp can preserve the full path when copying data from local to S3. I have recently come across the use case below, and s5cmd flattens it:

    s5cmd cp -n -s -u human/set1/hg_1.fastq.gz s3://genome/
    s5cmd cp -n -s -u human/set2/hg_1.fastq.gz s3://genome/
    s5cmd cp -n -s -u human/set3/hg_1.fastq.gz s3://genome/
    s5cmd cp -n -s -u human/set4/hg_1.fastq.gz s3://genome/
    

    At the end, I see only one hg_1.fastq.gz in s3://genome/.

    The following does not work for me either:

    s5cmd cp -n -s -u human/*/hg_1.fastq.gz s3://genome/
    s5cmd cp -n -s -u human/*/*.fastq.gz s3://genome/
    ERROR "cp human/set1/hg_1.fastq.gz human/set2/hg_1.fastq.gz human/set3/hg_1.fastq.gz human/set4/hg_1.fastq.gz s3://genome/": expected source and destination arguments
    

    Can we have --parents behaviour by default internally in cp in this case as well? I remember it has been done for S3 -> local, but not local -> S3.

    Correct me if I am wrong. Otherwise, how do we do the above with existing commands/options?

  • Support `--raw` flag with `cat`

    PR #315 added --raw as a flag for the cp and rm operations. It would be nice to have it for cat as well. Currently, we have to use the AWS CLI to cat a file on S3 with "*" in the object key; keen on being able to use s5cmd for everything :)

  • Exceeding file limit when copying from s3 to local

    When calling s5cmd for S3 -> local copies, I'm getting the warning below and eventually an error.

    increase the open file limit or try to decrease the number of workers...

    I'm trying to copy 900k files from S3 to a local directory with the default number of workers (256). As far as I'm aware, the limit is at least a few thousand on my machine. Could you please shed more light on how to avoid this issue and why it happens? I thought it should open a maximum of 256 file descriptors at the same time.

    Copying the same files from local to S3 works fine.

  • Support setting arbitrary metadata

    I've got a use-case where I'm uploading a single file to S3, and I'd like to set additional custom metadata on the object. In this case, it's things like the checksum of the object's content and the revision in git that produced the artifact.

    Would it be acceptable to have additional flags on s5cmd cp to assign arbitrary metadata during object upload?

    References:

    • https://github.com/peak/s5cmd/issues/264
  • --dry-run and output

    For https://github.com/peak/s5cmd/releases/download/v2.0.0/s5cmd_2.0.0_Linux-64bit.tar.gz:

    (base) [az@fedora bin]$ s5cmd version
    v2.0.0-35bb2fa
    

    If successful, --dry-run produces no visible output, which may be misleading.

    Maybe update the documentation to suggest using dry-run together with "stats", or actually make "dry-run" run with "stats" internally:

    s5cmd --dry-run sync s3://XXXX  s3://ZZZZZ/BBBB/
    (base) [az@fedora bin]$ echo $?
    0
    (base) [az@fedora bin]$ 
    (base) [az@fedora bin]$ s5cmd --dry-run --stat sync s3://XXXX  s3://ZZZZZ/BBBB/
    
    Operation	Total	Error	Success	
    sync		1	0	1	
    
    
  • --profile is missing

    Hello, for https://github.com/peak/s5cmd/releases/download/v2.0.0/s5cmd_2.0.0_Linux-64bit.tar.gz:

    (base) [az@fedora bin]$ s5cmd version
    v2.0.0-35bb2fa
    (base) [az@fedora bin]$ env | grep AWS
    (base) [az@fedora bin]$
    (base) [az@fedora bin]$ s5cmd ls -profile duke 
    Incorrect Usage: flag provided but not defined: -profile
    
    Name:
      ls - list buckets and objects
    
    Usage:
      ls [options] argument
    ....
    
    (base) [az@fedora bin]$ s5cmd -profile duke ls
    Incorrect Usage: flag provided but not defined: -profile
    See 's5cmd --help' for usage
    
    
    

    whereas with the AWS CLI:

    (base) [az@fedora bin]$ aws --profile duke s3 ls
    2022-04-08 10:52:38 cdk-XXXXXX-us-east-1
    

    Please let me know if I'm overlooking something.

    P.S. Just to be clear, this version does pick up the AWS_PROFILE env var.

  • Introduce a flag to preserve timestamps of files

    This PR is in response to issue #532

    We are implementing a flag that, when activated, will record and store ctime, mtime, and atime in S3 as metadata on the object. When pulling files down, if that metadata exists, we will set it on the files as well.

    Linux and Darwin don't appear to allow setting ctime; however, atime and mtime are preserved. Windows does allow us to set ctime, mtime, and atime, though they are referred to as CreationTime, LastAccessTime, and LastWriteTime.

    Please let me know if you have feedback on this PR. Thanks!
