Goofys is a high-performance, POSIX-ish Amazon S3 file system written in Go

Overview

Goofys allows you to mount an S3 bucket as a filey system.

It's a Filey System instead of a File System because goofys strives for performance first and POSIX second. In particular, operations that are difficult to support on S3 or that would translate into more than one round trip either fail (random writes) or are faked (no per-file permissions). Goofys does not have an on-disk data cache (check out catfs), and its consistency model is close-to-open.

Installation

  • On macOS, install via Homebrew:

$ brew cask install osxfuse
$ brew install goofys

  • Or build from source with Go 1.10 or later:

$ export GOPATH=$HOME/work
$ go get github.com/kahing/goofys
$ go install github.com/kahing/goofys

Usage

$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKID1234567890
aws_secret_access_key = MY-SECRET-KEY
$ $GOPATH/bin/goofys <bucket> <mountpoint>
$ $GOPATH/bin/goofys <bucket:prefix> <mountpoint> # if you only want to mount objects under a prefix

Users can also configure credentials via the AWS CLI or the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

To mount an S3 bucket on startup, make sure the credentials are configured for root, and add this to /etc/fstab:

goofys#bucket   /mnt/mountpoint        fuse     _netdev,allow_other,--file-mode=0666,--dir-mode=0777    0       0

See also: Instructions for Azure Blob Storage, Azure Data Lake Gen1, and Azure Data Lake Gen2.

Got more questions? Check out questions other people asked.

Benchmark

Using --stat-cache-ttl 1s --type-cache-ttl 1s for goofys and -ostat_cache_expire=1 for s3fs to simulate cold runs. Details of the benchmark can be found in bench.sh, and the raw data is available as well. The test was run on an EC2 m5.4xlarge in us-west-2a connected to a bucket in us-west-2. Units are seconds.

Benchmark result

To run the benchmark, configure EC2's instance role to be able to write to $TESTBUCKET, and then do:

$ sudo docker run -e BUCKET=$TESTBUCKET -e CACHE=false --rm --privileged --net=host -v /tmp/cache:/tmp/cache kahing/goofys-bench
# result will be written to $TESTBUCKET

See also: cached benchmark result and result on Azure.

License

Copyright (C) 2015 - 2019 Ka-Hing Cheung

Licensed under the Apache License, Version 2.0

Current Status

goofys has been tested under Linux and macOS.

List of non-POSIX behaviors/limitations:

  • only sequential writes supported
  • does not store file mode/owner/group
    • use --(dir|file)-mode or --(uid|gid) options
  • does not support symlink or hardlink
  • ctime and atime are always the same as mtime
  • cannot rename directories with more than 1000 children
  • unlink returns success even if the file is not present
  • fsync is ignored, files are only flushed on close

In addition to the items above, the following are supportable but not yet implemented:

  • creating files larger than 1TB

Compatibility with non-AWS S3

goofys has been tested with the following non-AWS S3 providers:

  • Amplidata / WD ActiveScale
  • Ceph (ex: Digital Ocean Spaces, DreamObjects, gridscale)
  • EdgeFS
  • EMC Atmos
  • Google Cloud Storage
  • Minio (limited)
  • OpenStack Swift
  • S3Proxy
  • Scaleway
  • Wasabi

Additionally, goofys also works with the following non-S3 object stores:

  • Azure Blob Storage
  • Azure Data Lake Gen1
  • Azure Data Lake Gen2

References

Owner
Ka-Hing Cheung
All things systems, storage, and free software. Previously at @riverbed, @gridstore, @bouncestorage, @etleap, and @cloudflare.
Comments
  • Goofys consumes all memory with extended concurrent transfers

    I'm using goofys to expose test data on s3 to an ec2 instance.

Our load test script runs continuous jobs with the S3 transfer rate being the limiting factor. There are 5 concurrent transfers going at a time. Data file size is often 100-400 MB.

    One test run completed successfully, then a subsequent one failed as the s3 data transfers quit working.

From syslog (trimmed for brevity):

Mar 7 09:52:12 ip-10-33-48-7 kernel: [229416.289878] slurmctld invoked oom-killer: gfp_mask=0x3000d0, order=1, oom_score_adj=0
Mar 7 09:52:12 ip-10-33-48-7 kernel: [229416.290013] [ pid ] uid tgid total_vm rss nr_ptes swapents oom_score_adj name
Mar 7 09:52:12 ip-10-33-48-7 kernel: [229416.290064] [11362] 0 11362 3653986 3494764 6877 0 0 goofys

Mar 7 09:52:12 ip-10-33-48-7 kernel: [229416.290106] Out of memory: Kill process 11362 (goofys) score 826 or sacrifice child
Mar 7 09:52:12 ip-10-33-48-7 kernel: [229416.295024] Killed process 11362 (goofys) total-vm:14615944kB, anon-rss:13979056kB, file-rss:0kB

    Is there any log/debug flag I can set to gather data from goofys?

    Full log here: https://gist.github.com/6b9ee07c1b8c0b5ebeca

    The current status shows that it's still mounted, but it's not working:

root@ip-10-33-48-7:/var/log# ls /content
ls: cannot access /content: Transport endpoint is not connected
root@ip-10-33-48-7:/var/log# mount | grep content
s3fs on /content-s3fs type fuse.s3fs (rw,nosuid,nodev,allow_other)
v4test.demo.nextissue.com on /content-goofys type fuse (rw,nosuid,nodev,default_permissions,allow_other)

    Remounting the bucket resolved the hangup issue. From logs after umount/mount:

Mar 7 22:50:39 ip-10-33-48-7 /usr/local/bin/goofys[1467]: s3.INFO Switching from region 'us-west-2' to 'us-east-1'
Mar 7 22:50:39 ip-10-33-48-7 /usr/local/bin/goofys[1467]: main.INFO File system has been successfully mounted.

  • Mount from fstab runs in foreground

Used goofys#bucketname /mnt/mountpoint fuse allow_other,--file-mode=0777,--dir-mode=0777,--uid="33",--gid="33",--storage-class="REDUCED_REDUNDANCY" in /etc/fstab.

    sudo mount /mnt/mountpoint mounted the volume with the following messages left open in foreground:

2015/11/07 21:34:10.777609 s3.INFO Switching from region 'us-west-2' to 'us-east-1'
2015/11/07 21:34:10.816434 main.INFO File system has been successfully mounted.

    CTRL-C unmounted the volume.

  • disk cache

You write that goofys does not have a disk cache. Is that a design decision, or has it just not been implemented yet? Would you possibly accept a patch that implements it?

I'm asking because we are currently using s3fs, but we have run into stability problems and would perhaps use an alternative. Some kind of cache is an absolute requirement, though.

    Have you thought about goofys => caching http proxy => s3 yet as an alternative to a disk cache? Has this been tried by anybody, yet?

  • About get object with Range

Hello, we are using goofys with a very big file (~20 GB). We mmap() this file and then do a lot of memcpy of blocks < 128K. We are observing a lot of big range requests on the S3 bucket, ~5 MB compared to the 128K maximum. Can you explain how GET object with HTTP Range is done with goofys?

    Regards, Nicolas Prochazka

  • index out of range panic.

I have some long-running processes that are (ab)using goofys with a lot of seeks followed by reads. I ran with both debug flags, and at the end of megabytes of output is the traceback below. This is with v0.0.12. Let me know if you need the full output. I'm pulling from CloudWatch Logs so it's easier to get the tail.

    2017/06/19 18:28:58.654265 fuse.DEBUG Op 0x00012145 connection.go:479] -> OK ()
    2017/06/19 18:28:58.654184 fuse.DEBUG < ReadFile 78 sarcoma/764-SS-767/764-SS-767.bam [0 <nil>]
    2017/06/19 18:28:58.654830 fuse.DEBUG ReadFile 78 sarcoma/764-SS-767/764-SS-767.bam [4690632704 65536]
    2017/06/19 18:28:58.654848 fuse.DEBUG out of order read 78 sarcoma/764-SS-767/764-SS-767.bam [4690632704 4690890752]
panic: runtime error: index out of range

goroutine 133887 [running]:
github.com/kahing/goofys/internal.(*MBuf).WriteFrom(0xc43ec47000, 0x7f0c7baab8f8, 0xc43c2ca980, 0x7f0c7baab8f8, 0xc43c2ca980, 0x1)
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:312 +0xff
github.com/kahing/goofys/internal.(*Buffer).readLoop(0xc43ec47040, 0xc48c2c0e40)
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:334 +0x99
github.com/kahing/goofys/internal.Buffer.Init.func1(0xc43ec47040, 0xc48c2c0e40)
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:311 +0x35
created by github.com/kahing/goofys/internal.Buffer.Init
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:264 +0x1ce

goroutine 133888 [running]:
github.com/kahing/goofys/internal.(*MBuf).WriteFrom(0xc43ec470c0, 0x7f0c7baab8f8, 0xc42679fce0, 0x7f0c7baab8f8, 0xc42679fce0, 0x1)
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:312 +0xff
github.com/kahing/goofys/internal.(*Buffer).readLoop(0xc43ec47100, 0xc48c2c0f00)
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:334 +0x99
github.com/kahing/goofys/internal.Buffer.Init.func1(0xc43ec47100, 0xc48c2c0f00)
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:311 +0x35
created by github.com/kahing/goofys/internal.Buffer.Init
	/home/khc/Code/go/src/github.com/kahing/goofys/internal/buffer_pool.go:264 +0x1ce
    
  • Is this a bug in Goofys or FUSE?

    I'm having a problem with some of my processes that are using S3 files on a goofys mount.

    I have also tried S3FS and experience the exact same problem. I'm not yet sure if this is a Goofys/S3FS problem or a FUSE problem. I'm hoping you can help.

We have an S3 bucket with many large files (thousands of files, 200-800 MB each).

    We have a multithreaded process that:

    • Untars one of the tgzs to /tmp on the local filesystem.
    • Parses the text files inside.

    So our use case for goofys is pure read.

    Several times a day one of these processes will get stuck in the untar and hang forever.

ps shows a process in state "D":

[xavierpayne@devbox ~]$ ps aux | grep xlog_create
dashv 23394 0.0 0.0 12136 956 ? D 01:29 0:10 /opt/dashv/bin/processfile /mnt/s3/20170221-000203.tar.gz
[xavierpayne@devbox ~]$

State D means that the process is uninterruptible, so "kill -9" will not work (in fact, no kill command will work).

    I tried to cat the status:

[xavierpayne@devbox ~]$ cat /proc/23394/status
Name: processfile
State: D (disk sleep)
Tgid: 23394
Ngid: 0
Pid: 23394
PPid: 23393
TracerPid: 15471
Uid: 2031 2031 2031 2031
Gid: 502 502 502 502
FDSize: 64
Groups: 502
NStgid: 23394
NSpid: 23394
NSpgid: 22855
NSsid: 22855
VmPeak: 12136 kB
VmSize: 12136 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 956 kB
VmRSS: 956 kB
VmData: 224 kB
VmStk: 136 kB
VmExe: 12 kB
VmLib: 3512 kB
VmPTE: 48 kB
VmPMD: 12 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
Threads: 1
SigQ: 1/64117
SigPnd: 0000000000040100
ShdPnd: 0000000000000100
SigBlk: 0000000000000000
SigIgn: 0000000001001000
SigCgt: 0000000180004007
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: 7fff
Cpus_allowed_list: 0-14
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 3386
nonvoluntary_ctxt_switches: 86
[xavierpayne@devbox ~]$

The important parts are at the very top and at the bottom. State D is "disk sleep", so I have confirmed it's I/O (FUSE). voluntary_ctxt_switches and nonvoluntary_ctxt_switches crudely indicate how many times the process has been interrupted. I ran a couple of checks a few minutes apart and the number for each never changed, which verifies the process is stuck and not being interrupted. Usually uninterruptible code means it's in a kernel system call. So I confirmed:

[xavierpayne@devbox ~]$ sudo cat /proc/23394/syscall
0 0x6 0x7fffd74cca30 0x4000 0x0 0x1000 0x7fffd74cca30 0x7fffd74bc968 0x7f71f4193bb0

    The first number (0) is the number of the system call. Looking it up here: http://blog.rchapman.org/posts/Linux_System_Call_Table_for_x86_64/ I see that it's "sys_read".

    Not helpful by itself. I need to know what's calling it. So I took a look at the stack for the process:

[xavierpayne@devbox ~]$ sudo cat /proc/23394/stack
[] request_wait_answer+0xf8/0x250 [fuse]
[] __fuse_request_send+0x67/0x90 [fuse]
[] fuse_request_send+0x27/0x30 [fuse]
[] fuse_send_readpages.isra.30+0xd2/0x120 [fuse]
[] fuse_readpages+0xdf/0x100 [fuse]
[] __do_page_cache_readahead+0x174/0x200
[] ondemand_readahead+0x135/0x260
[] page_cache_async_readahead+0x6c/0x70
[] generic_file_read_iter+0x378/0x590
[] fuse_file_read_iter+0x4c/0x70 [fuse]
[] __vfs_read+0xa7/0xd0
[] vfs_read+0x7f/0x130
[] SyS_read+0x46/0xa0
[] entry_SYSCALL_64_fastpath+0x12/0x71
[] 0xffffffffffffffff
[xavierpayne@devbox ~]$

It looks like a FUSE call to request_wait_answer is never returning when trying to populate the read-ahead cache. Googling it, I see this can happen when using a filesystem that relies on networking.

Both Goofys and S3FS share this exact same problem, so either they have the same bug (shouldn't this scenario be handled as a page fault?) or it's a problem with FUSE.

    I'll openly admit I'm in a bit over my head here. Hoping someone with more knowledge of this can chime in and help lead me to a resolution.

    Otherwise I'm dead in the water.

  • Why is the speed so slow?

I use goofys with the following commands:

1. Download the executable file (goofys), then chmod 775 goofys

2. cat ~/.aws/credentials
   [default]
   aws_access_key_id = ***********
   aws_secret_access_key = ********

    3. ./goofys --endpoint ******** --region *****

This mounted the S3 bucket successfully, but the upload speed is only 7 MB/s. I do not know why. Is there any package I need to install?

Last time I used goofys, its speed was 300 MB/s; that cloud host's memory was 32 GB.

  • bucket mount prefix doesn't work if prefixed with slash

I am using a CMS to upload, copy, and add files into the mounted directory, say /asset/files/. s3fs has no issue doing these jobs, but goofys can create files in S3 only on upload. When I try to copy a file inside /asset/files/, it creates nothing in the new destination. I used --debug_fuse to check the issue and found that when I try to copy a file, no action of creating a file is recorded in the debug log. What else could I do to tackle this? Thanks.

  • Flush() doesn't block until it's done, and other IO ops can execute in between

    Hi Team,

Looks like I have many errors and am not able to figure this out further.

[root@img01 bin]# grep "no such file or directory" /var/log/messages | wc -l
12844

Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df703 connection.go:476] -> Error: "invalid argument"
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df704 connection.go:395] <- GetInodeAttributes (inode 72157)
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df704 connection.go:474] -> OK
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df705 connection.go:395] <- OpenFile (inode 72157)
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df705 connection.go:474] -> OK
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df706 connection.go:395] <- ReadFile (inode 72157, handle 92893, offset 0, 4096 bytes)
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df706 connection.go:476] -> Error: "no such file or directory"
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df707 connection.go:395] <- FlushFile (inode 72157)
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df707 connection.go:474] -> OK
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df708 connection.go:395] <- ReadFile (inode 72157, handle 92893, offset 0, 4096 bytes)
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df708 connection.go:476] -> Error: "no such file or directory"
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df709 connection.go:395] <- ReleaseFileHandle
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df709 connection.go:474] -> OK
Mar 15 10:34:33 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df70a connection.go:395] <- LookUpInode (parent 936, name "6713_sku_1280x720Padded_9caf092d7a7dd2466a5c8c215d006561.jpg")
Mar 15 10:34:34 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df70a connection.go:476] -> Error: "no such file or directory"
Mar 15 10:34:34 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df70b connection.go:395] <- FlushFile (inode 72157)
Mar 15 10:34:34 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df70b connection.go:476] -> Error: "invalid argument"
Mar 15 10:34:34 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df70c connection.go:395] <- GetInodeAttributes (inode 72157)
Mar 15 10:34:34 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df70c connection.go:474] -> OK
Mar 15 10:34:34 img01 rsyslogd-2177: imuxsock begins to drop messages from pid 26093 due to rate-limiting
Mar 15 10:34:40 img01 rsyslogd-2177: imuxsock lost 322 messages from pid 26093 due to rate-limiting
Mar 15 10:34:40 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df750 connection.go:395] <- LookUpInode (parent 16, name "2")
Mar 15 10:34:40 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df750 connection.go:474] -> OK
Mar 15 10:34:40 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df751 connection.go:395] <- LookUpInode (parent 275, name "3")
Mar 15 10:34:40 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df751 connection.go:474] -> OK
Mar 15 10:34:40 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df752 connection.go:395] <- LookUpInode (parent 278, name "c")
Mar 15 10:34:41 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df752 connection.go:474] -> OK
Mar 15 10:34:41 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df753 connection.go:395] <- LookUpInode (parent 43060, name "6713_sku_ATOM_SIZE_MACRO_23c6d55b8d500f3bac05c71835198cea.jpg")
Mar 15 10:34:41 img01 /bin/goofys[26093]: fuse.DEBUG Op 0x003df753 connection.go:476] -> Error: "no such file or directory"

[root@img01 bin]# ./goofys --version
goofys version 0.0.5

    /etc/fstab: goofys#size-img01 /img01 fuse _netdev,allow_other,--region='ap-southeast-1',--file-mode=0666,--debug_fuse,--debug_s3,--storage-class="REDUCED_REDUNDANCY" 0 0

Please help me see if there is anything I need to change.

    Thanks, Vadiraj

  • fstab ignores allow_other

    Here is my fstab entry:

`goofys#cpirepsarchive   /mnt/cpirepsarchive.s3 fuse    allow_other,--file-mode=0777,--dir-mode=0777,--uid="33",--gid="33",--storage-class="REDUCED_REDUNDANCY"    0       0`
    

    Here is the strace:

    `ubuntu@ip-10-0-0-27:~$ sudo strace -f mount /mnt/mountpoint >& xout
    ^C
    ubuntu@ip-10-0-0-27:~$ grep exec xout
    execve("/bin/mount", ["mount", "/mnt/mountpoint"], [/* 15 vars */]) = 0
    [pid  6240] execve("/sbin/mount.fuse", ["/sbin/mount.fuse", "goofys#bucketname", "/mnt/mountpoint", "-o", "rw,allow_other,--file-mode=0777,"...], [/* 11 vars */]) = 0
    [pid  6240] execve("/bin/sh", ["/bin/sh", "-c", "'goofys' 'bucketname' '/mnt/"...], [/* 12 vars */]) = 0
    [pid  6241] execve("/usr/bin/goofys", ["goofys", "bucketname", "/mnt/mountpoint", "-o", "rw,allow_other,--file-mode=0777,"...], [/* 13 vars */]) = 0
    [pid  6244] execve("/home/ubuntu/.gvm/pkgsets/go1.5.1/global/bin/goofys", ["/home/ubuntu/.gvm/pkgsets/go1.5."..., "bucketname", "/mnt/mountpoint", "-o", "rw,allow_other,--file-mode=0777,"...], [/* 14 vars */]) = 0
    [pid  6250] execve("/bin/fusermount", ["fusermount", "-o", "default_permissions,fsname=cpire"..., "--", "/mnt/mountpoint"], [/* 16 vars */] <unfinished ...>
    [pid  6250] <... execve resumed> )      = 0
    [pid  6253] execve("/bin/mount", ["/bin/mount", "--no-canonicalize", "-i", "-f", "-t", "fuse", "-o", "rw,default_permissions,allow_oth"..., "bucketname", "/mnt/mountpoint"], [/* 0 vars */]) = 0`
    
  • Add support to Assume an IAM Role with an External ID

    Hi All,

    It would be great if goofys could assume a role to get access to a bucket owned by a third party: http://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-user_externalid.html

    The Role ARN to be assumed and an External ID should be provided to the tool to mount the S3 bucket. To do this the STS API must be used to get the temporary security credentials. This is an example with Go:

    https://docs.aws.amazon.com/sdk-for-go/api/service/sts/#example_STS_AssumeRole_shared00

  • Syntax Error

Hi, on a new Debian 11 virtual machine, I installed Goofys to connect to object storage in Contabo. I already connected this storage to another server, and it works perfectly. I have set up everything. Here is the error: /usr/local/bin/goofys: 3: Syntax error: ")" unexpected

Any help? Thanks

  • Fix potential memory consumption and OOMs by fixing logic in `getCgroupAvailableMem`

os.Stat never returns an err that satisfies os.IsExist(err) == true; the only way to make sure a file exists using os.Stat is to check err == nil.

    if _, err := os.Stat("/file/that/exists"); os.IsExist(err) {
      // will never trigger!
    
      // why? because os.Stat runs normally if file exists.
      // it's expected behaviour for os.Stat, so it doesn't throw an error
    
      // os.IsExist() does not receive an error ( it's nil ), so it can't tell you
      // if the error message was "file not found"
    }
    

    from https://pkg.go.dev/os#File.Stat

    Stat returns the FileInfo structure describing file. If there is an error, it will be of type *PathError.
    

Also, I installed the Go modules from scratch, and an additional method on FusePanicLogger was required to satisfy fuseutil.FileSystem; I added the method.

  • Stop file read after passing the Unencrypted length

Is it possible to "stop" the file read after reading the "unencrypted length" of bytes, which will usually be less than the encrypted length presented by ls / stat?

There are many client-side AES-256 implementations. The S3-stored binary is rounded up to the block size, whereas the real content will usually be shorter. The info about unencrypted length is often stored in the metadata, e.g. in x-amz-unencrypted-content-length, and is available after headObject or getObject. The unencrypted length isn't available via standard listing methods, e.g. listObjectsV2.

I am experimenting with Goofys and modified internal/backend_s3.go:GetBlob() so it can decrypt the content; however, if I don't pad the last block (so it matches the "stat" size), GetBlob seems to be called again (until it returns enough data to satisfy the size advertised by ls).

I don't know the Goofys/FUSE and file system internals, and I am not sure who requests (during the file read process) file contents up to the last byte. Would it be possible to make Goofys/FUSE/the filesystem aware during execution (technically, after the first getObject response is received with metadata) that the file length is actually different from the one reported by ls, so we can return the file exactly as it is instead of trying padding tricks? Perhaps there is some EOF character I could return?

    Sorry for my ignorance, but file systems and Go are entirely new things to me from the dev perspective.

  • Failed to compile from source on either arm64 or x86

    Hi,

I am trying to compile goofys on Graviton2, as there is no arm64 package available. However, when I follow the steps to build from source, the following error message shows up:

    github.com/kahing/goofys/api

    api/api.go:124:56: cannot use common.FusePanicLogger{...} (type common.FusePanicLogger) as type fuseutil.FileSystem in argument to fuseutil.NewFileSystemServer: common.FusePanicLogger does not implement fuseutil.FileSystem (missing BatchForget method)

Then I tried an x86 instance, but it failed as well, with the same error message.

I have also tried different Go versions, including 1.17, 1.18, and 1.19, but no luck.

    Please help to verify this

    Thanks, Vincent
