GeeseFS is a high-performance, POSIX-ish S3 (Yandex, Amazon) file system written in Go

Overview

GeeseFS allows you to mount an S3 bucket as a file system.

FUSE file systems based on S3 typically have performance problems, especially with small files and metadata operations.

GeeseFS attempts to solve these problems by using aggressive parallelism and asynchrony.

POSIX Compatibility Matrix

                     GeeseFS  rclone  Goofys  S3FS  gcsfuse
Read after write        +       +       -      +      +
Partial writes          +       +       -      +      +
Truncate                +       -       -      +      +
chmod/chown             -       -       -      +      -
fsync                   +       -       -      +      +
Symlinks                +       -       -      +      +
xattr                   +       -       +      +      -
Directory renames       +       +       *      +      +
readdir & changes       +       +       -      +      +

* Goofys only allows renaming directories with no more than 1000 entries, and the limit is hardcoded

List of non-POSIX behaviors/limitations for GeeseFS:

  • does not store file mode/owner/group; use the --(dir|file)-mode or --(uid|gid) options (see the example after this list)
  • does not support hard links
  • does not support special files (block/character devices, named pipes)
  • does not support locking
  • ctime and atime are always the same as mtime
  • file modification time can't be set by the user (for example with cp --preserve or utimes(2))
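
For example, to present all files and directories as owned by a given user with fixed permissions (a sketch using the flags mentioned above; the uid/gid and mode values are placeholders, not recommendations):

$ geesefs --uid=1000 --gid=1000 --file-mode=0644 --dir-mode=0755 <bucket> <mountpoint>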

In addition to the items above:

  • the default file size limit is 1.03 TB, which comes from splitting each file into 1000 × 5 MB parts, then 1000 × 25 MB parts, then 8000 × 125 MB parts (1000 × 5 MB + 1000 × 25 MB + 8000 × 125 MB = 1,030,000 MB ≈ 1.03 TB). You can change the part sizes, but AWS's own per-object limit is 5 TB anyway.

Owner & group, modification times and special files could in fact be supported with Yandex S3, because it provides listings with metadata. Feel free to post an issue if you need this. :-)

Stability

GeeseFS is stable enough to pass most applicable xfstests, including the dirstress/fsstress stress tests (generic/007, generic/011, generic/013).
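
For reference, these tests come from the xfstests suite. Assuming a checkout of xfstests with its config pointing TEST_DIR at a GeeseFS mount (a hypothetical setup sketch, not an officially documented procedure), the named tests would be run as:

$ ./check generic/007 generic/011 generic/013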

Performance Features

                              GeeseFS  rclone  Goofys  S3FS  gcsfuse
Parallel readahead               +       -       +      +      -
Parallel multipart uploads       +       -       +      +      -
No readahead on random read      +       -       +      -      +
Server-side copy on append       +       -       -      *      +
Server-side copy on update       +       -       -      *      -
xattrs without extra RTT         +*      -       -      -      +
Fast recursive listings          +       -       *      -      +
Asynchronous write               +       +       -      -      -
Asynchronous delete              +       -       -      -      -
Asynchronous rename              +       -       -      -      -
Disk cache for reads             +       *       -      +      +
Disk cache for writes            +       *       -      +      -

* The recursive listing optimisation in Goofys is buggy and may skip files under certain conditions.

* S3FS uses server-side copy, but it still downloads the whole file to update it. And it's buggy too :-)

* rclone mount has a VFS cache, but it can only cache whole files. It's also buggy: it often hangs on writes.

* xattrs without an extra RTT only work with Yandex S3 (--list-type=ext-v1).
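
As an illustration of one of these techniques: a parallel multipart upload splits a file into parts and uploads them concurrently. The sketch below is not GeeseFS's internal code; it shows the same idea using the stock AWS SDK for Go (s3manager), whose uploader parallelises part uploads. The bucket and file names are placeholders.

    package main

    import (
    	"log"
    	"os"

    	"github.com/aws/aws-sdk-go/aws"
    	"github.com/aws/aws-sdk-go/aws/session"
    	"github.com/aws/aws-sdk-go/service/s3/s3manager"
    )

    func main() {
    	sess := session.Must(session.NewSession())

    	// Upload parts of 5 MB each with 8 concurrent workers,
    	// similar in spirit to GeeseFS's parallel multipart uploads.
    	uploader := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
    		u.PartSize = 5 * 1024 * 1024
    		u.Concurrency = 8
    	})

    	f, err := os.Open("big-file.bin") // placeholder file
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()

    	_, err = uploader.Upload(&s3manager.UploadInput{
    		Bucket: aws.String("my-bucket"), // placeholder bucket
    		Key:    aws.String("big-file.bin"),
    		Body:   f,
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }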

Installation

  • Pre-built binaries:
    • Linux amd64. You may also need to install fuse-utils first.
    • Mac amd64, arm64. You also need osxfuse/macfuse for GeeseFS to work.
  • Or build from source with Go 1.13 or later:
$ go get github.com/yandex-cloud/geesefs

Usage

$ cat ~/.aws/credentials
[default]
aws_access_key_id = AKID1234567890
aws_secret_access_key = MY-SECRET-KEY
$ $GOPATH/bin/geesefs <bucket> <mountpoint>
$ $GOPATH/bin/geesefs [--endpoint https://...] <bucket:prefix> <mountpoint> # if you only want to mount objects under a prefix

You can also supply credentials via the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.
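
For example, instead of the credentials file (same placeholder key values as above):

$ AWS_ACCESS_KEY_ID=AKID1234567890 AWS_SECRET_ACCESS_KEY=MY-SECRET-KEY geesefs <bucket> <mountpoint>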

To mount an S3 bucket on startup, make sure the credentials are configured for root, and add this to /etc/fstab:

bucket    /mnt/mountpoint    fuse.geesefs    _netdev,allow_other,--file-mode=0666,--dir-mode=0777    0    0

You can also use a different path to the credentials file by adding ,--shared-config=/path/to/credentials.
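
For example, the same fstab entry with a custom credentials file path (the path is a placeholder):

bucket    /mnt/mountpoint    fuse.geesefs    _netdev,allow_other,--file-mode=0666,--dir-mode=0777,--shared-config=/path/to/credentials    0    0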

See also: the instructions for Azure Blob Storage.

Benchmarks

See bench/README.md.

Configuration

There's a lot of tuning you can do. Consult geesefs -h to view the list of options.
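
For example, several options that show up in the user reports below can be combined to constrain resource usage (the values are illustrative, not recommendations):

$ geesefs --memory-limit=2050 --max-flushers=2 --max-parallel-parts=3 --cheap <bucket> <mountpoint>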

License

Licensed under the Apache License, Version 2.0

See LICENSE and AUTHORS

Compatibility with S3

GeeseFS works with:

  • Yandex Object Storage (default)
  • Amazon S3
  • Ceph (and also Ceph-based Digital Ocean Spaces, DreamObjects, gridscale, etc.)
  • Minio
  • OpenStack Swift
  • Azure Blob Storage (even though it's not S3)

It should also work with any other S3 that implements multipart uploads and multipart server-side copy (UploadPartCopy).
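
As a reference for what "multipart server-side copy" means here: with UploadPartCopy, existing object bytes are copied into a part of a new multipart upload entirely on the server, so an append only has to upload the new tail. The sketch below shows the call with aws-sdk-go; it is an illustration, not GeeseFS internals, and all names and the upload ID are placeholders.

    package main

    import (
    	"log"

    	"github.com/aws/aws-sdk-go/aws"
    	"github.com/aws/aws-sdk-go/aws/session"
    	"github.com/aws/aws-sdk-go/service/s3"
    )

    // appendCopyPart server-side-copies the first 100 MB of an existing
    // object into part 1 of a multipart upload, so an append only needs
    // to upload the new bytes as further parts.
    func appendCopyPart(svc *s3.S3, bucket, key, uploadID string) error {
    	_, err := svc.UploadPartCopy(&s3.UploadPartCopyInput{
    		Bucket:          aws.String(bucket),
    		Key:             aws.String(key),
    		UploadId:        aws.String(uploadID),
    		PartNumber:      aws.Int64(1),
    		CopySource:      aws.String(bucket + "/" + key),
    		CopySourceRange: aws.String("bytes=0-104857599"), // first 100 MB
    	})
    	return err
    }

    func main() {
    	svc := s3.New(session.Must(session.NewSession()))
    	// The upload ID would come from a prior CreateMultipartUpload call.
    	if err := appendCopyPart(svc, "my-bucket", "file.bin", "example-upload-id"); err != nil {
    		log.Fatal(err)
    	}
    }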

The following backends are inherited from Goofys code and still exist, but are broken:

  • Google Cloud Storage
  • Azure Data Lake Gen1
  • Azure Data Lake Gen2

Owner

Yandex.Cloud is a public cloud platform that offers scalable computing, secure storage for unlimited amounts of data, and machine learning tools.
Comments
  • Suspected memory leak

    Version 0.30.9

    A scenario that reproduces it well: once an hour, find walks the bucket (about 230,000 files) and deletes the files matching a condition. In this mode, over 5 days the geesefs process eats up 25 GB of memory and gets killed by the OOM killer. Options used:

    -o rw,allow_other,--file-mode=0666,--dir-mode=0777,--uid=yyy,--gid=xxx,--shared-config=/etc/passwd-geesefs,--endpoint=http://host:port,--http-timeout=10s,--retry-interval=5s,--list-type=2,dev,suid,--debug,--log-file=/tmp/log111.txt

    I found the ticket https://github.com/yandex-cloud/geesefs/issues/23 and read about the PPROF variable, but for some reason the port doesn't open; the debug log only contains "2022/04/28 13:42:11.549886 main.INFO File system has been successfully mounted." I looked in /proc/21217/environ, and the variable is there:

    xargs --null --max-args=1 echo < /proc/21217/environ | grep PPROF
    PPROF=6060

    The S3 is not Yandex's.

  • main.ERROR stacktrace from panic: runtime error: slice bounds out of range [:2] with capacity 0

    main.ERROR stacktrace from panic: runtime error: slice bounds out of range [:2] with capacity 0
    goroutine 277553 [running]:
    runtime/debug.Stack(0xc000615970, 0x10af760, 0xc017d0a000)
        /opt/hostedtoolcache/go/1.16.8/x64/src/runtime/debug/stack.go:24 +0x9f
    github.com/yandex-cloud/geesefs/api/common.LogPanic(0xc000615f38)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:32 +0x76
    panic(0x10af760, 0xc017d0a000)
        /opt/hostedtoolcache/go/1.16.8/x64/src/runtime/panic.go:965 +0x1b9
    github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0018fa600)
        /home/runner/work/geesefs/geesefs/internal/dir.go:840 +0x165
    github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0018fa300)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
    github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0018f9e00)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
    github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000b62d80)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
    github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc0013ec180)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
    github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc000ea9500)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
    github.com/yandex-cloud/geesefs/internal.(*Inode).removeAllChildrenUnlocked(0xc00070d680)
        /home/runner/work/geesefs/geesefs/internal/dir.go:828 +0xd3
    github.com/yandex-cloud/geesefs/internal.(*DirHandle).ReadDir(0xc012fd4020, 0x0, 0x0, 0xc001356060, 0x1, 0x1)
        /home/runner/work/geesefs/geesefs/internal/dir.go:572 +0x210
    github.com/yandex-cloud/geesefs/internal.(*Goofys).ReadDir(0xc000416000, 0x133b0b8, 0xc013d9a150, 0xc003696840, 0x0, 0xc016902d20)
        /home/runner/work/geesefs/geesefs/internal/goofys.go:1127 +0x265
    github.com/yandex-cloud/geesefs/api/common.FusePanicLogger.ReadDir(0x134b380, 0xc000416000, 0x133b0b8, 0xc013d9a150, 0xc003696840, 0x0, 0x0)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:101 +0x8c
    github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).handleOp(0xc0003bf3c0, 0xc00048e9c0, 0x133b0b8, 0xc013d9a150, 0xf4b320, 0xc003696840)
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:183 +0xc87
    created by github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).ServeOps
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:123 +0x1a5

    fuse.ERROR *fuseops.ReadDirOp error: input/output error

  • geesefs started to panic with R2 recently

    Hello, since yesterday I've been getting errors in syslog, and some programs get stuck.

    Oct 27 06:40:29 ip-172-31-87-196 /usr/bin/geesefs[1444]: main.ERROR stacktrace from panic: deref inode 1896 (rpm/stable/repodata) by 4 from 2
    goroutine 431264 [running]:
    runtime/debug.Stack(0xc0adce2bd8, 0xf7baa0, 0xc05aa5f4b0)
        /opt/hostedtoolcache/go/1.16.15/x64/src/runtime/debug/stack.go:24 +0x9f
    github.com/yandex-cloud/geesefs/api/common.LogPanic(0xc0adce2f10)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:32 +0x76
    panic(0xf7baa0, 0xc05aa5f4b0)
        /opt/hostedtoolcache/go/1.16.15/x64/src/runtime/panic.go:965 +0x1b9
    github.com/yandex-cloud/geesefs/internal.(*Inode).DeRef(0xc063d27680, 0x4, 0x768)
        /home/runner/work/geesefs/geesefs/internal/handles.go:361 +0x3a8
    github.com/yandex-cloud/geesefs/internal.(*Goofys).ForgetInode(0xc000408240, 0x133cad8, 0xc05aa63fb0, 0xc0597c1cb0, 0x0, 0x18)
        /home/runner/work/geesefs/geesefs/internal/goofys.go:1080 +0xd4
    github.com/yandex-cloud/geesefs/api/common.FusePanicLogger.ForgetInode(0x134cdc0, 0xc000408240, 0x133cad8, 0xc05aa63fb0, 0xc0597c1cb0, 0x0, 0x0)
        /home/runner/work/geesefs/geesefs/api/common/panic_logger.go:61 +0x89
    github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).handleOp(0xc0003b5260, 0xc0003589c0, 0x133cad8, 0xc05aa63fb0, 0xf4af20, 0xc05b8070e0)
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:160 +0xb58
    created by github.com/jacobsa/fuse/fuseutil.(*fileSystemServer).ServeOps
        /home/runner/go/pkg/mod/github.com/vitalif/[email protected]/fuseutil/file_system.go:123 +0x1a5
    Oct 27 06:40:29 ip-172-31-87-196 /usr/bin/geesefs[1444]: fuse.ERROR *fuseops.BatchForgetOp error: input/output error
    

    The command used to mount the bucket:

    /usr/bin/geesefs packages /home/ubuntu/r2 -o rw,user_id=1000,group_id=1000,--cheap,--file-mode=0666,--dir-mode=0777,--endpoint=https://****.r2.cloudflarestorage.com,--shared-config=/home/ubuntu/.r2_auth,--memory-limit=2050,--gc-interval=100,--max-flushers=2,--max-parallel-parts=3,--max-parallel-copy=2,dev,suid

  • fuse.ERROR writeMessage: invalid argument [80 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]

    Good afternoon.

    When trying to connect to Yandex Object Storage, I get this error:

    -----------------------------------------------------
    2021/09/30 16:19:52.769225 fuse.DEBUG Op 0x00000001        connection.go:411] <- init
    2021/09/30 16:19:52.769257 fuse.DEBUG Op 0x00000001        connection.go:500] -> OK ()
    2021/09/30 16:19:52.769278 fuse.ERROR writeMessage: invalid argument [80 0 0 0 0 0 0 0 1 0 0 0 0 0 0 0]
    2021/09/30 16:19:52.769294 main.INFO File system has been successfully mounted.
    2021/09/30 16:19:52.785066 s3.DEBUG DEBUG: Response s3/ListMultipartUploads Details:
    ---[ RESPONSE ]--------------------------------------
    

    In the end the folder does get mounted, but:

    ls -lha
    ls: cannot access share: Connection refused
    
    d??????????  ? ?    ?       ?            ? share
    

    Both with the default startup parameters:

    ./geesefs -f --debug_s3 --debug_fuse --debug share /mnt/share
    

    and with changes to various parameters, in particular --dir-mode and --file-mode, the result and the error stay the same.

    Meanwhile, running

    ./goofys -f --debug_s3 --debug_fuse share /mnt/share
    

    results in a correct mount:

    -----------------------------------------------------
    2021/09/30 16:34:03.086817 fuse.DEBUG Op 0x00000001        connection.go:408] <- init
    2021/09/30 16:34:03.086842 fuse.DEBUG Op 0x00000001        connection.go:491] -> OK ()
    2021/09/30 16:34:03.086871 main.INFO File system has been successfully mounted.
    2021/09/30 16:34:03.098758 s3.DEBUG DEBUG: Response s3/ListMultipartUploads Details:
    ---[ RESPONSE ]--------------------------------------
    
    drwxr-xr-x.  2 root root 4.0K Sep 30 16:34 share
    

    Could you point out the cause and a way to fix it? Thank you!

  • R2 cloudflare has some issue with directories listing

    Hello again.

    Directory listings in R2 are kind of strange. Here's a screenshot of how it looks in the web interface: "This object is unnamed".

    [screenshot]

    And here's how it looks at https://packages.clickhouse.com/rpm/lts/repodata/, rendered by a custom worker https://github.com/ClickHouse/clickhouse-website-worker/blob/e4fe5d64838609d551a2d50863f9090d39eecc88/src/r2.ts#L145

    [screenshot]

    geesefs behaves very strangely here:

    [screenshot]

    Here's the log file geesefs-debug.log.xz.gz

    Update: after some time the requested file is accessible again.

    [screenshot]

    Update 2: after moving/copying the directory back and forth, I see the files. But after remounting, they're not there again.

    [screenshot]

    geesefs-debug.log.xz.gz

  • Disk cache doesn't seem to work

    Filesystem mounted with command:

    /opt/geesefs/bin/geesefs --cache /large/geesefs-cache --dir-mode 0750 --file-mode 0640 --cache-file-mode 0640 --cache-to-disk-hits 1 --memory-limit 4000 --max-flushers 32 --max-parallel-parts 32 --part-sizes 25 -f my-bucket /mnt/my-bucket
    

    I can access files in /mnt/my-bucket and no errors are reported, but nothing is stored in /large/geesefs-cache, no matter how many times a file is accessed.

    Is it broken, or am I doing something wrong?

    $ /opt/geesefs/bin/geesefs --version
    geesefs version 0.30.8
    
  • Unable to run within container

    When trying to run geesefs inside a container, the error no such file or directory is returned.

    For example

    $ ls -la 
    total 29712
    drwxrwxr-x  2 ubuntu ubuntu     4096 Jun 18 08:00 .
    drwxr-xr-x 37 ubuntu ubuntu    20480 Jun 18 07:59 ..
    -rwxrwxr-x  1 root   root   30397828 May 24 10:59 geesefs-linux-amd64
    
    $ docker run --rm -w $PWD -v $PWD:$PWD -it --privileged busybox ls -la 
    total 29696
    drwxrwxr-x    2 1000     1000          4096 Jun 18 08:00 .
    drwxr-xr-x    3 root     root          4096 Jun 18 08:06 ..
    -rwxrwxr-x    1 root     root      30397828 May 24 10:59 geesefs-linux-amd64
    
    $ docker run --rm -w $PWD -v $PWD:$PWD -it --privileged busybox ./geesefs-linux-amd64
    exec ./geesefs-linux-amd64: no such file or directory
    

    I suspect geesefs is trying to resolve some dynamically linked library and it's failing with that generic error message.
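
    One way to check this hypothesis with standard tools (a hypothetical session; busybox images ship without glibc, so a dynamically linked binary fails with exactly this generic message):

    $ file geesefs-linux-amd64   # reports whether the binary is statically or dynamically linked
    $ ldd geesefs-linux-amd64    # lists the shared libraries a dynamically linked binary needs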

  • Deadlock when renaming non empty folder

    Hi, we're experiencing a hang on a geesefs-mounted filesystem when renaming a folder (300 files, 125 GB). It was discussed in a support thread at Yandex Cloud (ticket number 166384642671346), but since it also relates to geesefs itself, we came here.

    The behavior: after entering mv match/Editing/MIX_Match_DEL match/Editing/MIX_Match_DELME it hangs; ls match/ succeeds, but ls match/Editing hangs. Other folders are renamed successfully, so only a single folder has been found to be affected by this issue (storage.yandexcloud.net/match/Editing/MIX_Match_DEL).

    > geesefs --version
    geesefs version 0.31.8
    

    The stack trace you may need, taken while the filesystem was affected by this issue: grs.txt.

    The geesefs log itself contains production data that should not be published here. Is there a chance you could access the corresponding support thread (166384642671346) at Yandex Cloud?

  • Wasabi S3 compatibility issue: s3/ListObjects status code 500

    Wasabi is another cloud provider offering S3 at very competitive speeds and prices. Unfortunately, there seems to be a compatibility issue with geesefs that is not present with goofys or s3fs, related to some part of the S3 spec on which they must differ.

    To reproduce:

    geesefs --endpoint https://s3.wasabisys.com mybucket /mnt
    
    mkdir /mnt/a
    ls /mnt/a     [SUCCEEDS]
    ls /mnt       [ALWAYS SUCCEEDS]
    
    geesefs SIGINT
    geesefs --endpoint https://s3.wasabisys.com mybucket /mnt
    
    ls /mnt/a     [FAILS --> ls: reading directory '/mnt/a': Resource temporarily unavailable]
    ls /mnt       [ALWAYS SUCCEEDS]
    ls /mnt/a     [SUCCEEDS]
    

    No matter how many directories deep, unless the directory above it was read first (or recently?), listing fails with status code 500 errors. On that failure, these errors appear:

    2022/08/03 19:15:16.014305 s3.DEBUG DEBUG: Validate Response s3/ListObjects failed, attempt 2/3, error InternalError: We encountered an internal error.  Please retry the operation again later.
            status code: 500, request id: <redacted>, host id: <redacted>
    2022/08/03 19:15:16.210893 s3.DEBUG DEBUG: Request s3/ListObjects Details:
    ---[ REQUEST POST-SIGN ]-----------------------------
    GET /demo8?marker=a.%F4%8F%BF%BF&prefix= HTTP/1.1
    Host: s3.wasabisys.com
    User-Agent: GeeseFS/0.31.5 (go1.16.15; linux; amd64)
    Accept-Encoding: identity
    Authorization: AWS4-HMAC-SHA256 Credential=<redacted>/20220803/us-east-1/s3/aws4_request, SignedHeaders=accept-encoding;host;x-amz-content-sha256;x-amz-date, Signature=<redacted>
    X-Amz-Content-Sha256: <redacted>
    X-Amz-Date: <redacted>
    
    
    -----------------------------------------------------
    2022/08/03 19:15:16.326267 s3.DEBUG DEBUG: Response s3/ListObjects Details:
    ---[ RESPONSE ]--------------------------------------
    HTTP/1.1 500 Internal Server Error
    Connection: close
    Transfer-Encoding: chunked
    Content-Type: application/xml
    Date: Wed, 03 Aug 2022 23:15:16 GMT
    Server: WasabiS3/7.5.1035-2022-06-08-c4b39686a7 (head07)
    X-Amz-Bucket-Region: us-east-1
    X-Amz-Id-2: <redacted>
    X-Amz-Request-Id: <redacted>
    

    I have also reached out to Wasabi support to inquire about this issue, but given your understanding of geesefs's internals, if you could provide a solution to above, or some sort of compatibility mode, it would be very greatly appreciated. Wasabi does offer a free trial which may aid debugging, and I would be more than happy to do anything possible on my side to help.

  • macos cannot copy to mounted directory

    Hi everyone!

    macOS 10.15.7 (19H1615)

    I've installed the prebuilt binary for mac/amd64.

    In one window I run this:

    sudo geesefs -f --endpoint https://storage.yandexcloud.net/ my-videos videos
    2022/04/20 16:28:07.687106 main.INFO File system has been successfully mounted.
    

    In the other window:

    ls
    ls: videos: No such file or directory
    
  • Trust evaluate failures

    I'm attempting to use geesefs on an M1 Mac. I've downloaded the appropriate binary, made it executable, moved it to /usr/local/bin, and installed macfuse.

    When I run geesefs {bucket_name} {mount_point}, I get main.FATAL Unable to mount file system, see syslog for details.

    In the console, I see a number of Trust evaluate failure: [leaf TemporalValidity] entries. How can I resolve these so I can use geesefs?

    [screenshot]
  • Socket is not connected

    Using 0.34.2 on a Mac M1 (macOS Monterey 12.5.1), I'm getting Socket is not connected while fsync'ing the folder into which I copied a bunch of files. The geesefs process exits at this point, and I lose the ability to resume uploads after that.

    Ran geesefs with options --debug_s3 --debug_fuse. Here is the log.

  • `geesefs --help` prints to stderr and returns exit value 1

    Describe the results you received:

    $ geesefs --help 2> /dev/null
    $ echo $?
    1
    

    Describe the results you expected:

    I would have expected to see the help text written to stdout and the exit value to be zero.

    Extra information:

    $ geesefs --version
    geesefs version 0.34.2
    $
    
    $ geesefs 2>&1 | grep -- --help
       --help, -h               Print this help text and exit successfully.
    $
    
  • error

    Hi, I get an error when mounting (OS: Debian 11): 2022/11/30 02:16:24.864671 main.FATAL Unable to mount file system, see syslog for details. In syslog:

    main.ERROR Unable to setup backend: SharedConfigLoadError: failed to load config file, /root/.aws/credentials
    caused by: INIParseError: expected '['
    main.FATAL Mounting file system: Mount: initialization failed
    

    .aws/credentials:

    [default]
    aws_access_key_id = aws_access_key_id 
    aws_secret_access_key = aws_secret_access_key
    
  • external cache invalidation

    Hi, there are several issues in our environment related to geesefs's default 1-minute directory cache timeout.

    One is file uploading: a user uploads a file via one replica and then lands on another replica, where they have to check whether the file was successfully uploaded. At the moment we delay uploading by one minute, but that is not an optimal approach.

    Another issue is the user-side local cache. geesefs provides a local cache for the remote file system. We handle an event (with our services) when a new file arrives, but geesefs does not learn about that file in time. The user checks whether the remote file system has new items, it reports that it doesn't, and the client caches the directory contents for quite a long time.

    What do you think about providing a mechanism for an external application to invalidate the directory cache, to speed up the appearance of new items while still providing fast subsequent listings? My ideas are as follows:

    1. I think it's possible to use IPC. I've never implemented anything with it, but it's certainly possible to communicate between processes so that a client could ask geesefs to invalidate its internal cache for a folder's contents.
    2. Placing a magic file into a folder, e.g. .invalidate.me or .cache.forget, that would be handled by geesefs.
    3. We have already set up an S3 Trigger in Yandex to handle changes in S3 buckets; what if that trigger sent messages to SQS message queues, which geesefs would then handle? I imagine the queue URL would be passed to geesefs as a parameter.
  • The mounted CloudFlare R2 sooner or later stucks completely

    During the last week I've experienced multiple hangs with a mounted R2 bucket. It always happens at the end, during writing.

    The mount command I've tried is nothing special; I've tried it with and without the --cheap option:

    geesefs --endpoint=https://id.r2.cloudflarestorage.com --shared-config=/home/ubuntu/.r2_auth --memory-limit=550 bucket r2
    

    The shared config file is the following:

    [default]
    aws_access_key_id = ***
    aws_secret_access_key =***
    

    The rsync process got stuck around 11:22, but I can't say more precisely:

    building file list ... 
    1402 files to consider
    cannot delete non-empty directory: deb/dists/stable/main/binary-arm64
    cannot delete non-empty directory: deb/dists/stable/main/binary-arm64
    cannot delete non-empty directory: deb/dists/stable/main/binary-amd64
    cannot delete non-empty directory: deb/dists/stable/main/binary-amd64
    cannot delete non-empty directory: deb/dists/stable/main
    cannot delete non-empty directory: deb/dists/stable/main
    cannot delete non-empty directory: deb/dists/stable
    cannot delete non-empty directory: deb/dists/stable
    cannot delete non-empty directory: deb/dists/lts/main/binary-arm64
    cannot delete non-empty directory: deb/dists/lts/main/binary-arm64
    cannot delete non-empty directory: deb/dists/lts/main/binary-amd64
    cannot delete non-empty directory: deb/dists/lts/main/binary-amd64
    cannot delete non-empty directory: deb/dists/lts/main
    cannot delete non-empty directory: deb/dists/lts/main
    cannot delete non-empty directory: deb/dists/lts
    cannot delete non-empty directory: deb/dists/lts
    cannot delete non-empty directory: deb/dists
    cannot delete non-empty directory: rpm/lts/repodata
    deb/pool/main/c/clickhouse/clickhouse-client_22.7.6.74_amd64.deb
             75,152 100%   40.42MB/s    0:00:00 (xfr#1, to-chk=1107/1402)
    deb/pool/main/c/clickhouse/clickhouse-client_22.7.6.74_arm64.deb
             75,146 100%    6.51MB/s    0:00:00 (xfr#2, to-chk=1106/1402)
    deb/pool/main/c/clickhouse/clickhouse-client_22.8.6.71_amd64.deb
             75,274 100%    4.79MB/s    0:00:00 (xfr#3, to-chk=1105/1402)
    deb/pool/main/c/clickhouse/clickhouse-client_22.8.6.71_arm64.deb
             75,274 100%    1.84MB/s    0:00:00 (xfr#4, to-chk=1104/1402)
    deb/pool/main/c/clickhouse/clickhouse-client_22.9.3.18_amd64.deb
             86,612 100%    1.84MB/s    0:00:00 (xfr#5, to-chk=1099/1402)
    deb/pool/main/c/clickhouse/clickhouse-client_22.9.3.18_arm64.deb
             86,622 100%    1.59MB/s    0:00:00 (xfr#6, to-chk=1098/1402)
    deb/pool/main/c/clickhouse/clickhouse-common-static-dbg_22.7.6.74_amd64.deb
        872,235,938 100%   18.30MB/s    0:00:45 (xfr#7, to-chk=1079/1402)
    deb/pool/main/c/clickhouse/clickhouse-common-static-dbg_22.7.6.74_arm64.deb
        648,380,416  79%    7.38MB/s    0:00:22 # it's stuck here
    

    Here's a log file geesefs.log

  • Feature request: set file's `mtime`

    Hello, thanks for a great tool, it's much faster than s3fs

    We have a feature request that would make it an order of magnitude more useful with rsync.

    For now I have to sync files as rsync --no-times --size-only --delete src dst, which has an obvious flaw: it works more or less for big files, but when a file's size stays the same, it won't be updated.
