JuiceFS is a distributed POSIX file system built on top of Redis and S3.

JuiceFS Logo

Build Status Join Slack Go Report Chinese Docs

JuiceFS is an open-source POSIX file system built on top of Redis and object storage (e.g. Amazon S3), designed and optimized for cloud-native environments. By using the widely adopted Redis and S3 as the persistent storage, JuiceFS serves as a stateless middleware that enables many applications to share data easily.

The highlighted features are:

  • Fully POSIX-compatible: JuiceFS is a fully POSIX-compatible file system. Existing applications can work with it without any change. See the pjdfstest result below.
  • Outstanding Performance: The latency can be as low as a few milliseconds and the throughput can be scaled almost without limit. See the benchmark result below.
  • Cloud Native: By utilizing cloud object storage, you can scale storage and compute independently, a.k.a. a disaggregated storage and compute architecture.
  • Sharing: JuiceFS is a shared file store that can be read and written by many clients.
  • Global File Locks: JuiceFS supports both BSD locks (flock) and POSIX record locks (fcntl).
  • Data Compression: By default JuiceFS uses LZ4 to compress all your data; you can also use Zstandard instead (see the example right after this list).
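
For instance, the compression algorithm is chosen when a volume is formatted. Assuming a --compress flag as listed by ./juicefs format -h (please verify the exact flag name for your version), it might look like:

# hypothetical example: pick Zstandard instead of the default LZ4 when formatting
$ ./juicefs format --compress zstd localhost test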

Architecture | Getting Started | Administration | POSIX Compatibility | Performance Benchmark | Supported Object Storage | Status | Roadmap | Reporting Issues | Contributing | Community | Usage Tracking | License | Credits | FAQ


Architecture

JuiceFS Architecture

JuiceFS relies on Redis to store file system metadata. Redis is a fast, open-source, in-memory key-value data store, very suitable for storing the metadata. All file data is stored in object storage through the JuiceFS client.

JuiceFS Storage Format

The storage format of a file in JuiceFS consists of three levels. The first level is called "Chunk". Each chunk has a fixed size, currently 64 MiB, which cannot be changed. The second level is called "Slice". The slice size is variable, and a chunk may have multiple slices. The third level is called "Block". Like the chunk, its size is fixed. By default one block is 4 MiB, and you can modify it when formatting a volume (see the following section). Finally, each block is compressed, optionally encrypted, and stored into object storage.
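
As a rough illustration of the layout described above (this is not JuiceFS code, just shell arithmetic using the default 64 MiB chunk and 4 MiB block sizes), a byte offset in a file maps onto chunk and block indices like this:

# hypothetical example: find which chunk and block a byte offset falls into
$ offset=$((200 * 1024 * 1024))
$ echo "chunk=$((offset / (64 * 1024 * 1024))) block=$(( (offset % (64 * 1024 * 1024)) / (4 * 1024 * 1024) ))"
chunk=3 block=2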

Getting Started

Precompiled binaries

You can download precompiled binaries from the releases page.

Building from source

You need to install Go 1.13+ first, then run the following commands:

$ git clone https://github.com/juicedata/juicefs.git
$ cd juicefs
$ make

Dependency

A Redis server (>= 2.2) is needed for metadata; please follow the Redis Quick Start.

macFUSE is also needed for macOS.

The last thing you need is object storage. There are many options for object storage; a local disk is the easiest one to get started with.

Format a volume

Assuming you have a Redis server running locally, we can create a volume called test that uses it to store metadata:

$ ./juicefs format localhost test

It will create a volume with default settings. If the Redis server is not running locally, its address can be specified using a URL, for example redis://user:[email protected]:6379/1. The password can also be specified via the environment variable REDIS_PASSWORD to hide it from command-line options.
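
For example (the address, database number and password below are placeholders):

# keep the secret out of the command line and shell history
$ export REDIS_PASSWORD=mypassword
$ ./juicefs format redis://192.168.1.6:6379/1 test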

As JuiceFS relies on object storage to store data, you can specify an object storage service using --storage, --bucket, --access-key and --secret-key. By default, it uses a local directory as the object store; for all the options, please see ./juicefs format -h.
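
For example, a volume backed by Amazon S3 might be formatted roughly like this (the bucket URL and credentials are placeholders; check ./juicefs format -h for the exact flags in your version):

$ ./juicefs format --storage s3 \
    --bucket https://mybucket.s3.us-east-1.amazonaws.com \
    --access-key ABCDEFG \
    --secret-key HIJKLMN \
    localhost test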

For details about how to set up different object storage services, please read the guide.

Mount a volume

Once a volume is formatted, you can mount it to a directory, which is called the mount point.

$ ./juicefs mount -d localhost ~/jfs

After that you can access the volume just like a local directory.

To get all options, just run ./juicefs mount -h.

If you want to mount JuiceFS automatically at boot, please read the guide.

Command Reference

There is a command reference listing all options of the subcommands.

Kubernetes

There is a Kubernetes CSI driver that makes it easy to use JuiceFS in Kubernetes.

Hadoop Java SDK

If you want to use JuiceFS in Hadoop, check the Hadoop Java SDK.

Administration

POSIX Compatibility

JuiceFS passed all of the 8813 tests in the latest pjdfstest.

All tests successful.

Test Summary Report
-------------------
/root/soft/pjdfstest/tests/chown/00.t          (Wstat: 0 Tests: 1323 Failed: 0)
  TODO passed:   693, 697, 708-709, 714-715, 729, 733
Files=235, Tests=8813, 233 wallclock secs ( 2.77 usr  0.38 sys +  2.57 cusr  3.93 csys =  9.65 CPU)
Result: PASS
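
If you want to reproduce this yourself, a typical way to run pjdfstest against a JuiceFS mount looks roughly like the following (the paths are placeholders, and prove comes with Perl's test harness):

$ cd /jfs                              # a mounted JuiceFS volume
$ prove -rv /path/to/pjdfstest/tests   # run the whole suite against the current directory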

Besides the things covered by pjdfstest, JuiceFS provides:

  • Close-to-open consistency. Once a file is closed, subsequent opens and reads can see the data written before the close. Within the same mount point, reads can see all data written before them.
  • Rename and all other metadata operations are atomic, guaranteed by Redis transactions.
  • Open files remain accessible after unlink from the same mount point.
  • Mmap is supported (tested with FSx).
  • Fallocate with punch hole support.
  • Extended attributes (xattr).
  • BSD locks (flock).
  • POSIX record locks (fcntl).

Performance Benchmark

Throughput

We performed a sequential read/write benchmark on JuiceFS, EFS and S3FS with fio; here is the result:

Sequential Read Write Benchmark

It shows JuiceFS can provide 10X more throughput than the other two; read more details.
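
To run a comparable test yourself, a simple sequential write/read pair with fio might look like this (the parameters are illustrative, not necessarily the ones used in the published benchmark):

$ fio --name=seq-write --directory=/jfs --rw=write --bs=4m --size=1g
$ fio --name=seq-read --directory=/jfs --rw=read --bs=4m --size=1g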

Metadata IOPS

We performed a simple metadata benchmark on JuiceFS, EFS and S3FS with mdtest; here is the result:

Metadata Benchmark

It shows JuiceFS can provide significantly more metadata IOPS than the other two; read more details.

Analyze performance

There is a virtual file called .accesslog in the root of JuiceFS that shows all the operations and the time they take, for example:

$ cat /jfs/.accesslog
2021.01.15 08:26:11.003330 [uid:0,gid:0,pid:4403] write (17669,8666,4993160): OK <0.000010>
2021.01.15 08:26:11.003473 [uid:0,gid:0,pid:4403] write (17675,198,997439): OK <0.000014>
2021.01.15 08:26:11.003616 [uid:0,gid:0,pid:4403] write (17666,390,951582): OK <0.000006>

The last number on each line is the time (in seconds) the current operation took. We can use this to debug and analyze performance issues. We will provide more tools to analyze it.
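
As a quick ad-hoc example (assuming the log format shown above), the following lists the ten slowest operations seen in a five-second capture:

# capture ~5 seconds of operations, then print elapsed time and operation name, slowest first
$ timeout 5 cat /jfs/.accesslog | awk '{gsub(/[<>]/, "", $NF); print $NF, $4}' | sort -rn | head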

Supported Object Storage

  • Amazon S3
  • Google Cloud Storage
  • Azure Blob Storage
  • Alibaba Cloud Object Storage Service (OSS)
  • Tencent Cloud Object Storage (COS)
  • QingStor Object Storage
  • Ceph RGW
  • MinIO
  • Local disk
  • Redis

For the detailed list, see this document.

Status

JuiceFS is considered beta quality and the storage format is not stabilized yet. It is not recommended to deploy it in a production environment. Please test it with your use cases and give us feedback.

Roadmap

  • Stabilize storage format
  • S3 gateway
  • Windows client
  • Encryption at rest
  • Other databases for metadata

Reporting Issues

We use GitHub Issues to track community-reported issues. You can also contact the community to get answers.

Contributing

Thank you for your contribution! Please refer to the CONTRIBUTING.md for more information.

Community

Welcome to join the Discussions and the Slack channel to connect with JuiceFS team members and other users.

Usage Tracking

JuiceFS collects anonymous usage data by default. It only collects core metrics (e.g. version number); no user data or other sensitive data is collected. You can review the related code here.

This data helps us understand how the community is using this project. You can disable reporting easily with the command-line option --no-usage-report:

$ ./juicefs mount --no-usage-report

License

JuiceFS is open-sourced under GNU AGPL v3.0, see LICENSE.

Credits

The design of JuiceFS was inspired by Google File System, HDFS and MooseFS, thanks to their great work.

FAQ

Why doesn't JuiceFS support XXX object storage?

JuiceFS already supports many object storage services; please check the list first. If your object storage is compatible with S3, you can treat it as S3. Otherwise, try reporting an issue.
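
For example, a self-hosted MinIO deployment (which speaks the S3 protocol) might be formatted roughly like this (the endpoint, bucket and credentials are placeholders):

$ ./juicefs format --storage minio \
    --bucket http://127.0.0.1:9000/mybucket \
    --access-key ABCDEFG \
    --secret-key HIJKLMN \
    localhost test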

Can I use Redis cluster?

The short answer is no. JuiceFS uses transactions to guarantee the atomicity of metadata operations, which are not well supported in cluster mode. Sentinel or another HA solution for Redis is needed.

Comments
  • [MariaDB] Error 1366: Incorrect string value

    [MariaDB] Error 1366: Incorrect string value

    What happened:

    While rsyncing from a local disk to a JuiceFS mount, it suddenly (after several hours) stopped with

    Failed to sync with 11 errors: last error was: open /mnt/juicefs/folder/file.xls: input/output error

    In the jfs log, I can find these:

    juicefs[187516] <ERROR>: error: Error 1366: Incorrect string value: '\xE9sa sa...' for column `jfsdata`.`jfs_edge`.`name` at row 1
    goroutine 43510381 [running]:
    runtime/debug.Stack()
            /usr/local/go/src/runtime/debug/stack.go:24 +0x65
    github.com/juicedata/juicefs/pkg/meta.errno({0x2df8860, 0xc0251d34a0})
            /go/src/github.com/juicedata/juicefs/pkg/meta/utils.go:76 +0xc5
    github.com/juicedata/juicefs/pkg/meta.(*dbMeta).doMknod(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0x1, 0x1b4, 0x0, 0x0, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/sql.go:1043 +0x29e
    github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Mknod(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0xc0, 0x7b66, 0x7fca, 0x0, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/base.go:594 +0x275
    github.com/juicedata/juicefs/pkg/meta.(*baseMeta).Create(0xc0000e0c40, {0x7fca7e165300, 0xc00ed28040}, 0x26b5620, {0xc025268160, 0x2847500}, 0x8040, 0xed2, 0x8241, 0xc025267828, ...)
            /go/src/github.com/juicedata/juicefs/pkg/meta/base.go:601 +0x109
    github.com/juicedata/juicefs/pkg/vfs.(*VFS).Create(0xc000140640, {0x2e90348, 0xc00ed28040}, 0x399f6f, {0xc025268160, 0x1f}, 0x81b4, 0x22a4, 0xc0)
            /go/src/github.com/juicedata/juicefs/pkg/vfs/vfs.go:357 +0x256
    github.com/juicedata/juicefs/pkg/fuse.(*fileSystem).Create(0xc000153900, 0xc024980101, 0xc022a48a98, {0xc025268160, 0x1f}, 0xc022a48a08)
            /go/src/github.com/juicedata/juicefs/pkg/fuse/fuse.go:221 +0xcd
    github.com/hanwen/go-fuse/v2/fuse.doCreate(0xc022a48900, 0xc022a48900)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/opcode.go:163 +0x68
    github.com/hanwen/go-fuse/v2/fuse.(*Server).handleRequest(0xc00179c000, 0xc022a48900)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:483 +0x1f3
    github.com/hanwen/go-fuse/v2/fuse.(*Server).loop(0xc00179c000, 0x20)
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:456 +0x110
    created by github.com/hanwen/go-fuse/v2/fuse.(*Server).readRequest
            /go/pkg/mod/github.com/juicedata/go-fuse/[email protected]/fuse/server.go:323 +0x534
     [utils.go:76]
    

    Nothing was logged at MariaDB side.

    Environment:

    • juicefs version 1.0.0-beta2+2022-03-04T03:00:41Z.9e26080
    • Ubuntu 20.04
    • MariaDB 10.4
  • failed to create fs on oos

    failed to create fs on oos

    What happened: I can't create a file system on OOS.

    What you expected to happen: I can create a file system on OOS.

    How to reproduce it (as minimally and precisely as possible):

    juicefs format --storage oos --bucket https://cyn.oos-hz.ctyunapi.cn --access-key xxxxxx --secret-key xxxxxxx redis://:[email protected]:6379/1 myjfs

    Anything else we need to know?

    Environment:

    • JuiceFS version (use juicefs --version) or Hadoop Java SDK version: juicefs version 1.0.0-beta3+2022-05-05.0fb9155
    • Cloud provider or hardware configuration running JuiceFS:
    • OS (e.g cat /etc/os-release): Centos 7
    • Kernel (e.g. uname -a): Linux ecs-df87 3.10.0-514.el7.x86_64 #1 SMP Tue Nov 22 16:42:41 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region, or self maintained): oos
    • Metadata engine info (version, cloud provider managed or self maintained): redis:6
    • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage):
    • Others:
  • Performance degenerates a lot when reading from multiple threads compared with a single thread (when running Clickhouse)

    Performance degenerates a lot when reading from multiple threads compared with a single thread (when running Clickhouse)

    What happened:

    Hi folks,

    We are trying to run the ClickHouse benchmark on JuiceFS (with OSS as the underlying object storage). Even when JuiceFS has already cached the whole file on the local disk, we notice a huge performance gap (compared with running the benchmark on a local SSD) when executing ClickHouse with 4 threads, but such degradation doesn't happen if we limit ClickHouse to 1 thread.

    More specifically, we are running the ClickHouse benchmark with scale factor 1000 and playing the 29th query (the involved table Referer is around 24Gi and the query is a full table scan), with ClickHouse given 100Gi of local SSD as the cache directory.

    After several runs to make sure the involved files are fully cached locally by JuiceFS, we noticed the following performance numbers:

    | threads | ssd runtime (seconds) | juicefs runtime (seconds) |
    |:-------:|:---------------------:|:-------------------------:|
    |    4    |          24           |            56             |
    |    1    |          88           |           100             |

    You can see that JuiceFS suffers much more performance degradation when the workload runs with multiple threads. Is that behaviour expected for JuiceFS?

    Thanks!

    What you expected to happen:

    The performance gap shouldn't be so large with 4 threads.

    How to reproduce it (as minimally and precisely as possible):

    Playing the clickhouse benchmark inside a juicefs mounted directory.

    Anything else we need to know?

    Environment:

    • JuiceFS version (use juicefs --version) or Hadoop Java SDK version: juicefs version 1.0.0-beta2+2022-03-04T03:00:41Z.9e26080
    • Cloud provider or hardware configuration running JuiceFS: aliyun ecs.i3g.2xlarge, (local ssd instance with 4 physical cores and 32Gi memory)
    • OS (e.g cat /etc/os-release): Ubuntu 20.04.3 LTS
    • Kernel (e.g. uname -a): Linux mk1 5.4.0-100-generic #113-Ubuntu SMP Thu Feb 3 18:43:29 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region, or self maintained): OSS
    • Metadata engine info (version, cloud provider managed or self maintained): redis
    • Network connectivity (JuiceFS to metadata engine, JuiceFS to object storage): localhost, redis and juicefs are deployed on the same instance
    • Others: clickhouse latest version
  • Out of Memory 0.13.1

    Out of Memory 0.13.1

    What happened: JuiceFS eats every last bit of memory + swap, then gets killed by the OOM killer.

    What you expected to happen: Shouldn't it use less memory?

    How to reproduce it (as minimally and precisely as possible): Store 8 files of ~104 GB each in JuiceFS, do random reads on them, and watch how memory consistently climbs until exhaustion.

    Anything else we need to know? (screenshot attached)

    Environment:

    • JuiceFS version (use ./juicefs --version) or Hadoop Java SDK version: juicefs version 0.13.1 (2021-05-27T08:14:30Z 1737d4e)
    • Cloud provider or hardware configuration running JuiceFS: 4 core, 16gb ram, backblaze b2
    • OS (e.g: cat /etc/os-release): ubuntu 20.04.2 LTS (Focal Fossa)
    • Kernel (e.g. uname -a): 5.4.0-73-generic #82-Ubuntu SMP Wed Apr 14 17:39:42 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage (cloud provider and region): Backblaze B2 Europe
    • Redis info (version, cloud provider managed or self maintained): self maintained, latest, from Docker
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage): juicefs and redis are locally, 5gbit to backblaze b2
    • Others:
  • Performance 3x ~ 8x slower than s5cmd (for large files)

    Performance 3x ~ 8x slower than s5cmd (for large files)

    While comparing basic read/write operations, it appears that s5cmd is 3x ~ 8x faster than JuiceFS.

    What happened:

    #### WRITE IO ####

    $ time cp 1gb_file.txt /mnt/juicefs0/
    real	0m50.859s
    user	0m0.016s
    sys	0m1.365s
    
    $ time s5cmd cp 1gb_file.txt s3://bucket/path/
    real	0m20.614s
    user	0m9.411s
    sys	0m3.232s
    

    #### READ IO ####

    $ time cp /mnt/juicefs0/1gb_file.txt .
    real	0m45.539s
    user	0m0.014s
    sys	0m1.578s
    
    $ time s5cmd cp s3://bucket/path/1gb_file.txt .
    real	0m6.074s
    user	0m1.186s
    sys	0m2.504s
    

    Environment:

    • JuiceFS version or Hadoop Java SDK version: juicefs version 0.12.1 (2021-04-15T08:18:25Z 7b4df23)
    • Cloud provider or hardware configuration running JuiceFS: Linode 1 GB VM
    • OS: Fedora 33 (Server Edition)
    • Kernel: Linux 5.11.12-200.fc33.x86_64 #1 SMP Thu Apr 8 02:34:17 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
    • Object storage: Linode
    • Redis info: Redis 6.2.1
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage): redis (local), S3 Object Storage (Linode)
  • Errors on WebDAV operations, but files are created

    Errors on WebDAV operations, but files are created

    JuiceFS (1.0.0-rc1) outputs a 401, but the bucket is created (a folder 'jfs' is created at the root path) on the server, so the credentials are OK. Happy to provide more debug info if needed (note I have no admin access on the WebDAV server).

    # juicefs format --storage webdav --bucket https://web.dav.server/ --access-key 307399 --secret-key tTxX12NPMyy --trash-days 0 redis://127.0.0.1:6379/1 jfs
    
    2022/06/17 11:45:19.373300 juicefs[799909] <INFO>: Meta address: redis://127.0.0.1:6379/1 [interface.go:397]
    2022/06/17 11:45:19.375116 juicefs[799909] <INFO>: Ping redis: 82.354µs [redis.go:2869]
    2022/06/17 11:45:19.375437 juicefs[799909] <INFO>: Data use webdav://web.dav.server/jfs/ [format.go:420]
    2022/06/17 11:45:19.529334 juicefs[799909] <WARNING>: List storage webdav://web.dav.server/jfs/ failed: 401 Unauthorized: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>401 Unauthorized</title>
    </head><body>
    <h1>Unauthorized</h1>
    <p>This server could not verify that you
    are authorized to access the document
    requested.  Either you supplied the wrong
    credentials (e.g., bad password), or your
    browser doesn't understand how to supply
    the credentials required.</p>
    </body></html> [format.go:438]
    2022/06/17 11:45:19.535569 juicefs[799909] <INFO>: Volume is formatted as {Name:jfs UUID:05363857-2fce-42dc-94e2-4d41c33172d0 Storage:webdav Bucket:https://web.dav.server/ AccessKey:307399 SecretKey:removed BlockSize:4096 Compression:none Shards:0 HashPrefix:false Capacity:0 Inodes:0 EncryptKey: KeyEncrypted:true TrashDays:0 MetaVersion:1 MinClientVersion: MaxClientVersion:} [format.go:458]
    

    or trying to destroy it:

    # juicefs destroy redis://127.0.0.1:6379/1 05363857-2fce-42dc-94e2-4d41c33172d0
    
    2022/06/17 11:50:24.072353 juicefs[800639] <INFO>: Meta address: redis://127.0.0.1:6379/1 [interface.go:397]
    2022/06/17 11:50:24.073182 juicefs[800639] <INFO>: Ping redis: 67.778µs [redis.go:2869]
     volume name: jfs
     volume UUID: 05363857-2fce-42dc-94e2-4d41c33172d0
    data storage: webdav://web.dav.server/jfs/
      used bytes: 0
     used inodes: 0
    WARNING: The target volume will be destoried permanently, including:
    WARNING: 1. ALL objects in the data storage: webdav://web.dav.server/jfs/
    WARNING: 2. ALL entries in the metadata engine: redis://127.0.0.1:6379/1
    Proceed anyway? [y/N]: y
    2022/06/17 11:50:25.693697 juicefs[800639] <FATAL>: list all objects: 401 Unauthorized: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>401 Unauthorized</title>
    </head><body>
    <h1>Unauthorized</h1>
    <p>This server could not verify that you
    are authorized to access the document
    requested.  Either you supplied the wrong
    credentials (e.g., bad password), or your
    browser doesn't understand how to supply
    the credentials required.</p>
    </body></html> [destroy.go:158]
    

    On file operations, files are created (the chunks folder is populated), but this is logged:

    2022/06/17 12:02:54.141831 juicefs[802424] <INFO>: Mounting volume jfs at /mnt/jfs ... [mount_unix.go:181]
    2022/06/17 12:04:31.053268 juicefs[802424] <WARNING>: Upload chunks/0/0/7_0_140: 403 Forbidden: <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
    <html><head>
    <title>403 Forbidden</title>
    </head><body>
    <h1>Forbidden</h1>
    <p>You don't have permission to access this resource.</p>
    </body></html> (try 1) [cached_store.go:462]
    

    Accessing WebDAV directly (outside JuiceFS), we can see correct file operations. (screenshot attached)

  • freezing when tikv is down

    freezing when tikv is down

    7 pd-server nodes; after killing 2 pd-server nodes, JuiceFS freezes.

    [2021/09/02 15:57:58.152 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:57:59.021 +08:00] [ERROR] [client.go:599] ["[pd] getTS error"] [dc-location=global] [error="[PD:client:ErrClientGetTSO]rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"] [stack="github.com/tikv/pd/client.(*client).handleDispatcher\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:599"]
    [2021/09/02 15:57:59.022 +08:00] [ERROR] [pd.go:234] ["updateTS error"] [txnScope=global] [error="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster"] [errorVerbose="rpc error: code = Unknown desc = [PD:tso:ErrGenerateTimestamp]generate timestamp failed, requested pd is not leader of cluster\ngithub.com/tikv/pd/client.(*client).processTSORequests\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:717\ngithub.com/tikv/pd/client.(*client).handleDispatcher\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:587\nruntime.goexit\n\t/snap/go/7954/src/runtime/asm_amd64.s:1371\ngithub.com/tikv/pd/client.(*tsoRequest).Wait\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:913\ngithub.com/tikv/pd/client.(*client).GetTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/[email protected]/client/client.go:933\ngithub.com/tikv/client-go/v2/util.InterceptedPDClient.GetTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/util/pd_interceptor.go:79\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).getTimestamp\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:141\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS.func1\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:232\nsync.(*Map).Range\n\t/snap/go/7954/src/sync/map.go:345\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:230\nruntime.goexit\n\t/snap/go/7954/src/runtime/asm_amd64.s:1371"] [stack="github.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS.func1\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:234\nsync.(*Map).Range\n\t/snap/go/7954/src/sync/map.go:345\ngithub.com/tikv/client-go/v2/oracle/oracles.(*pdOracle).updateTS\n\t/root/hanson/go/pkg/mod/github.com/tikv/client-go/[email protected]/oracle/oracles/pd.go:230"]
    [2021/09/02 15:57:59.317 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:00.608 +08:00] [WARN] [prewrite.go:198] ["slow prewrite request"] [startTS=427443780657872897] [region="{ region id: 4669, ver: 35, confVer: 1007 }"] [attempts=280]
    [2021/09/02 15:58:04.317 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:09.318 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:14.319 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:15.831 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:20.832 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.35:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:25.834 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]
    [2021/09/02 15:58:30.834 +08:00] [WARN] [client_batch.go:497] ["init create streaming fail"] [target=10.188.19.36:20160] [forwardedHost=] [error="context deadline exceeded"]

    What happened:

    What you expected to happen:

    How to reproduce it (as minimally and precisely as possible):

    Anything else we need to know?

    Environment:

    • JuiceFS version (use ./juicefs --version) or Hadoop Java SDK version:
    • Cloud provider or hardware configuration running JuiceFS:
    • OS (e.g: cat /etc/os-release):
    • Kernel (e.g. uname -a):
    • Object storage (cloud provider and region):
    • Redis info (version, cloud provider managed or self maintained):
    • Network connectivity (JuiceFS to Redis, JuiceFS to object storage):
    • Others:
  • Deleted tag cause misbehaviour of `go get`

    Deleted tag cause misbehaviour of `go get`

    What happened:

    go get: added github.com/juicedata/juicefs v0.14.0
    

    What you expected to happen:

    go get: added github.com/juicedata/juicefs v0.13.1
    

    How to reproduce it (as minimally and precisely as possible):

    go get github.com/juicedata/juicefs
    

    Anything else we need to know? It looks like you probably deleted that tag, which is not supported by proxy.golang.org (see the FAQ).

    Also, the current latest tag, v0.14-dev, is not valid semver, thus it is being ignored and mangled by the toolchain (try v0.14.0-alpha):

    [[email protected] ~]$ go get github.com/juicedata/[email protected]
    go: downloading github.com/juicedata/juicefs v0.13.2-0.20210527090717-42ac85ce406c
    go get: downgraded github.com/juicedata/juicefs v0.14.0 => v0.13.2-0.20210527090717-42ac85ce406c
    
  • Metadata Dump doesn't complete on MySQL

    Metadata Dump doesn't complete on MySQL

    What happened:

    Dump is incomplete.

    What you expected to happen:

    Dump completes successfully.

    How to reproduce it (as minimally and precisely as possible):

    Observe the dumped entries.

    juicefs dump "mysql://mount:[email protected](10.1.0.9:3306)/mount" /tmp/juicefs.dump
    
    2022/05/26 15:13:07.548210 juicefs[1785019] <WARNING>: no found chunk target for inode 854691 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548245 juicefs[1785019] <WARNING>: no found chunk target for inode 854709 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548264 juicefs[1785019] <WARNING>: no found chunk target for inode 854726 indx 0 [sql.go:2747]
    2022/05/26 15:13:07.548288 juicefs[1785019] <WARNING>: no found chunk target for inode 4756339 indx 0 [sql.go:2747]
     Snapshot keys count: 4727518 / 4727518 [==============================================================]  done
    Dumped entries count: 1370 / 1370 [==============================================================]  done
    

    Anything else we need to know?

    Environment:

    # juicefs --version
    juicefs version 1.0.0-dev+2022-05-26.54ddc5c4
    
    {
      "Setting": {
        "Name": "mount",
        "UUID": "5cd22d3c-12d6-4be4-9a52-19e753c416e9",
        "Storage": "s3",
        "Bucket": "https://data-jfs%d.s3.de",
        "AccessKey": "d61bc82480ab49fb8d",
        "SecretKey": "removed",
        "BlockSize": 4096,
        "Compression": "zstd",
        "Shards": 512,
        "HashPrefix": false,
        "Capacity": 0,
        "Inodes": 0,
        "EncryptKey": "/YShECK6Tirb0uHljlK8PIJ12C4Fj2idW5hbzARwYaGDGoSU>
        "KeyEncrypted": true,
        "TrashDays": 90,
        "MetaVersion": 1,
        "MinClientVersion": "",
        "MaxClientVersion": ""
      },
      "Counters": {
        "usedSpace": 4568750424064,
        "usedInodes": 4723758,
        "nextInodes": 4923302,
        "nextChunk": 4062001,
        "nextSession": 137,
        "nextTrash": 667
      },
    }
    
  • I | invalid character 'S' after object key on `juicefs load`

    I | invalid character 'S' after object key on `juicefs load`

    I've tried to dump from MySQL and load into Redis, but I got this output on load:

    $ juicefs dump "mysql://jfs:[email protected](127.0.0.1:3306)/jfs_data" dump.json
    
    <INFO>: Meta address: mysql://jfs:****@(127.0.0.1:3306)/jfs_data [interface.go:383]
     Snapshot keys count: 10101347 / 10101347 [==============================================================]  done
     Dumped entries count: 3580869 / 3580869 [==============================================================]  done
    <INFO>: Dump metadata into dump.json succeed [dump.go:76]
    
    $ juicefs load redis://127.0.0.1:6379 dump.json
    
    <INFO>: Meta address: redis://127.0.0.1:6379 [interface.go:383]
    <INFO>: Ping redis: 194.003µs [redis.go:2497]
    I | invalid character 'S' after object key
    
  • Support RocksDB as Metadata Engine

    Support RocksDB as Metadata Engine

    This could be supported, at least, for standalone deployments. There are some Go wrappers out there too.

    http://rocksdb.org/ and http://myrocks.io/ and https://mariadb.com/kb/en/myrocks/

    High Performance

    RocksDB uses a log structured database engine, written entirely in C++, for maximum performance. Keys and values are just arbitrarily-sized byte streams.

    Optimized for Fast Storage

    RocksDB is optimized for fast, low latency storage such as flash drives and high-speed disk drives. RocksDB exploits the full potential of high read/write rates offered by flash or RAM.

    Adaptable

    RocksDB is adaptable to different workloads. From database storage engines such as MyRocks to application data caching to embedded workloads, RocksDB can be used for a variety of data needs.

    Basic and Advanced Database Operations

    RocksDB provides basic operations such as opening and closing a database, reading and writing to more advanced operations such as merging and compaction filters.

  • ltp syscall middle test failed

    ltp syscall middle test failed

    What happened: https://github.com/juicedata/juicefs/runs/7347698805?check_suite_focus=true

    <<<test_start>>>
    tag=msync04 stime=1657831727 cmdline="msync04" contacts="" analysis=exit
    <<<test_output>>>
    tst_device.c:88: TINFO: Found free device 3 '/dev/loop3'
    tst_supported_fs_types.c:89: TINFO: Kernel supports ext2
    tst_supported_fs_types.c:51: TINFO: mkfs.ext2 does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports ext3
    tst_supported_fs_types.c:51: TINFO: mkfs.ext3 does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports ext4
    tst_supported_fs_types.c:51: TINFO: mkfs.ext4 does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports xfs
    tst_supported_fs_types.c:51: TINFO: mkfs.xfs does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports btrfs
    tst_supported_fs_types.c:51: TINFO: mkfs.btrfs does exist
    tst_supported_fs_types.c:89: TINFO: Kernel supports vfat
    tst_supported_fs_types.c:51: TINFO: mkfs.vfat does exist
    tst_supported_fs_types.c:115: TINFO: Filesystem exfat is not supported
    tst_supported_fs_types.c:119: TINFO: FUSE does support ntfs
    tst_supported_fs_types.c:51: TINFO: mkfs.ntfs does exist
    tst_supported_fs_types.c:147: TINFO: Skipping tmpfs as requested by the test
    tst_test.c:1431: TINFO: Testing on ext2
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ext2 opts='' extra opts=''
    mke2fs 1.45.5 (07-Jan-2020)
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on ext3
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ext3 opts='' extra opts=''
    mke2fs 1.45.5 (07-Jan-2020)
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on ext4
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ext4 opts='' extra opts=''
    mke2fs 1.45.5 (07-Jan-2020)
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on xfs
    tst_test.c:932: TINFO: Formatting /dev/loop3 with xfs opts='' extra opts=''
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:59: TFAIL: Expected dirty bit to be set after writing to mmap()-ed area
    tst_test.c:1431: TINFO: Testing on btrfs
    tst_test.c:932: TINFO: Formatting /dev/loop3 with btrfs opts='' extra opts=''
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on vfat
    tst_test.c:932: TINFO: Formatting /dev/loop3 with vfat opts='' extra opts=''
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly
    tst_test.c:1431: TINFO: Testing on ntfs
    tst_test.c:932: TINFO: Formatting /dev/loop3 with ntfs opts='' extra opts=''
    The partition start sector was not specified for /dev/loop3 and it could not be obtained automatically. It has been set to 0.
    The number of sectors per track was not specified for /dev/loop3 and it could not be obtained automatically. It has been set to 0.
    The number of heads was not specified for /dev/loop3 and it could not be obtained automatically. It has been set to 0.
    To boot from a device, Windows needs the 'partition start sector', the 'sectors per track' and the 'number of heads' to be set. Windows will not be able to boot from this device.
    tst_test.c:946: TINFO: Trying FUSE...
    tst_test.c:1363: TINFO: Timeout per run is 0h 05m 00s
    msync04.c:72: TPASS: msync() working correctly

    Summary: passed 6, failed 1, broken 0, skipped 0, warnings 0

    What you expected to happen: The case passes.

    How to reproduce it (as minimally and precisely as possible): Follow the action.

    Anything else we need to know? No.

  • add / update fstab entry with --update-fstab

    add / update fstab entry with --update-fstab

    • [x] abs path check for sqlite and similars
    • [x] correctly handle slice arguments, even though they don't exist in juicefs mount yet
    • [x] do not run in containers
    • [x] unit tests

    PR code will produce the following result:

    (screenshot)

    closes https://github.com/juicedata/juicefs/issues/2432

  • pkg/gateway: Support specifying umask for directories

    pkg/gateway: Support specifying umask for directories

    Currently, the gateway does not respect the umask flag specified in the start command; instead, it hard-codes 755 for all the directories it creates. I have added an option, umask-dir, which dictates the umask for all the mkdir calls. I keep them as separate flags in case some users would like different umasks for files and directories.

    The following screenshots show the different behaviors when the gateway is started with umask-dir 022 vs 000. (screenshots attached)
