"rsync for cloud storage" - Google Drive, S3, Dropbox, Backblaze B2, OneDrive, Swift, Hubic, Wasabi, Google Cloud Storage, Yandex Files

rclone logo

Website | Documentation | Download | Contributing | Changelog | Installation | Forum


Rclone

Rclone ("rsync for cloud storage") is a command-line program to sync files and directories to and from different cloud storage providers.

Storage providers

Please see the full list of all storage providers and their features

Features

  • MD5/SHA-1 hashes checked at all times for file integrity
  • Timestamps preserved on files
  • Partial syncs supported on a whole-file basis
  • Copy mode to just copy new/changed files
  • Sync (one way) mode to make a directory identical
  • Check mode to check for file hash equality
  • Can sync to and from network, e.g. two different cloud accounts
  • Optional large file chunking (Chunker)
  • Optional transparent compression (Compress)
  • Optional encryption (Crypt)
  • Optional FUSE mount (rclone mount)
  • Multi-threaded downloads to local disk
  • Can serve local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA
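The core modes above map directly onto subcommands. A minimal sketch, assuming a remote named `remote:` has already been set up with `rclone config`:

```
# Copy mode: transfer only new/changed files, never delete anything
rclone copy ~/photos remote:photos

# Sync (one way) mode: make the destination identical (may delete files!)
rclone sync ~/photos remote:photos

# Check mode: compare file hashes on both sides without transferring
rclone check ~/photos remote:photos

# Optional FUSE mount
rclone mount remote:photos ~/mnt/photos
```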

Installation & documentation

Please see the rclone website for installation instructions, documentation, and downloads.

License

This is free software under the terms of the MIT license (see the COPYING file included in this package).

Owner

rclone (GitHub organization for development of rclone and related projects)
Comments
  • Any plan to add support to Google Photos?

    If possible, please add a support to upload photo/video files to Google Photos directly!

    Although it's possible to add a "Google Photos" folder in Google Drive, where all your Google Photos appear (organized by date folder), photos uploaded into this folder do not seem to be reflected in Google Photos.

    Also, if we upload in "High Quality" then there is unlimited storage for photos and videos. I am not sure whether the "down-sizing" is done locally or remotely by the Google Photos server, however...

    I realize Google Photos is not a good place to organize photos, but it's a good place to share photos with others. And with a stock of 300k+ photos I really don't want to have my PC running for God-knows-how-long for the upload... It's the job of the RPi!
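(For later readers: rclone did eventually gain a native Google Photos backend. A minimal sketch, assuming a remote named `gphotos:` has been configured with the Google Photos backend:)

```
# List albums
rclone lsd gphotos:album

# Upload a local directory into an album
rclone copy ~/Pictures/2019 gphotos:album/2019
```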

  • Google Drive (encrypted): "failed to authenticate decrypted block - bad password?" on files during reading

    What is your rclone version (eg output from rclone -V)

    rclone 1.35

    Which OS you are using and how many bits (eg Windows 7, 64 bit)

    Devuan Linux 1.0 (systemd-free fork of Debian Jessie).

    Which cloud storage system are you using? (eg Google Drive)

    Google Drive, with the built-in rclone encryption.

    The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone -v --dump-headers --log-file=LOGFILE copy egd:REDACTED/REDACTED/REDACTED/REDACTED.mp4 /tmp/REDACTED.mp4

    A log from the command with the -v flag (eg output from rclone -v copy /tmp remote:tmp)

    Please find it attached: LOGFILE.txt

    Note 1: This is related to #677, which is closed and I cannot reopen. Note 2: These errors are 100% reproducible.

    Cheers, Durval.

  • rclone still using too much memory

    https://github.com/ncw/rclone/issues/2157

    Referencing the above ticket.
    My version: rclone v1.40-034-g06a8d301β

    • os/arch: linux/amd64
    • go version: go1.10

    Still seeing the above issue, but it happens less frequently. My setup is exactly the same as in that ticket, but I've now upgraded the version. What can I provide to help troubleshoot whether it is the same issue or a different one?

    I've just increased --attr-timeout to 5s to try it. I'll see if that helps, as a shot in the dark.

  • Are we safe? Amazon Cloud Drive

    I mean, could there be the same scenario where Amazon disables the rclone app? Or does rclone handle it in a different way than acd_cli did?

    acd_cli weirdness:

    https://github.com/yadayada/acd_cli/pull/562 - "I created this pull request only to ask what happend to acd_cli's issues page?! It just vanished! "

  • CRITICAL: Amazon Drive does not work anymore with rclone 429: "429 Too Many Requests" / Rate exceeded

    It seems Amazon Drive has blocked rclone. I tested it on 4 different servers and tried reauthorizing the app, but no success.

    Any rclone command will deliver the following errors:

    2017/05/18 11:19:14 DEBUG : pacer: Rate limited, sleeping for 666.145821ms (1 consecutive low level retries)
    2017/05/18 11:19:14 DEBUG : pacer: low level retry 1/10 (error HTTP code 429: "429 Too Many Requests": response body: "{\"message\":\"Rate exceeded\"}")
    
  • Can't connect to SharePoint Online team sites such as https://orgname.sharepoint.com/sites/Site-Name

    I’ve been able to successfully connect to the default https://orgname-my.sharepoint.com/ personal SharePoint Site...

    $ rclone lsd sp3:
    -1 2017-01-04 22:16:34         0 Attachments
    -1 2015-01-23 11:13:10         0 Shared with Everyone
    

    But I’m having difficulty figuring out how to connect to team sites at URLs such as: https://orgname.sharepoint.com/sites/Site-Name etc.

    The "rclone config" guided process doesn’t let you set the resource_url when setting it up. So I’ve tried editing ~/.config/rclone.conf, changing the resource_url and then reauthorizing. I've tried a number of different addresses like...

    For the main/default team site:

    https://orgname.sharepoint.com/ 
    https://orgname.sharepoint.com/Shared Documents
    

    For separate team sites, or what Microsoft call "site collections":

    https://orgname.sharepoint.com/sites/Site-Name
    https://orgname.sharepoint.com/sites/Site-Name/
    https://orgname.sharepoint.com/sites/Site-Name/Shared Documents
    https://orgname.sharepoint.com/sites/Site-Name/Shared Documents/
    https://orgname.sharepoint.com/sites/Site-Name/Shared%20Documents
    https://orgname.sharepoint.com/sites/Site-Name/Shared%20Documents/
    

    I'm not sure which address format I'm meant to use (for either the main team site, or all the other ones under /sites/).

    I always get the error:

    $ rclone -vv lsd sp3:
    2017/10/25 03:17:18 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
    2017/10/25 03:17:18 DEBUG : rclone: Version "v1.38" starting with parameters ["rclone" "-vv" "lsd" "sp3:"]
    2017/10/25 03:17:19 Failed to create file system for "sp3:": failed to get root: 401 Unauthorized: 
    

    (there's nothing after that last colon)

    Does anyone know how I access team SharePoint sites?

    My rclone version is:

    rclone v1.38
    - os/arch: linux/amd64
    - go version: go1.9
    

    ...on Manjaro 64bit, installed from the distro's repos.

    I'm choosing the "business" option when asked in rclone config.

  • Two-way (bidirectional) synchronization

    I'm sorry if this is answered elsewhere but I couldn't find it in that case.

    I want to replace my current Owncloud + Owncloud client (Linux) + FolderSync (Android) setup with Drive + Rclone + FolderSync. But there is one thing I can't figure out how to do with rclone: smart two-way deletion synchronization. That means: if a file was present on both the server (Drive) and the local machine, and then was deleted on either of them, the file will eventually be removed on both, regardless of which direction you run the sync first. Likewise, if a file was added on either the server or the client, it will be uploaded to the other one.

    Can rclone do that, and if it doesn't, is there a chance of such functionality in the future?
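(For later readers: rclone eventually gained a bidirectional mode, `rclone bisync`, initially marked beta, which propagates creations and deletions in both directions. A minimal sketch:)

```
# First run: --resync establishes the baseline listings on both sides
rclone bisync remote:folder ~/folder --resync

# Subsequent runs propagate adds and deletes in both directions
rclone bisync remote:folder ~/folder
```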

    How to use GitHub

    • Please use the 👍 reaction to show that you are affected by the same issue.
    • Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
    • Subscribe to receive notifications on status change and new comments.
  • Manage folders

    Some of rclone's remote filesystems understand the concept of folders, e.g.

    • drive
    • local
    • dropbox

    Make optional interfaces (e.g. Mkdir, Rmdir) for these filesystems to manage the creation and deletion of folders. This would enable empty folders, and deletion of empty folders on sync.

  • On-the-fly encryption support

    I've seen a comment in the thread about ACD support regarding plans for an encryption mechanism in rclone. Could you please elaborate on that? When could this possibly become available?

  • Support for OpenDrive storage

    I was just looking at OpenDrive as a potential storage provider. They offer pretty competitive prices already, but they also claim to do competitor price matching, so may be a viable alternative to ACD's unlimited storage.

    Their API documentation is linked here: https://www.opendrive.com/api

    They also claim to have (only beta so far) support for WebDAV, so WebDAV support (#580) may avoid the need for native support.

  • [GDrive + FUSE] 403 Forbidden Errors - API daily limit exceeded

    As discussed in the forum (https://forum.rclone.org/t/google-drive-vs-acd-for-plex/471), users are getting 403 Forbidden errors and are unable to access files when using an rclone FUSE mount. This happens especially when using Plex to access the mount. It appears to be related to exceeding the daily API access limit: https://developers.google.com/drive/v3/web/handle-errors . Users get a temporary ban from accessing or downloading files via the rclone FUSE mount. Access to the Google Drive website still seems to work, and uploads still work without issue. It seems the only viable solution is to have a local cache, as mentioned in #897.

    What is your rclone version (eg output from rclone -V)

    v1.34-75-gcbfec0dβ

    Which OS you are using and how many bits (eg Windows 7, 64 bit)

    Linux Ubuntu

    Which cloud storage system are you using? (eg Google Drive)

    Google Drive

    The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone copy --verbose --no-traverse gdrive:test/jellyfish-40-mbps-hd-h264.mkv ~/tmp

    A log from the command with the -v flag (eg output from rclone -v copy /tmp remote:tmp)

    2016/12/26 05:43:12 Local file system at /home/xxxxxx/tmp: Modify window is 1ms
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 1/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 2/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 3/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Failed to copy: failed to open source object: bad response: 403: 403 Forbidden 
    
  • Changes polling not working for OneDrive Personal "Shared with me" folders

    What is the problem you are having with rclone?

    When a "Shared with me" folder (a folder shared with you, but owned by others) on OneDrive Personal is mounted with rclone, --poll-interval will not work.

    A quick look suggests that this is because the current implementation builds the delta request with the rclone user's drive ID rather than the drive ID of the owner of the shared folder. With Graph Explorer it seems to me that we should be able to poll for changes if we use the sharer's drive ID to build the request.
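The difference can be sketched as two Microsoft Graph delta requests; `{sharerDriveId}` is a placeholder, and the exact request rclone builds is an assumption based on the description above:

```
# What the current implementation appears to do: delta against the
# mounting user's own drive
GET https://graph.microsoft.com/v1.0/me/drive/root/delta

# What this issue proposes for a "Shared with me" folder: delta against
# the sharer's drive
GET https://graph.microsoft.com/v1.0/drives/{sharerDriveId}/root/delta
```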

    What is your rclone version (output from rclone version)

    v1.62.0-beta.6672.98fa93f6d

    Which cloud storage system are you using? (e.g. Google Drive)

    OneDrive Personal

    The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

    rclone mount <OneDrive Personal>:<folder> <mount point> where <folder> is the shared folder.

  • storj: implement public link

    What is the purpose of this change?

    Implement the generation of a public link for the Storj backend.

    Was the change discussed in an issue or in the forum before?

    No.

    Checklist

    • [x] I have read the contribution guidelines.
    • [x] I have added tests for all changes in this PR if appropriate.
    • [x] I have added documentation for the changes if appropriate.
    • [x] All commit messages are in house style.
    • [x] I'm done, this Pull Request is ready for review :-)
  • lib/oauthutil: don't retry for probably fatal errors

    According to the OAuth2 RFC (https://www.rfc-editor.org/rfc/rfc6749#section-5.2), the server should return a 400 (or maybe sometimes 401) for a list of unrecoverable errors, such as expired refresh tokens. Rclone shouldn't still make 5 attempts to get a token in those cases. @ncw do you see any potential problem with this?

    Checklist

    • [x] I have read the contribution guidelines.
    • [ ] ~~I have added tests for all changes in this PR if appropriate.~~
    • [ ] ~~I have added documentation for the changes if appropriate.~~
    • [x] All commit messages are in house style.
    • [x] I'm done, this Pull Request is ready for review :-)
  • s3: add GCS to provider list

    What is the purpose of this change?

    Add GCS as a provider to work around a specific issue with that storage.

    Was the change discussed in an issue or in the forum before?

    https://github.com/rclone/rclone/issues/6670

    Checklist

    • [ ] I have read the contribution guidelines.
    • [ ] I have added tests for all changes in this PR if appropriate.
    • [ ] I have added documentation for the changes if appropriate.
    • [ ] All commit messages are in house style.
    • [ ] I'm done, this Pull Request is ready for review :-)
  • rclone rcd --addr unknown flag

    Output of rclone version: rclone 1.61.1-termux

    • os/version: unknown
    • os/kernel: 4.19.157-perf+ (aarch64)
    • os/type: android
    • os/arch: arm64
    • go/version: go1.19.4
    • go/linking: dynamic
    • go/tags: noselfupdate

    $ rclone rcd --addr :5527
    Error: unknown flag: --addr
    Usage:
      rclone rcd * [flags]

    Flags: -h, --help help for rcd

    Use "rclone [command] --help" for more information about a command. Use "rclone help flags" for to see the global flags. Use "rclone help backends" for a list of supported services.

    2023/01/08 06:06:33 Fatal error: unknown flag: --addr

  • HTTP: support NetGear shares

    The associated forum post URL from https://forum.rclone.org

    N/A (this is a half-bug half-feature-request)

    What is your current rclone version (output from rclone version)?

    rclone v1.53.3-DEV

    • os/arch: linux/386
    • go version: go1.16.15

    What problem are you are trying to solve?

    rclone currently does not support NetGear router shares at http://192.168.1.1/shares, and it absolutely is bothering me big time.

    How do you think rclone should be changed to solve that?

    Please add support for NetGear open-directory style listings. The biggest issue we will face is that NetGear does not distinguish between folders and files, so we are probably forced to fetch the headers of all entries when we open a share.

    [REDACTED]’s-iPhone-13-Noob:~# rclone ls --http-url http://192.168.1.1/shares :http: -vv
    2023/01/08 02:47:51 DEBUG : rclone: Version "v1.53.3-DEV" starting with parameters ["rclone" "ls" "--http-url" "http://192.168.1.1/shares" ":http:" "-vv"]
    2023/01/08 02:47:51 DEBUG : Creating backend with remote ":http:"
    2023/01/08 02:47:51 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
    2023/01/08 02:47:51 DEBUG : T_Drive: skipping because of error: failed to stat: HTTP Error 400: 400 Bad Request
    2023/01/08 02:47:51 DEBUG : 2 go routines active

SFTPGo - Fully featured and highly configurable SFTP server with optional FTP/S and WebDAV support - S3, Google Cloud Storage, Azure Blob

Jan 4, 2023
QingStor Object Storage service support for go-storage

go-services-qingstor QingStor Object Storage service support for go-storage. Install go get github.com/minhjh/go-service-qingstor/v3 Usage import ( "

Dec 13, 2021
Storj is building a decentralized cloud storage network

Ongoing Storj v3 development. Decentralized cloud object storage that is affordable, easy to use, private, and secure.

Jan 8, 2023
Rook is an open source cloud-native storage orchestrator for Kubernetes

Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse set of storage solutions to natively integrate with cloud-native environments.

Oct 25, 2022
s3git: git for Cloud Storage. Distributed Version Control for Data.

s3git: git for Cloud Storage. Distributed Version Control for Data. Create decentralized and versioned repos that scale infinitely to 100s of millions of files. Clone huge PB-scale repos on your local SSD to make changes, commit and push back. Oh yeah, it dedupes too and offers directory versioning.

Dec 27, 2022
Cloud-Native distributed storage built on and for Kubernetes

Longhorn Build Status Engine: Manager: Instance Manager: Share Manager: Backing Image Manager: UI: Test: Release Status Release Version Type 1.1 1.1.2

Jan 1, 2023
An encrypted object storage system with unlimited space backed by Telegram.

TGStore An encrypted object storage system with unlimited space backed by Telegram. Please only upload what you really need to upload, don't abuse any

Nov 28, 2022
tstorage is a lightweight local on-disk storage engine for time-series data

tstorage is a lightweight local on-disk storage engine for time-series data with a straightforward API. Especially ingestion is massively opt

Jan 1, 2023
storage interface for local disk or AWS S3 (or Minio) platform

Apr 19, 2022
Terraform provider for the Minio object storage.

terraform-provider-minio A Terraform provider for Minio, a self-hosted object storage server that is compatible with S3. Check out the documentation on

Dec 1, 2022
A Redis-compatible server with PostgreSQL storage backend

postgredis A wild idea of having Redis-compatible server with PostgreSQL backend. Getting started As a binary: ./postgredis -addr=:6380 -db=postgres:/

Nov 8, 2021
CSI for S3 compatible SberCloud Object Storage Service

sbercloud-csi-obs CSI for S3 compatible SberCloud Object Storage Service This is a Container Storage Interface (CSI) for S3 (or S3 compatible) storage

Feb 17, 2022
Void is a zero storage cost large file sharing system.

void void is a zero storage cost large file sharing system. License Copyright © 2021 Changkun Ou. All rights reserved. Unauthorized using, copying, mo

Nov 22, 2021
This is a simple file storage server. User can upload file, delete file and list file on the server.

Simple File Storage Server This is a simple file storage server. User can upload file, delete file and list file on the server. If you want to build a

Jan 19, 2022
High Performance, Kubernetes Native Object Storage

MinIO Quickstart Guide MinIO is a High Performance Object Storage released under GNU Affero General Public License v3.0. It is API compatible with Ama

Jan 2, 2023
Perkeep (née Camlistore) is your personal storage system for life: a way of storing, syncing, sharing, modelling and backing up content.

Perkeep is your personal storage system. It's a way to store, sync, share, import, model, and back up content. Keep your stuff for life. For more, see

Dec 26, 2022
Storage Orchestration for Kubernetes

What is Rook? Rook is an open source cloud-native storage orchestrator for Kubernetes, providing the platform, framework, and support for a diverse se

Dec 29, 2022
A High Performance Object Storage released under Apache License

MinIO Quickstart Guide MinIO is a High Performance Object Storage released under Apache License v2.0. It is API compatible with Amazon S3 cloud storag

Sep 30, 2021
GoDrive: A cloud storage system similar to Dropbox or Google Drive, with resilient

Cloud Storage Service Author: Marisa Tania, Ryan Tjakrakartadinata Professor: Matthew Malensek See project spec here: https://www.cs.usfca.edu/~mmalen

Dec 7, 2021
Rclone ("rsync for cloud storage") is a command line program to sync files and directories to and from different cloud storage providers.

Jan 5, 2023