rclone logo

Website | Documentation | Download | Contributing | Changelog | Installation | Forum


Rclone

Rclone ("rsync for cloud storage") is a command line program to sync files and directories to and from different cloud storage providers.

Storage providers

Please see the full list of all storage providers and their features.

Features

  • MD5/SHA-1 hashes checked at all times for file integrity
  • Timestamps preserved on files
  • Partial syncs supported on a whole file basis
  • Copy mode to just copy new/changed files
  • Sync (one way) mode to make a directory identical
  • Check mode to check for file hash equality
  • Can sync to and from network, e.g. two different cloud accounts
  • Optional large file chunking (Chunker)
  • Optional transparent compression (Compress)
  • Optional encryption (Crypt)
  • Optional cache (Cache)
  • Optional FUSE mount (rclone mount)
  • Multi-threaded downloads to local disk
  • Can serve local or remote files over HTTP/WebDAV/FTP/SFTP/DLNA

Installation & documentation

Please see the rclone website for installation instructions and full documentation.

Downloads

License

This is free software under the terms of the MIT license (see the COPYING file included in this package).

Comments
  • Any plan to add support for Google Photos?

    If possible, please add support for uploading photo/video files to Google Photos directly!

    It's possible to add a "Google Photos" folder in Google Drive, and all your Google Photos will be there (organized in date folders). However, photos uploaded into this folder do not seem to show up in Google Photos.

    Also, if we upload in "High Quality" then there is unlimited storage for photos and videos. I am not sure whether the "down-sizing" is done locally or remotely by the Google Photos server, however...

    I realize Google Photos is not a good place to organize photos, but it's a good place to share photos with others. And with a stock of 300k+ photos I really don't want to have my PC running for God-knows-how-long for the upload... that's a job for the RPi!

  • Google Drive (encrypted): "failed to authenticate decrypted block - bad password?" on files during reading

    What is your rclone version (eg output from rclone -V)

    rclone 1.35

    Which OS you are using and how many bits (eg Windows 7, 64 bit)

    Devuan Linux 1.0 (systemd-free fork of Debian Jessie).

    Which cloud storage system are you using? (eg Google Drive)

    Google Drive, with the built-in rclone encryption.

    The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone -v --dump-headers --log-file=LOGFILE copy egd:REDACTED/REDACTED/REDACTED/REDACTED.mp4 /tmp/REDACTED.mp4

    A log from the command with the -v flag (eg output from rclone -v copy /tmp remote:tmp)

    Please find it attached: LOGFILE.txt

    Note 1: This is related to #677, which is closed and I cannot reopen.
    Note 2: These errors are 100% reproducible.

    Cheers, Durval.

  • rclone still using too much memory

    https://github.com/ncw/rclone/issues/2157

    Referencing the above ticket.
    My version: rclone v1.40-034-g06a8d301β

    • os/arch: linux/amd64
    • go version: go1.10

    Still seeing the above issue, but it happens less frequently. My setup is exactly the same as in that ticket, but I've now upgraded the version. What can I provide to help troubleshoot whether it is the same issue or a different one?

    I've just increased --attr-timeout to 5s to try it. I'll see if that helps, as a shot in the dark.

  • Are we safe? Amazon Cloud Drive

    I mean, could the same scenario happen where Amazon disables the rclone app? Or does rclone handle it differently than acd_cli did?

    The acd_cli situation is weird:

    https://github.com/yadayada/acd_cli/pull/562 - "I created this pull request only to ask what happend to acd_cli's issues page?! It just vanished! "

  • CRITICAL: Amazon Drive does not work anymore with rclone: "429 Too Many Requests" / Rate exceeded

    It seems Amazon Drive blocked rclone. I tested it on 4 different servers and tried reauthorizing the app, but no success.

    Any rclone command will deliver the following errors:

    2017/05/18 11:19:14 DEBUG : pacer: Rate limited, sleeping for 666.145821ms (1 consecutive low level retries)
    2017/05/18 11:19:14 DEBUG : pacer: low level retry 1/10 (error HTTP code 429: "429 Too Many Requests": response body: "{\"message\":\"Rate exceeded\"}")
    
  • Can't connect to SharePoint Online team sites such as https://orgname.sharepoint.com/sites/Site-Name

    I've been able to successfully connect to the default https://orgname-my.sharepoint.com/ personal SharePoint site...

    $ rclone lsd sp3:
    -1 2017-01-04 22:16:34         0 Attachments
    -1 2015-01-23 11:13:10         0 Shared with Everyone
    

    But I'm having difficulty figuring out how to connect to team sites on URLs such as https://orgname.sharepoint.com/sites/Site-Name etc.

    The "rclone config" guided process doesn't let you set the resource_url when setting it up. So I've tried editing ~/.config/rclone.conf using a few different methods, changing the resource_url and then reauthorizing. I've tried a number of different addresses like...

    For the main/default team site:

    https://orgname.sharepoint.com/ 
    https://orgname.sharepoint.com/Shared Documents
    

    For separate team sites, or what Microsoft calls "site collections":

    https://orgname.sharepoint.com/sites/Site-Name
    https://orgname.sharepoint.com/sites/Site-Name/
    https://orgname.sharepoint.com/sites/Site-Name/Shared Documents
    https://orgname.sharepoint.com/sites/Site-Name/Shared Documents/
    https://orgname.sharepoint.com/sites/Site-Name/Shared%20Documents
    https://orgname.sharepoint.com/sites/Site-Name/Shared%20Documents/
    

    I'm not sure which address format I'm meant to use (for either the main team site, or all the other ones under /sites/).

    I always get the error:

    $ rclone -vv lsd sp3:
    2017/10/25 03:17:18 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
    2017/10/25 03:17:18 DEBUG : rclone: Version "v1.38" starting with parameters ["rclone" "-vv" "lsd" "sp3:"]
    2017/10/25 03:17:19 Failed to create file system for "sp3:": failed to get root: 401 Unauthorized: 
    

    (there's nothing after that last colon)

    Does anyone know how I access team SharePoint sites?

    My rclone version is:

    rclone v1.38
    - os/arch: linux/amd64
    - go version: go1.9
    

    ...on Manjaro 64bit, installed from the distro's repos.

    I'm choosing the "business" option when asked in rclone config.

  • Two-way (bidirectional) synchronization

    I'm sorry if this is answered elsewhere, but I couldn't find it.

    I want to replace my current Owncloud + Owncloud client (Linux) + FolderSync (Android) setup with Drive + Rclone + FolderSync. But there is one thing I can't figure out how to do with rclone: smart two-way deletion synchronization. Which means: if a file was present on both the server (Drive) and the local machine, and then was deleted on either of them, the file will eventually be removed on both, regardless of which direction you run sync first. Likewise, if a file was added on either the server or the client, it will be uploaded to the other one.

    Can rclone do that, and if it doesn't, is there a chance of such functionality in the future?
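
    For context, the reason a plain pair of one-way syncs can't do this: without a record of a previous state, the tool cannot distinguish "deleted here" from "newly created there". A small illustrative Go simulation of that limitation (hypothetical, not rclone code):

```go
package main

import "fmt"

// naiveSync copies every file present in src but missing in dst.
// With no record of a prior state it cannot propagate deletions:
// a file deleted on one side just looks "new" on the other.
func naiveSync(src, dst map[string]bool) {
	for f := range src {
		dst[f] = true
	}
}

func main() {
	local := map[string]bool{"a.txt": true, "b.txt": true}
	drive := map[string]bool{"a.txt": true, "b.txt": true}

	delete(local, "b.txt") // user deletes b.txt locally

	naiveSync(drive, local) // sync down: b.txt is resurrected
	naiveSync(local, drive) // sync up

	fmt.Println(local["b.txt"], drive["b.txt"]) // true true
}
```

    True two-way sync therefore needs a stored snapshot of the last synced state (or tombstones) to tell the two cases apart.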

    How to use GitHub

    • Please use the 👍 reaction to show that you are affected by the same issue.
    • Please don't comment if you have no relevant information to add. It's just extra noise for everyone subscribed to this issue.
    • Subscribe to receive notifications on status change and new comments.
  • Manage folders

    Some of rclone's remote filesystems understand the concept of folders, e.g.

    • drive
    • local
    • dropbox

    Add optional interfaces (e.g. Mkdir, Rmdir) for these filesystems to manage the creation and deletion of folders. This would enable empty folders, and deletion of empty folders on sync.
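
    In Go, optional capabilities like this are commonly modelled as separate interfaces discovered via type assertion. A minimal sketch of the idea (names here are illustrative, not rclone's actual internal API):

```go
package main

import "fmt"

// Fs is a minimal stand-in for a remote filesystem backend.
type Fs interface {
	Name() string
}

// DirManager is an optional interface a backend may implement
// if it understands folders. (Illustrative, not rclone's API.)
type DirManager interface {
	Mkdir(path string) error
	Rmdir(path string) error
}

// driveFs pretends to be a backend that supports folders.
type driveFs struct{ dirs map[string]bool }

func (f *driveFs) Name() string { return "drive" }
func (f *driveFs) Mkdir(path string) error {
	f.dirs[path] = true
	return nil
}
func (f *driveFs) Rmdir(path string) error {
	if !f.dirs[path] {
		return fmt.Errorf("directory not found: %s", path)
	}
	delete(f.dirs, path)
	return nil
}

// blobFs pretends to be a flat object store with no real folders.
type blobFs struct{}

func (f *blobFs) Name() string { return "blob" }

// ensureDir creates a folder only on backends that support it.
func ensureDir(fs Fs, path string) error {
	if dm, ok := fs.(DirManager); ok { // optional-interface discovery
		return dm.Mkdir(path)
	}
	return nil // flat backends: nothing to do
}

func main() {
	d := &driveFs{dirs: map[string]bool{}}
	fmt.Println(ensureDir(d, "empty-folder")) // <nil>
	fmt.Println(ensureDir(&blobFs{}, "x"))    // <nil>
}
```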

  • On-the-fly encryption support

    I've seen a comment in the thread about ACD support regarding plans for an encryption mechanism in rclone. Could you please elaborate on that? When could this possibly become available?

  • Support for OpenDrive storage

    I was just looking at OpenDrive as a potential storage provider. They offer pretty competitive prices already, but they also claim to do competitor price matching, so they may be a viable alternative to ACD's unlimited storage.

    Their API documentation is linked here: https://www.opendrive.com/api

    They also claim to have (beta so far) support for WebDAV, so WebDAV support (#580) may avoid the need for native support.

  • [GDrive + FUSE] 403 Forbidden Errors - API daily limit exceeded

    As discussed in the forum (https://forum.rclone.org/t/google-drive-vs-acd-for-plex/471), users are getting 403 Forbidden errors and are unable to access files when using the rclone FUSE mount. This happens especially when using Plex to access the mount. It appears to be related to exceeding the daily API quota: https://developers.google.com/drive/v3/web/handle-errors . Users get temporarily banned from accessing files via the rclone FUSE mount or from downloading files. Access to the Google Drive website still seems to work, and uploads still work without issue. It seems the only viable solution is to have a local cache, as mentioned in #897.

    What is your rclone version (eg output from rclone -V)

    v1.34-75-gcbfec0dβ

    Which OS you are using and how many bits (eg Windows 7, 64 bit)

    Linux Ubuntu

    Which cloud storage system are you using? (eg Google Drive)

    Google Drive

    The command you were trying to run (eg rclone copy /tmp remote:tmp)

    rclone copy --verbose --no-traverse gdrive:test/jellyfish-40-mbps-hd-h264.mkv ~/tmp

    A log from the command with the -v flag (eg output from rclone -v copy /tmp remote:tmp)

    2016/12/26 05:43:12 Local file system at /home/xxxxxx/tmp: Modify window is 1ms
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 1/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 2/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for checks to finish
    2016/12/26 05:43:13 Local file system at /home/xxxxxx/tmp: Waiting for transfers to finish
    2016/12/26 05:43:13 jellyfish-40-mbps-hd-h264.mkv: Failed to copy: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Attempt 3/3 failed with 1 errors and: failed to open source object: bad response: 403: 403 Forbidden
    2016/12/26 05:43:13 Failed to copy: failed to open source object: bad response: 403: 403 Forbidden 
    
  • config/create-API interprets backslash-paths as escaping

    What is the problem you are having with rclone?

    https://rclone.org/local/#paths-on-windows says that the path separator can be \ or / on Windows, which is not working at config/create. Reproducible on Windows with:

    1. rclone --rc-no-auth rcd
    2. POST http://localhost:5572/config/create?name=Normal&type=local&parameters={}
    3. POST http://localhost:5572/config/create?name=Crypted&type=crypt&parameters={"remote":"Normal:C:\path1\path2"}
    {
    	"error": "key \"parameters\": invalid character 'p' in string escape code",
    	"input": {
    		"name": "Crypted",
    		"parameters": "{\"remote\":\"Normal:C:\\path1\\path2\"}",
    		"type": "crypt"
    	},
    	"path": "config/create",
    	"status": 400
    }
    

    It is working if the path is escaped or / is used instead:

    • POST http://localhost:5572/config/create?name=Crypted&type=crypt&parameters={"remote":"Normal:C:\\path1\\path2"}
    • POST http://localhost:5572/config/create?name=Crypted&type=crypt&parameters={"remote":"Normal:C:/path1/path2"}

    mount/mount supports the unescaped form:

    • POST http://localhost:5572/mount/mount?mountPoint=X:&fs=Normal:C:\path1\path2

    What is your rclone version (output from rclone version)

    rclone v1.61.1

    Which OS you are using and how many bits (e.g. Windows 7, 64 bit)

    Windows 11, 64 bit

    The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

    POST http://localhost:5572/config/create?name=Crypted&type=crypt&parameters={"remote":"Normal:C:\path1\path2"}

    A log from the command with the -vv flag (e.g. output from rclone -vv copy /tmp remote:tmp)

    rclone -vv --rc-no-auth rcd
    2023/01/06 17:10:10 DEBUG : rc: "config/create": with parameters map[name:Crypted parameters:{"remote":"Normal:C:\path1\path2"} type:crypt]
    2023/01/06 17:10:10 ERROR : rc: "config/create": error: key "parameters": invalid character 'p' in string escape code
    

  • SMB remote returns different file list than e.g. `smbclient`

    What is the problem you are having with rclone?

    I am accessing an SMB file share on a Windows server, and rclone gives me a different file list than when I list the files using, for instance, Linux smbclient.

    I found that the issue is in the go-smb2 library that rclone uses for SMB functionality, and it relates to the following issue:

    https://github.com/hirochachacha/go-smb2/issues/40

    The author of the go-smb2 library acknowledges the problem but hesitates to fix it, because it implies a breaking change in the library's public API. (The library does not have access to the original hostname.)

    I created a patch, based on suggestions and attempts from the issue above, that fixes the problem. I think somebody from the rclone community might find it useful, or eventually it could be integrated into rclone itself somehow.

    Please excuse the patch being quite "raw"; I'm not a Go coder, I just wanted to make it work. You can surely find a better way to integrate this change :)

    File go-smb2.patch
    diff --git a/client.go b/client.go
    index e0f7b36..f0ad2e3 100644
    --- a/client.go
    +++ b/client.go
    @@ -30,8 +30,8 @@ type Dialer struct {
     // It returns a session. It doesn't support NetBIOS transport.
     // This implementation doesn't support multi-session on the same TCP connection.
     // If you want to use another session, you need to prepare another TCP connection at first.
    -func (d *Dialer) Dial(tcpConn net.Conn) (*Session, error) {
    -	return d.DialContext(context.Background(), tcpConn)
    +func (d *Dialer) Dial(tcpConn net.Conn, host string) (*Session, error) {
    +	return d.DialContext(context.Background(), tcpConn, host)
     }
     
     // DialContext performs negotiation and authentication using the provided context.
    @@ -39,7 +39,7 @@ func (d *Dialer) Dial(tcpConn net.Conn) (*Session, error) {
     // If you want to use the same context, call Session.WithContext manually.
     // This implementation doesn't support multi-session on the same TCP connection.
     // If you want to use another session, you need to prepare another TCP connection at first.
    -func (d *Dialer) DialContext(ctx context.Context, tcpConn net.Conn) (*Session, error) {
    +func (d *Dialer) DialContext(ctx context.Context, tcpConn net.Conn, host string) (*Session, error) {
     	if ctx == nil {
     		panic("nil context")
     	}
    @@ -69,13 +69,14 @@ func (d *Dialer) DialContext(ctx context.Context, tcpConn net.Conn) (*Session, e
     		return nil, err
     	}
     
    -	return &Session{s: s, ctx: context.Background(), addr: tcpConn.RemoteAddr().String()}, nil
    +	return &Session{s: s, ctx: context.Background(), host: host, addr: tcpConn.RemoteAddr().String()}, nil
     }
     
     // Session represents a SMB session.
     type Session struct {
     	s    *session
     	ctx  context.Context
    +	host string
     	addr string
     }
     
    @@ -83,7 +84,7 @@ func (c *Session) WithContext(ctx context.Context) *Session {
     	if ctx == nil {
     		panic("nil context")
     	}
    -	return &Session{s: c.s, ctx: ctx, addr: c.addr}
    +	return &Session{s: c.s, ctx: ctx, host: c.host, addr: c.addr}
     }
     
     // Logoff invalidates the current SMB session.
    @@ -99,7 +100,11 @@ func (c *Session) Mount(sharename string) (*Share, error) {
     	sharename = normPath(sharename)
     
     	if !strings.ContainsRune(sharename, '\\') {
    -		sharename = fmt.Sprintf(`\\%s\%s`, c.addr, sharename)
    +		if c.host != "" {
    +			sharename = fmt.Sprintf(`\\%s\%s`, c.host, sharename)
    +		} else {
    +			sharename = fmt.Sprintf(`\\%s\%s`, c.addr, sharename)
    +		}
     	}
     
     	if err := validateMountPath(sharename); err != nil {
    @@ -116,6 +121,9 @@ func (c *Session) Mount(sharename string) (*Share, error) {
     
     func (c *Session) ListSharenames() ([]string, error) {
     	servername := c.addr
    +	if c.host != "" {
    +		servername = c.host
    +	}
     
     	fs, err := c.Mount(fmt.Sprintf(`\\%s\IPC$`, servername))
     	if err != nil {
    diff --git a/session.go b/session.go
    index 0292f9e..5feade5 100644
    --- a/session.go
    +++ b/session.go
    @@ -265,6 +265,8 @@ type session struct {
     	decrypter cipher.AEAD
     
     	// applicationKey []byte
    +
    +	serverName string
     }
     
     func (s *session) logoff(ctx context.Context) error {
    
    File rclone.patch
    diff --git a/backend/smb/connpool.go b/backend/smb/connpool.go
    index d4bed787c..f4b45a984 100644
    --- a/backend/smb/connpool.go
    +++ b/backend/smb/connpool.go
    @@ -40,7 +40,7 @@ func (f *Fs) dial(ctx context.Context, network, addr string) (*conn, error) {
     		},
     	}
     
    -	session, err := d.DialContext(ctx, tconn)
    +	session, err := d.DialContext(ctx, tconn, addr)
     	if err != nil {
     		return nil, err
     	}
    
    File Dockerfile
    FROM golang
    
    COPY *.patch /usr/src/
    RUN set -o errexit \
      && mkdir -p /go/src/github.com/rclone \
      && git clone -b v1.61.1 https://github.com/rclone/rclone.git /go/src/github.com/rclone/rclone
    WORKDIR /go/src/github.com/rclone/rclone/
    
    RUN set -o errexit \
      && git apply < /usr/src/rclone.patch \
      && go get \
      && ( set -o errexit; \
           cd /go/pkg/mod/github.com/hirochachacha/[email protected]; \
           git apply < /usr/src/go-smb2.patch \
         ) \
      && CGO_ENABLED=0 \
      make
    RUN ./rclone version
    

    What is your rclone version (output from rclone version)

    1.61.1

    Which OS you are using and how many bits (e.g. Windows 7, 64 bit)

    Ubuntu 20.04, 64bit

    Which cloud storage system are you using? (e.g. Google Drive)

    smb

  • Rclone ignores `--ca-cert <path_to_cert>` option (`ftp` remote or `serve sftp`)


    What is the problem you are having with rclone?

    Rclone seems to ignore my --ca-cert <path_to_cert> option when using the ftp remote, either directly or via serve sftp. But as soon as I add the given CA certificate to /usr/local/share/ca-certificates and run update-ca-certificates, everything works OK.

    My main use case is that I want to expose an FTPS server as an SFTP server via rclone. I'm using --auth-proxy. The ideal solution for me would be if I could specify --ca-cert dynamically from the auth proxy. Would that be possible?

    What is your rclone version (output from rclone version)

    1.61.1

    Which OS you are using and how many bits (e.g. Windows 7, 64 bit)

    Ubuntu 20.04, 64bit

    Which cloud storage system are you using? (e.g. Google Drive)

    ftps

    The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

    rclone -vv --ca-cert /conf/ssl-cert-san.pem ls ftps:
    rclone -vv ls ftps: --ca-cert /conf/ssl-cert-san.pem
    rclone -vv --ca-cert /conf/ssl-cert-san.pem serve sftp ftps:
    

    config

    [ftps]
    type = ftp
    host = XX.XX.XXX.XX
    user = ftpuser
    port = 3021
    pass = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
    tls = true
    disable_tls13 = true
    

    A log from the command with the -vv flag (e.g. output from rclone -vv copy /tmp remote:tmp)

    rclone -vv --ca-cert /conf/ssl-cert-san.pem ls ftps:
    2023/01/06 14:59:02 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "-vv" "--ca-cert" "/conf/ssl-cert-san.pem" "ls" "ftps:"]
    2023/01/06 14:59:02 DEBUG : Creating backend with remote "ftps:"
    2023/01/06 14:59:02 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
    2023/01/06 14:59:02 DEBUG : ftps://XX.XX.XXX.XX:3021: Connecting to FTP server
    2023/01/06 14:59:02 DEBUG : ftps://XX.XX.XXX.XX:3021: dial("tcp","XX.XX.XXX.XX:3021")
    2023/01/06 14:59:02 DEBUG : ftps://XX.XX.XXX.XX:3021: > dial: conn=*tls.Conn, err=<nil>
    2023/01/06 14:59:02 Failed to create file system for "ftps:": NewFs: failed to make FTP connection to "XX.XX.XXX.XX:3021": x509: certificate signed by unknown authority
    
    

  • SignatureDoesNotMatch: release 1.60.0 broke the copy using S3 backend with Other (GCS) as provider

    The associated forum post URL from https://forum.rclone.org

    N/A

    What is the problem you are having with rclone?

    The copy of a file using S3 backend with Other provider (Google Cloud Storage) started to fail with the 1.60.0 release.

    What is your rclone version (output from rclone version)

    rclone v1.60.0
    - os/version: darwin 12.5.1 (64 bit)
    - os/kernel: 21.6.0 (x86_64)
    - os/type: darwin
    - os/arch: amd64
    - go/version: go1.19.2
    - go/linking: dynamic
    - go/tags: cmount
    

    Which OS you are using and how many bits (e.g. Windows 7, 64 bit)

    macOS 12.5.1, x86_64

    Which cloud storage system are you using? (e.g. Google Drive)

    S3 with Other as provider.

    The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

    ./rclone --config gcs_over_s3.conf copy source:/bucket/file .
    

    With a config like:

    [source]
    type = s3
    access_key_id = XXX
    provider = Other
    secret_access_key = XXX
    endpoint = https://storage.googleapis.com
    

    A log from the command with the -vv flag (e.g. output from rclone -vv copy /tmp remote:tmp)

    Failed to copy: failed to open source object: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your Google secret key and signing method.
    

    Looking at the result of --dump headers:

    • HEAD request still works
    • GET request fails

    When I compare the output between 1.60.0 and 1.59.2 for the GET request, the only difference I can see (apart from dates, etc.) is the order of the request headers.

    FAIL

    Host: storage.googleapis.com
    User-Agent: rclone/v1.60.0
    Accept-Encoding: gzip
    Authorization: XXXX
    X-Amz-Content-Sha256: XXXX
    X-Amz-Date: 20230106T081010Z
    

    WORK

    Host: storage.googleapis.com
    User-Agent: rclone/v1.59.2
    Authorization: XXXX
    X-Amz-Content-Sha256: XXXXXX
    X-Amz-Date: 20230106T080942Z
    Accept-Encoding: gzip
    

    The X-Amz-Content-Sha256 is the same value for both requests.

    The headers of the HEAD request (which works) in v1.60.0:

    Host: storage.googleapis.com
    User-Agent: rclone/v1.60.0
    Authorization: XXXX
    X-Amz-Content-Sha256: XXXXXX
    X-Amz-Date: 20230106T081010Z
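
    For context: under AWS Signature V4 the wire order of headers should not matter by itself, because the canonical request is built from headers sorted by lowercased name; what matters is which headers are in the signed set. A simplified sketch of that canonicalization step (based on the SigV4 spec, not rclone's code; real SigV4 also trims and folds whitespace in values):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// canonicalHeaders builds the SigV4 canonical header block and the
// SignedHeaders list from a header map, sorted by lowercased name.
func canonicalHeaders(h map[string]string) (string, string) {
	lower := map[string]string{}
	names := make([]string, 0, len(h))
	for k, v := range h {
		n := strings.ToLower(k)
		lower[n] = v
		names = append(names, n)
	}
	sort.Strings(names) // wire order is irrelevant; sorted order is signed

	var canon strings.Builder
	for _, n := range names {
		canon.WriteString(n + ":" + lower[n] + "\n")
	}
	return canon.String(), strings.Join(names, ";")
}

func main() {
	headers := map[string]string{
		"Host":                 "storage.googleapis.com",
		"X-Amz-Date":           "20230106T081010Z",
		"X-Amz-Content-Sha256": "XXXX",
		"Accept-Encoding":      "gzip",
	}
	_, signed := canonicalHeaders(headers)
	fmt.Println(signed)
	// accept-encoding;host;x-amz-content-sha256;x-amz-date
}
```

    So if v1.60.0 started including Accept-Encoding in the signed set while something downstream rewrites that header, the signature would break even though each individual value looks unchanged.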
    

  • Mounted OneDrive - Invalid Parent Path '/Livefolders'

    The associated forum post URL from https://forum.rclone.org

    https://forum.rclone.org/t/mounted-onedrive-invalid-parent-path/35106

    What is the problem you are having with rclone?

    If a OneDrive folder is mounted to the system via rclone, file operations on the folder can error, seemingly related to the polling function that checks OneDrive for changes. The full context and the surrounding setup are available in the linked forum thread; however, the issue boils down to this error message: 07:51:25.493716 ERROR : OneDrive root '': Could not get item full path: invalid parent path: /Livefolders

    Commands used to reproduce the error were the following:

    root@danserver:~# cd /mnt/odr-bkp/
    root@danserver:/mnt/odr-bkp# mkdir test
    root@danserver:/mnt/odr-bkp# cd test
    root@danserver:/mnt/odr-bkp/test# echo "test" > testfile.txt
    root@danserver:/mnt/odr-bkp/test# cat testfile.txt
    test
    root@danserver:/mnt/odr-bkp/test# cd ..
    root@danserver:/mnt/odr-bkp# rm -rf test
    

    See full debug log below.

    @olefrost kindly identified that the error seems to occur because of an unexpected filepath returned by OneDrive. While in his testing environment the root filepath is "/drive/root:", it is "/Livefolders" for me. When getItemFullPath() tries to split the path at the ":", the error is thrown.

    This error logically does not occur when polling is disabled via --poll-interval=0.
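
    The failure mode can be sketched like this; itemFullPath below is a hypothetical stand-in for rclone's getItemFullPath(), not the real code:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// itemFullPath mimics splitting a OneDrive parent reference of the form
// "/drive/root:/some/dir" at the ":" to recover the path portion.
// Hypothetical stand-in for rclone's getItemFullPath().
func itemFullPath(parent, name string) (string, error) {
	_, path, found := strings.Cut(parent, ":")
	if !found {
		// Roots like "/Livefolders" carry no ":" and trip this branch.
		return "", errors.New("invalid parent path: " + parent)
	}
	return path + "/" + name, nil
}

func main() {
	p, err := itemFullPath("/drive/root:/Documents", "file.txt")
	fmt.Println(p, err) // /Documents/file.txt <nil>

	_, err = itemFullPath("/Livefolders", "file.txt")
	fmt.Println(err) // invalid parent path: /Livefolders
}
```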

    What is your rclone version (output from rclone version)

    rclone v1.61.1

    Which OS you are using and how many bits (e.g. Windows 7, 64 bit)

    Debian 11.6 (64 bit)

    Which cloud storage system are you using? (e.g. Google Drive)

    OneDrive

    The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

    n/a, see above

    A log from the command with the -vv flag (e.g. output from rclone -vv copy /tmp remote:tmp)

    https://gist.github.com/dandln/7558115c0c69d706379787b9d2710642

  • `error reading source directory: directory not found` for directories containing specific characters

    The associated forum post URL from https://forum.rclone.org

    None

    What is the problem you are having with rclone?

    When I was trying to transfer files from a WebDAV remote to a Drive remote, I noticed rclone creating duplicate directories on Google Drive over and over again.

    Digging in to find the cause, I noticed the ‛ character (U+201B, SINGLE HIGH-REVERSED-9 QUOTATION MARK) in the directory name. And indeed, it breaks rclone's escaping mechanisms.

    Here is the minimal code that reproduces this situation:

    mkdir -p testname/おはよう‛ございます/
    echo hello > testname/おはよう‛ございます/test.txt
    rclone copy ./testname/ testremote:testname/ -P
    
    $ rclone copy ./testname/ testremote:testname/ -P
    2023-01-04 06:55:58 ERROR : おはよう‛‛ございます: error reading source directory: directory not found
    2023-01-04 06:55:58 ERROR : Attempt 1/3 failed with 1 errors and: directory not found
    2023-01-04 06:55:58 ERROR : おはよう‛‛ございます: error reading source directory: directory not found
    2023-01-04 06:55:58 ERROR : Attempt 2/3 failed with 1 errors and: directory not found
    2023-01-04 06:55:59 ERROR : おはよう‛‛ございます: error reading source directory: directory not found
    2023-01-04 06:55:59 ERROR : Attempt 3/3 failed with 1 errors and: directory not found
    Transferred:              0 B / 0 B, -, 0 B/s, ETA -
    Errors:                 1 (retrying may help)
    Elapsed time:         1.9s
    2023/01/04 06:55:59 Failed to copy: directory not found
    
    $ rclone copy ./testname/ ./testname2/ -P
    2023-01-04 06:59:19 ERROR : おはよう‛‛ございます: error reading source directory: directory not found
    2023-01-04 06:59:19 ERROR : Attempt 1/3 failed with 1 errors and: directory not found
    2023-01-04 06:59:19 ERROR : おはよう‛‛ございます: error reading source directory: directory not found
    2023-01-04 06:59:19 ERROR : Attempt 2/3 failed with 1 errors and: directory not found
    2023-01-04 06:59:19 ERROR : おはよう‛‛ございます: error reading source directory: directory not found
    2023-01-04 06:59:19 ERROR : Attempt 3/3 failed with 1 errors and: directory not found
    Transferred:              0 B / 0 B, -, 0 B/s, ETA -
    Errors:                 1 (retrying may help)
    Elapsed time:         0.0s
    2023/01/04 06:59:19 Failed to copy: directory not found
    

    The directory used for testing can be copied normally using cp:

    $ cp -r ./testname/ ./testname2/
    $ ls testname2
    γŠγ―γ‚ˆγ†β€›γ”γ–γ„γΎγ™
    $ rm -rf testname2/  
    

    Here is the relevant part of the log for the original invocation (WebDAV -> Drive):

    $ rclone copy -P --bwlimit=30M webdav: drive:destdir/ --transfers 3 -v
    2023/01/04 06:47:03 INFO  : Starting bandwidth limiter at 30Mi Byte/s
    2023-01-04 06:47:07 NOTICE: [redacted]か‛‛？ [2014[redacted]: Duplicate directory found in destination - ignoring
    2023-01-04 06:47:07 NOTICE: [redacted]か‛‛？ [2014[redacted]: Duplicate directory found in destination - ignoring
    
    $ rclone lsd webdav: | grep か
              -1 2022-09-26 02:28:51        -1 [redacted]か‛？ [2014[redacted]
    

    What is your rclone version (output from rclone version)

    $ rclone version
    rclone v1.60.1
    - os/version: ubuntu 22.04 (64 bit)
    - os/kernel: 5.15.0-33-generic (x86_64)
    - os/type: linux
    - os/arch: amd64
    - go/version: go1.19.3
    - go/linking: static
    - go/tags: none
    

    Which OS you are using and how many bits (e.g. Windows 7, 64 bit)

    $ uname -a
    Linux misc1 5.15.0-33-generic #34-Ubuntu SMP Wed May 18 13:34:26 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
    

    Which cloud storage system are you using? (e.g. Google Drive)

    WebDAV -> Google Drive (original issue)
    Local -> Google Drive (1st example)
    Local -> Local (2nd example)

    The command you were trying to run (e.g. rclone copy /tmp remote:tmp)

    rclone copy ./testname/ testremote:testname/ -P (see above for full reproduction steps)

    A log from the command with the -vv flag (e.g. output from rclone -vv copy /tmp remote:tmp)

    $ rclone copy ./testname/ testremote:testname/ -vvP
    2023/01/04 07:07:29 DEBUG : rclone: Version "v1.60.1" starting with parameters ["rclone" "copy" "./testname/" "testremote:testname/" "-vvP"]
    2023/01/04 07:07:29 DEBUG : Creating backend with remote "./testname/"
    2023/01/04 07:07:29 DEBUG : Using config file from "/home/lesmi/.config/rclone/rclone.conf"
    2023/01/04 07:07:29 DEBUG : fs cache: renaming cache item "./testname/" to be canonical "/home/lesmi/tmp/testname"
    2023/01/04 07:07:29 DEBUG : Creating backend with remote "testremote:testname/"
    2023/01/04 07:07:30 DEBUG : fs cache: renaming cache item "testremote:testname/" to be canonical "testremote:testname"
    2023-01-04 07:07:31 ERROR : γŠγ―γ‚ˆγ†β€›β€›γ”γ–γ„γΎγ™: error reading source directory: directory not found
    2023-01-04 07:07:31 DEBUG : Google drive root 'testname': Waiting for checks to finish
    2023-01-04 07:07:31 DEBUG : Google drive root 'testname': Waiting for transfers to finish
    2023-01-04 07:07:31 ERROR : Attempt 1/3 failed with 1 errors and: directory not found
    2023-01-04 07:07:31 ERROR : γŠγ―γ‚ˆγ†β€›β€›γ”γ–γ„γΎγ™: error reading source directory: directory not found
    2023-01-04 07:07:31 DEBUG : Google drive root 'testname': Waiting for checks to finish
    2023-01-04 07:07:31 DEBUG : Google drive root 'testname': Waiting for transfers to finish
    2023-01-04 07:07:31 ERROR : Attempt 2/3 failed with 1 errors and: directory not found
    2023-01-04 07:07:31 ERROR : γŠγ―γ‚ˆγ†β€›β€›γ”γ–γ„γΎγ™: error reading source directory: directory not found
    2023-01-04 07:07:31 DEBUG : Google drive root 'testname': Waiting for checks to finish
    2023-01-04 07:07:31 DEBUG : Google drive root 'testname': Waiting for transfers to finish
    2023-01-04 07:07:31 ERROR : Attempt 3/3 failed with 1 errors and: directory not found
    Transferred:              0 B / 0 B, -, 0 B/s, ETA -
    Errors:                 1 (retrying may help)
    Elapsed time:         2.4s
    2023/01/04 07:07:31 INFO  : 
    Transferred:              0 B / 0 B, -, 0 B/s, ETA -
    Errors:                 1 (retrying may help)
    Elapsed time:         2.4s
    
    2023/01/04 07:07:31 DEBUG : 4 go routines active
    2023/01/04 07:07:31 Failed to copy: directory not found
    
