Copying or moving one small file to Google Drive is very slow

What is the problem you are having with rclone?

Slow startup time when copying/deleting/moving a single file. It takes a double-digit number of seconds to perform one operation on one file.

The example below uses a crypt remote on top, but the results are identical without crypt. Is there any magic to make it faster? What is rclone doing for dozens of seconds before it even starts copying?

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.0
- os/version: centos 7.9.2009 (64 bit)
- os/kernel: 4.18.0-553.6.1.el8.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.23.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --drive-chunk-size 128M --ignore-checksum move /tmp/jbu-STATS2-20250119191950-2e.tar.bz2 archive2:ftp-extracts/jbu/STATS2/20250119191950

Config

rclone --config /etc/rclone.conf config redacted
[archive2]
type = crypt
remote = archive2_team_drive:
password = XXX
password2 = XXX
server_side_across_configs = true
filename_encryption = off
directory_name_encryption = false

[archive2_team_drive]
team_drive = XXX
type = drive
scope = drive
use_trash = false
chunk_size = 128M
acknowledge_abuse = true
server_side_across_configs = true
stop_on_upload_limit = true
stop_on_download_limit = true
service_account_credentials = XXX
### Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

2025/01/20 21:08:49 DEBUG : 12 go routines active
2025/01/20 21:08:50 DEBUG : rclone: Version "v1.69.0" starting with parameters ["rclone" "--config" "/etc/rclone.conf" "--drive-chunk-size" "128M" "--ignore-checksum" "--log-level" "DEBUG" "--log-file" "/tmp/extracts.rclone.debug.log" "move" "/tmp/jbu-STATS2-20250119191950-2e.tar.bz2" "archive2:ftp-extracts/jbu/STATS2/20250119191950"]
2025/01/20 21:08:50 DEBUG : Creating backend with remote "/tmp/jbu-STATS2-20250119191950-2e.tar.bz2"
2025/01/20 21:08:50 DEBUG : Using config file from "/etc/rclone.conf"
2025/01/20 21:08:50 DEBUG : fs cache: renaming child cache item "/tmp/jbu-STATS2-20250119191950-2e.tar.bz2" to be canonical for parent "/tmp"
2025/01/20 21:08:50 DEBUG : Creating backend with remote "archive2:ftp-extracts/jbu/STATS2/20250119191950"
2025/01/20 21:08:50 DEBUG : Creating backend with remote "archive2_team_drive:ftp-extracts/jbu/STATS2/20250119191950.bin"
2025/01/20 21:08:50 DEBUG : archive2_team_drive: detected overridden config - adding "{OgfZc}" suffix to name
2025/01/20 21:09:02 DEBUG : rclone: Version "v1.69.0" starting with parameters ["rclone" "--config" "/etc/rclone.conf" "--drive-chunk-size" "128M" "--ignore-checksum" "--log-level" "DEBUG" "--log-file" "/tmp/extracts.rclone.debug.log" "move" "/tmp/jbu-STATS2-20250119191950-2e.tar.bz2" "archive2:ftp-extracts/jbu/STATS2/20250119191950"]
2025/01/20 21:09:02 DEBUG : Creating backend with remote "/tmp/jbu-STATS2-20250119191950-2e.tar.bz2"
2025/01/20 21:09:02 DEBUG : Using config file from "/etc/rclone.conf"
2025/01/20 21:09:02 DEBUG : fs cache: renaming child cache item "/tmp/jbu-STATS2-20250119191950-2e.tar.bz2" to be canonical for parent "/tmp"
2025/01/20 21:09:02 DEBUG : Creating backend with remote "archive2:ftp-extracts/jbu/STATS2/20250119191950"
2025/01/20 21:09:02 DEBUG : Creating backend with remote "archive2_team_drive:ftp-extracts/jbu/STATS2/20250119191950.bin"
2025/01/20 21:09:02 DEBUG : archive2_team_drive: detected overridden config - adding "{OgfZc}" suffix to name
2025/01/20 21:09:14 DEBUG : fs cache: renaming cache item "archive2_team_drive:ftp-extracts/jbu/STATS2/20250119191950.bin" to be canonical "archive2_team_drive{OgfZc}:ftp-extracts/jbu/STATS2/20250119191950.bin"
2025/01/20 21:09:14 DEBUG : Creating backend with remote "archive2_team_drive:ftp-extracts/jbu/STATS2/20250119191950"
2025/01/20 21:09:14 DEBUG : archive2_team_drive: detected overridden config - adding "{OgfZc}" suffix to name
2025/01/20 21:09:24 DEBUG : fs cache: renaming cache item "archive2_team_drive:ftp-extracts/jbu/STATS2/20250119191950" to be canonical "archive2_team_drive{OgfZc}:ftp-extracts/jbu/STATS2/20250119191950"
2025/01/20 21:09:25 DEBUG : jbu-STATS2-20250119191950-2e.tar.bz2: Need to transfer - File not found at Destination
2025/01/20 21:09:39 INFO  : jbu-STATS2-20250119191950-2e.tar.bz2: Copied (new)
2025/01/20 21:09:39 INFO  : jbu-STATS2-20250119191950-2e.tar.bz2: Deleted
2025/01/20 21:09:39 INFO  : 
Transferred:      728.968 KiB / 728.968 KiB, 100%, 52.068 KiB/s, ETA 0s
Checks:                 1 / 1, 100%
Deleted:                1 (files), 0 (dirs), 728.749 KiB (freed)
Renamed:                1
Transferred:            1 / 1, 100%
Elapsed time:        14.3s



That log is somewhat confusing.

Can you pick a text file, zero bytes in size?
Then rclone copy that single file, without using the crypt remote.

For a deeper look, try --dump=headers.
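Something like this, assuming a throwaway zero-byte file and a scratch destination path (both placeholders), pointed straight at the unencrypted remote:

touch /tmp/zero.txt
rclone --config /etc/rclone.conf copy /tmp/zero.txt archive2_team_drive:zero-test -vv --dump headers --log-file /tmp/zero.rclone.log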

Yeah, 10 seconds to put a zero-sized file, among other things. I've tried another drive and it's under two seconds. Why would this be? Does it depend on other activity on the same drive? I thought Google just applies rate limits that return errors when you exceed them, rather than slowing requests down?

2025/01/21 07:18:17 DEBUG : HTTP REQUEST (req 0xc000696d80)
2025/01/21 07:18:17 DEBUG : POST /upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Csha1Checksum%2Csha256Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&keepRevisionForever=false&prettyPrint=false&supportsAllDrives=true&uploadType=multipart HTTP/1.1
Host: www.googleapis.com
User-Agent: rclone/v1.66.0
Transfer-Encoding: chunked
Authorization: XXXX
Content-Type: multipart/related; boundary=06d3e0c4b4f733d68f8722c8de7ceb7a04f642138a202a3be8b494dfb12c
X-Goog-Api-Client: gl-go/1.22.1 gdcl/0.156.0
Accept-Encoding: gzip

2025/01/21 07:18:17 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2025/01/21 07:18:27 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2025/01/21 07:18:27 DEBUG : HTTP RESPONSE (req 0xc000696d80)
2025/01/21 07:18:27 DEBUG : HTTP/1.1 200 OK
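For what it's worth, if Google were rate limiting, I'd expect rclone's pacer to log retries; a quick way to check the debug log from above (the grep pattern is just a guess at the relevant messages):

grep -iE 'pacer|rateLimitExceeded' /tmp/extracts.rclone.debug.log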

That is an old version of rclone; can you rclone selfupdate?
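In other words, something like the following (running rclone version afterwards just confirms the new build):

rclone selfupdate
rclone version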

Yeah, no change; the response still comes back about 8 seconds after the request:

2025/01/22 15:23:25 DEBUG : HTTP REQUEST (req 0xc001088f00)
2025/01/22 15:23:25 DEBUG : POST /upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Csha1Checksum%2Csha256Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks%2CresourceKey&keepRevisionForever=false&prettyPrint=false&supportsAllDrives=true&uploadType=multipart HTTP/1.1
Host: www.googleapis.com
User-Agent: rclone/v1.69.0
Transfer-Encoding: chunked
Authorization: XXXX
Content-Type: multipart/related; boundary=67ec534cdbe6f8fad6ea2ab702dbb2c7b211b4754f43c41dc6266b23e5d7
X-Goog-Api-Client: gl-go/1.23.4 gdcl/0.211.0
Accept-Encoding: gzip

2025/01/22 15:23:25 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2025/01/22 15:23:33 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2025/01/22 15:23:33 DEBUG : HTTP RESPONSE (req 0xc001088f00)
2025/01/22 15:23:33 DEBUG : HTTP/1.1 200 OK
Content-Length: 559
Access-Control-Allow-Credentials: true
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Type: application/json; charset=UTF-8
Date: Wed, 22 Jan 2025 14:23:33 GMT
Expires: Mon, 01 Jan 1990 00:00:00 GMT
Pragma: no-cache
Server: ESF
Vary: Origin, X-Origin
X-Content-Type-Options: nosniff
X-Frame-Options: SAMEORIGIN
X-Guploader-Uploadid: AFIdbgSQAPHdT3RaUujxDwvCkdWWvUxDw_H4w-pInrZIGtosH1t5zJCh1QuXKb5Arv0z5k41NDPAFqg
X-Xss-Protection: 0

I think it's a network problem.
You can check this with:

ping -c 4 www.googleapis.com
time curl -vkL https://www.googleapis.com
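If those look fine, curl's write-out timers can split the delay into DNS, connect, TLS, and total (a sketch using the standard -w variables):

curl -sS -o /dev/null -w 'dns: %{time_namelookup}s connect: %{time_connect}s tls: %{time_appconnect}s total: %{time_total}s\n' https://www.googleapis.com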

Not sure what the issue is. At first, I thought it might be a network issue, such as DNS.

For that other, faster drive, maybe you can post a dump log?

Perhaps re-run that first, slow command on another internet connection.

I have tried everything everywhere, and the metrics always look the same no matter which variable I control for (location, account, drive, number of files in a drive, god knows what I haven't tried). Here is a screenshot of Google's own dashboard; all my own measurements look exactly like it. What took 2s before New Year's now takes 4 times as long:
