Rclone +gdrive slow copy

Hi,

I'm having trouble with very slow uploads. I'm testing with rclone copy for now; my goal is to use a mount.

I have 750 Mbps of upload bandwidth, the server I'm testing from has plenty of free memory, and the files come from an SSD, so a hardware bottleneck should not be the problem.

I had a cache, but after reading the forum I removed it; I'll decide later if I need it. For now I'm concentrating on this upload speed problem. I don't have a download problem. The file "file.txt" is a 1 GiB random file generated with dd.
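For the record, a test file like mine can be recreated with dd (filename and size as in my tests):

```shell
# Generate a 1 GiB file of random data to use as an upload benchmark
dd if=/dev/urandom of=file.txt bs=1M count=1024
```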

My intent with this is to send my media library there and have Emby (not Plex sorry) use the mount point as a library location.

Here is the command and the result

rclone copy file.txt gdrive: -P --stats=1s --drive-chunk-size=128M -vv
2020/03/29 00:47:06 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "copy" "file.txt" "gdrive:" "-P" "--stats=1s" "--drive-chunk-size=128M" "-vv"]
2020/03/29 00:47:06 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020-03-29 00:47:06 DEBUG : file.txt: Need to transfer - File not found at Destination
2020-03-29 00:47:06 DEBUG : file.txt: Sending chunk 0 length 134217728
2020-03-29 00:47:23 DEBUG : file.txt: Sending chunk 134217728 length 134217728
2020-03-29 00:47:44 DEBUG : file.txt: Sending chunk 268435456 length 134217728
2020-03-29 00:48:06 DEBUG : file.txt: Sending chunk 402653184 length 134217728
2020-03-29 00:48:28 DEBUG : file.txt: Sending chunk 536870912 length 134217728
2020-03-29 00:48:50 DEBUG : file.txt: Sending chunk 671088640 length 134217728
2020-03-29 00:49:09 DEBUG : file.txt: Sending chunk 805306368 length 134217728
2020-03-29 00:49:30 DEBUG : file.txt: Sending chunk 939524096 length 134217728
2020-03-29 00:49:54 DEBUG : file.txt: MD5 = 7a02274aaf1a8cb209f5ed0e95fef600 OK
2020-03-29 00:49:54 INFO  : file.txt: Copied (new)
Transferred:            1G / 1 GBytes, 100%, 6.109 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:      2m47.6s
2020/03/29 00:49:54 INFO  :
Transferred:            1G / 1 GBytes, 100%, 6.109 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:      2m47.6s
2020/03/29 00:49:54 DEBUG : 5 go routines active
2020/03/29 00:49:54 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rclone" "copy" "file.txt" "gdrive:" "-P" "--stats=1s" "--drive-chunk-size=128M" "-vv"]

Here is the same file downloaded to my server

rclone copy gdrive:file.txt . -P --stats=1s -vv
2020/03/29 00:56:27 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "copy" "gdrive:file.txt" "." "-P" "--stats=1s" "-vv"]
2020/03/29 00:56:27 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020-03-29 00:56:28 DEBUG : file.txt: Need to transfer - File not found at Destination
2020-03-29 00:56:28 DEBUG : file.txt: Starting multi-thread copy with 4 parts of size 256M
2020-03-29 00:56:28 DEBUG : file.txt: multi-thread copy: stream 4/4 (805306368-1073741824) size 256M starting
2020-03-29 00:56:28 DEBUG : file.txt: multi-thread copy: stream 2/4 (268435456-536870912) size 256M starting
2020-03-29 00:56:28 DEBUG : file.txt: multi-thread copy: stream 1/4 (0-268435456) size 256M starting
2020-03-29 00:56:28 DEBUG : file.txt: multi-thread copy: stream 3/4 (536870912-805306368) size 256M starting
2020-03-29 00:56:36 DEBUG : file.txt: multi-thread copy: stream 2/4 (268435456-536870912) size 256M finished
2020-03-29 00:56:37 DEBUG : file.txt: multi-thread copy: stream 4/4 (805306368-1073741824) size 256M finished
2020-03-29 00:56:37 DEBUG : file.txt: multi-thread copy: stream 3/4 (536870912-805306368) size 256M finished
2020-03-29 00:56:37 DEBUG : file.txt: multi-thread copy: stream 1/4 (0-268435456) size 256M finished
2020-03-29 00:56:37 DEBUG : file.txt: Finished multi-thread copy with 4 parts of size 256M
2020-03-29 00:56:39 DEBUG : file.txt: MD5 = 7a02274aaf1a8cb209f5ed0e95fef600 OK
2020-03-29 00:56:39 INFO  : file.txt: Multi-thread Copied (new)
Transferred:            1G / 1 GBytes, 100%, 87.817 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:        11.6s
2020/03/29 00:56:39 INFO  :
Transferred:            1G / 1 GBytes, 100%, 87.817 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:        11.6s
2020/03/29 00:56:39 DEBUG : 11 go routines active
2020/03/29 00:56:39 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["rclone" "copy" "gdrive:file.txt" "." "-P" "--stats=1s" "-vv"]

Here is my config (I know I'm bypassing the crypt in my copy command)

[gdrive]
type = drive
client_id = [REDACTED]
client_secret =  [REDACTED]
token = [REDACTED] 
root_folder_id = [REDACTED] 

[gcrypt]
type = crypt
remote = gdrive:medias
filename_encryption = standard
directory_name_encryption = true
password = [REDACTED] 
password2 = [REDACTED]

Uploads to Google Drive can be slow because rclone can't upload multiple chunks of a single file simultaneously (asynchronously); it has to upload the chunks one by one, so the network may not reach its full potential.

One thing you can do is try upping the chunk size to 512M or more if you have enough memory; it should increase the upload speed.

I normally use a 1G chunk size because I have plenty of memory, and I consistently get 900+ Mbps upload on a 1 Gbps port.
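For example, something like this (remote name taken from your config; pick a chunk size your RAM can afford, since each transfer buffers up to that much memory):

```shell
# Larger chunks mean fewer sequential upload round trips per file
rclone copy file.txt gdrive: -P --drive-chunk-size=512M
```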

I'll just note that the API isn't capable of uploading multiple chunks at once. I did put in a feature request for this!

However, if you have multiple files to upload, rclone can upload them simultaneously (the number is controlled by the --transfers flag), which is another way of maxing out your upload.
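A rough example (the source path and transfer count here are just placeholders, tune them to your setup):

```shell
# Upload a whole directory with up to 8 files in flight at once
rclone copy /path/to/media gdrive:media -P --transfers=8 --drive-chunk-size=128M
```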


Thanks for the reply... I guess that way it's almost impossible for me to hit the 750 GB daily upload limit!

Not really. That is true if you're only uploading one file at a time.

I'll try enabling the cache with a "tmp_upload_path" to see if that helps when I process many files at the same time. Hopefully the cache system can upload many files at once.

Right now, without any cache, Sonarr and Radarr take a LOOOONG time to process each file, which bugs me a lot. I now have the cache set up with tmp_upload_path.

If it doesn't work I'll have to look into having a script run at night to sync and link everything. I don't want the post-processing to hang; as I said, I hate that! The script approach seems like a more complex setup in my opinion, but if I have to do it, I'll do it!
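Something like this is what I have in mind for the nightly job (a sketch only; paths, transfer count, and log location are placeholders):

```shell
#!/bin/sh
# Hypothetical nightly cron job: move finished media to the crypt remote.
# --min-age skips files that are still being written by Sonarr/Radarr.
rclone move /mnt/local/media gcrypt: \
  --transfers=8 \
  --min-age 15m \
  --log-file /var/log/rclone-move.log -v
```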

Thoughts? Before I embark on this journey, I have to wait until everybody has stopped using the streaming server anyway.

Edit: You were right, with multiple files and --transfers=40 I'm able to sustain 92 MBytes/s, about the max that my 750 Mbps can do! I still have to try the cache; I haven't had the chance to enable it yet.


Ok, I was able to enable my cache system. I moved many files from local storage to my gdrive -> gcache -> gcrypt mount. I saw all the files being added to my tmp_upload_path, but the speed looks like only one transfer is being used. Doesn't the cache system use multiple transfers to upload?

Here is the rclone.conf section for my cache. I also edited the gcrypt section to point its remote at gcache.

[gcache]
type = cache
remote = gdrive:medias
chunk_size = 64M
info_age = 1d
chunk_total_size = 256G
chunk_path = /mnt/tmp/rclone/cache-backend
tmp_upload_path = /mnt/tmp/rclone/upload

Searching for a solution to the problem, I've read that VFS might solve this, but I would have to disable the cache. I don't mind disabling it since I have enough bandwidth to not require a cache.

With VFS, would I be able to rapidly copy files to my mount while the uploads take place in the background, using multiple connections when I'm uploading multiple files? That's the problem I had without a cache: Sonarr and Radarr get upset because it takes too long to move a file to the destination folder.

I'll continue reading about VFS caching while waiting on your comments and recommendations.

Well, the cache is gone. I don't need it since it doesn't stop Sonarr and Radarr from hanging when post-processing files.

I've tried the VFS write cache; it does what it's supposed to do, but Sonarr still hangs until the upload is completely done. So I think I have no choice but to use a special folder where Sonarr and Radarr put the movies, combined with unionfs and/or mergerfs. I'll have to read up on those two things... This is becoming more and more complex, just because I hate seeing Sonarr hang...
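For reference, the mount I tested looked roughly like this (mount point and cache directory are mine; flags as I understood them from the docs):

```shell
# VFS write cache: writes land in the local cache first
# and are uploaded once the file is closed
rclone mount gcrypt: /mnt/gmedia \
  --allow-other \
  --vfs-cache-mode writes \
  --cache-dir /mnt/tmp/rclone/vfs &
```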

I realize I'm talking to myself; don't worry, I know I'm going insane!!! At least I agree with myself at the moment!

Well, I've done it! I had to install mergerfs, which is less complicated than it seems. Everything works perfectly; there are no speed or hang problems anymore. Thank you for your help. I heavily based my setup on Animosity022's scripts.
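For anyone finding this later, the overlay I ended up with is roughly along these lines (paths and options are illustrative; see Animosity022's repo for the real thing):

```shell
# Local disk first (fast writes for Sonarr/Radarr), rclone mount second.
# category.create=ff makes new files land on the first branch (/mnt/local),
# so a nightly rclone move can upload them to the remote afterwards.
mergerfs /mnt/local:/mnt/gmedia /mnt/media \
  -o rw,use_ino,allow_other,category.create=ff,dropcacheonclose=true
```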


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.