Increasing speed for copying between remotes

What is the problem you are having with rclone?

When I copy files from OneDrive to Google Drive with

rclone copy -P  od:Folder gd:Folder

I get speeds below 10 MB/s on a 100 Mb/s server with 2 cores and 2 GB RAM. What could I do to speed things up?

Both remotes are additionally mounted with a systemd script:

# /etc/systemd/system/rclone.service
[Unit]
Description=GDrive UL (rclone)

[Service]
ExecStart=/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --allow-other \
        --cache-tmp-upload-path=/tmp/rclone/upload \
        --cache-chunk-path=/tmp/rclone/chunks \
        --cache-workers=8 \
        --cache-writes \
        --cache-dir=/tmp/rclone/vfs \
        --cache-db-path=/tmp/rclone/db \
        --no-modtime \
        --drive-use-trash \
        --stats=0 \
        --checkers=16 \
        --bwlimit=50M \
        --cache-info-age=10m gd:/ /mnt/GDriveUL
ExecStop=/bin/fusermount -u /mnt/GDriveUL


What is your rclone version (output from rclone version)

rclone v1.50.2
os/arch: linux/amd64
go version: go1.13.6

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04

The rclone config contents with secrets removed.

[gd]
type = drive
client_id = ***
client_secret = ***
scope = drive
token = {"access_token":"***","token_type":"Bearer","refresh_token":"***","expiry":"2021-01-18T18:16:49.288741844+01:00"}
team_drive = ***

[od]
type = onedrive
token = {"access_token":"***","token_type":"Bearer","refresh_token":"***","expiry":"2021-01-18T18:05:00.837662692+01:00"}
drive_id = ***
drive_type = business

A log from the command with the -vv flag

2021-01-18 17:50:44 DEBUG : Folder/File: Sending chunk 402653184 length 8388608
2021-01-18 17:50:47 DEBUG : AnotherFolder/AnotherFile: Sending chunk 41943040 length 8388608
2021-01-18 17:50:49 DEBUG : Folder/File: Sending chunk 411041792 length 8388608
2021-01-18 17:50:53 DEBUG : Folder/File: Sending chunk 419430400 length 8388608
2021-01-18 17:50:57 DEBUG : Folder/File: Sending chunk 427819008 length 8388608
2021-01-18 17:51:10 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2021-01-18 17:51:11 DEBUG : pacer: low level retry 1/10 (error Post ***: read tcp [***::1]:53316->[2a00:1450:4001:80b::200a]:443: i/o timeout)
2021-01-18 17:51:11 DEBUG : pacer: Rate limited, increasing sleep to 1.85181372s

Update rclone as that's an old version.

Don't use a mount, copy directly remote to remote.
Don't use a cache remote as you want to use the regular remotes.
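Putting that advice together, a direct remote-to-remote copy might look like this. The flag values are illustrative starting points, not recommendations from this thread; tune them to your machine:

```shell
# Copy straight from OneDrive to Google Drive, bypassing the mounts.
# --transfers/--checkers are guesses for a 2-core box; --drive-chunk-size
# trades RAM per transfer for fewer upload round-trips to Drive.
rclone copy od:Folder gd:Folder \
    -P \
    --transfers 4 \
    --checkers 8 \
    --drive-chunk-size 64M
```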

OneDrive throttles like hell.
For Google Drive, you can use --drive-chunk-size to speed up large files. I use this in my rclone.conf:

chunk_size = 1024M

You can pick a size based on your server specs.
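For reference, that backend option sits under the Drive remote in rclone.conf. A sketch, using the [gd] remote name from this thread with the other keys elided:

```
[gd]
type = drive
chunk_size = 1024M
```

Note that each running transfer buffers one chunk in memory, so chunk_size = 1024M with several parallel transfers needs multiple gigabytes of RAM — more than the 2 GB server in this thread has.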

Are you copying lots of little files or big files?

Thanks for the fast response.

I need mounting for a different task, but with the copy command I posted it should do the copying in RAM, not through the mounts, shouldn't it?

Don't use a cache remote as you want to use the regular remotes.

What do you mean with this?

Yeah, I already noticed the throttling on OneDrive. But with the mounts I got at least 40 MB/s.
I am copying my movie directory, so one big file per folder together with 1-5 smaller ones.
I noticed that only one file copies at "high" speed (~1.2 MB/s), while the others are shown as "transferring" or move at a few tens of kB/s. I also experimented with limiting --transfers to 2, but saw no improvement, except that the 'low level retry 1/10' errors disappeared.

I am guessing, as you didn't include your rclone.conf, but in your mount you have a lot of cache backend flags:

So my assumption is you have a remote and a cache remote set up for each mount. You don't want to use the cache remote; you want to point to the first remote. If you have no cache remotes in your rclone.conf, those flags do nothing and can be removed.
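For illustration, a cache remote in rclone.conf would look something like this, wrapping the plain remote. The [gd-cache] name is hypothetical — the poster's config has no such entry:

```
[gd]
type = drive

[gd-cache]
type = cache
remote = gd:
```

Mounting gd-cache: would activate the cache backend and make the --cache-* flags meaningful; mounting gd: directly leaves them inert.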

Google only allows about 2-3 file creates per second, so small files suck. For big ones, if you have the memory, use a large drive-chunk-size and tune it to your needs. I'd guess OneDrive would be the bottleneck.

Those two remotes, [gd] and [od], that I shared are the only objects in my rclone.conf.
No flags are set there.

Those flags in the systemd script should (and I thought they do) only apply to the mount I started through it?

If you have no cache remotes in your rclone.conf, those flags do nothing and can be removed.

So as I haven't defined a cache remote, those caching flags do nothing?

When I use rclone copy (on the same remotes, but without using the mount), those flags should not apply — or am I mixing things up?

That's correct.

Definitely not. I use a mount and run an upload script each night that uploads.

That's correct.

Great to know. The tutorial didn't mention this :smiley:

Definitely not. I use a mount and run an upload script each night that uploads.

So except for bigger chunking on the GD side, there is not much I can do?
Do you have an idea why the mount is almost 4x faster (copying from the remote to the local file system without rclone copy) than the copy?

I'd guess it's writing to local disk first, so you are getting a false reading on the speeds, as it isn't actually uploading yet.
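One way to check that theory: the mount unit above stages uploads under /tmp/rclone/upload (from --cache-tmp-upload-path), so you can watch that directory while a copy to the mount runs. A minimal sketch:

```shell
# If data accumulates here while the "fast" copy runs, the speed shown is
# local disk write speed, not actual upload speed.
du -sh /tmp/rclone/upload 2>/dev/null || echo "nothing staged"
```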

OK, while playing with the chunk size I found that bigger chunks (128M) slow down the process and smaller ones (1M) double the throughput.
Any idea why this could be the case?

Everybody's setup is a bit different and dependent on a lot of factors. If you test and find something that works well for you, I always say use that.

I only copy from local to GD, have gigabit FIOS, apply quality of service to my traffic, and have a lot of spare memory on the server that copies, so that is what works best for my particular setup.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.