Do I use local bandwidth when transferring remote to remote?

I'm really liking rclone. Great tool!

I'm trying to move 300GB from a Google Drive to a different user's Google Team Drive. I've made a client ID, which is that of a third user. I read somewhere that it doesn't matter who set up the client ID.

I have a couple of questions. The first is why my speeds aren't great. I was hoping for faster. Does it use my local bandwidth when transferring from one remote drive to another remote drive? I assumed not.

Transferred:   	   74.335M / 144.333 GBytes, 0%, 634.110 kBytes/s, ETA 66h15m51s
Errors:                 0
Checks:               407 / 407, 100%
Transferred:          111 / 6271, 2%
Elapsed time:        2m0s

The second question relates to how I'm controlling it. I'm using a Mac laptop to ssh to a headless Ubuntu box that runs 24/7 under my stairs. It seems that while that ssh session is live the files continue transferring but they stop when I close my laptop and break the ssh session. I assumed that the transfers would keep going but that doesn't seem to be the case. Does the ssh session need to be open?

The command that I'm currently using to transfer files is this:

rclone copy -v --transfers 10 --checkers 10 google_julian: Archive1:

Although I've tried others and got the same result.

Advice appreciated!

What version are you using?

rclone v1.48.0
- os/arch: linux/amd64
- go version: go1.12.6

I think you want:

--drive-server-side-across-configs

Also, Google limits you to ~10 transactions per second, so 10 transfers and 10 checkers is ~20 concurrent operations and is going to make things super slow, with retries as well.
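A sketch of what that could look like with the concurrency brought under the limit; the remote names are the ones from this thread, and --tpslimit is rclone's built-in cap on transactions per second:

```shell
# Keep total API traffic under Google's ~10 transactions/second
rclone copy -v \
  --drive-server-side-across-configs \
  --transfers 4 --checkers 4 \
  --tpslimit 10 \
  google_julian: Archive1:
```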

Did you setup your own client ID / API key?

Thank you. I've gone from 500Kbyte/s to 50Mbyte/s. Thanks for your help. I assume it was the --drive-server-side-across-configs setting.

Can you answer my second question, namely why the transfers stop when I close my laptop, despite rclone running on a Ubuntu box? See above for more detail.

Sorry, I completely missed the second question when reading through.

You can do that a few ways.

Something like screen or tmux:

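A minimal sketch of the screen/tmux approach (the session name rclone-copy is arbitrary):

```shell
# Start a named screen session and run rclone inside it;
# detach with Ctrl-a d and the transfer keeps running after you log out.
screen -S rclone-copy
# ...run your rclone command inside the session, then detach...
screen -r rclone-copy    # reattach later to check progress

# The tmux equivalent:
tmux new -s rclone-copy       # start a session
# detach with Ctrl-b d
tmux attach -t rclone-copy    # reattach later
```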
or you can hit Ctrl-z to pause it and then bg %1

[felix@gemini ~]$ rclone ls gcrypt: --fast-list
[1]+  Stopped                 rclone ls gcrypt: --fast-list
[felix@gemini ~]$ bg
[1]+ rclone ls gcrypt: --fast-list &

and it will be going in the background:

felix    23814     1  8 15:25 ?        00:00:00 rclone ls gcrypt: --fast-list
felix    24151 23864  0 15:25 pts/1    00:00:00 grep rclone
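One caveat with the Ctrl-z/bg route: depending on your shell settings, a backgrounded job can still be killed by SIGHUP when the ssh session closes. Running disown after bg, or starting the command under nohup in the first place, avoids that. A sketch using the command from this thread:

```shell
# After Ctrl-z and bg %1, detach the job from the shell:
#   disown %1
#
# Or start it immune to hangups from the beginning:
nohup rclone copy -v --drive-server-side-across-configs \
  google_julian: Archive1: > rclone.log 2>&1 &
```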

Thank you. I think the bg %1 solution is working. Let me try it again later.

The screen command is great. Thank you.

I've gone back to having problems transferring and I'm unsure why.

The command I'm using is

rclone copy -v --transfers 5 --checkers 5 --drive-server-side-across-configs google_julian: Archive2:

And a typical output is

2019/07/17 09:57:56 INFO  : 
Transferred:       45.559G / 92.819 GBytes, 49%, 556.399 kBytes/s, ETA 24h44m24s
Errors:                42 (retrying may help)
Checks:             10054 / 10054, 100%
Transferred:         2715 / 2767, 98%
Elapsed time:    23h51m0s
 * This Archive/Unoffic…/VIDEO-TS/VTS_01_1.VOB: transferring
 * This Archive/Unoffic…/VIDEO-TS/VTS_01_3.VOB: transferring
 * This Archive/Unoffic…/VIDEO_TS/VTS_01_2.VOB: transferring
 * This Archive/Unoffic…/VIDEO_TS/VTS_01_3.VOB: transferring
 * This Archive/Unoffic…/VIDEO_TS/VTS_01_3.VOB: transferring

Notice that it's transferring at 500K/s whereas I was seeing 20MB/s previously. I'm also getting quite a few errors. Does anyone have any ideas?


I noticed an issue with copying certain filetypes server-side. I assume the problem is with Google and not rclone, but I haven't done any digging.

I was able to transfer a few TB, but certain files (anything .m4v) would never transfer. Using --disable copy fixed the problem entirely. The same could be the case with other stuff as well.

Also note that the transfer speed is an average: 556 kBytes/s is what you have averaged since you started the transfer, so you might actually be sitting at 0 bytes/s while the number slowly sinks. You can check by watching the amount transferred: if it sits at 45.559G and doesn't move for a little while, then you know you aren't moving anymore.

If you run it with "-vv", you are probably hitting your daily quota for transfers.

Thank you. Here's some output after adding the -vv. Any ideas?

2019/07/17 14:51:40 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 500: Internal Error, internalError)
2019/07/17 14:51:40 DEBUG : pacer: Rate limited, increasing sleep to 2.972829212s
2019/07/17 14:51:45 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 500: Internal Error, internalError)
2019/07/17 14:51:45 DEBUG : pacer: Rate limited, increasing sleep to 4.031389908s
2019/07/17 14:51:57 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 500: Internal Error, internalError)
2019/07/17 14:51:57 DEBUG : pacer: Rate limited, increasing sleep to 8.400088033s
2019/07/17 14:52:30 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 500: Internal Error, internalError)
2019/07/17 14:52:30 DEBUG : pacer: Rate limited, increasing sleep to 16.927316954s
2019/07/17 14:52:34 INFO  : 
Transferred:             0 / 91.074 GBytes, 0%, 0 Bytes/s, ETA -
Errors:                 0
Checks:             12769 / 12769, 100%
Transferred:            0 / 94, 0%
Elapsed time:       20m0s
 * This Archive/Miscell…  + Bonus/VTS_01_1.VOB: transferring
 * This Archive/Miscell…g 2013 Video Part 2.ts: transferring
 * This Archive/Officia…ths [DVD]/VTS_01_1.VOB: transferring
 * This Archive/Officia…ths [DVD]/VTS_01_2.VOB: transferring
 * This Archive/Officia…ths [DVD]/VTS_01_3.VOB: transferring

What's the actual command that kicked that off? That's usually in the first line of the debug log.

So like this?

rclone --disable copy copy -vv --transfers 5 --checkers 5 --drive-server-side-across-configs google_julian: Archive2:

Am I not disabling the command that I'm then using?

By default server side copies are off in the latest stable.

If you remove

--drive-server-side-across-configs

you will be using your local bandwidth rather than server side copies. There's no reason to turn it on just to turn it off via another command line flag.

With Google, you can only get 10 transactions per second, so having 5 transfers and 5 checkers might be causing a few of those 403s, which are OK as they retry.

For Google Drive, you can also use --fast-list which speeds up things a bit as well.
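For example, added to the command used earlier in the thread (a sketch):

```shell
rclone copy -v --fast-list \
  --transfers 5 --checkers 5 \
  --drive-server-side-across-configs \
  google_julian: Archive2:
```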

Thank you. I don't want to use local bandwidth so presumably I should leave --drive-server-side-across-configs in the command? I'm still a bit confused why I'm getting slow speeds again when transferring between remote sources.

Point noted about 10 transactions.

If you turn on server side copies with --drive-server-side-across-configs and then turn them off with --disable copy, you are using local bandwidth and not server side.

It's a binary type thing so either you are doing server side copies or you are not.

So what's your goal - on or off?

I see. You're saying that --drive-server-side-across-configs and --disable copy are cancelling each other out.

I'm trying to avoid using local bandwidth.

Yes, if you want to use server side copies, you need server side copies on :slight_smile:

Makes sense.

I just looked at the debug log and it said I had an invalid token so perhaps that's why I'm going slow. It looks like this:

type = drive
client_id =
client_secret = UgFfjsx_sU94nNWjDtZOORp2
scope = drive
token = {"access_token":"ya29.GlxIBy0LPxARWYs068BO-nn466TXkuPTHPcQKlKyXrjoBImPQoFFew0jVZPZMG1Zk_JPBoZyAD8l9acl-Jb84o8vozm-uokK0_P4GCp5ynwZpsGA0Ukvx4ORRDqQJA","token_type":"Bearer","refresh_token":"1/ARAzSAiTztu-hrCVDKddKyCYE08pfFKQVQJuAwaFwzNf8HGFw8qa_NBHgbE_F7Sq","expiry":"2019-07-17T16:28:10.389252975+01:00"}
team_drive = 0AJHj_E40q758Uk9PVA

I've changed some of those characters for security.

That Client ID and Client Secret are taken from OAuth2. Have I done something wrong?

Without seeing the full logs I wouldn't worry too much, as you get a token refresh every 10 minutes I think, so seeing those messages is normal.