Rclone Browser server-side copy (gdrive)

What is the problem you are having with rclone?

I am not able to do a server-side copy from source A to B (Google Drive). I have tried to do a test copy from fvn1992-team-drive-crypt:/Plex to plex-cloud-unlimited2-team drive:/Plex,
but I am getting the following error:
Usage:
rclone copy source:path dest:path [flags]

Flags:
--create-empty-src-dirs Create empty source dirs on destination after copy
-h, --help help for copy

Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.
Command copy needs 2 arguments maximum: you provided 3 non flag arguments: ["fvn1992-team-drive-crypt:/Plex" "plex-cloud-unlimited2-team" "drive:/Plex"]

I hope somebody can help me out?
My goal is to finally sync and/or copy Google Drive Team Drives to each other using server-side copies.

Thanks in advance,

What is your rclone version (output from rclone version)

v1.53.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows Server 2016 64Bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy fvn1992-team-drive-crypt:/Plex plex-cloud-unlimited2-team drive:/Plex --fast-list --dry-run --transfers 8 --checksum --P

The rclone config contents with secrets removed.

[floorbackup-cloud-gdrive]
type = drive
scope = drive
token = {"access_token":"ya29.a0AfH6SMDLg8JNMU6FPimKj37kyG3jdhlNxpYWKBUAGpwJd7-Bky-EEzbPUPGR71DuaMVr7gncl3lkIPkqbRPYBQU24DyZs_oGincuCWm4AUzIUrhPp3DqJSQQ1oFIKGBiVwrQ","token_type":"Bearer","refresh_token":"1//090owKIXGZqcfCgYIARAAGAkSNwF-L9Ir1PpT19nUjISGt2_6N_mxgKYD5EGbrXxGXq3QYBnaZvtXZVOoyMN-RB8n8CVeFT1ACns","expiry":"2021-01-09T13:50:27.2560841+01:00"}
team_drive = 0AC5Re1oyc3kEUk9PVA

[floorbackup-cloud-gdrive-crypt]
type = crypt
remote = floorbackup-cloud-gdrive:/
filename_encryption = standard
directory_name_encryption = true
password =
password2 = 

[fvn1992-team-drive]
type = drive
scope = drive
token = {"access_token":"ya29.a0AfH6A04_14jx_qsJJWd3xIaqSI6IAn1BRhXYj6AVtMFp9CYFzDJxm3RHk3aYEk6vkjqqLq2QNmCsymUncmjFI1duZuGg","token_type":"Bearer","refresh_token":"1//09cutaLrN3V7gCgYIARAAGAkSNwF-L9IrWU7g9oEIpHkJqDMRNFWbKPDfbi-uDKrQWLVu6O0ETZVPMrrML9LnVIF21M5DMojT-KM","expiry":"2020-10-18T14:29:33.9540254+02:00"}
team_drive = 0AL-M9mlr_ojdUk9PVA
server_side_across_configs = true

[fvn1992-team-drive-crypt]
type = crypt
remote = fvn1992-team-drive:/
filename_encryption = standard
directory_name_encryption = true
password = 
password2 =
server_side_across_configs = true

[plex-cloud-unlimited-team-drive]
type = drive
client_id = ccmp.apps.googleusercontent.com
client_secret = 
scope = drive
token = {"access_token":"ya29.A0AfH6SMA08ZksdD-hcWs3u_NN8RbfJ9Q5iEBesUg-HucIZ8-umOvHrzBCfWdVKSNHjYsMQ8qKmzqKIlQqqYgC2dF95Yv2eTKdd8LsOZ_-PFg","token_type":"Bearer","refresh_token":"1//09cV2p7T2xLn7CgYIARAAGAkSNwF-L9IrpO1N8grg8DDRdhTiOMmv6MsinJpRMlkT716d9x8BsmN67YCCp2nUWjhf2IlkBn-FJNI","expiry":"2020-11-08T16:46:43.675548+01:00"}
team_drive = 0ADx3TX85b_mwUk9PVA

[plex-cloud-unlimited-team-drive-crypt]
type = crypt
remote = plex-cloud-unlimited-team-drive:/
filename_encryption = standard
directory_name_encryption = true
password =
password2 =

[plex-cloud-unlimited2-team drive]
type = drive
client_id =.apps.googleusercontent.com
client_secret = 
scope = drive
token = {"access_token":"ya29.A0AfJ_sgRrCd66JpygtzI7tt6J6rgS6XY1Z6YJwgRuKCf4BTSHkYkEEYZZRL5QYzYDUcXQBEw331JUgC3XOB41gs_tzkE99QxISxRCwnC778","token_type":"Bearer","refresh_token":"1//09P2_u4jxHnlyCgYIARAAGAkSNwF-L9IrpqXLNdPpSgbPqh6VUTLZO0pwJ1eZ37DXn0l8W5l0eVJ697XhDxxCfIj98f4TKcWgPeQ","expiry":"2020-11-08T16:42:34.9010796+01:00"}
team_drive = 0ALNAtHsbv1qIUk9PVA
root_folder_id = 
server_side_across_configs = true

[plex-cloud-unlimited2-team drive crypt]
type = crypt
remote = plex-cloud-unlimited2-team drive:/
filename_encryption = standard
directory_name_encryption = true
password = 
password2 =
server_side_across_configs = true

A log from the command with the -vv flag

Paste  log here

I have tried to follow this thread so far, but I must be doing something wrong and I don't know what.

You have spaces in your folder names so you need to quote them.

Thanks for your fast response !

Do you perhaps mean I have to change it to this, with quotes?

rclone copy fvn1992-team-drive-crypt:/"Plex" plex-cloud-unlimited2-team drive:/"Plex" --fast-list --dry-run --transfers 8 --checksum --P

Your remote names have spaces in them so you need to put quotes around them.

It's generally easier to not put spaces in things.

OK, I see.
So perhaps this is the right way then, I hope?

rclone copy [fvn1992-team-drive-crypt]:/Plex [plex-cloud-unlimited2-team drive crypt] drive:/Plex --fast-list --dry-run --transfers 8 --checksum --P

Or do I perhaps need to add the :/Plex path inside the quoted area as well?

I'm trying this at the moment, but perhaps this is all wrong and I have misunderstood you?

rclone copy [fvn1992-team-drive-crypt:/Plex [plex-cloud-unlimited2-team drive crypt]:/Plex --fast-list --dry-run --transfers 8 --checksum --P

Those are brackets, not quotes. On any OS, if a name contains spaces, you need to put quotes around it.

rclone copy fvn1992-team-drive-crypt:/Plex "plex-cloud-unlimited2-team drive crypt":/Plex --fast-list --dry-run --transfers 8 --checksum --P
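To see why the quotes matter, here is a tiny shell sketch (the `count_args` helper is made up purely for this demonstration; it just counts how many arguments the shell passes along):

```shell
# A remote name containing a space splits into two arguments unless quoted,
# which is why rclone complained about "3 non flag arguments".
count_args() { echo "$#"; }

count_args plex-cloud-unlimited2-team drive:/Plex      # prints 2
count_args "plex-cloud-unlimited2-team drive":/Plex    # prints 1
```

With the quotes, rclone sees a single `remote:path` argument, as it expects.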

I'm sorry, I see. English is not my strong point.

Thanks for the example commands, I will try them :slight_smile:

Interesting, and thanks again. I have now tried the following command from your example:

rclone copy fvn1992-team-drive-crypt:/Plex "plex-cloud-unlimited2-team drive crypt":/Plex --fast-list --dry-run --transfers 8 --checksum --P

But I am getting the following error
(it seems to run without --P, but then I don't think I'll get any rclone output of what's going on):

Error: unknown flag: --P
Usage:
rclone copy source:path dest:path [flags]

Flags:
--create-empty-src-dirs Create empty source dirs on destination after copy
-h, --help help for copy

Use "rclone [command] --help" for more information about a command.
Use "rclone help flags" for to see the global flags.
Use "rclone help backends" for a list of supported services.

2021/01/09 13:46:38 Fatal error: unknown flag: --P

It's telling you that flag is unknown.

You have to change it to one of the two below:

  -P, --progress                             Show progress during transfer.

I see, thanks. Stupid of me again.. :upside_down_face:

Seems to be working now, awesome big thanks so far ! :slight_smile:

I hope you can perhaps help me a little more.
If possible, I would love to add a flag to skip all files that are already on the destination, so I won't make a double copy.
Do you perhaps know the flag for this, or are these files already being skipped automatically?

Is it just this flag:
--ignore-existing

By the way, I'm also wondering: if I add the following flag:

--drive-stop-on-upload-limit

will this let the copy command run 24/7 until it's done?
So it will stop at the 750GB server-side limit, I presume?

I remember reading somewhere that server-side copies count for a little less than 750GB?
Or how does this work in my case? This is not clear to me.

No flag is needed. Files that are already there are not re-uploaded.

You can only copy 750GB per day so once you hit that, it'll error out. That flag lets rclone stop rather than generating errors on all the remaining files. If you have more than 750GB to copy, you should use it.

Thanks again ! :slight_smile:

I see. Is this the same for a sync command?
The sync command will, I hope and presume, also skip files that are already on the destination?

At the moment I am getting a Google API error 403 "User rate limit exceeded"; I have used --drive-stop-on-upload-limit in my command.

I ran the following command, which was very fast server-side :slight_smile:

rclone copy fvn1992-team-drive-crypt:/Plex "plex-cloud-unlimited2-team drive crypt":/Plex --fast-list --transfers 8 --checksum --drive-stop-on-upload-limit --progress

Perhaps --drive-stop-on-upload-limit is better to add to a sync command?
I want to achieve a permanent sync that continues copying every 24 hours, once the limit resets.

I would like to mention that I have over 50 TB of data that I want to copy/sync across multiple Team Drives,
so much more than 750GB in total.. :wink:

Copy and sync are for different use cases.
A copy takes everything from the source and uploads/copies it to the destination, leaving anything extra on the destination alone.

A sync takes everything from the source, uploads it, and also deletes anything on the destination that doesn't match, making the destination identical to the source.

Whichever you want to use is defined by your use case and what you want to happen. If you use sync, please use --dry-run and validate it does what you want as it's a destructive operation.
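If it helps, the difference can be sketched with plain shell commands in throwaway temp directories (cp/rm standing in for rclone here, so nothing touches your remotes):

```shell
# Illustrate copy vs sync semantics with local temp dirs (not rclone itself).
src=$(mktemp -d); dst=$(mktemp -d)
echo new > "$src/new.txt"      # file only on the source
echo stale > "$dst/stale.txt"  # file only on the destination

# "copy" semantics: add/update files; extras on the destination survive.
cp "$src"/* "$dst"/
ls "$dst"    # new.txt and stale.txt are both present

# "sync" semantics: delete destination extras, then copy, so dst mirrors src.
for f in "$dst"/*; do [ -e "$src/$(basename "$f")" ] || rm "$f"; done
cp "$src"/* "$dst"/
ls "$dst"    # only new.txt remains
```

The deletion step in the "sync" half is exactly why --dry-run is worth running first.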

I see, thanks for explaining more to me.

I think I'm fine with a sync, if it will only delete or change the destination and not the source.
I will indeed use the --dry-run flag the first time.

By the way, I'm also wondering: would it be possible to do a "permanent" move from a local drive to a Google Drive rclone crypt location?

Right now, for months, I have been using Rclone Browser to manually move new files into my gdrive crypt location each time.
But I'm still looking for a way to automate this and have the files moved automatically,
for example auto-moving files from C:/download to Gdrive:/Plex.

Thanks again for all your help ! :slight_smile:

Sorry as I am not sure what you mean. When you run a copy or sync, it happens real time and is permanent. You can schedule it or something if that's what you mean?

I never used / seen rclone browser so I have no idea what it really does.

Ok thanks for your info again.

Rclone Browser is, in my opinion, a simple desktop app that brings a kind of GUI to rclone.
See the link and download page: https://github.com/kapitainsky/RcloneBrowser

Anyway, what I hope to achieve is a "permanent" move from location A (local) to Google Drive, which for example polls every 5 or 10 seconds for anything new on local source A and then moves it to the Google Drive destination.

I hope this clears it up?

When I then run a sync command with --drive-stop-on-upload-limit to sync between gdrive crypt locations, will it keep syncing until everything is done, I hope?
And will it also respect the 750GB limit, is that correct?
So I can, for example, leave it running until everything has been synced?

When you run rclone, it runs what you specify and exits when done.

So when you run a copy or sync, it's a point in time copy.

If you want to keep things up to date, you'd schedule a job to do that and run it every x minutes.
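On Windows, scheduling could look something like the sketch below using the built-in Task Scheduler; the task name, 10-minute interval, and paths are placeholders I made up for illustration, not tested commands:

```
REM Run an rclone move from C:\download to the crypt remote every 10 minutes.
REM Task name, interval, and paths are examples only - adjust to your setup.
schtasks /Create /SC MINUTE /MO 10 /TN "rclone-move-plex" ^
  /TR "rclone move C:\download \"plex-cloud-unlimited2-team drive crypt\":/Plex --drive-stop-on-upload-limit"
```

Each scheduled run is still a point-in-time move: rclone transfers whatever is new and then exits until the next trigger.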