Hello Nick and rclone team, Brian from Workspace Admins here.
For my day job, a few of my customers have asked me to assist with migrating files from Dropbox, Citrix ShareFile, etc. Using the usual migration tools would be cost-prohibitive, since they typically limit transfers to 10GB per license.
I am considering running rclone on a GCP Compute Engine instance as an option, ideally without having to download the data to disk first.
My questions are:
What matters more for rclone copy: disk IOPS or RAM?
Would you recommend copying files from remote to remote directly, or should I stage the data on the attached disk first? (As I write this, I'm thinking the latter may be better: I wouldn't be making as many API calls against the migration source, and I'd have better visibility and data integrity over the length of the project.)
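To make the two approaches concrete, here is a rough sketch of what I had in mind. The remote names (`dropbox:`, `gdrive:`) and paths are placeholders for whatever ends up in my rclone.conf, and the tuning flags are just starting points, not recommendations:

```shell
# Option A: remote-to-remote copy, no local staging.
# Data streams through the VM; nothing is written to its disk.
rclone copy dropbox:Clients/AcmeCo gdrive:Migrations/AcmeCo \
    --transfers 8 --checkers 16 --tpslimit 10 --progress

# Option B: stage on the attached disk first, then upload.
# Two passes over the data, but fewer API calls per pass against
# the source, and a local copy to verify against mid-project.
rclone copy dropbox:Clients/AcmeCo /mnt/staging/AcmeCo --progress
rclone copy /mnt/staging/AcmeCo gdrive:Migrations/AcmeCo --progress
```

If Option B is the better fit, I assume the answer to the IOPS-vs-RAM question tilts toward disk throughput, since every byte lands on the staging disk twice (once written, once read back).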
For logging, I'm planning on --log-file and --log-level, plus rclone ls/lsf before and after for comparison. (I did notice that rclone lsf supports --csv for some remotes, but not all.) Are there any other recommended tips? I will have to go back and look at the Q&A part of the video.
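For reference, this is roughly the before/after workflow I'm picturing; again, the remote names and paths are hypothetical placeholders:

```shell
# Snapshot the source listing before the copy (path + size, CSV-quoted).
rclone lsf --recursive --format "ps" --csv dropbox:Clients/AcmeCo | sort > before.csv

# Run the migration with a log file for the project record.
rclone copy dropbox:Clients/AcmeCo gdrive:Migrations/AcmeCo \
    --log-file migration.log --log-level INFO

# Snapshot the destination and compare the two listings.
rclone lsf --recursive --format "ps" --csv gdrive:Migrations/AcmeCo | sort > after.csv
diff before.csv after.csv

# rclone check compares source and destination directly,
# using hashes where both remotes support them.
rclone check dropbox:Clients/AcmeCo gdrive:Migrations/AcmeCo --log-file check.log
```

If rclone check makes the lsf comparison redundant, or there's a better verification pattern, I'd love to hear it.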
Yup, the token seems fine, but something must have changed during the authorization process for rclone. Very bizarre; or maybe I'm misremembering what I did. In any case, copying and pasting rclone.conf seems to have fixed the issue!