Hello Nick and rclone team, Brian from Workspace Admins here.
For my day job, I've been asked by a few of my customers to help migrate files from Dropbox, Citrix ShareFile, etc. This would become cost-prohibitive with the usual migration tools, which typically cap transfers at 10 GB per license.
I am considering running rclone in a GCP Compute Engine VM as an option, ideally without having to download the data to disk first.
My questions are:
What is more important for rclone copy? Disk IOPS or RAM?
Would you recommend copying files directly from remote to remote, or should I stage the data on the attached disk first? (As I write this, I think the latter may be better: I wouldn't be making as many API calls against the migration source, and I'd have better visibility and data integrity over the length of the project.)
Logging: --log-file, --log-level, and ls/lsf before and after for comparison. (I did notice that rclone lsf supports --csv for some output, but not all.) Are there any other recommended tips? I will have to go back and look at the Q&A part of the video.
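The logging and before/after cataloging described above could be sketched roughly like this. The remote names (`src:`, `dst:`) and file names are hypothetical placeholders, not from the original post:

```shell
# Catalog the source before the copy. --format "pst" emits
# path, size, and modtime; --csv quotes them for spreadsheets.
rclone lsf --recursive --format "pst" --csv src: > before.csv

# Run the copy with a persistent log for the project record.
rclone copy src: dst: --log-file migration.log --log-level INFO --progress

# Catalog the destination afterwards for comparison.
rclone lsf --recursive --format "pst" --csv dst: > after.csv
```

The two CSV files can then be diffed or loaded side by side to spot anything that didn't make it across.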
If you are copying from local disk then IOPS, definitely. However, if you are copying from network to network then RAM, though rclone doesn't usually use a huge amount.
I'd normally set up a VM in the cloud and do a remote to remote copy using the full bandwidth of the cloud provider.
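A remote-to-remote copy from a cloud VM might look like the following sketch. The remote names and the tuning values are assumptions to illustrate the idea, not recommendations from the original reply:

```shell
# Server-side bandwidth is plentiful on a cloud VM, so raise the
# parallelism a little; --tpslimit keeps us polite to the source API.
rclone copy dropbox: gdrive:migrated-data \
  --transfers 8 --checkers 16 --tpslimit 10 \
  --log-file migration.log --log-level INFO
```

The data streams through the VM's memory rather than landing on its disk, which is why RAM matters more than disk IOPS in this setup.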
Copying locally first might give you the comfort that you have the data locally.
rclone lsf is good for cataloging what you've got.
rclone check is great for an after check to see if everything copied properly.
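As a sketch of that after-check, rclone check can write its findings into separate report files (the remote and file names below are hypothetical):

```shell
# Verify source against destination by hash where both sides
# support it, falling back to size where they don't.
rclone check src: dst: \
  --missing-on-dst missing.txt \
  --differ differ.txt \
  --error errors.txt \
  --log-level INFO
```

Empty report files at the end of the project are good evidence that everything copied properly.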
rclone sync/copy doesn't support --csv yet, but I have it on my todo list!
Did you use rclone authorize? I have a feeling this won't work properly for ShareFile, due to rclone getting more info back in the OAuth flow. Copying the config file should work, though.
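Copying the config file across could be sketched like this; the paths and host name are assumptions, so use whatever `rclone config file` actually prints on your machines:

```shell
# On the machine where the browser-based auth worked,
# find where rclone keeps its config:
rclone config file

# Copy that file to the headless VM (paths are illustrative):
scp ~/.config/rclone/rclone.conf user@vm:~/.config/rclone/rclone.conf
```

The config file contains the OAuth token, so the VM can use the remote without going through the authorization flow itself.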
Yup, the token seems fine, but something must have changed in the authorization process for rclone. Very bizarre, or I'm remembering what I did wrong. In any case, copying and pasting rclone.conf seems to have fixed the issue!