Moving Files From Server To Google Drive

rclone move /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --delete-empty-src-dirs --tpslimit 12 --tpslimit-burst 12 --transfers 64 --checkers 32 --checksum --log-file /www/wwwroot/download/rclone.log

I just have to move video files from the server to Google Drive with rclone. Is my command above OK for this task? I would really appreciate it if someone could guide me further.

Regards

Hi, that looks good, but I would tweak it a bit:

  • might add --log-level=DEBUG
  • make sure to use the latest rclone version.

FWIW, I never use move unless I am forced to.
I prefer the two-step approach:

  1. rclone copy
  2. rclone delete

If you are not sure, keep it simple and definitely do not use a high number of transfers and checkers - use the defaults (you are using cheap consumer cloud storage, not some enterprise S3 monster where you can transfer tons of data without limits):

rclone move /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --delete-empty-src-dirs  --log-file /www/wwwroot/download/rclone.log 

Then if you have issues post your rclone.log, your rclone.conf etc.

How would a replacement for move look with rclone copy and rclone delete? And can I use -P --fast-list in my command?

No difference. This is up to you.

Definitely yes. -P is for you to see what is happening. --fast-list helps when you have A LOT of files, but you can use it regardless.
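
For example, a sketch of slotting both flags into your earlier copy command (same source and destination, other flags trimmed for brevity):

rclone copy /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc -P --fast-list --log-file /www/wwwroot/download/rclone.log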


something like

rclone copy   /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --tpslimit 12 --tpslimit-burst 12 --transfers 64 --checkers 32 --checksum --log-level DEBUG --log-file /www/wwwroot/download/rclone.log 
rclone delete /www/wwwroot/download/VidSrc/Videos/Folder --rmdirs --log-level DEBUG --log-file /www/wwwroot/download/rclone.log --dry-run 

Technically, you could use purge instead of delete --rmdirs,
but I would not do so.

Yes, no problem.


If I use the copy command, is there any way, after the copy completes, to check that the origin and target folders are the same before deleting the origin folder?

Good question, you could use rclone check,
or if you want to be paranoid about it, rclone check --download.

Though, when rclone transfers a file, it does compare checksums.

Also, your copy command is already using --checksum,
so running rclone copy --checksum again would be roughly equivalent to rclone check.
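
For example, a sketch using the same paths as your copy - rclone check compares the local folder against the remote, and --download re-fetches the data instead of trusting hashes:

rclone check /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --log-file /www/wwwroot/download/rclone.log
rclone check /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --download --log-file /www/wwwroot/download/rclone.log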

If rclone copy ends with exit code 0, it means everything was copied with no issues.

You can always run rclone copy again - if everything is the same it will be quick and will tell you so.

Or check out rclone check.

Also, do not be afraid to experiment - use the --dry-run flag to run things without any risk of changing anything.
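
A minimal sketch of chaining the two steps - the && only runs the delete when the copy exits with code 0, and --dry-run keeps the delete harmless until you drop it:

rclone copy /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --checksum --log-file /www/wwwroot/download/rclone.log && rclone delete /www/wwwroot/download/VidSrc/Videos/Folder --rmdirs --dry-run --log-file /www/wwwroot/download/rclone.log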

Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded

Should I use my own service account?

Thank you so much, both of you, you are helping me a lot. Thanks again.

I believe that means you have hit a hard quota limit from Google.
You have to wait for that to reset.

How much data have you uploaded in the last 24 hours - more than 750GiB?

Hard to answer, as no config file was posted, no debug log, no command.

Anyway, you should create your own client ID + secret, as per the rclone docs.

And I am not sure --transfers 64 would make a real difference, based on how gdrive works.
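
For reference, a rough sketch of what the drive section of rclone.conf looks like once your own client ID and secret are plugged in (all values below are placeholders; the remote name matches the Google: used in your commands):

[Google]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive
token = {"access_token":"...","token_type":"Bearer","expiry":"..."}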

Oops, just hitting that limit. I was not aware that Google has an upload limit - any idea when it will reset?

I have no other errors in my log file, so can I keep using --transfers 64, or do you want me to change it to --transfers 32?

I'm using my own client ID + secret key.

Based on forum posts, 24 hours.
https://rclone.org/drive/#drive-stop-on-upload-limit

Sure, I guess it does not matter, you can leave it as it is.
"Drive has quite a lot of rate limiting. This causes rclone to be limited to transferring about 2 files per second only. Individual files may be transferred much faster at 100s of MiB/s but lots of small files can take a long time."

Thanks @asdffdsa
Hopefully this will be my last question: is there any flag to stop moving/copying files when the upload limit is reached, wait for the limit to reset, and then start the process again?
and
Can I create another client ID + secret key for another 750GB upload? Is that possible or not?

Thanks

:wink:

One option is to limit the bandwidth, so that you never hit the limit in 24 hours.

For example, 750GiB in 24 hours works out to roughly 8.9MiB/s, so you could use --bwlimit=8.68M and stay just under it.
To play it safe, use --bwlimit=8.5M, which would take about 25 hours,
or safer yet, --bwlimit=8M, which would take about 27 hours.
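
A sketch of plugging that into the copy command (the M suffix means MiB/s; without a suffix rclone treats --bwlimit as KiB/s):

rclone copy /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --bwlimit=8M --checksum --log-file /www/wwwroot/download/rclone.log

For completeness, the drive docs page linked earlier also describes --drive-stop-on-upload-limit, which makes the 403 upload-limit error fatal so rclone stops cleanly instead of retrying. It does not wait and resume on its own, but it covers the "stop when the limit is reached" part of your question:

rclone copy /www/wwwroot/download/VidSrc/Videos/Folder Google:Movies/Vidsrc --drive-stop-on-upload-limit --bwlimit=8M --log-file /www/wwwroot/download/rclone.log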

