Tell rclone where to start copying from

What is the problem you are having with rclone?

My Google education drive will be shut down, and I have been trying to move files from it to another Google Drive folder. But with the bandwidth limit, my education account will be shut down before I can move everything over. I am thinking about creating another user, but I don't want the same files to be copied twice. Is there a way to tell rclone/Drive to start copying from a certain point in the folder structure, or to start from the bottom up?

I assume I would have to create a new remote for the new user and set that up to copy from oldgdrive, but I am worried that it will start from the beginning again, which would be pointless.

This is what I am imagining, and I am wondering if it's possible:

copy oldgdrive:folder newgdrive:folder start a-z

copy oldgdrive:folder newergdrive:folder start z-a
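For reference, rclone's --order-by flag is the closest real equivalent to the hypothetical start a-z / z-a syntax above: it controls the order in which transfers are started. A minimal sketch, assuming the remote names used in this thread:

rclone copy --order-by name,ascending oldgdrive:folder newgdrive:folder

rclone copy --order-by name,descending oldgdrive:folder newergdrive:folder

Note that --order-by only changes the order transfers start in; it does not skip anything, so both runs still scan the whole source tree.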

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.0

  • os/version: rocky 8.6 (64 bit)
  • os/kernel: 4.18.0-372.16.1.el8_6.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.18.3
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --bwlimit=8650k --stats 30m copy oldgdrive:folder newgdrive:folder

Yep:

What is the default ordering for copy? So I should use order by name descending, correct?

The default is somewhat random, so when I moved my stuff, I had two copies going: one ascending and one descending, until it was done.

It's too far along for me to start a new one again, so I will start descending and just cross my fingers.

I don't think you'll hit an issue.

I used move and if something is missing, it just tosses an error that it isn't there anymore.

If you are using copy, it would most likely say the file already exists and move past it, so I would not think it should be a problem at all.

If you are doing server-side copies, bwlimit doesn't apply either, but I'm not sure if you are doing that.

What do you mean by server-side copy? I thought I needed to use bwlimit or I would hit the upload limit? I also think I made the mistake of copying not into a shared drive but into an individual's drive.

I can't see your rclone.conf nor a log so I can't tell you for sure what you've got setup.

rclone copyto GD:hosts GD:hostsagain -vvv
2022/07/15 14:13:59 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2022/07/15 14:13:59 DEBUG : rclone: Version "v1.59.0" starting with parameters ["rclone" "copyto" "GD:hosts" "GD:hostsagain" "-vvv"]
2022/07/15 14:13:59 DEBUG : Creating backend with remote "GD:hosts"
2022/07/15 14:13:59 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2022/07/15 14:13:59 DEBUG : GD: Loaded invalid token from config file - ignoring
2022/07/15 14:13:59 DEBUG : Saving config "token" in section "GD" of the config file
2022/07/15 14:13:59 DEBUG : GD: Saved new token in config file
2022/07/15 14:14:00 DEBUG : fs cache: adding new entry for parent of "GD:hosts", "GD:"
2022/07/15 14:14:00 DEBUG : hosts: Need to transfer - File not found at Destination
2022/07/15 14:14:01 DEBUG : hosts: md5 = 44f74a6bbe47bcbe3a18ef0893ee27dc OK
2022/07/15 14:14:01 INFO  : hosts: Copied (server-side copy) to: hostsagain
2022/07/15 14:14:01 INFO  :
Transferred:   	        278 B / 278 B, 100%, 0 B/s, ETA -
Transferred:            1 / 1, 100%
Elapsed time:         2.1s

2022/07/15 14:14:01 DEBUG : 7 go routines active

If it's a server-side copy, it doesn't use bandwidth as it's all done on the server, so setting a bwlimit would not do anything.

There's no harm in hitting the 750G upload limit as it'll just reset the next day. I would just stop when I hit the limit and do that daily.

It's going from a totally different domain to a new domain. I would assume that's not server-side?

I am copying from one mount to a new mount.

Yes, you can server-side copy from Google Drive to Google Drive. It doesn't have to be the same drive; I was just showing an example from the log.
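A minimal sketch of a server-side copy between two different drive remotes, assuming the remote names from this thread; the --drive-server-side-across-configs flag tells rclone to attempt server-side copies across configs, which only works if the account has permission to read both sides:

rclone copy --drive-server-side-across-configs oldgdrive:folder newgdrive:folder -v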

Wouldn't the upload limit still apply?

Yes, that's why I use the stop-on-upload-limit flag.
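That flag is --drive-stop-on-upload-limit, which makes rclone stop the transfer when Google Drive returns its daily upload limit error. A sketch, assuming the same remotes:

rclone copy --drive-stop-on-upload-limit oldgdrive:folder newgdrive:folder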

Is there a way I can do a wildcard for a directory like below? It doesn't look like ordering by name is going to work for my use case. I might be looking for something like this - How to copy using a wildcard

rclone ls --include '/Z/**' oldgdrive:folder - and that command would copy all directories, and whatever files are in those directories, starting with the letter Z, not just the folder with the title Z, if that makes sense.
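If the goal is every top-level directory whose name starts with Z, rather than the single directory named exactly Z, a glob like Z* may be closer to what is wanted; the pattern below is an assumption about the intended match:

rclone copy --include '/Z*/**' oldgdrive:folder newgdrive:folder --dry-run -v

The leading / anchors the pattern at the root of the source, and --dry-run shows what would be copied without transferring anything.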

What’s the output of the command? What’s it doing that you don’t want it to?

I figured it out by messing with the --include flag. If I start a copy again, will rclone re-write everything, or is it smart enough to know what is already on the destination and not overwrite it?

Yes, as per the docs:
Copy files from source to dest, skipping identical files.
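A quick way to confirm that on your own tree is a dry run, which logs what rclone would transfer without actually copying anything; assuming the same remotes:

rclone copy --dry-run -v oldgdrive:folder newgdrive:folder

Files already identical on the destination are not re-transferred.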
