Rclone Transfer Question

Hey everyone,

I'm still a bit new to this whole gdrive/rclone thing, so please bear with me. I've got a script set up that is supposed to upload new files to gdrive after they've finished downloading, using rclone move, and it fires every 5 minutes. It still needs a bit of tweaking, especially with more than one file, but it works for the most part. My first question is this:

  1. I have the move/copy process logging to a file. For pretty much every file, the log displays messages like this:
2019/05/17 18:10:01 DEBUG : 2 go routines active
2019/05/17 18:10:01 DEBUG : rclone: Version "v1.47.0" finishing with parameters ["rclone" "moveto" "/home/td00/TorrentShare/RadarrDownloads" "/home/td00/mnt/gdrive/Movies" "--checkers" "30" "-vv" "--log-file=/home/td00/logs/gdrive-upload-movies.log" "--tpslimit" "3" "--retries" "5" "--low-level-retries" "10" "--transfers" "10" "--drive-chunk-size" "32M" "--max-age" "10m" "--exclude" "*partial~"]
2019/05/17 18:10:01 INFO  :
Transferred:       18.440G / 21.883 GBytes, 84%, 638.151 kBytes/s, ETA 1h34m16s
Errors:                 0
Checks:                 2 / 2, 100%
Transferred:            0 / 1, 0%
Elapsed time:     8h25m0s
 * L.A. Confidential (199…1997) Bluray-1080p.mkv: 71% /12.245G, 845.844k/s, 1h11m7s

2019/05/17 18:10:01 INFO  :
Transferred:        9.779G / 12.245 GBytes, 80%, 551.285 kBytes/s, ETA 1h18m11s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            0 / 1, 0%
Elapsed time:     5h10m0s
 * L.A. Confidential (199…1997) Bluray-1080p.mkv: 79% /12.245G, 713.798k/s, 1h0m23s

2019/05/17 18:10:01 INFO  :
Transferred:        9.537G / 12.245 GBytes, 78%, 546.449 kBytes/s, ETA 1h26m37s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            0 / 1, 0%
Elapsed time:      5h5m0s
 * L.A. Confidential (199…1997) Bluray-1080p.mkv: 77% /12.245G, 415.878k/s, 1h53m49s

Notice how it seems to be displaying the same file with different percentages multiple times? It completes successfully in the end, but I want to make sure I don't have something set up incorrectly and that it isn't moving/copying the file multiple times.

The other question I have is that, as I stated, I am using rclone move to move these files. However, according to my logs it doesn't appear to be using move, but copy instead, as per this message in the log:

2019/05/17 18:30:04 DEBUG : Stand By Me (1986) 1080p/Stand By Me (1986) 1080p NL Subs.mkv: Can't move: rename /home/td00/TorrentShare/RadarrDownloads/Stand By Me (1986) 1080p/Stand By Me (1986) 1080p NL Subs.mkv /home/td00/mnt/gdrive/Movies/Stand By Me (1986) 1080p/Stand By Me (1986) 1080p NL Subs.mkv: invalid cross-device link: trying copy
2019/05/17 18:30:04 DEBUG : Stand By Me (1986) 1080p/Stand By Me (1986) 1080p NL Subs.mkv: Can't move, switching to copy
2019/05/17 18:30:04 DEBUG : Stand By Me (1986) 1080p/Stand By Me (1986) 1080p NL Subs.mkv: Failed to pre-allocate: operation not supported
2019/05/17 18:31:01 INFO  :
Transferred:       10.052G / 12.245 GBytes, 82%, 547.277 kBytes/s, ETA 1h10m1s
Errors:                 0
Checks:                 2 / 2, 100%
Transferred:            0 / 1, 0%
Elapsed time:     5h21m0s
 * L.A. Confidential (199…1997) Bluray-1080p.mkv: 82% /12.245G, 304.696k/s, 2h5m47s

2019/05/17 18:31:01 INFO  :
Transferred:       19.096G / 21.883 GBytes, 87%, 634.474 kBytes/s, ETA 1h16m45s
Errors:                 0
Checks:                 2 / 2, 100%
Transferred:            0 / 1, 0%
Elapsed time:     8h46m0s
 * L.A. Confidential (199…1997) Bluray-1080p.mkv: 77% /12.245G, 312.116k/s, 2h36m1s

2019/05/17 18:31:01 INFO  :
Transferred:       32.027M / 7.937 GBytes, 0%, 546.505 kBytes/s, ETA 4h12m48s
Errors:                 0
Checks:                 0 / 0, -
Transferred:            0 / 1, 0%
Elapsed time:        1m0s
 * Stand By Me (1986) 108…986) 1080p NL Subs.mkv:  0% /7.937G, 740.837k/s, 3h6m29s

So even though it fails to move, it does copy? Is there any way to fix these? Also, a shout out to Animosity022: I've been reading these boards and following your suggestions for the past two weeks while getting this all set up. Very helpful!

The logs look like you are running two copies of rclone move at once... Check with ps axf to see.

Rclone will try to do a server-side move; if that isn't possible it will do a copy then a delete, which is what is happening here.

That failed to pre-allocate message is interesting... What OS and which filesystem are you using?

Hi @ncw I'm currently running Ubuntu Server 18.04 LTS in a VM with ext4 as the filesystem. I ran the command you suggested but it didn't show multiple copies of rclone move. I do have a script that runs on a cron job every 5 minutes and is supposed to move files that are less than 10 minutes old. I wonder if I have my time configuration messed up:

rclone moveto "$FROM" "$TO" --checkers 30 -vv --log-file="$LOGFILE" --tpslimit 3 --retries 5 --low-level-retries 10 --transfers 10 --drive-chunk-size 32M --max-age 10m --exclude "*partial~"

@ncw Would you know why it's unable to make this server side move? Am I perhaps missing something in a configuration somewhere? For the most part everything works... I'm just tweaking now to get rid of these errors and stuff.

Thanks for all the help!

I don't know if this is what you're running into, but I've found that when I run rclone copy/sync with -P as a systemd service, I get the behavior you're describing, where successive updates are printed out as new lines in the journal. I wonder if the same is true for cron...

If you want a cheap way to make sure two rclones don't run at the same time then add the --rc flag. If you are using the rc elsewhere then set --rc-addr too.

It all depends on what $FROM and $TO are.

@chiraag Thanks for the tip! I am not using -P or a systemd service yet. I am just ironing out some of these kinks before I move on to creating the systemd service files, but I will keep this in mind, thanks!

@ncw The FROM and TO variables are just locations for the files:


And cheap solutions work for me! I will try out --rc to see if that corrects it and let you know!

I put the --rc parameter in and received the following error:
2019/05/18 10:50:01 Failed to start remote control: start server failed: listen tcp bind: address already in use

I'm not sure, but I think your problem is calling the move command every 5 minutes.

On the first run it starts moving the file.

At 10 minutes (new run) it doesn't find the file on Drive, so it tries to move it again (while the other rclone is still transferring).

At 15 minutes it checks again, sees that the file is still not on Drive, and starts transferring it once more...

So in the end you have several commands running at the same time, all transferring the same file.

@Cindakil I'm beginning to think you are right. Though I think it might have more to do with the age of the file(s) I am asking to be moved rather than with running the rclone move command every 5 minutes. Or maybe a combination of both!

Right now I want it to move all files that are 10 minutes old or younger. I guess I need to figure out a way to set the cron to run the script at times that would capture all new files downloaded by Sonarr or Radarr without missing anything.

Maybe you can make a little script that first checks whether an rclone move command is already running; if one is, it doesn't start a new one, and if not, it starts rclone move again.

I'm sure someone has already made something similar.

Add the --rc flag to prevent more than one from running. You can also use the Linux timeout command when calling rclone to make sure it isn't hung, terminating it if it runs for, say, 20 minutes, just to be safe.
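As a sketch of the timeout idea: timeout runs a command and kills it if it exceeds a limit, exiting with status 124 when it does. Here sleep stands in for the rclone call, purely so the pattern is visible; in the real script it would be something like timeout 20m rclone moveto "$FROM" "$TO" ...

```shell
#!/bin/sh
# Demonstration of `timeout` (GNU coreutils) killing a hung command.
# `sleep 10` stands in for a stuck rclone transfer.
status=0
timeout 2s sleep 10 || status=$?

# timeout exits with status 124 when it had to kill the command
if [ "$status" -eq 124 ]; then
    echo "command was killed after exceeding the time limit"
fi
```

In the cron script, a 20-minute cap like the one suggested above would be timeout 20m followed by the full rclone command.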

When I add --rc to prevent more than one from running, I get the following error:

2019/05/18 10:50:01 Failed to start remote control: start server failed: listen tcp bind: address already in use

Is this expected behavior for this command?

Also, in terms of not letting the script run simultaneously: I've been googling around and have noticed that some people add this to their scripts to prevent simultaneous runs:

if pidof -o %PPID -x "gdrive-upload-movies.sh"; then
    exit 1
fi

I have added this but it doesn't seem to work either. Unless I'm writing it wrong as well?

You should replace gdrive-upload-movies.sh with the name of your script. Also, to troubleshoot, you can run the line pidof -o %PPID -x "gdrive-upload-movies.sh" on its own to see what it prints out. That will tell you whether it ever detects another instance running.

Hey @chiraag So that is the name of my script lol, but when I try to run the command nothing comes up, so that could very well be the issue!

Yeah. You have to kill the ones that are already running.

Ah got it! Will do!

I think the problem lies with cron. If you search my threads, I've had many cron related issues that sound similar to yours.

This solution worked perfectly well for me:
Install tmux and run an infinite while loop in your tmux session.

while :; do rclone move --all-your-arguments source/ dest:; sleep 5m; done

That way you accomplish your goal while skipping cron altogether; you have more control over the process, and you can easily start/stop/modify anything you need. No need for the --rc flag, since you know only one instance will be running.

Or, if you're on Linux, try a systemd timer. It solves the issue by never activating the same unit more than once (if it's already running, it doesn't start another instance), which sounds like what you want. And it's cleaner than the while-loop approach too.
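For anyone following along, a minimal timer setup might look like the sketch below. The unit names and script path are placeholders, not the poster's actual setup. Because the service is Type=oneshot, the timer will not re-trigger it while a previous run is still active, which is the single-instance behaviour being discussed:

```ini
# /etc/systemd/system/gdrive-upload-movies.service
[Unit]
Description=Move finished downloads to Google Drive

[Service]
Type=oneshot
ExecStart=/home/td00/gdrive-upload-movies.sh

# /etc/systemd/system/gdrive-upload-movies.timer
[Unit]
Description=Run the upload script every 5 minutes

[Timer]
OnCalendar=*:0/5
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with systemctl enable --now gdrive-upload-movies.timer, and check past runs with journalctl -u gdrive-upload-movies.service.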


Thanks @mattzab I will look into this suggestion to see if it could work for me. But just to reiterate for the purposes of this thread, what I am trying to do is run this:

rclone moveto "$FROM" "$TO" --checkers 30 -vv --log-file="$LOGFILE" --tpslimit 3 --retries 5 --low-level-retries 10 --transfers 10 --drive-chunk-size 32M --max-age 10m --exclude "*partial~"

Every 5 minutes, to check for new files that Radarr has downloaded. This works because I have another script that fires every 3 minutes and just "touches" new files to update their modified times, so the rclone script above picks them up. It works; however, as noted, there are often multiple instances running that copy/move the same file, and I am trying to eliminate this.

@chiraag So this could work, but I'm not worried about the number of processes running as long as they are working on unique files based on modified time, right? So I'm wondering: if I set this to only fire once, will it miss any new files added during the time window?

Ah, the reason server-side move doesn't work is that you are trying to move across filesystem boundaries, I'd guess. So $TO looks like it is a mount. mv would do the same thing, so there's nothing you can optimise here.
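You can confirm this by comparing the device IDs of the two paths: rename() only succeeds when both sides are on the same filesystem, and differing IDs are exactly what produces the "invalid cross-device link" error in the log. A sketch, defaulting to / and /proc as stand-in paths that are on different filesystems on Linux; substitute your download directory and the gdrive mount point:

```shell
#!/bin/sh
# Compare the device IDs of two paths. If they differ, rename() fails
# with EXDEV and rclone falls back to copy + delete. The defaults are
# purely for demonstration; pass your own paths as arguments.
SRC=${1:-/}
DST=${2:-/proc}
if [ "$(stat -c %d "$SRC")" = "$(stat -c %d "$DST")" ]; then
    echo "same filesystem: a true move (rename) is possible"
else
    echo "different filesystems: expect rclone to copy then delete"
fi
```

Run against the paths from the log, e.g. ./check.sh /home/td00/TorrentShare/RadarrDownloads /home/td00/mnt/gdrive/Movies, to see which case applies.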