Bug: rclone copy & rclone move command

Hello,

Thanks for the awesome tool. I’ve been using it with Plexdrive.
In my use case, I mount Google Drive remotely with Plexdrive on my VPS to get better read speeds for my .rar files.
Then I run "unrar x file.rar" into a directory that is watched by a bash script, which runs "rclone move" to move the extracted files back to a different directory on Google Drive.
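For context, the watcher is roughly this shape (a minimal sketch; the paths and remote name here are made up, and my real script differs in the details):

#!/bin/bash
# Sketch of the watcher loop. WATCH_DIR and the remote name are hypothetical.
WATCH_DIR=/home/user/extracted   # directory unrar extracts into
REMOTE=gdrive:videos             # rclone remote and destination directory

while true; do
    # Push anything that has appeared in the watched directory back to Drive.
    rclone move "$WATCH_DIR" "$REMOTE"
    sleep 10
done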

Each rar archive contains many video files of 256 MB or more. When unrar finishes quickly enough, before the bash script picks the files up and starts "rclone move", everything is fine.
Otherwise, "rclone move" tries to move the file as-is, and in this case the file is still truncated because unrar is still extracting it. I am not sure about the integrity of the result or what "rclone move" actually does here, but from network monitoring it seems to transfer the file in a strange way.

Here is what I know: if, for example, unrar takes 2 minutes to extract each file in an archive, "rclone move" transfers something during that 2-minute window, and then it looks like "rclone move" retries once unrar has finished writing the file.

The result of this transfer is an unplayable video on the remote Google Drive.
It also seems that "rclone move" leaves left-over chunk files at /tmp:

ncdu 1.11 ~ Use the arrow keys to navigate, press ? for help

— /tmp/chunks/1S_eS2xsk3Oh02L_kxDVSVqmuTpsVHYmt ------------------------------
/…
5.0 MiB [##########] 2673868800
5.0 MiB [##########] 2668625920
5.0 MiB [##########] 2663383040
5.0 MiB [##########] 2652897280
5.0 MiB [##########] 2631925760
5.0 MiB [##########] 2605711360
5.0 MiB [##########] 2600468480
5.0 MiB [##########] 2589982720
5.0 MiB [##########] 2584739840
5.0 MiB [##########] 2579496960
5.0 MiB [##########] 2563768320
5.0 MiB [##########] 2553282560
5.0 MiB [######### ] 996147200
5.0 MiB [######### ] 99614720
5.0 MiB [######### ] 990904320

My guess is that the last attempt "rclone move" makes is what actually breaks the file, or maybe it breaks because rclone moves a streamed file that is still being written/extracted by unrar.
So this only happens when unrar is not fast enough at extracting/writing the files.

I hope that in the future "rclone move" will do this sort of check before it starts moving a file:

lsof | grep filename

or something like:

# Wait while some process still has the file open; lsof exits 0 in that case
# (stderr is captured so an lsof error can be told apart from "file closed").
while err_str=$(lsof /path/to/file 2>&1 >/dev/null); do
    sleep 1
done

if [ -n "$err_str" ]; then
    # lsof printed an error string, so the file may or may not be open.
    # It is tricky to decide what to do here; you may want to retry a number
    # of times, but for this example just report it and give up.
    echo "lsof: $err_str" >&2
else
    # lsof exited non-zero without an error string: the file has been closed,
    # so it is safe to move it.
    mv /path/to/file /destination/path
fi
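
Wired into my watcher, the same check could gate rclone itself. A minimal sketch, again with made-up paths and remote name, relying on rclone accepting a single file as the source:

for f in /home/user/extracted/*; do
    [ -e "$f" ] || continue          # the glob matched nothing
    # If lsof exits 0, some process (unrar) still has the file open;
    # skip it for now and let the next pass pick it up.
    if lsof "$f" >/dev/null 2>&1; then
        continue
    fi
    # Nothing has the file open any more, so it should be safe to move.
    rclone move "$f" gdrive:videos
done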

If a file is open for writing by another process, rclone could notify the user or ask what to do, so that it avoids moving random bytes while the file is still being written.

Not sure if it matters, but here are the stats:

Plexdrive gives me around 100 Mbps of read throughput on the rar files in Google Drive, which is what unrar reads from.
unrar writes at a much slower rate, around 30-40 Mbps, into the folder watched by rclone.
"rclone move" gives me around 160-200 Mbps moving the files back to Google Drive.
That is great speed from rclone.

Unfortunately there isn't a cross-platform way of doing that as far as I know. If there were, it would be a great idea… Maybe I'll write one…

In the meantime you can either:

  • unpack to a different directory, then mv the files across afterwards. This should be straightforward (see the sketch below).
  • or use the latest beta with --vfs-cache-mode writes on your mount and write directly into the mount. That should work fine now, but the code is quite new so it may still have bugs.
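
A minimal sketch of the first option, with made-up paths, assuming the staging directory is on the same filesystem as the watched one so that mv is an atomic rename and the watcher can never see a half-written file:

#!/bin/bash
STAGING=/home/user/staging      # hypothetical; same filesystem as WATCH_DIR
WATCH_DIR=/home/user/extracted  # the directory the rclone move script watches

mkdir -p "$STAGING"
# Extract into the staging directory the watcher does not look at...
unrar x file.rar "$STAGING/"
# ...then rename the finished files into the watched directory. On the same
# filesystem mv is an atomic rename, so only complete files ever appear there.
mv "$STAGING"/* "$WATCH_DIR"/

For the second option the mount command just gains the new flag, e.g. rclone mount gdrive: /mnt/gdrive --vfs-cache-mode writes.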