Command on encrypted mount dir

I wonder if I can run commands such as copy, move or delete directly in an encrypted mounted dir, or if I have to run the same commands with rclone COMMAND secret:
Is it the same?

Can you describe exactly what you are trying to achieve?

Let's say that you have these rclone remotes:
acd: (unencrypted Amazon Drive remote)
acdcrypt: pointed at acd:/crypt

So what do you want to list/delete/move ?

If I copy an unencrypted file to the encrypted dir with the cp command, will my file be encrypted or unencrypted?

Whatever you copy/move to your encrypted remote will be encrypted.
Basically, if the destination is an encrypted remote, rclone will encrypt the content during upload.
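For example, with the remotes named above (acdcrypt layered on acd:), both of the following paths end up encrypted on the backend. The commands are illustrative only; the mount point and directory names are made up:

```shell
# 1) Upload through rclone directly -- content is encrypted during upload:
rclone copy FILENAME acdcrypt:SOME_DIR

# 2) Copy into a mount of the crypt remote -- also encrypted, because the
#    mount writes through the acdcrypt: remote (mount point is illustrative):
rclone mount acdcrypt: /mnt/acdcrypt &
cp FILENAME /mnt/acdcrypt/SOME_DIR/
```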

But is that also true if I use, for example, cp from the terminal, or only if I run:

rclone copy FILENAME secret:ENCRYPTED_DIR

Yes, I've just tried it, and all files copied into the mounted dir are encrypted.
So there are no problems if I do all these things in the mounted dir instead of with rclone commands?

I would suggest you always use rclone copy/move, and use rclone mount just for reading the content.
Maybe explain what your scenario is exactly, e.g. what you want to do.

The most typical scenario with Sonarr, CouchPotato, torrents and NZBs is how to move files to the cloud once they are downloaded.

For that, the best way is for your clients to copy/move files to one local folder, and then run a scheduled script that rclone copy/moves those files to the cloud.

Yes, my question was because of my SickRage and CouchPotato.
I will look into your suggestion about moving to a local folder first and then scheduling rclone.

Just set your SickRage and CouchPotato to move files to a local folder, and you can use my script that will move those files to the cloud.

Don't forget to set your paths.

p.s. The script will be run by crontab every minute and will automatically exit if an upload is already in progress. It won't upload any files until they are at least 15 minutes old; if you are not using encfs and are moving files, you can set the time to 1 minute. The main reason for this is so files are not uploaded while they are still being partially copied locally.
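The "at least 15 minutes old" gate described above can be sketched with `find -mmin`. This is a stand-alone demo, not the script itself; the `/tmp/age-demo` path and file names are made up for illustration:

```shell
#!/bin/bash
# Demo of the age gate: only files last modified MORE than 15 minutes
# ago are selected, so freshly (possibly partially) copied files are
# skipped. Paths below are demo stand-ins.
FROM="/tmp/age-demo"
mkdir -p "$FROM"

# Simulate one finished (old) and one still-arriving (fresh) file.
touch -d "20 minutes ago" "$FROM/old.mkv"
touch "$FROM/new.mkv"

# -mmin +15 matches files modified more than 15 minutes ago,
# so only old.mkv is listed.
find "$FROM" -type f -mmin +15
```

The same `find … -mmin +15 | read` pattern is what the upload script uses to decide whether there is anything ready to move.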


That's great. Thanks so much, I will try this.

I've tried your suggestion and almost everything is working fine.
Just one thing is very strange.

I created a script like yours to move from the local dir to the encrypted one.

if pidof -o %PPID -x "rclone-upload.cron"; then
   exit 1
fi

if find $FROM* -type f -mmin +15 | read; then
  echo "$(date "+%d.%m.%Y %T") RCLONE UPLOAD STARTED" | tee -a $LOGFILE
  rclone move $FROM $TO -c --no-traverse --transfers=300 --checkers=300 --delete-after --min-age 15m --log-file=$LOGFILE
  echo "$(date "+%d.%m.%Y %T") RCLONE UPLOAD ENDED" | tee -a $LOGFILE
fi

And I added a line with crontab -e to run the script every minute:

* * * * * /root/scripts/rclone-upload.cron >/dev/null 2>&1

Checking the logs, I find this, even though there have been files waiting to be copied for some hours:

31.01.2017 09:39:01 RCLONE UPLOAD STARTED
31.01.2017 09:39:01 RCLONE UPLOAD ENDED
31.01.2017 09:40:01 RCLONE UPLOAD STARTED
31.01.2017 09:40:01 RCLONE UPLOAD ENDED
31.01.2017 09:41:01 RCLONE UPLOAD STARTED
31.01.2017 09:41:01 RCLONE UPLOAD ENDED
31.01.2017 09:42:01 RCLONE UPLOAD STARTED
31.01.2017 09:42:01 RCLONE UPLOAD ENDED
31.01.2017 09:43:01 RCLONE UPLOAD STARTED
31.01.2017 09:43:01 RCLONE UPLOAD ENDED
31.01.2017 09:44:01 RCLONE UPLOAD STARTED
31.01.2017 09:44:01 RCLONE UPLOAD ENDED

Then I tried to run the script manually, and it performs the move.
Here is the log:

2017/01/31 09:44:25 Encrypted Google drive root 'crypt/56bq628lpesuhej1jugfhjid$
2017/01/31 09:44:25 Encrypted Google drive root 'crypt/56bq628lpesuhej1jugfhjid$
2017/01/31 09:45:23
Transferred: 1.945 GBytes (32.242 MBytes/s)
Errors: 0
Checks: 2
Transferred: 2
Elapsed time: 1m1.7s

  • …/Salem - S03E04 - Night’s Black Agents.mkv: 36% done, 11.707 MBytes/s, ET$
  • …012.ITA.AC3.Subs.1080p.BluRay.x264-HRS.mkv: 23% done, 12.055 MBytes/s, ET$
  • …1958.ITA.AC3.Sub.1080p.BluRay.x264-HRS.mkv: 20% done, 11.724 MBytes/s, ET$

Any idea?

OK, I found out.
I had to put this in my script:

I am experiencing the same issue.
When you say PATH_TO_MY_SCRIPT, do you mean /home/plex/scripts/ or /home/plex/scripts/rclone-upload.cron?


I mean /home/plex/scripts/, where you have the script to execute.
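For anyone hitting the same "works manually, does nothing from cron" symptom: a common cause is that cron runs jobs with a minimal PATH (often just /usr/bin:/bin), so binaries like rclone or pidof that resolve fine in an interactive shell are not found. One hedged fix, assuming that is the cause, is to set PATH explicitly at the top of the script. The demo tool below is hypothetical, standing in for rclone:

```shell
#!/bin/bash
# Cron runs with a minimal PATH, so set it explicitly at the top of
# the script (directories are illustrative; adjust to your system).
export PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"

# Demonstration with a stand-in tool (hypothetical, not rclone itself):
BINDIR="$(mktemp -d)"
printf '#!/bin/sh\necho hello-from-cron\n' > "$BINDIR/mytool"
chmod +x "$BINDIR/mytool"

# Same idea as prepending the directory that holds rclone:
export PATH="$BINDIR:$PATH"
mytool   # now resolves via PATH
```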


FYI, this will not work as intended with NZB downloaders that preserve file creation times from rar files. The age check will always evaluate to true, because the mtimes will reflect the timestamps of the files when they were compressed, not decompressed.

You can fix this in SABnzbd by enabling 'ignore_unrar_dates' under Special in the settings.

A more general fix is something like this:

CLOCKFILE="/tmp/$(basename $0).clock"

Reverse the logic of the first part:

if find $FROM* -type f -mmin -15 | read; then
    echo "$(date "+%d.%m.%Y %T") Files younger than 15 min in ${FROM}" | tee -a $LOGFILE
    exit 1
fi

And then at the end of the script:

# Make sure we don't get stale mtimes in news from unrar
if [ ! -f "$CLOCKFILE" ] || find "$CLOCKFILE" -mmin +15 | read; then
    echo "$(date "+%d.%m.%Y %T") Updating mtimes in $FROM"
    touch "$CLOCKFILE"
    find $FROM -type f -exec touch "{}" \; >/dev/null 2>&1
fi

That way the script will not get stuck when a rar file is unpacked with stale mtimes, because the mtimes under $FROM are refreshed every 15 minutes.
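To make the clock-file mechanism concrete, here is a self-contained sketch of just that part. The `/tmp/clock-demo` paths and the file name are made up for the demo; in the real script `$FROM` and `$CLOCKFILE` come from your own configuration:

```shell
#!/bin/bash
# Demo of the clock-file throttle: refresh mtimes under $FROM at most
# once per 15 minutes, so files whose mtimes came from inside a rar
# archive eventually age past the -mmin +15 gate. Paths are stand-ins.
FROM="/tmp/clock-demo"
CLOCKFILE="/tmp/clock-demo.clock"
mkdir -p "$FROM"

# A file whose mtime is stale, as unrar would leave it.
touch -d "2 days ago" "$FROM/episode.mkv"

# Refresh mtimes only if the clock file is absent or older than 15 min.
if [ ! -f "$CLOCKFILE" ] || find "$CLOCKFILE" -mmin +15 | read; then
    touch "$CLOCKFILE"
    find "$FROM" -type f -exec touch "{}" \;
fi
```

After this runs, episode.mkv has a current mtime, so the upload script's `-mmin +15` check no longer fires immediately on stale archive timestamps.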