Best rclone options for uploading local to remote folder (GD)

hi,

I have an IP camera that stores its recordings on a local NAS.
Now I'm trying to figure out the best way to upload its recordings to Google Drive as files are changed (not added, since all files are pre-created when the IP camera initializes).
google-drive-ocamlfuse does it too slowly, at about 100-300 KB/s, even though I have a gigabit internet connection.

I guess rclone can do it.

Could someone share your rclone config options (script/fstab) for syncing a local folder to a remote (Google Drive) at a decent speed?
Also, to make this more reliable, please share a .service file for systemd that can monitor the availability of the service (presence of the mount).

Would I need to use cron + rsync + inotifywait, or can rclone handle file changes on its own?

thanks
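(For the mount-monitoring part of the question: a minimal systemd unit for an rclone mount might look like the sketch below. The remote name gd:, the mount point /mnt/gd, the user, and the flags are assumptions for illustration, not from this thread. Type=notify works with recent rclone versions, which signal systemd when the mount is ready; --allow-other also requires user_allow_other in /etc/fuse.conf.)

```
[Unit]
Description=rclone mount of Google Drive
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
User=gd
# gd:, /mnt/gd and the flags below are assumptions -- adjust to your setup
ExecStart=/usr/bin/rclone mount gd: /mnt/gd --allow-other --vfs-cache-mode writes
ExecStop=/bin/fusermount -u /mnt/gd
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With Restart=on-failure, systemd itself takes care of "controlling availability" by restarting the mount if rclone dies.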

I run this every night at like 2 or 3 in the morning to move my local files to the cloud:

felix@gemini:~/scripts$ cat upload_cloud
#!/bin/bash
# rclone config file
RCLONE_CONFIG=/data/rclone/rclone.conf
export RCLONE_CONFIG
LOCKFILE="/var/lock/$(basename "$0")"

(
  # Wait up to 5 seconds for an exclusive lock, then give up
  flock -x -w 5 200 || exit 1

  # Move older local files to the cloud
  /usr/bin/rclone move /data/local/ gcrypt: --checkers 3 --fast-list --log-file /home/felix/logs/upload.log -v --tpslimit 3 --transfers 3 --exclude-from /home/felix/scripts/excludes --delete-empty-src-dirs

) 200> "${LOCKFILE}"
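For the "every night at 2 or 3" part, a crontab entry along these lines would do it (the exact time and script path here are just an example, not taken from the post above):

```
# m  h  dom mon dow  command
30   2  *   *   *    /home/felix/scripts/upload_cloud
```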

Thanks.
I'm not an admin, just a user.
Could you please give a hint on how to implement pre-copying of files to another folder for further syncing by rclone, without blocking them? They may change, or may need to be accessed, while rclone is syncing.

I would think of:
scan /source with incrontab and rsync changes to another local folder, /source2;
then, as the duplication finishes, incrontab tells rclone to sync to the remote folder on GD.
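That two-stage chain could be sketched as two incrontab entries, one per stage (the paths and helper script names are assumptions, just to illustrate the idea; $@ is the watched directory and $# the file name in incron's syntax):

```
# stage 1: a file in /source was closed after writing -> copy it into the staging folder
/source IN_CLOSE_WRITE /usr/local/bin/stage-copy.sh $@/$#
# stage 2: the staged copy was closed -> a helper script lets rclone push it to GD
/source2 IN_CLOSE_WRITE /usr/local/bin/push-rclone.sh $@/$#
```

The stage-2 watch fires when rsync closes the staged copy, which is exactly the "as it finishes duplicating" signal.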

The only solution I could find is to watch changes to each file separately and run a job for it delayed by 10 seconds.

I run incron as a service:

/lib/systemd/system/incron.service
[Unit]
Description=file system events scheduler
RequiresMountsFor=/mnt/sda5/tmp

[Service]
Type=forking
EnvironmentFile=-/etc/default/incrond
ExecStart=/usr/sbin/incrond
ExecStartPost=/bin/sh -c 'umask 022; pgrep incrond > /var/run/incrond.pid'
PIDFile=/var/run/incrond.pid
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
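After dropping the unit file in place, it is enabled the usual way (standard systemctl usage, nothing thread-specific):

```
sudo systemctl daemon-reload
sudo systemctl enable --now incron.service
```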

Then create a job for each specific file to watch:

 /mnt/file.txt IN_CLOSE_WRITE,IN_NO_LOOP /home/gd/rclone/rsync-rclone.sh $@

which runs a script when it gets triggered:

#!/bin/bash

# Called by incron with the full path of the changed file
dirfile=$1
dir=$(dirname "$dirfile")
file=$(basename "$dirfile")

# Write per-file lists so rclone only transfers this one file
echo "$dirfile" > "/home/gd/rclone/listdir-$file"
echo "$file" > "/home/gd/rclone/list-$file"

# Give the camera time to finish writing before syncing
sleep 10

/usr/bin/rsync --info=ALL --log-file=/var/log/rsync/rsync.log -au --inplace --chmod=D700,F600 "$dirfile" /mnt/sda5/ip-c/datadir0 && \
/usr/bin/rclone sync --checksum --fast-list --tpslimit=3 --transfers=3 --drive-use-trash=false --cache-info-age=48h --buffer-size=128M --cache-db-path=/mnt/sda5/gd-cache --cache-chunk-path=/mnt/sda5/gd-cache --timeout=10s --log-file=/var/log/rclone/rclone.log --log-level=DEBUG --stats=10s --files-from="/home/gd/rclone/list-$file" /mnt/sda5/ip-c/datadir0/ gd:ip-c/datadir0/

if [ $? -eq 0 ]
then
  rm "/home/gd/rclone/list-$file" "/home/gd/rclone/listdir-$file"
  exit 0
else
  # Log the failed file, then clean up the lists anyway
  echo "$(date '+%Y%m%d-%H:%M:%S') => $dirfile" >> /home/gd/rclone/mistakes
  rm "/home/gd/rclone/list-$file" "/home/gd/rclone/listdir-$file"
  exit 1
fi
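One thing worth adding to that script: if the same file triggers incron again during the 10-second sleep, two copies of the script can run at once against the same lists. Wrapping the body in flock (the same pattern as the upload_cloud script earlier in this thread) avoids that. A minimal sketch, where the lock directory and the echo placeholder are assumptions:

```shell
#!/bin/bash
# Per-file lock so overlapping incron triggers don't run rsync/rclone twice.
file="demo.txt"                              # would be "$(basename "$1")" in the real script
LOCKFILE="${TMPDIR:-/tmp}/rsync-rclone-$file.lock"

(
  # Wait up to 5 seconds for an exclusive lock, then skip this trigger
  flock -x -w 5 200 || exit 1
  echo "syncing $file"                       # sleep 10 + rsync + rclone would go here
) 200> "$LOCKFILE"
```

A second invocation arriving while the first still holds the lock simply gives up after 5 seconds instead of racing it.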

Have you tried exporting an rclone mount as an anonymous NFS share?
If yes, what export options did you use?
thanks.

Sorry, never have. I'd probably just mount it on the other machine, I suppose.
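For reference, an anonymous export of a FUSE mount generally needs an explicit fsid and squashed ids in /etc/exports; something like the line below (untested here, and the path, network, and uid/gid values are assumptions). The rclone mount itself would also need --allow-other so the NFS server process can read it:

```
# /etc/exports -- read-only anonymous export of the rclone mount point
/mnt/gd 192.168.1.0/24(ro,all_squash,anonuid=1000,anongid=1000,fsid=101,no_subtree_check)
```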