Rclone serve sftp fails silently after a few days

What is the problem you are having with rclone?

rclone serve sftp fails silently after a few days.

This setup has been working for me for several months but some time in November or December, rclone started disappearing after it was running for a couple of days. There was nothing in the logs around the time of failure so I set the logs to -vv to catch what is going on, but I still can't make sense of the logs (see below).

I am now trying one more thing, changing --vfs-cache-mode from minimal to writes, but I don't see how that would fix the root cause. And even if it did, I'd still wonder why rclone just vanishes without saying goodbye, LOL. That's why I'm posting this now rather than waiting.

The only way I can explain its silent disappearance is that it is somehow being killed at a lower level of the system, so I'll briefly describe how I'm running the command.

Normally it runs as a cronjob triggered at reboot, but since the server it runs on is usually up for months at a time, I sometimes trigger the job manually via the OpenMediaVault UI, which has an option to run any of the cronjobs shown in the interface. To make sure it keeps running after I close the interface, I added an ampersand at the end of the command.
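
For illustration, the crontab entry is roughly like this (a simplified sketch; the real job uses the full serve command given further down):

# simplified sketch of the @reboot crontab entry; the real job uses the full serve command shown below
@reboot rclone serve sftp pcloud:Backup/ --addr :2022 --log-file=/zfs/NAS/config/rclone/rclone.log --vfs-cache-mode minimal -vv &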

I've been considering setting the cronjob to hourly since it seems that running the same command again doesn't create another instance, but I don't like this kind of dirty workaround...

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.1

  • os/version: debian 11.6 (64 bit)
  • os/kernel: 5.15.39-3-pve (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.4
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

pCloud

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone serve sftp pcloud:Backup/ --addr :2022 --user christoph --pass *********** --log-file=/zfs/NAS/config/rclone/rclone.log --vfs-cache-mode minimal -vv &

The rclone config contents with secrets removed.

[pcloud]
type = pcloud
token = {"access_token":"*********","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}

A log from the command with the -vv flag

2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/d6/1e6087c5031d01bbf4dc640b6f65a8546da4ed9079b97b963e94a6f730414c.jxrwksfy.tmp not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/5b/29e30184a9339c40ba228453a13c6ae8cf84fd5a99e5bd52c0fe2a64ce7c3c.jsrfforf.tmp not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/d8/b1c3d09decfd8ec0ad42d0a37e7dd2b113ffc65c67c3d756c9a380f7d5beba.gidbbazc.tmp not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/0a/c3aa48b1a5eac081790b741bc90695503915ec96bf881510a8d2d0f101f41d not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/b9/381a92c9aae816ffc2725e33f9bbec1737a92221c4a1de3e5e4a3368c5647b.bltgqaav.tmp not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/0e/74e209452c07f7aa4d79803dcf5fe65ead4fd57eb1f7dc2fc8cabd070957ca not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/c4/1a65e9216283c08692b49a75a26627603bd27f511c40e6b13eeb6ad5970f27 not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/bb/0dd34e6826b9c704cde8eba1dccd5810c23224f1fb6df329147bbc46dac5e4.ujdpiote.tmp not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/3e/4b88a860d06cb88dcd7951c96016a29f5ef6af16d8056afce1d37a19c939a9.ezzaagth.tmp not removed, freed 0 bytes
2022/12/26 06:34:15 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/a3/bd257834277ba21562a1bebfeaaa0b215aa357bc7c5f31ab069e9fc22fac91 not removed, freed 0 bytes
2022/12/26 06:34:15 INFO  : vfs cache: cleaned: objects 190 (was 190) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2022/12/26 06:34:15 DEBUG : /Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: OpenFile: flags=O_WRONLY|O_CREATE|O_TRUNC, perm=-rwxrwxrwx
2022/12/26 06:34:15 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: Open: flags=O_WRONLY|O_CREATE|O_TRUNC
2022/12/26 06:34:15 DEBUG : Duplicacy/chunks/cf: Added virtual directory entry vAddFile: "f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp"
2022/12/26 06:34:15 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: >Open: fd=Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp (w), err=<nil>
2022/12/26 06:34:15 DEBUG : /Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: >OpenFile: fd=Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp (w), err=<nil>
2022/12/26 06:34:15 DEBUG : Duplicacy/chunks/cf: Added virtual directory entry vAddFile: "f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp"
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 65536 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 196608 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 196608 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 163840 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 196608 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 360448 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 458752 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 393216 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 425984 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 458752 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 589824 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 688128 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 622592 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 655360 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 688128 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 884736 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 983040 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1015808 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1277952 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1310720 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1343488 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1376256 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1409024 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1441792 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1474560 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1507328 for 1s
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: waiting for in-sequence write to 1507328 for 1s
2022/12/26 06:34:16 DEBUG : pcloud root 'Backup': Target remote doesn't support streaming uploads, creating temporary local FS to spool file
2022/12/26 06:34:16 DEBUG : Creating backend with remote "/tmp/rclone-spool2643599599"
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: md5 = 98ca4e55bf3dbdb0fd69e18d822047c9 OK
2022/12/26 06:34:16 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: Size and md5 of src and dst objects identical
2022/12/26 06:34:18 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: md5 = 98ca4e55bf3dbdb0fd69e18d822047c9 OK
2022/12/26 06:34:18 INFO  : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: Copied (new)
2022/12/26 06:34:18 DEBUG : Duplicacy/chunks/cf: Added virtual directory entry vAddFile: "f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp"
2022/12/26 06:34:21 INFO  : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp: Moved (server-side) to: Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889
2022/12/26 06:34:21 DEBUG : Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889: Updating file with Duplicacy/chunks/cf/f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889 0xc0727c80c0
2022/12/26 06:34:21 DEBUG : Duplicacy/chunks/cf: Added virtual directory entry vDel: "f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889.ujohwutd.tmp"
2022/12/26 06:34:21 DEBUG : Duplicacy/chunks/cf: Added virtual directory entry vAddFile: "f205021cffa3ff0232265920cd908f2df88909e95277cb8f354be4d0508889"
2022/12/26 06:34:21 DEBUG : Duplicacy/chunks/63: Re-reading directory (23m40.928013252s old)
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63: Removed virtual directory entry vDel: "ded262365e7cf100e9bbd1d989a5994ccdcf1f86a1cfce920f51af3459e4a8.llypaull.tmp"
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63: Removed virtual directory entry vAddFile: "ded262365e7cf100e9bbd1d989a5994ccdcf1f86a1cfce920f51af3459e4a8"
2022/12/26 06:34:25 DEBUG : /Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: OpenFile: flags=O_WRONLY|O_CREATE|O_TRUNC, perm=-rwxrwxrwx
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: Open: flags=O_WRONLY|O_CREATE|O_TRUNC
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63: Added virtual directory entry vAddFile: "fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp"
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: >Open: fd=Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp (w), err=<nil>
2022/12/26 06:34:25 DEBUG : /Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: >OpenFile: fd=Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp (w), err=<nil>
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63: Added virtual directory entry vAddFile: "fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp"
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 131072 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 196608 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 360448 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 458752 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 393216 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 425984 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 458752 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 524288 for 1s
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: waiting for in-sequence write to 720896 for 1s
2022/12/26 06:34:25 DEBUG : pcloud root 'Backup': Target remote doesn't support streaming uploads, creating temporary local FS to spool file
2022/12/26 06:34:25 DEBUG : Creating backend with remote "/tmp/rclone-spool1632792204"
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: md5 = c5157b5de8ad033086ef54c533acb819 OK
2022/12/26 06:34:25 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: Size and md5 of src and dst objects identical
2022/12/26 06:34:26 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: md5 = c5157b5de8ad033086ef54c533acb819 OK
2022/12/26 06:34:26 INFO  : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: Copied (new)
2022/12/26 06:34:26 DEBUG : Duplicacy/chunks/63: Added virtual directory entry vAddFile: "fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp"
2022/12/26 06:34:30 INFO  : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp: Moved (server-side) to: Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5
2022/12/26 06:34:30 DEBUG : Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5: Updating file with Duplicacy/chunks/63/fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5 0xc0727c8480
2022/12/26 06:34:30 DEBUG : Duplicacy/chunks/63: Added virtual directory entry vDel: "fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5.mejmstsl.tmp"
2022/12/26 06:34:30 DEBUG : Duplicacy/chunks/63: Added virtual directory entry vAddFile: "fb8bed2dd5e36525f67050d220ee877a6ed0d0f8ef4aac61f4faf7da5c58b5"
2022/12/26 06:34:30 DEBUG : Duplicacy/chunks/9a: Re-reading directory (29m9.112038279s old)
2022/12/26 06:34:32 DEBUG : Duplicacy/chunks/9a: Removed virtual directory entry vDel: "8be1d0c87679f9ada8392a0fa5ac5d516c7aed265158dea9b57b417a966bfc.jfaxwtfn.tmp"
2022/12/26 06:34:32 DEBUG : Duplicacy/chunks/9a: Removed virtual directory entry vAddFile: "8be1d0c87679f9ada8392a0fa5ac5d516c7aed265158dea9b57b417a966bfc"
2022/12/26 06:34:32 DEBUG : Duplicacy/chunks/69: Re-reading directory (11m9.012710613s old)
2022/12/26 06:34:36 DEBUG : Duplicacy/chunks/69: Removed virtual directory entry vAddFile: "ea19733c837372575b72bceeec34fdece234e178088f2193c5bd3e13fad1f1"
2022/12/26 06:34:36 DEBUG : Duplicacy/chunks/69: Removed virtual directory entry vDel: "ea19733c837372575b72bceeec34fdece234e178088f2193c5bd3e13fad1f1.duaxfhqj.tmp"
2022/12/26 06:34:36 DEBUG : Duplicacy/chunks/e3: Re-reading directory (10m59.838952989s old)
2022/12/26 07:19:04 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item Duplicacy/chunks/b6/08d508c724e154882d685f44a68d0cbcf4b406dca34f64d60fdc20e73890f1 not removed, freed 0 bytes

There is nothing in the logs to indicate rclone went wrong, so it's likely it was killed by the system.

The most likely cause is rclone using too much memory. Can you track that?

Thanks for looking into this. I'm not sure how I can track the memory use of an individual process but the memory usage for the entire system over the past year looks like this:

[screenshot: system memory usage over the past year]

And in more detail for the last month:

[screenshot: system memory usage over the past month]

So, there is plenty of memory available and I haven't seen anything suggesting that there is a memory problem.
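
If it helps, I suppose I could log the memory of the rclone process itself periodically with something like this (a rough sketch, assuming only one rclone process is running):

# log rclone's resident (RSS) and virtual (VSZ) memory every 5 minutes
while true; do
    date >> /tmp/rclone-mem.log
    ps -C rclone -o pid,rss,vsz,comm >> /tmp/rclone-mem.log
    sleep 300
done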

What else might be a reason for the system killing rclone?

If the system did kill it, then the reason why should be in the syslog.

I guess rclone could have exceeded some ulimit also, but I'm not sure the system would have killed it for that, just denied the resource.
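
If you want to rule that out, something like this would show the limits (replace <pid> with rclone's actual PID, e.g. from pidof rclone):

# limits of the shell that launches rclone
ulimit -a
# limits of the running rclone process
cat /proc/<pid>/limits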

Did the disk get full?

Apparently it is a memory problem. But I have no idea where to start solving it. Does the following output tell you something about where I should look?

sudo dmesg -T | grep rclone
[Sat Dec 17 13:53:43 2022] [2703857]     0 2703857  1606259  1371013 11558912      276             0 rclone
[Sat Dec 17 13:53:43 2022] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=emulator,mems_allowed=0,global_oom,task_memcg=/system.slice/openmediavault-engined.service,task=rclone,pid=2703857,uid=0
[Sat Dec 17 13:53:43 2022] Out of memory: Killed process 2703857 (rclone) total-vm:6425036kB, anon-rss:5484052kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:11288kB oom_score_adj:0
[Fri Dec 23 17:14:33 2022] [ 184621]     0 184621  1658918  1075847 12038144    12529             0 rclone
[Fri Dec 23 17:14:33 2022] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=user.slice,mems_allowed=0,global_oom,task_memcg=/system.slice/openmediavault-engined.service,task=rclone,pid=184621,uid=0
[Fri Dec 23 17:14:33 2022] Out of memory: Killed process 184621 (rclone) total-vm:6635672kB, anon-rss:4303388kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:11756kB oom_score_adj:0
[Mon Dec 26 07:36:21 2022] [3208405]     0 3208405  1558538   986398 11157504    10827             0 rclone
[Mon Dec 26 07:36:21 2022] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init.scope,mems_allowed=0,global_oom,task_memcg=/system.slice/openmediavault-engined.service,task=rclone,pid=3208405,uid=0
[Mon Dec 26 07:36:21 2022] Out of memory: Killed process 3208405 (rclone) total-vm:6234152kB, anon-rss:3945592kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:10896kB oom_score_adj:0
[Tue Dec 27 10:14:10 2022] rclone invoked oom-killer: gfp_mask=0x1100cca(GFP_HIGHUSER_MOVABLE), order=0, oom_score_adj=0
[Tue Dec 27 10:14:10 2022] CPU: 0 PID: 1089696 Comm: rclone Tainted: P           O      5.15.39-3-pve #2
[Tue Dec 27 10:14:10 2022] [1071914]     0 1071914  1073393   729889  7266304      845             0 rclone
[Tue Dec 27 12:58:41 2022] [1071914]     0 1071914  1073393   639624  7266304    10360             0 rclone
[Fri Dec 30 15:50:27 2022] [1071914]     0 1071914  1541737   893559 11071488     7604             0 rclone
[Sun Jan  1 23:41:19 2023] [1071914]     0 1071914  1660128  1317908 11976704     6370             0 rclone
[Sun Jan  1 23:41:19 2023] oom-kill:constraint=CONSTRAINT_NONE,nodemask=(null),cpuset=init.scope,mems_allowed=0,global_oom,task_memcg=/user.slice/user-1000.slice/session-45807.scope,task=rclone,pid=1071914,uid=0
[Sun Jan  1 23:41:19 2023] Out of memory: Killed process 1071914 (rclone) total-vm:6640512kB, anon-rss:5271632kB, file-rss:0kB, shmem-rss:0kB, UID:0 pgtables:11696kB oom_score_adj:0

Is there a problem with rclone or with the system?

That appears to say rclone was using 6 GB of RAM when it was killed, which is quite a lot.

Roughly how many files are in use? If the answer was, say, 6M then using 6G of memory might be appropriate. It is possible this code would fix your problem: Added cache cleaner for directories by darthShadow · Pull Request #6561 · rclone/rclone · GitHub

There are some notes on adjusting GOGC here which can help: FAQ

I am using rclone to serve the backend for my Duplicacy backups. Duplicacy splits files into chunks, which are stored in a directory with 256 subdirectories. The number of files in each subdirectory should be similar, so I checked four of them at random: 12614, 12499, 12422, 12340. So let's say there are around 12500 files in each subdirectory, which gives 256 × 12500 ≈ 3.2M files.

So, unless I am doing something wrong, I would say rclone is handling less than 4M files, rather than 6M.

What does that tell us?

So, that code is due in the next rclone update? It would be great if that solved the issue. :slight_smile:

I'm not sure I want to reduce memory usage at the cost of additional CPU usage. I have more than enough memory on that machine (at least that is my impression, but Debian doesn't seem to think so). Still, I could try export GOGC=50 to see if it helps. But how exactly do I do that? I entered export GOGC=20 in the terminal and it silently accepted it, but I suspect that is not how you do it... :thinking:

That's the right order of magnitude of files for that memory usage.

I'm currently reworking the VFS to use an on-disk database for the metadata, which will fix this problem. Whether it makes it into the next release or not, I don't know!

You do the export GOGC=20 in the terminal or bash script immediately before you run rclone.

export GOGC=20
rclone serve ...

It will persist for that terminal session / bash script but isn't a global setting.

You can see the value of the environment variables with env.
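
For example, to check it in the current shell:

env | grep GOGC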

Okay, thank you so much, I changed my cronjob to

export GOGC=50 && rclone serve sftp pcloud:Backup/ --addr :2022 --user christoph --pass *********** --log-file=/zfs/NAS/config/rclone/rclone.log --vfs-cache-mode writes -vv &

I'll report back if that doesn't work. In any case, I'm looking forward to the fixes that are forthcoming.


It lasted for 12 days until rclone was killed again.
Now trying with GOGC=50 and we'll see. But my feeling is that I'll have to wait for that update to get this to work.


It lasted 14 days this time, so the difference - if any - between GOGC=50 and GOGC=20 seems rather small. However, I notice that I apparently failed to update my cron job to GOGC=50 last time, so I was actually running with GOGC=20 for the last two weeks. So I'm trying again now, this time with GOGC=50 for real!

So I guess I'll have to wait for that pull request to be finalized.

In the meantime: what would be the best way of automatically restarting my rclone command every 10 days?
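
One workaround I can think of is a crontab entry along these lines (an untested sketch; it assumes this is the only rclone process on the box and restarts it roughly every 10 days):

# untested sketch: kill and restart the serve on the 1st, 11th and 21st of each month at 04:00
0 4 1,11,21 * * pkill -x rclone; sleep 10; GOGC=50 rclone serve sftp pcloud:Backup/ --addr :2022 --user christoph --pass *********** --log-file=/zfs/NAS/config/rclone/rclone.log --vfs-cache-mode writes -vv &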

After I realized that I also had rclone running as a user service, I'm not sure how much of my GOGC testing so far is actually valid. I launched that command (export ...) manually, so it was probably running for some time, but as soon as it failed, the user service would probably have taken over - without the GOGC setting.

So I disabled the user service for now so that it won't interfere with the commands I'm trying.
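
For the record, disabling it was something like this (the unit name here is a guess; mine may differ):

systemctl --user disable --now rclone.service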

