Does this configuration make sense? rclone times out

What is the problem you are having with rclone?

Starting rclone times out with the following configuration. This started happening suddenly, and the same configuration has been running fine for a while.

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.0

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

[Unit]
Description=RClone VFS Service
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Environment=GOMAXPROCS=2

ExecStart=%h/bin/rclone mount gdrive: %h/gdrive \
  --config %h/.config/rclone/rclone.conf \
  --use-mmap \
  --allow-other \
  --buffer-size 256M \
  --drive-chunk-size 128M \
  --dir-cache-time 720h \
  --vfs-cache-mode full \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
  --vfs-cache-max-age 5000h \
  --vfs-cache-poll-interval 10m \
  --vfs-cache-max-size 2000G

StandardOutput=file:%h/scripts/rclone_vfs_mount.log
ExecStop=/bin/fusermount -uz %h/gdrive
Restart=on-failure

[Install]
WantedBy=default.target

A log from the command with the -vv flag

Nothing is written to the log file. This message appears when starting the service:

Job for rclone-vfs.service failed because a timeout was exceeded.
See "systemctl --user status rclone-vfs.service" and "journalctl --user -xe" for details.
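For context: with Type=notify, systemd waits for rclone to signal that the mount is ready, and if that signal never arrives the start job is killed once TimeoutStartSec expires (90 s by default), which matches the error above. A sketch of a user drop-in to lengthen the timeout while diagnosing (the unit name is taken from the error message; the value is an assumption):

```ini
# Created via: systemctl --user edit rclone-vfs.service
[Service]
TimeoutStartSec=300
```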

as per that message, please run those two commands and post the full output.
also run rclone with a debug log (-vv) and post the full output.
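For reference, a foreground invocation that produces such a debug log (paths copied from the unit file above; the log file location is an assumption):

```shell
~/bin/rclone mount gdrive: ~/gdrive \
  --config ~/.config/rclone/rclone.conf \
  -vv --log-file ~/rclone-debug.log
```

Stop it with Ctrl-C (or fusermount -uz ~/gdrive) and post the log contents.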

Ah, darn. I closed the terminal that had that information, so I don't have it at the moment, and the current config (a very different one) works. I'll test the old config soon. In the meantime, I can share this: when running systemctl status on rclone previously, it seemed to be stuck on starting/activating. ps aux | grep rclone showed the process as running as well.

● rclone-vfs.service - RClone VFS Service
   Loaded: loaded (/home/redacted/.config/systemd/user/rclone-vfs.service; enabled; vendor preset: enabled)
   Active: activating (start) since Thu 2023-03-09 21:59:20 CET; 38s ago
 Main PID: 89673 (rclone)
   CGroup: /user.slice/user-1038.slice/user@1038.service/rclone-vfs.service
           └─89673 /home/redacted/bin/rclone mount gdrive: /home/redacted/gdrive --config /home/redacted/.config/rclone/rclone.conf --use-mmap --allow-other --buffer-size 256M --drive-chunk-size 128M --dir-cache-t
Mar 09 21:59:20 pollux systemd[128020]: Starting RClone VFS Service...

I don't know whether this is the problem but I am suspicious of it! Why would you want to limit the number of Go threads?

Good question. It appears that my box provider inserts that by default. Their default config that works has it as well.

[Unit]
Description=RClone VFS Service
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Environment=GOMAXPROCS=2

ExecStart=%h/bin/rclone mount gdrive: %h/gdrive \
  --config %h/.config/rclone/rclone.conf \
  --use-mmap \
  --dir-cache-time 1000h \
  --poll-interval=15s \
  --vfs-cache-mode writes \
  --tpslimit 10

StandardOutput=file:%h/scripts/rclone_vfs_mount.log
ExecStop=/bin/fusermount -uz %h/gdrive
Restart=on-failure

[Install]
WantedBy=default.target

Interesting! I don't think removing it or commenting it out will harm anything, and if you are really lucky it will fix the problem :wink:
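If the provider's unit file itself shouldn't be edited, a user drop-in can clear the variable instead: in systemd, assigning an empty Environment= resets the earlier assignments, so GOMAXPROCS falls back to the Go runtime default (one per CPU). A sketch:

```ini
# Created via: systemctl --user edit rclone-vfs.service
# Empty Environment= resets the list, clearing GOMAXPROCS=2
[Service]
Environment=
```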

i would take the working config, add one or two flags at a time, and see when the problem re-occurs.
that way you might figure out which flag(s) cause the issue.
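That bisection could be sketched as a loop: start from the known-good flag set and append one candidate per run (flag names are copied from the two unit files above; the actual mount command is left commented as a placeholder).

```shell
# Known-good flags from the working config
base="--use-mmap --dir-cache-time 1000h --poll-interval=15s --vfs-cache-mode writes --tpslimit 10"

# Candidate flags from the failing config, tested one at a time
for extra in "--allow-other" "--buffer-size 256M" \
             "--drive-chunk-size 128M" "--vfs-read-chunk-size 128M"; do
  echo "next test run: $base $extra"
  # ~/bin/rclone mount gdrive: ~/gdrive $base $extra -vv --log-file bisect.log
done
```

After each run that starts cleanly, keep the flag in the base set and move on to the next candidate.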

what do you do with the rclone mount? stream from plex or what?

not sure these are needed or need to be set to such high values, as most configs do not use them.

  --buffer-size 256M \
  --drive-chunk-size 128M \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off \
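one reason to be wary of those values: --buffer-size is allocated per open file, so concurrent streams multiply it. a back-of-envelope sketch (the stream count is an assumption for illustration):

```shell
streams=10      # assumed number of concurrent Plex streams
buffer_mb=256   # from --buffer-size 256M
echo "worst-case read-buffer memory: $((streams * buffer_mb)) MB"
```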

I like this approach. Great idea!

what do you do with the rclone mount? stream from plex or what?

Yeah, Plex.

not sure these are needed or need to be set to such high values, as most configs do not use them.

I tried to cache as much as possible to prevent Google from kicking us off for daily limits. Not sure if it made sense though. I have a large library being shared among family + friends.
