Hemorrhaging RAM

What is the problem you are having with rclone?

Drive won't mount and the service is continually restarting; each restart creates a new PID, taking more and more RAM until the server keels over :grimacing:

What is your rclone version (output from rclone version)

  • rclone v1.53.3
  • os/arch: linux/amd64
  • go version: go1.15.5

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Debian 4.19.160-2 64bit (Cloud VPS with 1Gb up and down)

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone.service

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = <Removed>
client_secret = <Removed>
scope = drive
token = <Removed>
root_folder_id = <Removed>

A log from the command with the -vv flag

2021/01/06 12:18:44 DEBUG : rclone: Version "v1.53.3" starting with parameters ["/usr/bin/rclone" "mount" "gdrive:" "/mnt/downloadhd/gdrive" "--config" "/home/container/.config/rclone/rclone.conf" "--allow-other" "--allow-non-empty" "--cache-db-purge" "--vfs-cache-mode" "full" "--vfs-cache-max-age" "24h" "--vfs-read-ahead" "100G" "--vfs-read-chunk-size" "512M" "--buffer-size" "512M" "--log-level" "DEBUG" "--log-file" "/home/container/logs/rclone.log"]
2021/01/06 12:18:44 DEBUG : Creating backend with remote "gdrive:"
2021/01/06 12:18:44 DEBUG : Using config file from "/home/container/.config/rclone/rclone.conf"
2021/01/06 12:18:46 DEBUG : vfs cache: root is "/home/container/.cache/rclone/vfs/gdrive"
2021/01/06 12:18:46 DEBUG : vfs cache: metadata root is "/home/container/.cache/rclone/vfs/gdrive"
2021/01/06 12:18:46 DEBUG : Creating backend with remote "/home/container/.cache/rclone/vfs/gdrive"
...
2021/01/06 12:19:29 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded)
2021/01/06 12:19:29 DEBUG : pacer: Rate limited, increasing sleep to 1.61703549s
2021/01/06 12:19:29 DEBUG : pacer: low level retry 3/10 (error googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded)
2021/01/06 12:19:29 DEBUG : pacer: Rate limited, increasing sleep to 2.943987328s
2021/01/06 12:19:29 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded)
2021/01/06 12:19:29 DEBUG : pacer: Rate limited, increasing sleep to 4.613176159s
2021/01/06 12:19:29 DEBUG : pacer: low level retry 7/10 (error googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded)
2021/01/06 12:19:29 DEBUG : pacer: Rate limited, increasing sleep to 8.911560638s
2021/01/06 12:19:29 DEBUG : pacer: Reducing sleep to 0s
2021/01/06 12:19:29 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded)
2021/01/06 12:19:29 DEBUG : pacer: Rate limited, increasing sleep to 1.094917322s

In short, I fiddled with my working rclone.service, initially to add --rc functionality but also to adjust the config for better performance when streaming content from Plex.

I had copied the --rc related config from @Animosity022's post here; however, on reboot it failed to start, stating something was already using the address. I tried changing the port but that didn't work, so I removed the --rc related config and focused on optimizing the VFS settings.
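For what it's worth, "address already in use" normally means another process already holds the remote-control port (which defaults to localhost:5572), and the flags I had tried looked roughly like this (5573 was just an arbitrary guess at a free port):

```
--rc \
--rc-addr localhost:5573 \
```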

I'm sure from the logs I've got an API ban from Google now, so I'll need to wait until tomorrow for it to work. @Animosity022, I know I've still got the --allow-non-empty \ in there, but that's because my containers launch at reboot and keep recreating the directories before the drive has had a chance to mount :stuck_out_tongue:

Thanks in advance :smiley:

You have this in your service:

 --buffer-size 512M \

So you take 512M per file that is in use. You should remove that and use the default.
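As a rough back-of-envelope (four concurrent streams is just an assumed example, not anything from your logs):

```shell
# --buffer-size is allocated per open file. With 512M buffers and,
# say, four files being read at once (an assumed example):
buffer_mb=512
streams=4
echo "$(( buffer_mb * streams )) MB in read buffers alone"
```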

As always, fast and efficient responses (do you even sleep? :wink:)

That's stopped the RAM bleed, but the service is still restarting continuously, creating new PIDs each time. What could cause that?

And have I got the rest of the config right for optimum remote Plex streaming?

Thanks again :slight_smile:

There's no log file so there's no way to tell.

You'd have to check the log file and systemd to see why or share the logs.

:man_facepalming: yes, logs would help.

Here is the rclone log

Here is part of the output from journalctl -u rclone

Here is the output from journalctl -u rclone -b

It wasn't until I checked journalctl -u rclone that I saw there are literally thousands of these types of entries:

Jan 06 12:29:37 Plex-Cloud systemd[1]: rclone.service: Found left-over process 788 (rclone) in control group while starting unit. Ignoring.
Jan 06 12:29:37 Plex-Cloud systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.

There had been some latency starting playback in Plex since removing the cache config and going to the VFS options, but as the videos actually played I presumed all was well. I wanted to see if I could make them start any quicker, and here I am with a broken mount again :stuck_out_tongue:

You must have wings @Animosity022, as you have the patience of a saint :smile:

Allow-non-empty really makes things difficult, as you are overmounting and creating havoc, so it's a bit tough to understand what is happening.

Looks like you hit an upload quota issue and keep trying to write to the mount, so it keeps trying to upload; you get into a loop of things being unable to upload, rclone timing out, systemd restarting it, rinse/repeat.

OK, that makes sense. It's hard (impossible) to diagnose a problem and advise when I'm not sticking to the known working config. This may force my hand to look into how to change the start order of the services, or introduce a "check for mount" kind of thing. If I can get the drive to mount before Docker starts, then removing the --allow-non-empty is easy.
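A minimal sketch of the "check for mount" idea, assuming `mountpoint` from util-linux and the mount path from my service:

```shell
#!/bin/bash
# Hypothetical helper: poll until a path is a live mount point, so the
# containers only start once the rclone mount is actually up.
wait_for_mount() {
    local path="$1" tries="${2:-30}"
    for _ in $(seq 1 "$tries"); do
        mountpoint -q "$path" && return 0
        sleep 2
    done
    return 1
}

# Intended usage in the container start script (path from my mount command):
#   wait_for_mount /mnt/downloadhd/gdrive || exit 1
#   docker start ...
```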

Yeah, I added a list to Sonarr, which resulted in SABnzbd going on a download spree and foie-grasing my upload allowance to Google yesterday and again today :stuck_out_tongue:

When it's mounted correctly, should I be seeing this many of these entries for every file?:

Jan 06 13:14:56 Plex-Cloud systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.

I'm going to do the following:

  • Clear the backlog of files waiting to be uploaded
  • Remove the --allow-non-empty from the service
  • Reboot
  • Stop the containers
  • Delete the directories preventing the mount
  • Wait for rclone to reattempt the mount

If that works, I will leave the config alone and work on how to resolve Docker starting before the mount completes :slight_smile:
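One option I'm considering for the start order is a systemd drop-in so Docker orders itself after the mount (unit names are assumptions; I'd avoid Requires= while the rclone unit is still restart-looping, as it would drag Docker down with it):

```ini
# /etc/systemd/system/docker.service.d/after-rclone.conf (hypothetical)
[Unit]
After=rclone.service
Wants=rclone.service
```

followed by a `systemctl daemon-reload`.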

I'll let you know how I get on.

I don't use containers as they add zero value to me and over complicate things for no benefit (in my use case).

If you are hitting upload quota and keep trying to upload to a mount, it's going to error out over and over and over and be very unresponsive as each file copied has to fail and that generates an IO error on the file copy.

You really should stop writing to the mount if you are hitting upload quota if you are trying to use the mount.

Majority of folks upload via a separate process and leave the mount alone.
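The usual shape of that is a local staging folder plus a scheduled rclone move, something like this (paths are examples; --drive-stop-on-upload-limit makes rclone stop cleanly when Google returns the daily-quota error):

```
rclone move /mnt/downloadhd/staging gdrive: \
    --config /home/container/.config/rclone/rclone.conf \
    --min-age 15m \
    --drive-stop-on-upload-limit \
    --log-file /home/container/logs/rclone-move.log -v
```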

You can also remove --cache-db-purge; it's not used in your case.

Good spot @asdffdsa thanks :wink:

So, quick update: I started with task one, which was to clear the backlog of files to be synced to the Google drive. As soon as I did that, the drive connected :grin:

This makes perfect sense when you think about it: the Google tasks/API calls are done sequentially. As I had exhausted the 750GB user upload limit set by Google, the remaining uploads were queued awaiting the reset at midnight, and the drive mount was sat behind those tasks.

As soon as I cleared the tasks in front, the drive mounted. A bit like clearing a blocked pipe :stuck_out_tongue:

Thanks for the help and big thanks to @Animosity022 for your fast support and endless patience :love_you_gesture:t2:

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.