Getting "transport endpoint is not connected" all the time

What is the problem you are having with rclone?

transport endpoint is not connected under load

What is your rclone version (output from rclone version)

rclone v1.50.1

  • os/arch: linux/amd64
  • go version: go1.13.4

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Debian Buster

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run

rclone mount genc:/Media /fs/cloud
--attr-timeout 1000h
--buffer-size 128M
--dir-cache-time 1000h
--log-level INFO
--log-file /home/user/rclone.log
--poll-interval 15s
--timeout 1h
--umask 002
--user-agent="Drive Client"

Any tips to fix it?

You can remove this as it doesn't do anything on a mount.

What's the reason to change these? You should remove them and leave at the default.

That means that something stopped the drive but you still have write access going on and it cannot fully unmount it.

You'd want to stop all your rclone processes and make sure none are running and start it up again.
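A minimal cleanup sequence, assuming the mount point from your command, might look like this (`fusermount -uz` does a lazy unmount if the mount is busy; these are operational commands, so adjust to your setup):

```shell
fusermount -uz /fs/cloud 2>/dev/null || true   # lazy-unmount the stuck FUSE mount
pkill -f "rclone mount" 2>/dev/null || true    # stop any lingering rclone mounts
pgrep -af rclone || echo "no rclone processes left"   # verify before remounting
```

Once nothing shows up in the last check, it is safe to start the mount again.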

To see why it's stopping, you'd want to add in:

--log-level DEBUG --log-file /tmp/rclone.log (or some other location you want)

Are you just starting it up from the prompt or using a service?

and finally, what does rclone version show?

Best to use the question template rather than deleting all the questions as that's the format to get help the fastest 🙂

Follow Animosity's instructions in general, but these:
--tpslimit 5
--drive-pacer-burst 25

Are for the record diametrically opposed.
The first one says "never make more than 5 requests per second".
The second says "you can make up to 25 requests per second when quota allows".
It just makes no sense whatsoever.

I'd just leave them to defaults.
Somewhat doubt this is the cause of your problem however.
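For reference, a rough sketch of the arithmetic, assuming the documented drive-backend defaults (`--drive-pacer-min-sleep 100ms`, `--drive-pacer-burst 100`):

```shell
# Sustained rate is one request per --drive-pacer-min-sleep interval.
MIN_SLEEP_MS=100                                   # drive backend default
echo "sustained: $((1000 / MIN_SLEEP_MS)) req/s"   # prints "sustained: 10 req/s"
# --tpslimit 5 caps ALL transactions at 5/s globally, so a
# --drive-pacer-burst of 25 sitting above that cap can never actually fire.
```

In other words, the burst allowance only matters when it is above the sustained limit, and a global `--tpslimit` below both makes the burst setting dead weight.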
I would post your Plex settings for scanning. Most likely you have in-depth metadata scans on that may be reading every file in your library, or at least part of every file. That is not suited to cloud-drive usage and is known to cause problems.

A debug log would also be very helpful. Add this to your mount command:
--log-level DEBUG

Fixed it.

Those settings were desperation... I had fewer problems with them, maybe a placebo effect...

The most interesting lines that I get with debug activated are:

2019/11/24 17:10:31 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console:, userRateLimitExceeded)

And these are just before i get the error:

Actually this is the first time I've seen this error so clearly; previously I had another one. Is the "out of memory" error related to the buffer and Plex checking multiple files? Should I remove it?

Ya, this is the result of my desperation. With those I had fewer errors, and I was just testing out solutions, and maybe ended up with Frankenstein mount settings... I removed them and added logging in debug mode. Looks like it is related to the buffer getting "out of memory"?

Ah, well good in a sense - because an out of memory error is a very clear problem we can fix.

You do not need to worry about the pacer 1/10 error. This is perfectly normal to have some of during normal operation. You can just consider this the server telling rclone's pacer that it needs to throttle down the request-rate slightly. The pacer will respond gracefully to this to try and mostly stay under the limit for optimal performance. Unless you start to see significant amounts of 4/10 or 5/10 retry errors or higher you can consider this perfectly normal and nothing to worry about. It should not be related to your problem at all.
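One way to triage this in practice is to count retry severities in the debug log. A sketch, using a made-up sample file (`/tmp/rclone-sample.log` and its lines are invented for illustration; point `grep` at your real `--log-file` instead):

```shell
# Create a hypothetical sample of what a DEBUG log might contain.
cat > /tmp/rclone-sample.log <<'EOF'
2019/11/24 17:10:31 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded)
2019/11/24 17:10:32 DEBUG : pacer: low level retry 4/10 (error googleapi: Error 403: User Rate Limit Exceeded)
2019/11/24 17:10:33 ERROR : Media/file.mkv: ReadFileHandle.Read error: out of memory
EOF

grep -c 'low level retry' /tmp/rclone-sample.log   # total pacer retries (routine)
grep -c 'retry [4-9]/10'  /tmp/rclone-sample.log   # the worrying ones
grep -c ' ERROR '         /tmp/rclone-sample.log   # hard errors worth reading in full
```

A handful of 1/10 retries is background noise; a growing count from the second or third grep is what deserves attention.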

So how much RAM do you have on the system exactly?

I suspect that Plex is doing aggressive scanning and opening way too many files at once, such that even the reasonable 128M buffer may multiply so much that it causes a memory problem. If so, that's a Plex settings issue. Nothing else in your config should be particularly RAM heavy. You probably do not actually need a 128M buffer - but I don't suspect that this is the main issue in and of itself. Awaiting your response to be able to say more about the best way to fix this...
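As a rough back-of-the-envelope sketch of why I suspect this (the open-file count is purely a hypothetical guess; the buffer size is from your mount command):

```shell
# rclone can allocate up to one --buffer-size read-ahead buffer per open file.
BUFFER_MB=128     # from the mount command in this thread
OPEN_FILES=10     # hypothetical: files a deep Plex scan might hold open at once
echo "worst case: $((BUFFER_MB * OPEN_FILES)) MB in buffers"   # 1280 MB on a 2GB box
```

Even a modest number of concurrent opens at 128M each can eat most of a small server's RAM.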


I only have 2GB of RAM on this server, so this could easily be a problem scanning my "large library".

I removed the buffer-size from settings and now I'm testing it. Looks fine. The default is 16MB right?

I have thumbnails disabled in plex, and everything, but plex is very buggy and settings aren't clear enough many times, I will recheck it.

Testing it now, 3 simultaneous 4k streams (30GB per file) running without any issue, for now.

Any way to reduce the amount of RAM used by rclone?

Default buffer is 16M yes. This may already be enough to play things smoothly, so unless you actually have issues with this size I recommend you just leave it at that.

With 2GB RAM we need to be conservative. It shouldn't be a problem at all, but your OS is probably using most of the first GB on its own, so we don't have that much room to play around with non-essentials. rclone does not use much RAM by default though. It's mostly very large buffer-sizes or chunk-sizes that make it bloat, and these are usually very much optional to increase. With your current setup it is really only an extreme number of concurrent connections that could make it go OOM (assuming there is nothing else on the system that is also sucking up a lot of RAM, obviously).
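Putting that together, a RAM-conservative version of your mount might look something like this (remote and paths taken from this thread; the flag values are suggestions to start from, not tested recommendations):

```shell
rclone mount genc:/Media /fs/cloud \
  --buffer-size 16M \
  --dir-cache-time 1000h \
  --poll-interval 15s \
  --umask 002 \
  --user-agent "Drive Client" \
  --log-level INFO --log-file /home/user/rclone.log
```

The main change from your original command is dropping `--buffer-size 128M` back to the 16M default; everything else stays as you had it.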

I still suspect the Plex scans as I said, because even with 128M buffer it would take a significant amount of concurrent transfers to OOM. I would just post an image of your Plex settings. I think there are 2 settings pages that are relevant for this. I can provide some decent educated guesses on what settings are likely to cause trouble - and Animosity is an expert on the topic.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.