100% CPU when 100-plus users connect

What is the problem you are having with rclone?

Hi All

I have a setup that users connect to in order to view files, but after a certain number of users I hit 100% CPU.
I am using the following service file. My server is pretty strong, too.

Description=RClone Service

ExecStart=/usr/bin/rclone mount tcrypt: /mnt/unionfs \
  --buffer-size 32M \
  --dir-cache-time 1000h \
  --log-level INFO \
  --log-file /opt/rclone/logs/rclone.log \
  --poll-interval 15s \
  --umask 002 \
  --user-agent MusBoxMount \
  --rc-addr :5572 \
  --vfs-cache-mode full \
  --vfs-cache-max-size 200G \
  --vfs-cache-max-age 48h \
  --bwlimit-file 4M
ExecStop=/bin/fusermount -uz /mnt/unionfs
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr _async=true
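For reference, a complete unit file wrapped around the ExecStart above would look roughly like this. This is a sketch only: the [Unit]/[Service]/[Install] section headers, the After/Wants and Restart lines, and the `--rc` flag (which enables the remote-control server that `--rc-addr` configures) are assumptions added for illustration, not the poster's actual file.

```
# Hypothetical complete unit (sketch) -- section headers and restart
# policy are assumptions, not taken from the original post.
[Unit]
Description=RClone Service
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/bin/rclone mount tcrypt: /mnt/unionfs \
  --buffer-size 32M \
  --dir-cache-time 1000h \
  --log-level INFO \
  --log-file /opt/rclone/logs/rclone.log \
  --poll-interval 15s \
  --umask 002 \
  --user-agent MusBoxMount \
  --rc \
  --rc-addr :5572 \
  --vfs-cache-mode full \
  --vfs-cache-max-size 200G \
  --vfs-cache-max-age 48h \
  --bwlimit-file 4M
ExecStop=/bin/fusermount -uz /mnt/unionfs
Restart=on-failure

[Install]
WantedBy=multi-user.target
```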


What is your rclone version (output from rclone version)

rclone v1.53.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Paste command here

The rclone config contents with secrets removed.

Paste config here

type = crypt
remote = tdrive:/encrypt
filename_encryption = standard
directory_name_encryption = true
password = 123
password2 = 456

client_id = 567
client_secret = 123
type = drive
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"678","expiry":"2020-12-15T14:24:21.160404942Z"}
team_drive = 0ACQbepUI69HvUk9PVA

A log from the command with the -vv flag

Paste log here

You have a super confusing mount command as it contains cache backend stuff, which isn't in your rclone.conf

That all does nothing and can be removed.

What are the actual specs on the server you have?

If you are hitting 100% CPU, your server isn't big enough to handle the load.

Hi, thanks for getting back to me. My server spec is the following:

  • Dual E5-2650v2 CPU
  • 128 GB RAM
  • 2x400GB SSD Disk
  • 10 Gbit Dedicated Uplink

That's a CPU from 2013, so it's quite old and not really that powerful. You might want to look into upgrading it if you are planning to serve 100 users.

Do you believe that's the issue, and that upgrading the server will fix it?

I can't say it will 'fix' it, as I don't know anything about your use case beyond a sentence.

Based on what you've shared, trying to use a CPU from 2013 to serve 100 users seems unlikely to work.

You'd have to detail what you are doing, your expectations for performance, and what software you are using, and even then it's a bit of trial and error unless you are doing something very cookie-cutter, as each environment is unique.

OK, I'll be frank: it's a media centre, and the files accessed are movies that sit in Google Cloud, encrypted. I have set a 200 GB cache to avoid API limits, which does avoid them and helps a lot. But it's odd: at 98 users the processor seems OK, and once it hits about 100 it starts to bottleneck.

Those are called breaking points; if you hit a limit, things tend to break.

It would depend on what software you are using for the media centre and what bitrates you are providing to people. Is it direct playing or transcoding?

It looks like there's only a sliver of an htop report. When you hit 100 users, what does the whole thing look like? Are you saying that when you get 2 more users it goes from 48% to 100%? That seems very odd.

I am using a platform that creates symlinks to the videos to play them, and yes, literally 2-4 more users and the processor maxes out.

Can you share the output of when that happens?
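To capture the moment it tips over in a paste-friendly form, something like the snippet below works. This is only a sketch: the log path, the `head -n 6` cutoff, and using `ps` instead of interactive htop are arbitrary choices, not anything rclone-specific.

```shell
#!/bin/sh
# Sketch: append a snapshot of the top CPU-consuming processes to a log
# so the spike can be shared on the forum later.
# LOG path and the head cutoff are example values.
LOG=/tmp/cpu-snapshots.log

date >> "$LOG"                                   # timestamp each snapshot
# ps output is easier to paste than an htop screenshot
ps -eo pcpu,pid,comm --sort=-pcpu | head -n 6 >> "$LOG"

echo "snapshot appended to $LOG"
```

Running it from cron (or a simple watch loop) as the user count approaches 100 would show exactly which processes eat the CPU.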

I will do. I really appreciate you responding so quickly, thank you.

I will post the live htop results, but my mate said he saw PHP-FPM maxing the CPU, if that helps.

Initially I was running Xtream with a PlexGuide setup but was hitting API limits. I then changed to rclone, mounting the way shown above, which is now giving me this issue. The joke is that with the PlexGuide setup there was no CPU issue, just API limits being hit.

I don't know what PlexGuide is, so I'm not sure how you are hitting 1 billion API hits in a day, as that is the API limit.

Sorry, I meant download quota, not API.

Google works fine without issue when not using the cache. However, we run into the "download quota exceeded" API 403 error. This is why we set up the cache, to save on the download quota, but in turn this has introduced the issue of the CPU hitting 100%.

You haven't shown any CPU issues yet, as your screenshots show 48%.

If you have the space, you may want to increase your cache size, and increase the max age if you have folks hitting the same files.
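Concretely, that would mean raising the cache-related flags on the mount above along these lines. This is a config fragment only; the 400G and 72h values are placeholder illustrations of "bigger cache, longer retention", not recommendations for this setup.

```
  --vfs-cache-mode full \
  --vfs-cache-max-size 400G \
  --vfs-cache-max-age 72h \
  --dir-cache-time 1000h \
```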