Emby Rclone workspace: very long launch

Yes, it's true, nothing was being played at the time. I'll redo the test while there is some activity.

  • You have exceeded the maximum paste size of 512 kilobytes per paste. PRO users don't have this limit!

I'd like to, but this is starting to get tiring...

That log shows 1.5GB in use:

2021/06/19 14:50:38 INFO  : vfs cache: cleaned: objects 7 (was 7) in use 0, to upload 0, uploading 0, total size 1.516G (was 1.516G)

So looks good as well.

I know the cache is working fine, but the cleanup is the part I don't understand. Why is it exceeding my 200GB limit?

There isn't any evidence in the log of it growing over 200GB.

You'd have to recreate the issue and share the log, and then it's easy to see what's going on.
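For reference, a rough sketch of how the issue could be recreated with a debug log written to a file; the remote name, mount point, and log path below are placeholders, not taken from the actual setup:

# placeholder remote and mount point; -vv plus --log-file captures a debug log to review later
rclone mount Gcrypt: /mnt/media \
    --vfs-cache-mode full \
    --vfs-cache-max-size 200G \
    -vv --log-file /home/jonathan/rclone-debug.log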

The logs you've shared show 0GB in use and 1.5GB in use.

Here are the properties of my .cache/rclone folder, where the vfs cache lives:

1607 items, totaling 256.1 GB

That isn't coming from the mount in the log you've shared, though, and I can't see anything else on your server. The mount you are running isn't using more than 1.5GB of cache.

Are you running more than one mount? Do you have anything else stored in there?

My cache points to /cache.

root@gemini:/cache# du -sh *
16K	lost+found
747G	vfs
21M	vfsMeta

I have my mount configured for 750GB, so that looks exactly as I'd expect. You'll see your remote name in the directory.

root@gemini:/cache/vfs# ls -al
total 12
drwx------ 3 felix felix 4096 Mar 14 16:27 .
drwxrwxr-x 5 felix felix 4096 Mar 14 16:27 ..
drwx------ 6 felix felix 4096 Jun 16 17:08 gcrypt
root@gemini:/cache/vfs#

Mine is gcrypt and that's my 747GB folder.
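For comparison, a mount configured roughly along these lines would produce that layout; the remote name and mount point here are placeholders, and only --cache-dir and --vfs-cache-max-size are the relevant parts:

# the cache lands under <cache-dir>/vfs/<remote name>, e.g. /cache/vfs/gcrypt
rclone mount gcrypt: /mnt/gcrypt \
    --cache-dir /cache \
    --vfs-cache-mode full \
    --vfs-cache-max-size 750G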

jonathan@jonathan-MS-7B86:~/.cache/rclone$ ls -al
total 16
drwxr-xr-x 4 jonathan jonathan 4096 jun 16 21:15 .
drwxr-xr-x 33 jonathan jonathan 4096 jun 15 20:04 ..
drwx------ 4 jonathan jonathan 4096 jun 17 18:05 vfs
drwx------ 4 jonathan jonathan 4096 jun 17 18:05 vfsMeta

jonathan@jonathan-MS-7B86:~/.cache/rclone$ du -sh *
137G vfs
3,4M vfsMeta

Very strange :sweat_smile:

Are you running more than one mount?
If you look at what I did, I went into the folder and looked for my remote name.

Those numbers don't seem to add up.

Yes, but the other remote is on another server, which is my backup. That's the one I ran my tests on: it reports 1.6G, but the Linux file properties on the vfs folder show 53.1 GB.

In fact, what I notice is that the Linux file manager adds up the full size of the files in the vfs cache: 4 films come to around 40 GB.
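One way to check whether the file manager is counting apparent file sizes rather than blocks actually allocated (rclone's full cache mode writes sparse files, so the two can differ a lot) is to compare GNU du with and without --apparent-size; the path below assumes the default cache location:

du -sh ~/.cache/rclone/vfs                     # space actually allocated on disk
du -sh --apparent-size ~/.cache/rclone/vfs     # sum of file sizes, closer to what a GUI typically reports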

My stuff is headless, so it's easier/cleaner to just use the CLI rather than trying to figure out what some GUI is doing.

My question was whether you are running more than one mount on the same server, as that's what we are focusing on.

Try to replicate what I've been sharing.

Go into the folder.
Check if more than one folder is there.

root@gemini:/cache/vfs# ls
gcrypt

I have my one remote.

The log you shared had this as the cache location for your mount:

/home/jonathan/.cache/rclone/vfs/Gcrypt

Is that what you see when you ls that directory?

ls /home/jonathan/.cache/rclone/vfs

Shows what?

I only have one mount per server.

jonathan@jonathan-MS-7B86:~/.cache/rclone$ ls
vfs vfsMeta
jonathan@jonathan-MS-7B86:~/.cache/rclone$ gcrypt

jonathan@jonathan-MS-7B86:~/.cache/rclone$ ls /home/jonathan/.cache/rclone/vfs
Gcrypt

What I just found is that the vfs folder reports the full size of the files that have been cached.
It's as if Linux had cached the complete files.
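A quick way to test that on a single cached file is to compare its listed size with the space it actually occupies; the file path below is only an illustration:

ls -lh ~/.cache/rclone/vfs/Gcrypt/films/example.mkv    # apparent size (full file length)
du -h  ~/.cache/rclone/vfs/Gcrypt/films/example.mkv    # blocks actually used on disk

If the file is sparse, du will report far less than ls for anything that was only partially streamed.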

To work around the problem, I set the cache max size to 100 GB, which works out to about 200 GB of real space used on the disk. I now have 196 GB of real space used on the disk.

jonathan@jonathan-MS-7B86:~/.cache/rclone$ du -sh *
99G vfs
3,3M vfsMeta
jonathan@jonathan-MS-7B86:~/.cache/rclone$

1,257 items, totaling 198.4 GB

If you set the max size to 100GB, then as you can see from your used space, it's only 99G, which is right:

It isn't using 200GB on disk; as you've shown, it's only 99GB.

I played some movies and the disk space used by vfs is now 250GB.
du -sh * does not show more than 100 GB, though.

If you want to share the output and a log, we can see what's going on.
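Roughly, the output that would settle it, assuming the default cache paths used earlier in the thread, together with a -vv mount log as sketched above:

du -sh ~/.cache/rclone/vfs/*                   # per-remote disk usage
du -sh --apparent-size ~/.cache/rclone/vfs/*   # per-remote apparent size
df -h ~/.cache                                 # used/free space on the filesystem itself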

You aren't using more than 100GB based on the output you've shared so far.