Rclone Mount Memory Leaks & GDrive Enumeration

Great - thanks for confirming.

Got it.

There don't appear to be any goroutine leaks - you've just got a busy mount.

The memory trace is interesting:

File: rclone
Type: inuse_space
Time: May 26, 2020 at 2:25am (SAST)
Showing nodes accounting for 2614.89MB, 99.75% of 2621.34MB total
Dropped 45 nodes (cum <= 13.11MB)
      flat  flat%   sum%        cum   cum%
  766.70MB 29.25% 29.25%   766.70MB 29.25%  strings.(*Builder).grow
  514.09MB 19.61% 48.86%   514.09MB 19.61%  github.com/rclone/rclone/vfs.newFile
  392.51MB 14.97% 63.83%   392.51MB 14.97%  encoding/json.(*decodeState).literalStore
  325.54MB 12.42% 76.25%   586.56MB 22.38%  github.com/rclone/rclone/backend/drive.(*Fs).newRegularObject
  261.02MB  9.96% 86.21%   261.02MB  9.96%  fmt.Sprintf
  218.98MB  8.35% 94.56%   793.58MB 30.27%  github.com/rclone/rclone/vfs.(*Dir)._readDirFromEntries
   60.51MB  2.31% 96.87%    60.51MB  2.31%  github.com/rclone/rclone/vfs.newDir

It looks like you've got a lot of VFS objects in memory - most of the flat figures (memory allocated directly by that function and still in use) are name strings, JSON-decoded Drive metadata and the VFS File objects themselves.

How many files do you have in your mount? (rclone size remote:)
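For a rough sense of scale: if the mount held, say, 5 million objects (just a guess until we see the rclone size output), 2.6 GB works out at roughly 500 bytes of metadata per object, which is believable for a File object plus its name strings and decoded Drive metadata.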

I guess updatedb has walked the whole mount and pulled the metadata for every file into the directory cache - that is why it is using so much memory. You can reduce

  --dir-cache-time duration                Time to cache directory entries for. (default 5m0s)

To make rclone get rid of those directory entries sooner. Though looking at your command line I think you already have it at the default 5 minutes - is that correct?
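For example (the remote name and mount point here are just placeholders - substitute whatever your actual mount command uses):

  rclone mount gdrive: /mnt/gdrive --dir-cache-time 1m

That makes the directory cache expire entries after a minute instead of five.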

So maybe the VFS layer isn't pruning its directory cache properly...
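One way to test that (assuming the mount was started with --rc, which the pprof endpoint on :5572 suggests it was): drop the whole directory cache by hand and see whether the in-use heap falls back down:

  rclone rc vfs/forget

If the heap stays high after that, the entries are being held onto somewhere other than the dir cache.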

I'm not quite sure exactly where all the memory comes from, but some of that usage doesn't look very efficient!

Can you do

go tool pprof -svg http://localhost:5572/debug/pprof/heap

And post the generated SVG file - that should show where the memory is being used. Running it will also generate a .gz file - if you could stick that in the archive too then I can run my own analysis - thanks.
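For background, those /debug/pprof URLs are Go's standard net/http/pprof endpoints - rclone serves the equivalent on its remote control port when --rc is enabled. A minimal sketch (not rclone's actual code) of how any Go program exposes them:

    package main

    import (
        "log"
        "net/http"
        _ "net/http/pprof" // registers the /debug/pprof/* handlers on the default mux
    )

    func main() {
        // Heap, goroutine, CPU etc. profiles are now served under
        // /debug/pprof/ - rclone's --rc server does the equivalent
        // on localhost:5572.
        log.Fatal(http.ListenAndServe("localhost:5572", nil))
    }

When go tool pprof fetches the profile from that URL it saves the raw profile (typically under ~/pprof/ as a .pb.gz) before rendering the SVG - that saved file is the .gz I mean above.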