Rclone mount memory increasing linearly

GOGC=50 and --dir-cache-time=12h

Same problem, with a more stable chart. It looks like there's something that's hard to get out of memory.
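For context, the mount is started roughly like this (remote name and mount point are placeholders; --rc is what exposes the pprof endpoint on localhost:5572):

    GOGC=50 rclone mount remote: /mnt/remote \
        --dir-cache-time 12h \
        --rc --rc-addr localhost:5572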

Fetching profile over HTTP from http://localhost:5572/debug/pprof/heap
Saved profile in /root/pprof/pprof.rclone.alloc_objects.alloc_space.inuse_objects.inuse_space.003.pb.gz
File: rclone
Type: inuse_space
Time: Aug 3, 2022 at 10:06am (-03)
Showing nodes accounting for 5374.16MB, 97.76% of 5497.48MB total
Dropped 174 nodes (cum <= 27.49MB)
      flat  flat%   sum%        cum   cum%
  761.10MB 13.84% 13.84%   761.10MB 13.84%  encoding/json.(*decodeState).literalStore
     752MB 13.68% 27.52%      752MB 13.68%  google.golang.org/api/internal/gensupport.NewMediaBuffer (inline)
  579.61MB 10.54% 38.07%   579.61MB 10.54%  github.com/rclone/rclone/vfs.newFile
  419.07MB  7.62% 45.69%   476.57MB  8.67%  github.com/rclone/rclone/fs/hash.(*MultiHasher).Sums
  418.60MB  7.61% 53.30%   418.60MB  7.61%  github.com/rclone/rclone/vfs/vfscache.newItem
  297.03MB  5.40% 58.71%   492.04MB  8.95%  github.com/rclone/rclone/backend/local.(*Fs).newObject
  259.54MB  4.72% 63.43%   293.06MB  5.33%  github.com/rclone/rclone/fs/hash.StreamTypes
  219.51MB  3.99% 67.42%   219.51MB  3.99%  path.Join
  195.01MB  3.55% 70.97%   195.01MB  3.55%  strings.(*Builder).grow (inline)
  190.52MB  3.47% 74.43%   237.02MB  4.31%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).newObjectWithInfo
  186.02MB  3.38% 77.82%  1300.07MB 23.65%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).Put
  186.01MB  3.38% 81.20%   186.01MB  3.38%  encoding/hex.EncodeToString (inline)
  169.82MB  3.09% 84.29%   169.82MB  3.09%  github.com/rclone/rclone/vfs.(*Dir).addObject
  149.01MB  2.71% 87.00%   149.01MB  2.71%  github.com/rclone/rclone/backend/union.(*Fs).wrapEntries
  146.50MB  2.66% 89.67%   146.50MB  2.66%  github.com/rclone/rclone/backend/union/upstream.(*Fs).WrapObject (inline)
  118.72MB  2.16% 91.83%   436.28MB  7.94%  github.com/rclone/rclone/vfs.(*Dir)._readDirFromEntries
  109.51MB  1.99% 93.82%   300.29MB  5.46%  github.com/rclone/rclone/vfs.newRWFileHandle
      63MB  1.15% 94.96%   140.51MB  2.56%  github.com/rclone/rclone/backend/union.(*Fs).mergeDirEntries
      50MB  0.91% 95.87%   121.50MB  2.21%  github.com/rclone/rclone/backend/union.(*Fs).put
   47.51MB  0.86% 96.74%    47.51MB  0.86%  github.com/rclone/rclone/vfs.newDir
      29MB  0.53% 97.26%  1031.44MB 18.76%  github.com/rclone/rclone/cmd/mount.(*Dir).Create
   19.50MB  0.35% 97.62%   596.29MB 10.85%  github.com/rclone/rclone/cmd/mount.(*Dir).Lookup
    5.96MB  0.11% 97.73%   424.56MB  7.72%  github.com/rclone/rclone/vfs/vfscache.(*Cache)._get
    1.60MB 0.029% 97.76%  2095.68MB 38.12%  bazil.org/fuse/fs.(*Server).handleRequest
         0     0% 97.76%  2095.68MB 38.12%  bazil.org/fuse/fs.(*Server).Serve.func1
         0     0% 97.76%  2095.68MB 38.12%  bazil.org/fuse/fs.(*Server).serve
         0     0% 97.76%   761.10MB 13.84%  encoding/json.(*Decoder).Decode
         0     0% 97.76%   430.55MB  7.83%  encoding/json.(*decodeState).array
         0     0% 97.76%   761.10MB 13.84%  encoding/json.(*decodeState).object
         0     0% 97.76%   761.10MB 13.84%  encoding/json.(*decodeState).unmarshal
         0     0% 97.76%   761.10MB 13.84%  encoding/json.(*decodeState).value
         0     0% 97.76%   680.59MB 12.38%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).List
         0     0% 97.76%   249.52MB  4.54%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).itemToDirEntry
         0     0% 97.76%   680.59MB 12.38%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).list
         0     0% 97.76%   431.07MB  7.84%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).list.func1
         0     0% 97.76%   680.59MB 12.38%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).listDir
         0     0% 97.76%   249.52MB  4.54%  github.com/rclone/rclone/backend/googlecloudstorage.(*Fs).listDir.func1
         0     0% 97.76%  1132.55MB 20.60%  github.com/rclone/rclone/backend/googlecloudstorage.(*Object).Update
         0     0% 97.76%  1082.55MB 19.69%  github.com/rclone/rclone/backend/googlecloudstorage.(*Object).Update.func1
         0     0% 97.76%    95.50MB  1.74%  github.com/rclone/rclone/backend/googlecloudstorage.(*Object).setMetaData
         0     0% 97.76%   339.52MB  6.18%  github.com/rclone/rclone/backend/local.(*Fs).List
         0     0% 97.76%   715.61MB 13.02%  github.com/rclone/rclone/backend/local.(*Fs).Put
         0     0% 97.76%   107.51MB  1.96%  github.com/rclone/rclone/backend/local.(*Fs).cleanRemote
         0     0% 97.76%   195.01MB  3.55%  github.com/rclone/rclone/backend/local.(*Fs).localPath
         0     0% 97.76%   226.02MB  4.11%  github.com/rclone/rclone/backend/local.(*Fs).newObjectWithInfo
         0     0% 97.76%   293.06MB  5.33%  github.com/rclone/rclone/backend/local.(*Object).Hash
         0     0% 97.76%   477.09MB  8.68%  github.com/rclone/rclone/backend/local.(*Object).Update
         0     0% 97.76%   140.51MB  2.56%  github.com/rclone/rclone/backend/union.(*Fs).List
         0     0% 97.76%  1101.62MB 20.04%  github.com/rclone/rclone/backend/union.(*Fs).List.func1
         0     0% 97.76%   121.50MB  2.21%  github.com/rclone/rclone/backend/union.(*Fs).Put
         0     0% 97.76%  2087.68MB 37.98%  github.com/rclone/rclone/backend/union.(*Fs).put.func1
         0     0% 97.76%    46.01MB  0.84%  github.com/rclone/rclone/backend/union.(*Object).Update.func1
         0     0% 97.76%  3236.30MB 58.87%  github.com/rclone/rclone/backend/union.multithread.func1
         0     0% 97.76%  2015.67MB 36.67%  github.com/rclone/rclone/backend/union/upstream.(*Fs).Put
         0     0% 97.76%    81.50MB  1.48%  github.com/rclone/rclone/backend/union/upstream.(*Fs).WrapEntry
         0     0% 97.76%    46.01MB  0.84%  github.com/rclone/rclone/backend/union/upstream.(*Object).Update
         0     0% 97.76%   136.47MB  2.48%  github.com/rclone/rclone/cmd/mount.(*File).Open
         0     0% 97.76%   292.56MB  5.32%  github.com/rclone/rclone/cmd/mount.(*FileHandle).Flush
         0     0% 97.76%  1514.62MB 27.55%  github.com/rclone/rclone/fs.pacerInvoker
         0     0% 97.76%   140.51MB  2.56%  github.com/rclone/rclone/fs/list.DirSorted
         0     0% 97.76%   121.50MB  2.21%  github.com/rclone/rclone/fs/operations.Copy
         0     0% 97.76%   432.07MB  7.86%  github.com/rclone/rclone/lib/pacer.(*Pacer).Call
         0     0% 97.76%  1082.55MB 19.69%  github.com/rclone/rclone/lib/pacer.(*Pacer).CallNoRetry (inline)
         0     0% 97.76%  1514.62MB 27.55%  github.com/rclone/rclone/lib/pacer.(*Pacer).call
         0     0% 97.76%   302.56MB  5.50%  github.com/rclone/rclone/vfs.(*Dir).Create
         0     0% 97.76%   576.79MB 10.49%  github.com/rclone/rclone/vfs.(*Dir).Stat
         0     0% 97.76%   576.79MB 10.49%  github.com/rclone/rclone/vfs.(*Dir)._readDir
         0     0% 97.76%   576.79MB 10.49%  github.com/rclone/rclone/vfs.(*Dir).stat
         0     0% 97.76%   836.36MB 15.21%  github.com/rclone/rclone/vfs.(*File).Open
         0     0% 97.76%   111.51MB  2.03%  github.com/rclone/rclone/vfs.(*File).Path
         0     0% 97.76%   300.29MB  5.46%  github.com/rclone/rclone/vfs.(*File).openRW
         0     0% 97.76%   168.32MB  3.06%  github.com/rclone/rclone/vfs.(*RWFileHandle).Truncate
         0     0% 97.76%   168.32MB  3.06%  github.com/rclone/rclone/vfs.(*RWFileHandle).openPending
         0     0% 97.76%   292.56MB  5.32%  github.com/rclone/rclone/vfs.(*ReadFileHandle).Flush
         0     0% 97.76%   293.06MB  5.33%  github.com/rclone/rclone/vfs.(*ReadFileHandle).checkHash
         0     0% 97.76%   424.56MB  7.72%  github.com/rclone/rclone/vfs/vfscache.(*Cache).Exists
         0     0% 97.76%   424.56MB  7.72%  github.com/rclone/rclone/vfs/vfscache.(*Cache).get
         0     0% 97.76%   125.01MB  2.27%  github.com/rclone/rclone/vfs/vfscache.(*Item).Close.func2
         0     0% 97.76%   125.01MB  2.27%  github.com/rclone/rclone/vfs/vfscache.(*Item)._store
         0     0% 97.76%   125.01MB  2.27%  github.com/rclone/rclone/vfs/vfscache.(*Item).store
         0     0% 97.76%   125.01MB  2.27%  github.com/rclone/rclone/vfs/vfscache/writeback.(*WriteBack).upload
         0     0% 97.76%   761.10MB 13.84%  google.golang.org/api/internal/gensupport.DecodeResponse (inline)
         0     0% 97.76%      752MB 13.68%  google.golang.org/api/internal/gensupport.NewInfoFromMedia
         0     0% 97.76%      752MB 13.68%  google.golang.org/api/internal/gensupport.PrepareUpload
         0     0% 97.76%   330.55MB  6.01%  google.golang.org/api/storage/v1.(*ObjectsInsertCall).Do
         0     0% 97.76%      752MB 13.68%  google.golang.org/api/storage/v1.(*ObjectsInsertCall).Media
         0     0% 97.76%   431.07MB  7.84%  google.golang.org/api/storage/v1.(*ObjectsListCall).Do
         0     0% 97.76%   195.01MB  3.55%  path/filepath.Join (inline)
         0     0% 97.76%   195.01MB  3.55%  path/filepath.join
         0     0% 97.76%   195.01MB  3.55%  strings.(*Builder).Grow (inline)
         0     0% 97.76%   195.01MB  3.55%  strings.Join

You'll need to rename profile008.txt to profile008.zip after downloading.
profile008.txt (21.8 KB)
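For reference, the table above is what something like this prints (the rc address is the default one; adjust it if --rc-addr was changed):

    go tool pprof -text http://localhost:5572/debug/pprof/heap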

@ncw would you like any more tests, so we can work on --dir-cache-max-items?

Hey @ncw

I noticed that running the rclone rc vfs/forget command frees only part of the memory, not all of it.

That is interesting. I'd like to see a before and after profile if possible.

Before rclone rc vfs/forget, the OS says the rclone mount is using 8 GB of RAM.
The go tool pprof -text http://localhost:5572/debug/pprof/heap command says it's using 4 GB.

Two minutes after rclone rc vfs/forget, the OS says the rclone mount is using 2 GB of RAM. The pprof command says it's using 100 MB.

Profiles from before and after are attached.
You'll need to rename profile.txt to profile.zip after downloading.
profile.txt (42.9 KB)
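A rough sketch of how numbers like these can be gathered (assuming a single rclone process and the default rc address):

    # memory as the OS sees it (RSS, in kB)
    ps -o rss= -p "$(pidof rclone)"

    # heap as the Go runtime sees it
    go tool pprof -text http://localhost:5572/debug/pprof/heap

    # drop the directory cache, wait a bit, then re-run the two commands above
    rclone rc vfs/forget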

The profiles look good - rclone ditched the memory it was asked to ditch.

However it looks like it couldn't give the memory back to the OS.

Memory can only be returned to the OS if a 4k block is 100% unused. What happens is that there might be a small allocation in that 4k block which keeps it in memory.

So, unfortunately, this is expected. Rclone will reuse the 2 GB it already holds before asking the OS for any more RAM.
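If it helps, the gap between what Go is holding on to and what it has handed back to the OS should show up in the runtime stats, e.g. HeapInuse vs HeapReleased in:

    rclone rc core/memstats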

How would your application cope if you ran vfs/forget once per hour (say)?

If I'm using --dir-cache-time=12h and run vfs/forget once per hour, will the directory cache effectively be limited to 1h?

Yes, that would effectively limit the directory cache to 1h. However the files would still be in the VFS cache, so using one would be quite quick - a quick query to GCS to read the directory listings and check the files are still there, then read them off the disk.
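A minimal sketch of running it hourly from cron (--url here is just the default rc address; adjust if yours differs):

    0 * * * * rclone rc vfs/forget --url http://localhost:5572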
