How to reduce rclone memory usage?

What is the problem you are having with rclone?

Rclone uses a large amount of memory. How can I reduce its memory usage?
The mounted object storage currently holds 25TB, and the number of files is about 100 million.
Using rclone rc vfs/forget makes getting files very slow the next time. For example, when fetching a file /a/b/c/1.txt, rclone will go to the S3 storage and traverse the directories a, b, and c, each of which contains many files and subdirectories, resulting in very slow access speed.

Run the command 'rclone version' and share the full output of the command.

# rclone version
rclone v1.61.1

  • os/version: centos 7.9.2009 (64 bit)
  • os/kernel: 3.10.0-1160.90.1.el7.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.4
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

s3 storage

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount eos:ecloud1 /mnt/eos --cache-dir /temp/rclone --vfs-cache-mode writes --no-modtime --transfers 32 --vfs-cache-max-age 24h --vfs-cache-max-size 2G --dir-cache-time 438000h --vfs-disk-space-total-size 40T --config /root/.config/rclone/rclone.conf --log-file /temp/rclone.log --allow-non-empty --rc

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

# cat /root/.config/rclone/rclone.conf
[eos]
type = s3
provider = ChinaMobile
access_key_id = xxxxx
secret_access_key = xxxxxxxx
endpoint = http://xxxxxxx
acl = private

A log from the command that you were trying to run with the -vv flag

# go tool pprof -text http://localhost:5572/debug/pprof/heap
Fetching profile over HTTP from http://localhost:5572/debug/pprof/heap
Saved profile in /root/pprof/pprof.rclone.alloc_objects.alloc_space.inuse_objects.inuse_space.058.pb.gz
File: rclone
Type: inuse_space
Time: Jan 19, 2024 at 3:03pm (CST)
Showing nodes accounting for 45149.51MB, 98.70% of 45746.22MB total
Dropped 88 nodes (cum <= 228.73MB)
      flat  flat%   sum%        cum   cum%
10661.07MB 23.30% 23.30% 10733.08MB 23.46%  github.com/rclone/rclone/backend/s3.s3MetadataToMap
 7246.33MB 15.84% 39.15%  7246.33MB 15.84%  github.com/rclone/rclone/vfs.newFile
 7154.75MB 15.64% 54.79%  7154.75MB 15.64%  github.com/rclone/rclone/vfs/vfscache.newItem
 4076.56MB  8.91% 63.70% 13659.77MB 29.86%  github.com/rclone/rclone/backend/s3.(*Fs).Put
 3231.57MB  7.06% 70.76%  3231.57MB  7.06%  github.com/rclone/rclone/vfs.(*Dir).addObject
 3066.61MB  6.70% 77.46%  3066.61MB  6.70%  net/textproto.(*Reader).ReadMIMEHeader
 2024.68MB  4.43% 81.89%  2024.68MB  4.43%  path.Join
 1729.61MB  3.78% 85.67%  5324.15MB 11.64%  github.com/rclone/rclone/vfs.newRWFileHandle
 1421.70MB  3.11% 88.78%  1525.70MB  3.34%  github.com/rclone/rclone/backend/s3.(*Fs).newObjectWithInfo
 1042.08MB  2.28% 91.06%  1150.08MB  2.51%  github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil.XMLToStruct
  829.56MB  1.81% 92.87%  2946.95MB  6.44%  github.com/rclone/rclone/vfs.(*Dir)._readDirFromEntries
  774.03MB  1.69% 94.56%   775.73MB  1.70%  bazil.org/fuse.(*Conn).ReadRequest
  466.01MB  1.02% 95.58% 20250.55MB 44.27%  github.com/rclone/rclone/cmd/mount.(*Dir).Create
  465.01MB  1.02% 96.60%   465.01MB  1.02%  github.com/rclone/rclone/vfs.(*File).addWriter
  447.51MB  0.98% 97.58%   447.51MB  0.98%  reflect.(*structType).Field
  328.05MB  0.72% 98.29%   328.05MB  0.72%  github.com/rclone/rclone/vfs.newDir
     166MB  0.36% 98.66%  5597.19MB 12.24%  github.com/rclone/rclone/cmd/mount.(*Dir).Lookup
   11.50MB 0.025% 98.68%   349.57MB  0.76%  github.com/rclone/rclone/cmd/mount.(*Dir).Mkdir
    6.91MB 0.015% 98.70%  7161.65MB 15.66%  github.com/rclone/rclone/vfs/vfscache.(*Cache)._get
         0     0% 98.70%   775.73MB  1.70%  bazil.org/fuse/fs.(*Server).Serve
         0     0% 98.70% 28223.07MB 61.69%  bazil.org/fuse/fs.(*Server).Serve.func1
         0     0% 98.70% 28223.07MB 61.69%  bazil.org/fuse/fs.(*Server).handleRequest
         0     0% 98.70% 28223.07MB 61.69%  bazil.org/fuse/fs.(*Server).serve
         0     0% 98.70%  1598.58MB  3.49%  github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run
         0     0% 98.70%  1598.58MB  3.49%  github.com/aws/aws-sdk-go/aws/request.(*Request).Send
         0     0% 98.70%  1598.08MB  3.49%  github.com/aws/aws-sdk-go/aws/request.(*Request).sendRequest
         0     0% 98.70%   447.51MB  0.98%  github.com/aws/aws-sdk-go/private/protocol/rest.PayloadType
         0     0% 98.70%  1597.58MB  3.49%  github.com/aws/aws-sdk-go/private/protocol/restxml.Unmarshal
         0     0% 98.70%  1150.08MB  2.51%  github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil.UnmarshalXML
         0     0% 98.70%   448.01MB  0.98%  github.com/aws/aws-sdk-go/service/s3.(*S3).HeadObjectWithContext
         0     0% 98.70%  1150.08MB  2.51%  github.com/aws/aws-sdk-go/service/s3.(*S3).ListObjectsWithContext
         0     0% 98.70%  2739.28MB  5.99%  github.com/rclone/rclone/backend/s3.(*Fs).List
         0     0% 98.70%  1589.20MB  3.47%  github.com/rclone/rclone/backend/s3.(*Fs).itemToDirEntry
         0     0% 98.70%  2739.28MB  5.99%  github.com/rclone/rclone/backend/s3.(*Fs).list
         0     0% 98.70%  1150.08MB  2.51%  github.com/rclone/rclone/backend/s3.(*Fs).list.func1
         0     0% 98.70%  2739.28MB  5.99%  github.com/rclone/rclone/backend/s3.(*Fs).listDir
         0     0% 98.70%  1589.20MB  3.47%  github.com/rclone/rclone/backend/s3.(*Fs).listDir.func1
         0     0% 98.70%  1596.87MB  3.49%  github.com/rclone/rclone/backend/s3.(*Object).Open
         0     0% 98.70%  9584.71MB 20.95%  github.com/rclone/rclone/backend/s3.(*Object).Update
         0     0% 98.70%   448.01MB  0.98%  github.com/rclone/rclone/backend/s3.(*Object).headObject
         0     0% 98.70%   448.01MB  0.98%  github.com/rclone/rclone/backend/s3.(*Object).headObject.func1
         0     0% 98.70% 10733.08MB 23.46%  github.com/rclone/rclone/backend/s3.(*Object).setMetaData
         0     0% 98.70%  1150.08MB  2.51%  github.com/rclone/rclone/backend/s3.(*v1List).List
         0     0% 98.70%   255.04MB  0.56%  github.com/rclone/rclone/cmd/mount.(*Dir).ReadDirAll
         0     0% 98.70%  1596.37MB  3.49%  github.com/rclone/rclone/cmd/mount.(*FileHandle).Read
         0     0% 98.70%   775.73MB  1.70%  github.com/rclone/rclone/cmd/mount.mount.func2
         0     0% 98.70%  1598.58MB  3.49%  github.com/rclone/rclone/fs.pacerInvoker
         0     0% 98.70%  1587.37MB  3.47%  github.com/rclone/rclone/fs/chunkedreader.(*ChunkedReader).Open
         0     0% 98.70%  1596.87MB  3.49%  github.com/rclone/rclone/fs/chunkedreader.(*ChunkedReader).openRange
         0     0% 98.70%  2739.28MB  5.99%  github.com/rclone/rclone/fs/list.DirSorted
         0     0% 98.70% 13661.27MB 29.86%  github.com/rclone/rclone/fs/operations.Copy
         0     0% 98.70%  1598.58MB  3.49%  github.com/rclone/rclone/lib/pacer.(*Pacer).Call
         0     0% 98.70%  1598.58MB  3.49%  github.com/rclone/rclone/lib/pacer.(*Pacer).call
         0     0% 98.70%  5312.47MB 11.61%  github.com/rclone/rclone/vfs.(*Dir).Create
         0     0% 98.70%   338.07MB  0.74%  github.com/rclone/rclone/vfs.(*Dir).Mkdir
         0     0% 98.70%   255.04MB  0.56%  github.com/rclone/rclone/vfs.(*Dir).ReadDirAll
         0     0% 98.70%  5431.19MB 11.87%  github.com/rclone/rclone/vfs.(*Dir).Stat
         0     0% 98.70%  5686.23MB 12.43%  github.com/rclone/rclone/vfs.(*Dir)._readDir
         0     0% 98.70%  5431.19MB 11.87%  github.com/rclone/rclone/vfs.(*Dir).stat
         0     0% 98.70% 14479.47MB 31.65%  github.com/rclone/rclone/vfs.(*File).Open
         0     0% 98.70%  1994.18MB  4.36%  github.com/rclone/rclone/vfs.(*File).Path
         0     0% 98.70%  5324.15MB 11.64%  github.com/rclone/rclone/vfs.(*File).openRW
         0     0% 98.70%  3129.03MB  6.84%  github.com/rclone/rclone/vfs.(*RWFileHandle).Truncate
         0     0% 98.70%  3129.03MB  6.84%  github.com/rclone/rclone/vfs.(*RWFileHandle).openPending
         0     0% 98.70%  1596.37MB  3.49%  github.com/rclone/rclone/vfs.(*ReadFileHandle).ReadAt
         0     0% 98.70%  1587.37MB  3.47%  github.com/rclone/rclone/vfs.(*ReadFileHandle).openPending
         0     0% 98.70%  1596.37MB  3.49%  github.com/rclone/rclone/vfs.(*ReadFileHandle).readAt
         0     0% 98.70%  7161.15MB 15.65%  github.com/rclone/rclone/vfs/vfscache.(*Cache).Exists
         0     0% 98.70%  7161.65MB 15.66%  github.com/rclone/rclone/vfs/vfscache.(*Cache).get
         0     0% 98.70% 13661.27MB 29.86%  github.com/rclone/rclone/vfs/vfscache.(*Item).Close.func2
         0     0% 98.70% 13661.27MB 29.86%  github.com/rclone/rclone/vfs/vfscache.(*Item)._store
         0     0% 98.70% 13661.27MB 29.86%  github.com/rclone/rclone/vfs/vfscache.(*Item).store
         0     0% 98.70% 13661.27MB 29.86%  github.com/rclone/rclone/vfs/vfscache/writeback.(*WriteBack).upload
         0     0% 98.70%  3066.61MB  6.70%  net/http.(*persistConn).readLoop
         0     0% 98.70%  3066.61MB  6.70%  net/http.(*persistConn).readResponse
         0     0% 98.70%  3066.61MB  6.70%  net/http.ReadResponse
         0     0% 98.70%   447.51MB  0.98%  reflect.(*rtype).FieldByName
         0     0% 98.70%   447.51MB  0.98%  reflect.(*structType).FieldByName

Please do not use old rclone versions. The current release is v1.65.1.

Not saying that the latest version fixes everything magically, but there is no point investigating historical rclone releases.

To improve mount performance, try adding these flags:
--fast-list
--vfs-fast-fingerprint

Not really sure what that is. Probably a typo?

I made a mistake. It should be rclone rc vfs/forget.

Ah, clear now. Why do you use it? Obviously when you clear the cache it has to be re-populated again.

The directory and file structure cache is memory-only in current rclone. It means that for large datasets you need, as I remember, about 1GB of RAM for every 1 million objects cached; with your ~100 million files that is on the order of 100GB. Until one day this cache can be stored on disk, there is not much you can do about it.
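
If you want to watch the numbers yourself, the remote control interface can report Go's runtime memory statistics. A quick check might be (assuming --rc on the default port, as in your mount command):

# rclone rc core/memstats

This returns values like HeapAlloc and Sys, which should roughly track what the heap profile shows.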

If you reduce --dir-cache-time (currently 438000h in your command) then rclone will evict entries from the directory cache as they expire, which will help a lot with memory usage, though you'll need v1.64.2 or v1.65.1 for the biggest benefit.

Try --dir-cache-time 24h to match your --vfs-cache-max-age 24h.

Have you also tried the --use-mmap flag? It might also help with memory usage.
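
Putting these suggestions together, the mount command might look something like this (a sketch based on your original invocation, with values still to be tuned; note that --fast-list mainly helps recursive listing commands rather than mounts, so it is omitted here):

rclone mount eos:ecloud1 /mnt/eos --cache-dir /temp/rclone --vfs-cache-mode writes --no-modtime --transfers 32 --vfs-cache-max-age 24h --vfs-cache-max-size 2G --dir-cache-time 24h --vfs-fast-fingerprint --use-mmap --vfs-disk-space-total-size 40T --config /root/.config/rclone/rclone.conf --log-file /temp/rclone.log --allow-non-empty --rc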

Thank you for the reply. May I ask, is there a plan to store the cache on disk?

Thank you for your reply. If the cache is cleared, the access speed will slow down, which is not the desired result.
For example, when getting a file /a/b/c/1.jpg, rclone will not just fetch /a/b/c/1.jpg; instead, it will traverse the directories a, b, and c, each of which contains many files and subdirectories. That takes a long time.

It would also have an additional benefit: persistence. No more cache warming, which for large datasets can take quite some time.

The subject resurfaces from time to time, but I do not think much work has been done yet towards solving it.

Definitely not something coming in the next release.

I think I've found a way.

This is my test environment. The directory structure looks like this:

├── 8619658
│   ├── 19544472
│   │   ├── 905033003.txt
│   │   └── 905033322.txt
│   ├── 19544473
│   │   ├── 905033004.txt
│   │   ├── 905033005.txt
│   │   ├── 905033006.txt
│   │   ├── 905033007.txt
│   │   ├── 905033008.txt
│   │   ├── 905033009.txt
│   │   ├── 905033010.txt
│   │   ├── 905033011.txt
│   │   ├── 905033012.txt
│   │   ├── 905033013.txt
│   │   ├── 905033014.txt
│   │   ├── 905033015.txt
│   │   ├── 905033016.txt
│   │   ├── 905033017.txt
│   │   ├── 905033018.txt
│   │   ├── 905033019.txt
│   │   ├── 905033020.txt
│   │   ├── 905033021.txt
│   │   ├── 905033022.txt
│   │   ├── 905033023.txt
│   │   ├── 905033024.txt
│   │   ├── 905033025.txt
│   │   ├── 905033026.txt
│   │   ├── 905033027.txt
│   │   ├── 905033028.txt
│   │   ├── 905033029.txt
│   │   ├── 905033030.txt
│   │   ├── 905033031.txt
│   │   ├── 905033032.txt
│   │   ├── 905033033.txt
│   │   ├── 905033034.txt
│   │   ├── 905033035.txt
│   │   ├── 905033036.txt
│   │   ├── 905033037.txt
│   │   ├── 905033038.txt
│   │   ├── 905033039.txt
│   │   ├── 905033040.txt
│   │   ├── 905033041.txt
│   │   ├── 905033042.txt
│   │   ├── 905033043.txt
│   │   ├── 905033044.txt
│   │   ├── 905033045.txt
│   │   ├── 905033046.txt
│   │   ├── 905033047.txt
│   │   ├── 905033048.txt
│   │   ├── 905033049.txt
│   │   ├── 905033050.txt
│   │   ├── 905033051.txt
│   │   ├── 905033052.txt
│   │   ├── 905033053.txt
│   │   ├── 905033054.txt
│   │   ├── 905033055.txt
│   │   ├── 905033056.txt
│   │   ├── 905033057.txt
│   │   ├── 905033058.txt
│   │   ├── 905033059.txt
│   │   ├── 905033060.txt
│   │   ├── 905033061.txt
│   │   ├── 905033062.txt
│   │   ├── 905033063.txt
│   │   ├── 905033064.txt
│   │   ├── 905033065.txt
│   │   ├── 905033066.txt
│   │   ├── 905033067.txt
│   │   ├── 905033068.txt
│   │   ├── 905033069.txt
│   │   ├── 905033070.txt
│   │   ├── 905033071.txt
│   │   ├── 905033072.txt
│   │   ├── 905033073.txt
│   │   ├── 905033074.txt
│   │   ├── 905033075.txt
│   │   └── 905033076.txt
│   ├── 19544474
│   │   ├── 905033077.txt
│   │   ├── 905033078.txt
│   │   ├── 905033079.txt
│   │   ├── 905033080.txt
│   │   ├── 905033081.txt
│   │   ├── 905033082.txt
│   │   ├── 905033083.txt
│   │   ├── 905033084.txt
│   │   ├── 905033085.txt
│   │   ├── 905033086.txt
│   │   ├── 905033087.txt
│   │   ├── 905033088.txt
│   │   ├── 905033089.txt
│   │   ├── 905033090.txt
│   │   ├── 905033091.txt
│   │   ├── 905033092.txt
│   │   ├── 905033093.txt
│   │   ├── 905033094.txt
│   │   ├── 905033095.txt
│   │   ├── 905033096.txt
│   │   ├── 905033097.txt
│   │   ├── 905033098.txt
│   │   ├── 905033099.txt
│   │   ├── 905033100.txt
│   │   ├── 905033101.txt
│   │   ├── 905033102.txt
│   │   ├── 905033103.txt
│   │   ├── 905033104.txt
│   │   ├── 905033105.txt
│   │   ├── 905033106.txt
│   │   ├── 905033107.txt
│   │   ├── 905033108.txt
│   │   ├── 905033109.txt
│   │   ├── 905033110.txt
│   │   ├── 905033111.txt
│   │   ├── 905033112.txt
│   │   ├── 905033113.txt
│   │   ├── 905033114.txt
│   │   ├── 905033115.txt

I can use a script to forget these directories and files, and then read only the directory skeleton back from OBS, as sketched below. This way the memory usage is much smaller, and it does not affect speed because the large directories are already cached.
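
A minimal sketch of such a script (the mount point, the xx prefix, and the default rc port are taken from the commands above; adjust to your own layout):

#!/bin/bash
# Forget the cached entries for each large top-level directory so the
# per-file metadata is dropped from the in-memory directory cache.
for d in /mnt/eos/xx/*/; do
    rel=${d#/mnt/eos/}            # path relative to the mount root
    rclone rc vfs/forget "dir=${rel%/}" --url http://localhost:5572
done

# Re-list only the directory skeleton (two levels deep); as noted below,
# the memory is only actually released after this re-listing.
tree -d /mnt/eos/xx -L 2 > /dev/null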

Memory remains unchanged after "rclone rc vfs/forget dir1=x dir2=xx ...", staying at 119MB:

Fetching profile over HTTP from http://localhost:5572/debug/pprof/heap
Saved profile in /root/pprof/pprof.rclone.alloc_objects.alloc_space.inuse_objects.inuse_space.628.pb.gz
File: rclone
Type: inuse_space
Time: Feb 18, 2024 at 10:56am (CST)
Showing nodes accounting for 115.55MB, 96.60% of 119.62MB total
Dropped 23 nodes (cum <= 0.60MB)
      flat  flat%   sum%        cum   cum%
   34.01MB 28.43% 28.43%    34.01MB 28.43%  github.com/rclone/rclone/vfs.newFile
      22MB 18.39% 46.82%    24.50MB 20.48%  github.com/rclone/rclone/backend/s3.(*Fs).newObjectWithInfo
   18.50MB 15.47% 62.29%       22MB 18.39%  github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil.XMLToStruct
   15.84MB 13.24% 75.53%    15.84MB 13.24%  bazil.org/fuse/fs.(*Server).saveNode
   14.26MB 11.92% 87.45%    48.77MB 40.77%  github.com/rclone/rclone/vfs.(*Dir)._readDirFromEntries
       3MB  2.51% 89.96%        3MB  2.51%  encoding/xml.CharData.Copy (inline)
    2.50MB  2.09% 92.05%     2.50MB  2.09%  github.com/rclone/rclone/backend/s3.stringClonePointer (inline)
       2MB  1.67% 93.72%        2MB  1.67%  github.com/aws/aws-sdk-go/aws/endpoints.init
       2MB  1.67% 95.40%        2MB  1.67%  github.com/rclone/rclone/cmd/mount.(*Dir).Lookup
    1.44MB  1.20% 96.60%     1.44MB  1.20%  bazil.org/fuse/fs.(*Server).dropNode
         0     0% 96.60%   115.05MB 96.18%  bazil.org/fuse/fs.(*Server).Serve.func1
         0     0% 96.60%   115.05MB 96.18%  bazil.org/fuse/fs.(*Server).handleRequest
         0     0% 96.60%    15.84MB 13.24%  bazil.org/fuse/fs.(*Server).saveLookup
         0     0% 96.60%   115.05MB 96.18%  bazil.org/fuse/fs.(*Server).serve
         0     0% 96.60%       22MB 18.39%  github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run
         0     0% 96.60%       22MB 18.39%  github.com/aws/aws-sdk-go/aws/request.(*Request).Send
         0     0% 96.60%       22MB 18.39%  github.com/aws/aws-sdk-go/aws/request.(*Request).sendRequest
         0     0% 96.60%       22MB 18.39%  github.com/aws/aws-sdk-go/private/protocol/restxml.Unmarshal
         0     0% 96.60%       22MB 18.39%  github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil.UnmarshalXML
         0     0% 96.60%       22MB 18.39%  github.com/aws/aws-sdk-go/service/s3.(*S3).ListObjectsWithContext
         0     0% 96.60%       47MB 39.29%  github.com/rclone/rclone/backend/s3.(*Fs).List
         0     0% 96.60%       25MB 20.90%  github.com/rclone/rclone/backend/s3.(*Fs).itemToDirEntry
         0     0% 96.60%       47MB 39.29%  github.com/rclone/rclone/backend/s3.(*Fs).list
         0     0% 96.60%       22MB 18.39%  github.com/rclone/rclone/backend/s3.(*Fs).list.func1
         0     0% 96.60%       47MB 39.29%  github.com/rclone/rclone/backend/s3.(*Fs).listDir
         0     0% 96.60%       25MB 20.90%  github.com/rclone/rclone/backend/s3.(*Fs).listDir.func1
         0     0% 96.60%       22MB 18.39%  github.com/rclone/rclone/backend/s3.(*v1List).List
         0     0% 96.60%    95.77MB 80.06%  github.com/rclone/rclone/cmd/mount.(*Dir).ReadDirAll
         0     0% 96.60%       22MB 18.39%  github.com/rclone/rclone/fs.pacerInvoker
         0     0% 96.60%       47MB 39.29%  github.com/rclone/rclone/fs/list.DirSorted
         0     0% 96.60%       22MB 18.39%  github.com/rclone/rclone/lib/pacer.(*Pacer).Call
         0     0% 96.60%       22MB 18.39%  github.com/rclone/rclone/lib/pacer.(*Pacer).call
         0     0% 96.60%    95.77MB 80.06%  github.com/rclone/rclone/vfs.(*Dir).ReadDirAll
         0     0% 96.60%    95.77MB 80.06%  github.com/rclone/rclone/vfs.(*Dir)._readDir
         0     0% 96.60%        1MB  0.84%  regexp.Compile (inline)
         0     0% 96.60%        1MB  0.84%  regexp.MustCompile
         0     0% 96.60%        1MB  0.84%  regexp.compile
         0     0% 96.60%     3.50MB  2.93%  runtime.doInit
         0     0% 96.60%     3.50MB  2.93%  runtime.main

Then use the tree command to cache the directories:

# tree -d /mnt/eos/xx -L 2
├── 8619658
│   ├── 19544472
│   ├── 19544473
│   ├── 19544474
│   └── 19544475
├── 86196581
│   ├── 19544472
│   ├── 19544473
│   ├── 19544474
│   └── 19544475
├── 861965812
│   ├── 19544472
│   ├── 19544473
│   ├── 19544474
│   └── 19544475
├── 8619658123
│   ├── 19544472
│   ├── 19544473
│   ├── 19544474
│   └── 19544475
├── 8619659
│   ├── 19544480
│   ├── 19544481
│   ├── 19544482
│   ├── 19544483
│   ├── 19544484
│   └── 19544486
├── 86196591
│   ├── 19544480
│   ├── 19544481
│   ├── 19544482
│   ├── 19544483
│   ├── 19544484
│   └── 19544486
├── 861965912
│   ├── 19544480
│   ├── 19544481
│   ├── 19544482
│   ├── 19544483
│   ├── 19544484
│   └── 19544486

You must use the tree command to cache the directories again, otherwise the memory will not be released. Now memory usage has been reduced:

pprof -text http://localhost:5572/debug/pprof/heap
Fetching profile over HTTP from http://localhost:5572/debug/pprof/heap
Saved profile in /root/pprof/pprof.rclone.alloc_objects.alloc_space.inuse_objects.inuse_space.630.pb.gz
File: rclone
Type: inuse_space
Time: Feb 18, 2024 at 10:58am (CST)
Showing nodes accounting for 37753.48kB, 100% of 37753.48kB total
      flat  flat%   sum%        cum   cum%
10589.99kB 28.05% 28.05% 10589.99kB 28.05%  bazil.org/fuse/fs.(*Server).saveNode
 8193.50kB 21.70% 49.75%  8193.50kB 21.70%  github.com/rclone/rclone/vfs.newFile
 4096.56kB 10.85% 60.60%  5120.58kB 13.56%  github.com/rclone/rclone/backend/s3.(*Fs).newObjectWithInfo
 3671.37kB  9.72% 70.33% 11864.87kB 31.43%  github.com/rclone/rclone/vfs.(*Dir)._readDirFromEntries
 3072.14kB  8.14% 78.47%  4096.16kB 10.85%  github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil.XMLToStruct
 2048.84kB  5.43% 83.89%  2048.84kB  5.43%  github.com/aws/aws-sdk-go/aws/endpoints.init
 1469.58kB  3.89% 87.79%  1469.58kB  3.89%  bazil.org/fuse/fs.(*Server).dropNode
 1024.02kB  2.71% 90.50%  1024.02kB  2.71%  encoding/xml.CharData.Copy (inline)
 1024.02kB  2.71% 93.21%  1024.02kB  2.71%  github.com/rclone/rclone/backend/s3.stringClonePointer (inline)
  513.50kB  1.36% 94.57%   513.50kB  1.36%  github.com/gdamore/tcell/v2/terminfo/p/pcansi.init.0
  513.12kB  1.36% 95.93%   513.12kB  1.36%  regexp.onePassCopy
  512.62kB  1.36% 97.29%   512.62kB  1.36%  regexp/syntax.(*compiler).inst (inline)
  512.20kB  1.36% 98.64%   512.20kB  1.36%  runtime.malg
  512.01kB  1.36%   100%   512.01kB  1.36%  github.com/rclone/rclone/cmd/mount.(*Dir).Lookup
         0     0%   100% 33653.18kB 89.14%  bazil.org/fuse/fs.(*Server).Serve.func1
         0     0%   100% 33653.18kB 89.14%  bazil.org/fuse/fs.(*Server).handleRequest
         0     0%   100% 10589.99kB 28.05%  bazil.org/fuse/fs.(*Server).saveLookup
         0     0%   100% 33653.18kB 89.14%  bazil.org/fuse/fs.(*Server).serve
         0     0%   100%  4096.16kB 10.85%  github.com/aws/aws-sdk-go/aws/request.(*HandlerList).Run
         0     0%   100%  4096.16kB 10.85%  github.com/aws/aws-sdk-go/aws/request.(*Request).Send
         0     0%   100%  4096.16kB 10.85%  github.com/aws/aws-sdk-go/aws/request.(*Request).sendRequest
         0     0%   100%  4096.16kB 10.85%  github.com/aws/aws-sdk-go/private/protocol/restxml.Unmarshal
         0     0%   100%  4096.16kB 10.85%  github.com/aws/aws-sdk-go/private/protocol/xml/xmlutil.UnmarshalXML
         0     0%   100%  4096.16kB 10.85%  github.com/aws/aws-sdk-go/service/s3.(*S3).ListObjectsWithContext
         0     0%   100%   513.12kB  1.36%  github.com/rclone/rclone/backend/internetarchive.init
         0     0%   100%  9216.73kB 24.41%  github.com/rclone/rclone/backend/s3.(*Fs).List
         0     0%   100%  5120.58kB 13.56%  github.com/rclone/rclone/backend/s3.(*Fs).itemToDirEntry
         0     0%   100%  9216.73kB 24.41%  github.com/rclone/rclone/backend/s3.(*Fs).list
         0     0%   100%  4096.16kB 10.85%  github.com/rclone/rclone/backend/s3.(*Fs).list.func1
         0     0%   100%  9216.73kB 24.41%  github.com/rclone/rclone/backend/s3.(*Fs).listDir
         0     0%   100%  5120.58kB 13.56%  github.com/rclone/rclone/backend/s3.(*Fs).listDir.func1
         0     0%   100%  4096.16kB 10.85%  github.com/rclone/rclone/backend/s3.(*v1List).List
         0     0%   100% 21081.60kB 55.84%  github.com/rclone/rclone/cmd/mount.(*Dir).ReadDirAll
         0     0%   100%  4096.16kB 10.85%  github.com/rclone/rclone/fs.pacerInvoker
         0     0%   100%  9216.73kB 24.41%  github.com/rclone/rclone/fs/list.DirSorted
         0     0%   100%  4096.16kB 10.85%  github.com/rclone/rclone/lib/pacer.(*Pacer).Call
         0     0%   100%  4096.16kB 10.85%  github.com/rclone/rclone/lib/pacer.(*Pacer).call
         0     0%   100% 21081.60kB 55.84%  github.com/rclone/rclone/vfs.(*Dir).ReadDirAll
         0     0%   100% 21081.60kB 55.84%  github.com/rclone/rclone/vfs.(*Dir)._readDir
         0     0%   100%   512.62kB  1.36%  google.golang.org/grpc/internal/binarylog.init
         0     0%   100%  1025.75kB  2.72%  regexp.Compile (inline)
         0     0%   100%  1025.75kB  2.72%  regexp.MustCompile
         0     0%   100%  1025.75kB  2.72%  regexp.compile
         0     0%   100%   513.12kB  1.36%  regexp.compileOnePass
         0     0%   100%   512.62kB  1.36%  regexp/syntax.(*compiler).cap (inline)
         0     0%   100%   512.62kB  1.36%  regexp/syntax.(*compiler).compile
         0     0%   100%   512.62kB  1.36%  regexp/syntax.Compile
         0     0%   100%  3588.09kB  9.50%  runtime.doInit
         0     0%   100%  3588.09kB  9.50%  runtime.main
         0     0%   100%   512.20kB  1.36%  runtime.newproc.func1
         0     0%   100%   512.20kB  1.36%  runtime.newproc1
         0     0%   100%   512.20kB  1.36%  runtime.systemstack

Rclone has run for over 100 days with more than 20TB of data. It runs very stably; the only issue was insufficient memory. I'm so happy that the memory issue has been resolved now. I hope there are no hidden dangers. haha
