No luck, cleanup still gets throttled and there's really no difference with or without the --fast-list flag. So no gain in changing this.
Let's wait and see what Dalo and Yukonblonde report back, but I don't see any issues in my use case.
Hi guys, I have two mounts with the beta version. One is a SharePoint union and the other just a OneDrive crypt mount.
From my initial experience and a look at the debug log, I can't see any throttling either, although I didn't read the debug log that thoroughly.
Not sure how the cleanup command is triggered though.
More than happy to share my debug log if it helps.
I've merged the ListR support to master now which means it will be in the latest beta in 15-30 minutes and released in v1.65
Thank you all for testing
Do I get you right: can I activate this feature by adding --fast-list to the mount command?
No, --fast-list doesn't do anything with rclone mount directly.
However, what you can do is run rclone mount with the --rc flag, then use rclone rc vfs/refresh recursive=true to fill up the VFS cache with info. Eg
rclone rc vfs/refresh recursive=true _async=true
This will use ListR, which is the underlying mechanism behind --fast-list, and should fetch all the directory entries very quickly.
You'll want to set --dir-cache-time to something large - say 999d - to make sure the entries you just read don't get discarded:
--dir-cache-time duration   Time to cache directory entries for (default 5m0s)
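Put together, a minimal sketch of the two commands (the remote name `remote:`, mount point, and port 5572 are placeholders, not from the post):

```shell
# 1) mount with the remote-control server enabled and a long dir cache
rclone mount remote: ~/mnt/remote \
    --rc --rc-addr 127.0.0.1:5572 \
    --dir-cache-time 9999h --daemon

# 2) then prime the whole directory tree via ListR in the background
rclone rc vfs/refresh recursive=true _async=true --url 127.0.0.1:5572
```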
Hm, I still don't get it right with the --rc flag.
Never used this before...
Usually, I fire up my drives on startup with a service. In it, there is a mount command with many flags.
What I understood is that when I attach --rc, I am able to remote control via an API, right?
What I don't understand is where to put this command:
rclone rc vfs/refresh recursive=true _async=true
And how can I make it permanent with my mounts?
Sorry for the questions.
Start a new post. Use help and support. Include all your info.
Hey,
I managed to include it in my mounts now.
Onedrive
Total objects: 17.223k (17223)
Total size: 19.317 TiB (21239521671089 Byte)
Building the tree in 24s is huge!
There is also no API hit at all....
root@dalo87:~# /usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:5572 _async=true
{
"jobid": 12
}
root@dalo87:~# rclone rc --url 127.0.0.1:5572 job/status jobid=12
{
"duration": 24.114644376,
"endTime": "2023-10-03T00:43:29.283119119+02:00",
"error": "",
"finished": true,
"group": "job/12",
"id": 12,
"output": {
"result": {
"": "OK"
}
},
"startTime": "2023-10-03T00:43:05.168474743+02:00",
"success": true
}
Could you please share your mount command?
This is my mount command for Onedrive Shares (I have 5 of them)
Assuming you use vfs/refresh at start, you need to specify one service for every drive, because each rc instance triggering API requests needs its own port. I didn't manage to keep this in just one service file rotating ports yet... help needed.
Feel free to use this command. Note that I have a heavy machine and don't worry about limited memory. You would probably want to lower --onedrive-chunk-size.
Place the file in /etc/systemd/system with the name
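One possible workaround for the port rotation (purely a sketch, not from this thread): derive the port deterministically from the instance name in a small helper, so a single wrapper script or template unit could compute its own port. The `port_for` function and the 5500-5599 range are assumptions; note that two different names can collide on the same port, so check your set of drive names first.

```shell
#!/bin/sh
# Hypothetical helper: map a mount/instance name to a stable rc port.
port_for() {
  name="$1"
  sum=0
  # sum the byte values of the name (od prints them as unsigned decimals)
  for c in $(printf '%s' "$name" | od -An -tu1); do
    sum=$((sum + c))
  done
  # fold into an unprivileged 100-port window
  echo $((5500 + sum % 100))
}

port_for drive1   # prints 5587
port_for drive2   # prints 5588
```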
<yourservicename>-<port>@.service
so that you have
rclone-5571@.service
rclone-5572@.service
....for every drive
with following content:
# User service for Rclone mounting
#
# Place in ~/.config/systemd/user/
# File must include the '@' (ex rclone@.service)
# As your normal user, run
# systemctl --user daemon-reload
# You can now start/enable each remote by using rclone@<remote>..
# systemctl --user enable rclone@dropbox
# systemctl --user start rclone@dropbox
[Unit]
Description=rclone: Remote FUSE filesystem for cloud storage config %i
Documentation=man:rclone(1)
After=network-online.target
Wants=network-online.target
AssertPathIsDirectory=%h/mnt/%i
StartLimitInterval=200
StartLimitBurst=5
[Service]
Type=notify
#Environment="RCLONE_CONFIG_PASS=mypass" #set config password if needed
ExecStart= \
/usr/bin/rclone mount \
--config=%h/.config/rclone/rclone.conf \
# --log-level DEBUG \
# --log-file /root/logs/rclone.log \
--umask 022 \
--vfs-cache-mode full \
--allow-other \
--no-modtime \
--buffer-size 32M \
--cache-dir /root/rclone/cache \
--no-checksum \
--disable-http2 \
--vfs-fast-fingerprint \
--ignore-checksum \
--no-check-certificate \
# the next five are recommended for OneDrive/SharePoint; without the
# bwlimit the API gets hit quickly
--checkers 1 \
--tpslimit 3 \
--transfers 1 \
--bwlimit-file 100M:100M \
--low-level-retries 1 \
# turn off versions; remember to also set this on the Microsoft admin page
--onedrive-no-versions \
--onedrive-hash-type none \
# lower this if you are low on memory; don't exceed 250M
--onedrive-chunk-size 250M \
# keep the dir cache forever
--dir-cache-time 9999h \
# clear soon if not needed
--vfs-cache-max-age 1m \
--vfs-read-chunk-size 256M \
#--vfs-read-chunk-size-limit 1G \
--poll-interval 1m \
# the port is given in the filename after "-" (%j)
--rc-addr=127.0.0.1:%j \
--rc \
--ignore-size \
--user-agent "ISV|yourname|yourid" \
# since I don't have enough disk space, I set min-free for all drives rather than max-size
--vfs-cache-min-free-space 40G \
#--vfs-cache-max-size 100G \
%i: %h/mnt/%i
ExecStop=/bin/fusermount -u %h/mnt/%i
Restart=always
RestartSec=30
# triggers the vfs/refresh in the background; the mount doesn't wait for it
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:%j _async=true
[Install]
WantedBy=default.target
After that, set up the service with
systemctl daemon-reload
systemctl enable <yourservicename>-<port>@<yourmountname>.service
Mine were
systemctl enable rclone-5571@drive1.service
systemctl enable rclone-5572@drive2.service
systemctl enable rclone-5573@drive3.service
then start with
systemctl start rclone-5571@drive1.service
systemctl start rclone-5572@drive2.service
....
The services also start at boot.
To verify whether vfs/refresh happened, you can check with job/status:
rclone rc --url 127.0.0.1:port job/status jobid=1
You can find the job IDs in the debug log. Remember to uncomment the log lines in the service file.
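If you want a script to block until the refresh is done rather than checking by hand, a small polling sketch might look like the following. Port 5572 and jobid 12 are assumptions taken from the examples earlier in the thread; substitute your own values from the unit file and debug log.

```shell
#!/bin/sh
# Hedged sketch: poll job/status until an async vfs/refresh finishes.

is_finished() {
  # job/status returns JSON; a crude string match avoids needing jq
  case "$1" in
    *'"finished": true'*) return 0 ;;
    *) return 1 ;;
  esac
}

wait_for_job() {  # usage: wait_for_job <port> <jobid> <max_checks>
  i=0
  while [ "$i" -lt "$3" ]; do
    out=$(rclone rc --url "127.0.0.1:$1" job/status "jobid=$2" 2>/dev/null)
    if is_finished "$out"; then
      echo "job $2 finished"
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  echo "job $2 still running after $3 checks" >&2
  return 1
}

# wait_for_job 5572 12 60   # run on the host with the mount active
```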
This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.