What is the problem you are having with rclone?
What is your rclone version (output from rclone version)?
rclone: Version "v1.51.0-163-ge2bf9145-beta"
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Debian Buster, 64 bit, up to date
Which cloud storage system are you using? (eg Google Drive)
Google Drive (crypt remote)
The command you were trying to run (eg
rclone copy /tmp remote:tmp)
rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5573 --log-file /media/data/logs/scan_gcrypt.log -vv
The rclone config contents with secrets removed.
ExecStart=/usr/bin/rclone mount \
--buffer-size 256M \
--dir-cache-time 1000h \
--log-level INFO \
--log-file /media/data/logs/gcrypt.log \
--poll-interval 15s \
--timeout 1h \
--umask 0007 \
--user-agent debianrs \
--rc-addr 127.0.0.1:5573 \
--cache-dir /dev/shm \
--vfs-read-chunk-size 128M \
--vfs-read-chunk-size-limit 1G \
--vfs-cache-mode writes gcrypt: /media/gcrypt
ExecStop=/bin/fusermount -uz /media/gcrypt
A log from the command with the -vv flag
2020/10/30 19:20:59 DEBUG : rclone: Version "v1.51.0-163-ge2bf9145-beta" starting with parameters ["rclone" "rc" "vfs/refresh" "recursive=true" "--rc-addr" "127.0.0.1:5573" "--log-file" "/media/data/logs/scan_gcrypt.log" "-vv"]
2020/10/30 19:25:59 DEBUG : 3 go routines active
2020/10/30 19:25:59 Failed to rc: connection failed: Post "http://127.0.0.1:5573/vfs/refresh": net/http: timeout awaiting response headers
I just realized that it's not working anymore. It was working before, and I haven't changed my setup in quite a while.
Is it possible that I just need to raise --retries and/or --retries-sleep, since I have many more files in my setup now than at the beginning?
I can't find anything wrong in my setup. I am using the beta version because I need the --cutoff-mode flag.
Thanks in advance!
That is an old version of an old beta.
Perhaps update and rerun both the mount command and the refresh command.
You'd need the mount log to see what's going on.
That's an older version of rclone, so it might be wise to upgrade and look at --vfs-cache-mode full.
I just updated to the latest stable. I did a beta update last week, but it seems it was not working.
edit: will update in a few minutes
Still not working. Should I switch to log level DEBUG? (I am on INFO right now.)
I switched to --vfs-cache-mode writes (from full) and actually had better performance with it. I had some problems with full and my encrypted Nextcloud storage on my Google Drive.
Try adding a longer timeout to the rc command.
With --timeout? Or --contimeout?
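For illustration, raising the idle timeout on the refresh call itself could look like this (a sketch only: it assumes the same rc address as the mount unit above, and the 30m value is an arbitrary example, not a recommendation):

```shell
# Raise the client-side idle timeout (--timeout) so the rc call is not
# aborted while the daemon is still walking a large directory tree.
# 30m is an illustrative value; tune it to the size of your remote.
rclone rc vfs/refresh recursive=true \
  --rc-addr 127.0.0.1:5573 \
  --timeout 30m \
  --log-file /media/data/logs/scan_gcrypt.log -vv
```

The "timeout awaiting response headers" error in the log above suggests the HTTP client gave up waiting, which is why lengthening --timeout on the rc command is worth trying.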
OK, damn, now I am getting "error googleapi: Error 403: User Rate Limit Exceeded." But it should reset in one hour?
I'll need to try it again later then. I never hit the limit before; it's probably because of the tons of API calls I made while trying to fix the rc refresh.
edit: Actually I don't get it. I am still streaming Plex on my television while getting the error on the scan.
For testing, you can try working with a subfolder:
rclone rc vfs/refresh dir=home/junk
OK, it's working with the dir=xyz parameter.
So I assume it's failing because it briefly hits the API limit while scanning thousands of small files.
It shouldn't, but then again I don't run refresh more than twice a day.
Me neither. I only run it right after the mount is freshly mounted, which happens maybe once per month.
I had syncing problems with my Nextcloud today, which is why I looked into it in the first place.
I checked the API console, and it's the queries-per-100-seconds limit (1,000) that I am hitting while doing the refresh.
Setting a high timeout does the trick for me: rclone just sleeps for a while after hitting the limit instead of aborting the command.
So the scan takes very long, but it no longer results in an error.
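As an aside (not something tried in this thread): rclone's rc interface also supports an _async=true parameter, which returns a job ID immediately so the HTTP call itself can never time out; the job can then be polled with job/status. The jobid below is illustrative.

```shell
# Start the refresh as a background job; the call returns a jobid at once,
# so no client-side timeout can interrupt a long scan.
rclone rc vfs/refresh recursive=true _async=true --rc-addr 127.0.0.1:5573

# Poll the job later (replace 1 with the jobid returned above).
rclone rc job/status jobid=1 --rc-addr 127.0.0.1:5573
```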
Thanks for the quick help again!