Rclone 1.53 release

Not sure I would say it's doing anything other than getting what is requested. The defaults are generally fine, and if you really wanted to, you could increase --vfs-read-ahead to fetch more of the file ahead of time, but I personally suggest not changing anything unless you have an issue.
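For reference, this is roughly where those flags go on a mount command. The remote name, mount point, and values below are placeholders, not recommendations; as said above, the defaults are usually fine:

```shell
# Illustrative values only; the defaults are generally fine.
# --buffer-size:    in-memory read buffer per open file
# --vfs-cache-mode: "full" caches file data on disk (as sparse files)
# --vfs-read-ahead: how far beyond the read position to download (1.53+)
rclone mount gdrive: /mnt/gdrive \
  --buffer-size 16M \
  --vfs-cache-mode full \
  --vfs-read-ahead 128M
```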


I don't think either will jeopardize start times, provided you have the bandwidth for the extra download they will cause. This would probably only matter if you had multiple streams open at once.

Yes, more or less :slight_smile: As soon as data is in the buffer it is eligible to be shipped off to the user. Data gets downloaded into the buffer (--buffer-size) and then written to the disk. Rclone reads it back off the disk to send it to the user. However, at that point it is very likely still in the OS's cache, so this is very efficient.

As promised here we go:

Rclone settings:

perfmon write activity:

Disk Activity:

Explorer becomes unresponsive:

vfs folder (configured on Rclone settings):

vfsMeta folder:

The SanDisk Extreme is an external USB drive, right? Are you current on drivers and firmware? Have you tried putting the cache on a different drive?

Yes, it's an external USB device... honestly, I don't think that's the problem...

I think it is the problem. Is it formatted FAT32? FAT32 doesn't support sparse files; only NTFS and ReFS support sparse files on Windows.

It's formatted as exFAT.

OK; only NTFS and ReFS support sparse files on Windows.

exFAT, FAT32, FAT, and whatever else you might use don't support sparse files.
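To see why this matters for the vfs cache, here's a quick demo of sparse allocation on a Linux shell (GNU coreutils assumed). Asking for a 100 MiB file without writing data allocates almost nothing on a sparse-capable filesystem, whereas FAT/exFAT would have to back the whole size with real blocks, which is what hammers the drive:

```shell
# Create a 100 MiB file without writing any data into it.
truncate -s 100M sparse_demo
# Compare the apparent (logical) size with the blocks actually allocated.
# On ext4/NTFS/ReFS the allocation is near zero; FAT/exFAT can't do this.
stat -c 'apparent=%s bytes, allocated=%b blocks of %B bytes' sparse_demo
rm sparse_demo
```

On Windows you can check a volume's sparse support with `fsutil fsinfo volumeinfo <drive>:` and look for "Supports Sparse Files" in the output.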

That should be the problem then :confused:

Thanks! I will try a different device. However, why does Rclone write two files to different folders?

Additionally, I am currently troubleshooting the problem I reported before:

2020/09/10 10:27:39 ERROR : IO error: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

I suggest you start a separate thread for each issue.

A huge number of people are experiencing the 403 at the moment. It appears to be a Google backend issue. It might be best to wait a bit and check a Google down detector periodically, rather than trying to troubleshoot the 403 right now.

Source? I googled it and I don't see any reported issue.

Well spotted.

This should go in the docs I think...

Rclone writes the file itself to the vfs hierarchy and the info about the file to the vfsMeta hierarchy. If you look at the vfsMeta files you'll see they are JSON files with info about the file in the vfs hierarchy.
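For illustration, a vfsMeta entry looks something like the JSON below. The field names and values here are my assumptions about the format, not a documented schema, so check a real file in your own cache; the idea is that it records info about the cached copy, including which byte ranges of the file are actually present on disk:

```json
{
  "ModTime": "2020-09-10T10:27:39.000000000Z",
  "ATime": "2020-09-10T10:30:00.000000000Z",
  "Size": 1073741824,
  "Rs": [
    { "Pos": 0, "Size": 67108864 }
  ],
  "Dirty": false
}
```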

Yeah, well spotted. I would suggest adding it to the docs and an error in the Rclone logs.

Thanks !

In less than 12 hours I got the same error again. Either Rclone changed behavior in some unexpected way between 1.52.3 and 1.53.0, or it's a Google problem on their side.

This is actually a much better indicator:



Yes. It is "another" source. There are 4 or 5 ... it is sometimes useful to check them all to triangulate. Google tends to be, for understandable reasons, a bit more conservative about its thresholds and about when it acknowledges outages.

Google has also changed a few of its APIs and its group management in the last month. Mostly smooth, but a few folks have had glitches that appear to be related to those changes (not rclone related, mostly).


I also have reports from a community that deals with Gdrive a lot, and the same 403 errors are happening there too. That said, this is Google's doing and nothing to do with this new rclone release, so we should not discuss it in this thread.


True, we could leave it out, as this is indeed a release thread.

I was trying to provide some context / another avenue for the clunky conversation, so he doesn't waste time diagnosing something that is not rclone related.
