Rclone 1.53 release

If you do some reading on Discourse, the software that runs this forum, it very intentionally does not support nested conversations a la Reddit. I see both pros and cons. Even this comment shows it, since it relates directly to your comment but not to the rest of the thread.

I prefer nested, but that's also because I spend a lot of time on Reddit. I could be convinced otherwise.

There are a few settings that default to suppressing replies, which makes things more confusing, so I toggled them off as that should help a bit, but I found the same thing: replies just don't nest.

I love you all.

Socially distant internet hugs all around.


I'm not sure if this comes from the new version, but since I upgraded to v1.53.0-DEV via brew, I have been experiencing a lot of userRateLimitExceeded errors when doing server-side syncing between two Google Drive folders using service accounts.

The operations stop around 50GB, throwing that error. Switching to another SA gives the same result.

I have tried with fresh service accounts so I'm pretty sure that they don't hit the daily upload limit.
I have also tried syncing using the same SA and not doing server-side, and it works fine.
I have also tried different client_id and client_secret.

I can't really provide a log file, so it's pretty vague for y'all, but I'll try to narrow the issue down by either:

  • downgrading to the previous rclone version, or
  • trying to sync different folders.

I want to see if anybody else is experiencing the same issue, though.

I can't think of a change in rclone which would affect that. The downgrade test should show that one way or another.

I haven't noticed this issue and I regularly sync multiple TBs server-side.

Is this with shared drives (team drives) or shared drive folders? Could you also please create a separate thread for this?

Sure. I was thinking of reporting a potential issue for the new release, but let me narrow it down and create a separate thread for it.

I've been getting quite a few userRateLimitExceeded errors recently. Do you get them on operations other than server side copying?

I have had no issue doing sync normally (without the server-side flag).

I've been using 1.53 for a few days now and it's much more 'buttery' with faster launches (about 1-2s down from about 2-3s for first launch and instantaneous if in the cache for repeat) and skipping seems faster as well - Plex did what seems to be a fairly major update as the UI is faster, so this might be contributing. Either way, I'm not complaining!

I want to experiment with --vfs-read-ahead - @ncw what's the default value please, so I have a starting point? I can't find it in the docs.

Its default is 0. But with the --buffer-size default of 16M, rclone will read ahead 16M in total (buffer size (memory) + vfs read ahead (disk)).
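For concreteness, here is a hypothetical mount combining those two knobs. The remote name, mount point, and the 256M figure are placeholders I've picked for illustration, not values from this thread:

```shell
# Sketch only: 16M of read-ahead buffered in RAM (the --buffer-size
# default, stated explicitly here) plus an extra 256M read ahead into
# the disk cache. --vfs-read-ahead only applies with a VFS cache.
rclone mount remote: /mnt/remote \
  --vfs-cache-mode full \
  --buffer-size 16M \
  --vfs-read-ahead 256M
```

With those settings the total effective read-ahead would be 16M in memory plus 256M on disk, per the formula above.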


As already reported on this thread, I am using the latest rclone version (rclone-v1.53.0-windows-amd64), and the error is happening almost all day:

2020/09/10 10:27:39 ERROR : IO error: open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

I will revert to rclone-v1.52.3-windows-amd64 and see if it stabilizes.

Also, I tried the new VFS cache mode with the full setting and observed that when, for example, a 50GB file is requested, the full 50GB file is written to the cache-dir folder, not just the requested parts of the file.

One more question. Does whatever is buffered to RAM via --buffer-size get written to the new disk cache? And also --vfs-read-ahead data?

The file will appear to be 50GB but it is a sparse file so doesn't actually contain 50 GB of data.


The --buffer-size data gets written to the disk when the stream is closed so it isn't wasted.

--vfs-read-ahead data is written straight to the disk.
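You can see the sparse behaviour with any file, not just rclone's cache. A generic demonstration (the path is a placeholder):

```shell
# Create a 1 GiB sparse file: truncate extends the apparent size
# without allocating any data blocks on disk.
truncate -s 1G /tmp/sparse-demo
ls -lh /tmp/sparse-demo   # apparent size reads 1.0G
du -m /tmp/sparse-demo    # actual disk usage is ~0 MB
rm /tmp/sparse-demo
```

The apparent size (what ls reports) comes from the file's length; du reports the blocks actually allocated, which stay near zero until data is written.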


Yeah, I understood that, but the thing is: looking at the file properties, the file occupies the full 50GB in the vfs folder, while in the vfsMeta folder it occupies only some bytes.

How are you measuring the file properties? On Unix, use du -m to see the space actually used in MB:

$ ls -l 1GB-sparse
-rw-rw-r-- 1 ncw ncw 1073741824 Sep 10 16:13 1GB-sparse
$ ls -lh 1GB-sparse
-rw-rw-r-- 1 ncw ncw 1.0G Sep 10 16:13 1GB-sparse
$ du -m 1GB-sparse
0	1GB-sparse

I'm not sure how to do this on Windows...
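One option on Windows (an assumption on my part, not something tested against rclone's cache files): fsutil can report whether NTFS has flagged a file as sparse, and Explorer's file Properties dialog shows "Size on disk" separately from "Size". The path below is a placeholder:

```shell
:: Run in cmd on Windows; prints whether the sparse flag is set.
fsutil sparse queryflag C:\path\to\cachefile
```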

I will take some screenshots and do the tests as soon as the API error disappears 🙂


Sorry for the flurry of questions. Hopefully, this will be the last one. I'm getting amazing start times with 1.53 with the default 16MB buffer and I don't want to jinx this by raising --buffer-size or --vfs-read-ahead too high.

Am I correct that rclone prioritises the connection to optimise movie stream playback, i.e. requesting the next required chunk first, and only if there's spare bandwidth filling the buffer and then the disk (--vfs-read-ahead), rather than, e.g., filling the buffer and then the disk before starting playback, which would result in slower launch times?

Thanks in advance.

Best to start a new post for this and use the Help and Support template, including all the info, and we can see what's going on.

It uses sparse files, so it only grabs the parts it needs, but the file looks to be full size.