I have started suffering from slow speeds; it's as if the transfer is limited to 10-11 MB/s.
I have a mount, and if I use rsync it starts transferring OK, but after a few seconds it drops to 11 MB/s.
Re the logs, I changed my mount's --log-level to DEBUG, but there isn't anything new apart from the INFO lines for the vfs cache cleanup:
2020/09/29 20:54:02 INFO : vfs cache: cleaned: objects 2088 (was 2088) in use 0, to upload 0, uploading 0, total size 283.042G (was 283.042G)
2020/09/29 20:55:02 INFO : vfs cache: cleaned: objects 2088 (was 2088) in use 0, to upload 0, uploading 0, total size 283.042G (was 283.042G)
2020/09/29 20:56:02 INFO : vfs cache: cleaned: objects 2088 (was 2088) in use 1, to upload 0, uploading 0, total size 283.042G (was 283.042G)
2020/09/29 20:57:02 INFO : vfs cache: cleaned: objects 2088 (was 2088) in use 0, to upload 0, uploading 0, total size 283.115G (was 283.042G)
2020/09/29 20:58:02 INFO : vfs cache: cleaned: objects 2088 (was 2088) in use 0, to upload 0, uploading 0, total size 283.115G (was 283.115G)
2020/09/29 20:59:02 INFO : vfs cache: cleaned: objects 2088 (was 2088) in use 0, to upload 0, uploading 0, total size 283.115G (was 283.115G)
I think you are pulling this file directly from your VFS cache and not from Gdrive. What storage are you using for your cache volume? /mnt/pool/cache? Is it a thumb drive or something?
You can manually delete the file from the VFS cache location and try again.
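If it helps, this is a rough sketch of how to evict it by hand. It assumes the default cache root and a remote named gdrive (both assumptions; adjust if you mount with --cache-dir or your remote has another name):

```shell
# Default VFS cache root (file data under vfs/, metadata under vfsMeta/).
# If you pass --cache-dir to rclone mount, point this there instead.
CACHE_DIR="${HOME}/.cache/rclone/vfs"

# Read-only first: list the ten largest cached objects.
[ -d "$CACHE_DIR" ] && du -ah "$CACHE_DIR" | sort -rh | head -n 10 || true

# Then remove the one file; the path mirrors the remote's layout, e.g.:
# rm "$CACHE_DIR/gdrive/path/to/file.mkv"
```

Deleting from the cache just forces the next read to come from the remote again; rclone will re-download it.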
@ookla-ariel-ride I did some more testing with my old mount settings and I don't have the issue, so it has to be something to do with the cache or the VFS mode, as the HDD is capable of much more than 10 MB/s:
/dev/sdb:
Timing buffered disk reads: 550 MB in 3.01 seconds = 182.84 MB/sec
This is my old mount, but I'd rather go back to VFS full; performance was much better when it worked.
Thanks for the advice @asdffdsa
I'm freeing up some space on my main HDD to put the cache there, and if I'm still facing the issue I'll start it like that.
Edit:
This is very odd... After doing the above the issue was still there, so I ran another test and realised that the issue is only with the mount.
If I do the following, I pass the 10 MB/s mark no problem...
I don't think the issue is the HDs. I've tried using both (the system HD and the download HD) for the cache with the same results, and the disks can read/write at much better speeds than 10 MB/s:
/dev/sda:
Timing cached reads: 27560 MB in 1.99 seconds = 13858.22 MB/sec
Timing buffered disk reads: 558 MB in 3.01 seconds = 185.43 MB/sec
/dev/sdb:
Timing cached reads: 28744 MB in 1.99 seconds = 14458.70 MB/sec
Timing buffered disk reads: 550 MB in 3.01 seconds = 182.93 MB/sec
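For reference, those figures are hdparm's read benchmark (needs root); there's also a quick no-root write-speed sanity check with dd. The /tmp path and device names here are just examples:

```shell
# hdparm read benchmark: -T measures cached (RAM) reads,
# -t measures buffered reads from the disk itself. Needs root:
#   sudo hdparm -tT /dev/sda /dev/sdb
# No-root write-speed sanity check: write a 64 MB test file and
# flush it to disk so the throughput figure is honest.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f /tmp/ddtest
```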
In fact, the issue is only with the mount. If I use rclone copy/move I get much better speeds, so could it be the kernel?
I had 4.19.0-10-amd64 when all this started, and I've tried 5.4.0, 5.6.0 and 5.7.0, but I can't see any improvement.
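One way to narrow it down further is to read the same file both ways; this is a sketch, with gdrive: and /mnt/gdrive as placeholder names for the remote and mount point:

```shell
# 1) Direct copy, bypassing the mount (the path that is already fast):
#    rclone copy gdrive:test/bigfile /tmp/ -P
# 2) The same file read through the FUSE mount (the slow path):
#    dd if=/mnt/gdrive/test/bigfile of=/dev/null bs=1M status=progress
# If (1) is fast and (2) is capped near 10 MB/s, the bottleneck is in the
# mount's VFS layer (tuning candidates: --vfs-read-chunk-size,
# --buffer-size), not the kernel or the disks.
true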