I just started testing this with the latest beta and have been running some benchmarks (too early to publish since nothing is scientific yet, but it seems to outperform PlexDrive significantly).
Two issues. First, files are "disappearing" from my directory listings. They're not gone from the actual Google Drive itself, but if I `ls` a directory on my cached Google mount I'll see maybe 10 files and 1 subdirectory, and I can read/copy the files no problem. Do nothing but wait 30 minutes, come back, and the directory is empty except for the subdirectory. Killing the mount and starting it up again with the dump-cache option brings everything back from the dead. It does mean, though, that a full scan of a new Plex server, for example, never completes because files keep disappearing.
Second, in the logs I'm seeing this panic (not sure if it's related):
panic: runtime error: slice bounds out of range
goroutine 20496 [running]:
github.com/ncw/rclone/cache.(*Handle).getChunk(0xc42499a230, 0x5000000, 0xc42499a268, 0xff02e0, 0xc42cf35e08, 0x5a2cd7e5, 0xc427b8d080)
github.com/ncw/rclone/cache.(*Handle).Read(0xc42499a230, 0xc424076000, 0x1000, 0x100000, 0x0, 0x0, 0x0)
github.com/ncw/rclone/fs.ReadFill(0x7f5be6106a98, 0xc42499a230, 0xc424076000, 0x1000, 0x100000, 0xe6ef20, 0xf3cc40, 0xeb0f00)
github.com/ncw/rclone/fs.(*buffer).read(0xc420193980, 0x7f5be6106a98, 0xc42499a230, 0x7f5be6106a98, 0xc42499a230)
created by github.com/ncw/rclone/fs.(*asyncReader).init