Well, you’d keep adding data to the cache until you hit the max size; after that it purges the oldest data as new data comes in.
The --rc flag lets you remote-control the mount, so you can purge individual directories from the cache (the directory structure and/or the data chunks too).
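For example, something like this should work (the remote path here is a placeholder, and the mount must have been started with --rc enabled):

```shell
# Expire just the cached directory listing for one path:
rclone rc cache/expire remote=path/to/dir
# Expire the listing and drop its cached data chunks as well:
rclone rc cache/expire remote=path/to/dir withData=true
```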
Useful when something seems to be missing from the mount but you’re sure it’s on the Google Drive, because you can keep the rest of the cache intact. Otherwise you’d be forced to send rclone a SIGHUP, which clears all cached data and structures. If you did have nearly 8TB of locally stored data, that would be a shame. If you are going down that route, I’d consider larger cache chunks to shrink the huge database that would be required to index 8TB of 32M chunks.
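To put rough numbers on that, here’s the chunk count the cache database would have to index for ~8TB at the default size versus a larger one (256M is just an illustrative choice for --cache-chunk-size):

```shell
# ~8 TiB expressed in MiB, divided by the chunk size in MiB:
echo $(( 8 * 1024 * 1024 / 32 ))    # default 32M chunks  -> 262144 entries
echo $(( 8 * 1024 * 1024 / 256 ))   # 256M chunks         -> 32768 entries
```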
This isn’t as bandwidth efficient as leaving the data locally and uploading when full, though, as you’re sending it up and then pulling it down at least once.
It might be possible to patch the cache upload routine to upload when free space on the drive falls below a set threshold, rather than on a timer, which would replicate your existing setup much more closely. You’d need to talk to @remus.bunduc about that though. As it stands now, it uploads after the delay timer expires.
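For reference, the current timer-based behaviour is driven by the cache backend’s temp-upload flags; a mount using them might look like this (the path and wait time are example values, not recommendations):

```shell
# Queue writes locally, then upload each file 15 minutes after it stops changing:
rclone mount cache-remote: /mnt/media \
  --cache-tmp-upload-path /var/cache/rclone-upload \
  --cache-tmp-wait-time 15m
```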
A third option would be to upload on the timer as now, but not delete the locally queued data that successfully uploaded until a free-space trigger fires.