Data copied via rclone sync disappearing when moved within cloud

The cache backend will only affect reads.
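
If the symptom in the title is files that were moved server-side seeming to vanish from the mount, one thing worth checking (my guess, not a confirmed diagnosis) is whether the cache backend is just holding a stale directory listing until its info_age runs out. You can force it to re-read a path with the cache/expire RC command:

```bash
# Purge a path from the cache backend's listing/metadata cache so the
# next read fetches a fresh listing from the cloud. The path is relative
# to the cache remote and is a placeholder here. Requires rclone running
# with the remote control API enabled (--rc).
rclone rc cache/expire remote=path/to/moved/folder/
```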

For Plex - well, the cache backend was sort of designed with Plex in mind, and it does give you some advantages there. It makes a weaker connection a bit more robust against stutter by prefetching data well ahead of the read-point. I imagine it could also help alleviate some of the load from Plex's more aggressive metadata scanning if you used an RC command to prefetch the first chunk of all files (as that's where a lot of metadata typically resides).
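
For what it's worth, here is a minimal sketch of what that prefetch could look like. It assumes the RC command in question is cache/fetch and that your cache remote is named cache: - both assumptions of mine, so adjust to your setup:

```bash
# List every file on the (assumed) cache remote, then ask the cache
# backend to prefetch chunk 0 of each - the region where most media
# containers keep their metadata. Requires rclone to be running with
# the remote control API enabled (--rc / rclone rcd).
rclone lsf -R --files-only cache: | while IFS= read -r f; do
  rclone rc cache/fetch chunks=0 "file=$f"
done
```

On a large library that is one RC call per file, so it's something you'd run once after a big import rather than on a schedule.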

That said - Animosity is the resident Linux/Plex guru here, and he decided against using the cache backend in the end, so it is not actually needed if you configure Plex correctly.
I do not use Plex myself yet, so I have limited knowledge of the specifics - but there is a whole thread dedicated to explaining Animosity's personal rclone+Plex setup, and it's bound to be a goldmine of useful info:

I think ultimately it boils down to: do you really need a read cache? If you have a good connection (i.e. more than enough bandwidth to carry as many simultaneous streams as you need), you probably don't. You have 10TB of download per day and nearly a million API calls in that same timeframe to use. If you can make the software behave nicely, that is usually more than enough - and it gives rclone direct access to efficiently read exactly what is needed at any given time, rather than involving another abstracted layer of complexity (and potential inefficiency).
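
To illustrate what that direct, cache-less access looks like in practice, here is a bare-bones mount sketch. The remote name and every flag value are placeholders of mine rather than settings from this thread - Animosity's thread above has the real tuning advice:

```bash
# Read-only mount straight onto the cloud remote, no cache backend.
# The VFS layer requests exactly the byte ranges Plex asks for, and the
# HTTP range size doubles as sequential reading continues, up to the
# limit. "gdrive:media" and all values are illustrative only.
rclone mount gdrive:media /mnt/media \
  --read-only \
  --buffer-size 64M \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G \
  --dir-cache-time 72h
```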

Unless, that is, you are trying to make your Plex setup a large-scale operation serving dozens of people at the same time - at that point a massive local cache holding the most requested reads starts to make more and more sense.
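
If you do end up going that route, sizing the cache mostly comes down to a couple of knobs on the cache remote. A minimal config sketch - the remote names and sizes here are invented for illustration, not recommendations:

```ini
[gcache]
type = cache
remote = gdrive:media
# size of each locally stored chunk
chunk_size = 10M
# cap on total disk space used for cached chunks
chunk_total_size = 500G
# how long directory listings/metadata stay cached
info_age = 24h
```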