TL;DR: in addition to anything you upload to “the cloud”, keep multiple local, physical copies of your backup data, and verify all of them (local and remote) periodically.
This is just to tell my tale of a recent near-woe. I recently uploaded a volume with ~2TB and ~230K files to “the cloud” using rclone: first I uploaded almost all of it to ACD, then I migrated everything from ACD to GDrive, and finally I uploaded just the missing files (the ones with names too long for ACD, etc.) directly to GDrive. All of these were done with “rclone copy” commands, and the last one (from local to GDrive) was run twice, checking that the final run reported 0 files Transferred and 0 Errors (so supposedly the entire local volume was then all on the remote).
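For the record, the whole dance looked roughly like this. This is a minimal sketch: the remote names “acd:backup” and “gdrive:backup” and the path /mnt/volume are made up for illustration, and any crypt layer is omitted.

```sh
# 1) Initial upload of the local volume to ACD
rclone copy /mnt/volume acd:backup

# 2) Migration from ACD to GDrive (between different providers, rclone
#    streams the data down from one and back up to the other)
rclone copy acd:backup gdrive:backup

# 3) Upload the stragglers (names too long for ACD, etc.) directly to
#    GDrive; run it again and confirm it reports 0 Transferred, 0 Errors
rclone copy /mnt/volume gdrive:backup
rclone copy /mnt/volume gdrive:backup
```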
Having been burned in the past by not verifying everything, I then used “rclone mount” to mount the remote volume locally and verified it file by file against a locally generated .md5 file (created with md5sum). To my surprise, no fewer than 88 remote files were corrupted or unreadable: 75 of them failed with the (supposedly already fixed) “failed to authenticate decrypted block - bad password” bug, and the other 15 were simply not found (!) on the remote.
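The check itself was nothing fancy; it went something along these lines, with the same hypothetical names as above. Note that md5sum -c reads every file back in full, which is exactly why it catches both the unreadable files and the missing ones:

```sh
# Generate checksums from the local copy (run from the volume root so
# the paths stored in the .md5 file are relative)
cd /mnt/volume
find . -type f -print0 | xargs -0 md5sum > /tmp/volume.md5

# In one terminal: mount the remote
rclone mount gdrive:backup /mnt/remote

# In another terminal, once the mount is up: re-read every file from
# the remote and compare against the local checksums, showing failures
cd /mnt/remote
md5sum -c /tmp/volume.md5 | grep -v ': OK$'
```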
So folks, if you are uploading any data to “the cloud” that you don’t want to lose (and if you did want to lose it, you wouldn’t go to the trouble of uploading it in the first place, right?), do yourself a favor: check everything you uploaded before considering it safe, and then recheck it periodically. Oh, and check your local copies periodically too; a recheck is sketched below.
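Depending on your rclone version, much of the periodic rechecking can be done without a mount. A sketch, with the same hypothetical names (and note that against a crypt remote a plain “rclone check” may only be able to compare sizes, which is what “rclone cryptcheck” exists for in newer versions):

```sh
# Compare the local volume against the remote (sizes and, where the
# backend provides them, checksums)
rclone check /mnt/volume gdrive:backup

# And re-verify the local copy itself against the stored checksums
cd /mnt/volume
md5sum -c /tmp/volume.md5 | grep -v ': OK$'
```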
As the famous meme says: “There is no cloud – it’s just someone else’s computer”, and like any computer, it’s not merely likely but expected to go on the fritz from time to time and take your data with it. And rclone, wonderful as it is, only adds more complexity (and so more chance of bugs) to the mix.
EDIT: I opened an issue on GitHub about this problem; see https://github.com/ncw/rclone/issues/999