When I started working on a new virtual backend for rclone, I did a lot of interactive testing. That was very helpful to understand how rclone behaves, but it quickly became tedious, so I began to automate it.
I focused on black-box testing because the remote I am working on is a virtual backend that combines multiple remotes. The scripts mainly automate the kind of interactive, CLI-level testing I was doing by hand, and are a tiny complement to rclone’s Go-based test framework.
Right now my setup looks like this:
- I have a `test/` directory with a set of Bash-based integration scripts.
- The scripts use the local filesystem for a local remote and one or more MinIO instances to exercise S3-compatible object storage.
- The tests use a dedicated `rclone.conf` generated by a setup script, so I can easily change or extend test configurations in the future.
- All test data is stored under a `_data` subdirectory inside `test/`, which is listed in `.gitignore` so it doesn't end up in the repo.
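For concreteness, the setup script is essentially a small Bash helper that creates the `_data` layout and writes the dedicated `rclone.conf` before the suite runs. A minimal sketch (the remote names, MinIO endpoint, and credentials here are placeholders, not my actual config):

```shell
#!/usr/bin/env bash
# Sketch of a setup script that prepares the test data directory and
# generates a dedicated rclone.conf for the test suite.
# Remote names, endpoint, and credentials are hypothetical placeholders.
set -euo pipefail

TEST_DIR="${TEST_DIR:-$PWD/test}"
mkdir -p "$TEST_DIR/_data/local-remote"

cat > "$TEST_DIR/rclone.conf" <<EOF
[testlocal]
type = local

[testminio]
type = s3
provider = Minio
endpoint = http://127.0.0.1:9000
access_key_id = minioadmin
secret_access_key = minioadmin
EOF

# Every test then invokes rclone against this config only, e.g.:
#   rclone --config "$TEST_DIR/rclone.conf" lsd testminio:
echo "wrote $TEST_DIR/rclone.conf"
```

Keeping the config generated rather than checked in means the scripts never touch my real `~/.config/rclone/rclone.conf`, and adding a new test remote is a one-line change in the heredoc.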
Does this approach make sense? I’d be interested to hear how other backend authors automate this kind of black-box / CLI-level testing, and whether you have alternative setups or patterns you’d recommend.
Addendum: I am considering moving to bats-core for a cleaner test structure and better portability.
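To make the bats-core idea concrete, here is a sketch of what one of the Bash scripts might become as a `.bats` file (written out via a heredoc so the whole thing is copy-pasteable). The remote names `testlocal`/`testminio` and the bucket name are placeholders from a hypothetical test `rclone.conf`; the `rclone copy` invocations themselves are standard:

```shell
#!/usr/bin/env bash
# Sketch: one CLI-level round-trip test expressed as a bats-core file.
# Remote and bucket names are hypothetical placeholders.
set -euo pipefail
mkdir -p test

cat > test/copy.bats <<'EOF'
setup() {
  RCLONE="rclone --config test/rclone.conf"
  SRC="test/_data/src"
  mkdir -p "$SRC"
  echo "hello" > "$SRC/file.txt"
}

@test "copy local file to MinIO and back" {
  $RCLONE copy "$SRC" testminio:test-bucket/roundtrip
  $RCLONE copy testminio:test-bucket/roundtrip "test/_data/dst"
  diff "$SRC/file.txt" "test/_data/dst/file.txt"
}

teardown() {
  rm -rf test/_data/src test/_data/dst
}
EOF

echo "wrote test/copy.bats (run with: bats test/copy.bats)"
```

Compared to plain Bash scripts, bats gives per-test setup/teardown, TAP output, and a non-zero exit on the first failing assertion, which plays well with CI.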