You don't want to do that anyway because this command will also copy unwanted folders like /tmp or /proc, as well as all storage devices that are connected to your computer. And indeed, it would cause weird errors because the cache is included.
You'll be better off selecting exactly what you need.
Use a tool that is meant for creating archives and stream the result to Google Drive. It would also be a LOT faster, since it's one big file rather than MANY tiny ones. You can adjust the includes/excludes to your liking since it's just 'tar', and depending on the tar options it will also capture permissions and links.
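For instance, something like this (a sketch only; the 'gdrive:' remote name, the paths, and the excludes are placeholders for your own setup):

```shell
# Stream a compressed tar of just the directories you want straight
# to the remote as one big file. 'gdrive:' and the paths/excludes
# here are illustrative, not a recommendation.
tar czf - \
    --exclude='home/*/.cache' \
    -C / home etc \
  | rclone rcat gdrive:backups/system.tar.gz
```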
imho, i find rcat the scariest command of all.
perhaps i am ignorant as i never used it, and thus never would.
one big file needs just one glitch to destroy it.
"Note that the upload can also not be retried"
does rcat perform a checksum of tar file?
how do you ensure the source matches the dest?
when you need to restore the data, does tar store a checksum per file?
how would you know if there is an error/warning with tar? is there a log file or what?
as per the docs, "If you need to transfer a lot of data, you're better off caching locally and then rclone move it to the destination."
and that is what i do, i create a .7z file on a local drive or backup server and rclone move that .7z to the cloud.
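That workflow is roughly the following (archive name, paths, and the 'remote:' name are placeholders):

```shell
# Build the archive locally first, then hand the finished file to
# rclone. This way a failed upload can simply be retried, since the
# source file still exists. Names here are illustrative.
7z a /backups/photos-2024.7z /data/photos
rclone move /backups/photos-2024.7z remote:archives/
```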
"The data must fit into RAM", so would that mean that the computer would need more free RAM than the total size of all the files combined? sure, i guess that tar would compress the source files.
While that's true, I wouldn't use rclone copy either to copy an entire drive. Depends on the use case. If someone is really looking for a 'backup', they should be using a backup suite.
It's no different than a copy except it doesn't know the entire file length up front. It's chunked, I believe, and each chunk is verified. What isn't verified is the streamed data from tar, but tar should error if there is an issue. But let's all realize this isn't a 'backup'.
Considering it's a stream, you can't. If you'd like to write to an interim file, then that's cool.
tar is very good at verifying its data. It is just a tar file. To restore, you can pipe 'rclone cat' into tar to stream it back down, or just copy the tar file locally and untar it. It's just tar. You could also tee the output and generate a checksum on the fly, like 'tar cfz - files | tee >(sha256sum) | rclone rcat ......', but I really think that is a different question.
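Spelled out as a runnable sketch, with a local file standing in for the rclone destination so it works anywhere, that checksum-on-the-fly idea looks like:

```shell
# Create something to archive, then hash the stream while writing it
# out: tee keeps a copy of the bytes, sha256sum hashes the very same
# stream. With rclone you would instead use process substitution,
# e.g. 'tee >(sha256sum > stream.sha256) | rclone rcat remote:path'.
mkdir -p demo && echo "hello" > demo/a.txt
tar czf - demo | tee backup.tar.gz | sha256sum | awk '{print $1}' > stream.sha256
# The file on disk contains the same bytes we hashed, so its hash
# matches stream.sha256:
sha256sum backup.tar.gz
```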
If you have the space and time, then sure?
That's assuming you have enough space to write locally. You don't really need the interim file with shell magic though.
You can produce a log with tar. You can check the return code. You can capture standard out/error. It's like any other command.
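One gotcha when piping tar into rclone: in a plain pipeline, '$?' only reflects the last command, not tar. A runnable bash sketch, with 'cat > file' standing in for the 'rclone rcat remote:path' end (paths are illustrative):

```shell
#!/usr/bin/env bash
# Log tar's warnings/errors and check its exit status explicitly.
# bash's PIPESTATUS array holds the exit code of each pipeline stage;
# 'cat > backup.tar.gz' stands in for 'rclone rcat remote:path'.
mkdir -p demo && echo "data" > demo/a.txt
tar czf - demo 2> tar.log | cat > backup.tar.gz
if [ "${PIPESTATUS[0]}" -ne 0 ]; then
    echo "tar failed; see tar.log" >&2
    exit 1
fi
echo "tar exited cleanly"
```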
Let's all remember, this isn't a backup. This is copying an entire drive remotely. If you want a backup, you should be using a backup utility. Restic is an example.
I get your point but I’m not sure this is faster as it eliminates parallel transfers. There’s probably an inflection point with a certain amount of very small files due to the creation time of each new file, but I think in most cases this would actually be slower.
well, there is no doubt what PURGE means. simple and pure.
any monkey knows it is a tool of destruction, we monkeys know the consequences of misusing it.
rcat is fuzzy wuzzy, humany, what is a cat, why is there an r in front of it???
humans think, I like felines, so I must like to rcat my data away to the cloud.
and then the human wakes up to realize it owns a dog named butch...
I would convert ~ to the full path since not all tools support that.
Be advised that RClone has an exclude syntax, so you could exclude a specific list of directories like the cache. I do that for my Mac .DS_Store files, and something else if I remember right.
I agree that copying the OS is wasteful.
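The exclude syntax mentioned above looks roughly like this (the 'remote:' name, paths, and patterns are placeholders):

```shell
# Skip Mac metadata and cache/trash directories during a copy;
# '**' matches any number of path segments below the pattern.
rclone copy /home remote:backup/home \
    --exclude '.DS_Store' \
    --exclude '.cache/**' \
    --exclude '.Trash/**'
```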
Also, some S3 buckets have the ability to save object versions in case you need a fuller "backup" solution. I would check out BackBlaze or Wasabi; they have different pricing models. Wasabi may not have automated S3 lifecycle management for versioning yet, but it is on their roadmap. I use Wasabi, so I'm not as familiar with BackBlaze, but I see it mentioned a lot.
RClone does a lot of validation, including checksums. It will also keep the destination copies at approximately the same timestamp as the source. Again, not all destinations support everything RClone does.
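If you want to verify after the fact, 'rclone check' compares source and destination, by hash where the backend supports it (the paths and 'remote:' name here are placeholders):

```shell
# Compare the local source against the remote copy; --one-way only
# requires that every source file exists and matches on the remote.
rclone check /data/photos remote:photos --one-way
```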