When uploading a folder to B2 via the S3 interface while some files in the source directory are changing, I see multiple versions of some of the files in the B2 web interface once the transfer is done.
See screenshots:
It looks like different versions of the files are present, even though the lifecycle settings for the bucket in question are set to "Keep only the last version".
What is your rclone version (output from rclone version)
rclone v1.52.2-232-gff843516-beta
os/arch: linux/amd64
go version: go1.14.6
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Ubuntu 20.04, 64 bit
Which cloud storage system are you using? (eg Google Drive)
B2, S3 interface
The command you were trying to run (eg rclone copy /tmp remote:tmp)
I also see that if I run rclone purge on the uploaded folder and then rclone ls on that same folder, it appears to be gone, but in the B2 web interface it remains present with an asterisk next to the name of the "folder" as well as next to every file under it...
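For reference, a sketch of the sequence that shows the discrepancy (the remote and bucket names are placeholders, and the comment about hide markers is my assumption about what the asterisk means):

```shell
# Delete the uploaded folder and all its contents
rclone purge remote:bucket-name/folder1/path/to/backup/folder

# Listing afterwards shows nothing...
rclone ls remote:bucket-name/folder1/path/to/backup/folder

# ...yet the B2 web UI still shows the folder and its files marked with an
# asterisk (presumably hidden versions / hide markers) until the bucket's
# lifecycle rules next run.
```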
What appears to be happening is that rclone is getting errors like this:
2020/07/31 14:35:10 ERROR : path/to/backup/folder/userdata/activemq/localhost/KahaDB/db-3.log: Failed to copy: Put "https://s3.eu-central-003.backblazeb2.com/bucket-name/folder1/path/to/backup/folder/userdata/activemq/localhost/KahaDB/db-3.log?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=xyz%2F20200731%2Feu-central-003%2Fs3%2Faws4_request&X-Amz-Date=20200731T123509Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-mtime&X-Amz-Signature=bdcf5bb50284d1a293547eb814cb5fe7c25dacf6b5a65949e8d5316e3b51bf52": can't copy - source file is being updated (mod time changed from 2020-07-31 14:35:08.989060021 +0200 SAST to 2020-07-31 14:35:09.885085009 +0200 SAST)
This then causes rclone to abandon the transfer and try it again.
That explains why you get the versions.
If you upload a file, change it and upload it again I expect you'll get the same versions.
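A minimal way to reproduce that, assuming a remote named `remote:` pointing at the bucket over the S3 interface (bucket and path names are placeholders):

```shell
# Upload a file, change its content, then upload it again. Each upload of
# changed content should leave an extra version visible in the B2 web UI
# until the lifecycle rules clean it up.
echo "first"  > testfile.txt
rclone copy testfile.txt remote:bucket-name/version-test

echo "second" > testfile.txt
rclone copy testfile.txt remote:bucket-name/version-test
```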
There are two things to note here:

- The existence of extra versions is not something rclone has control over when using the S3 protocol. Using the native B2 protocol, rclone can control it directly.
- The files-changing warning can be turned off with --local-no-check-updated, which means rclone will do its best to transfer the file using the size it read at the start of the transfer.
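As a sketch, the two workarounds might look like this (the remote names are placeholders; `--local-no-check-updated` is the flag mentioned above, and `rclone cleanup` on a native B2 remote removes old versions without waiting for lifecycle rules):

```shell
# 1. Don't abort when a source file changes mid-transfer; rclone transfers
#    the file using the size it read at the start of the transfer.
rclone copy --local-no-check-updated /path/to/source remote-s3:bucket-name/folder1

# 2. With a remote configured on the native B2 backend, old versions can be
#    removed immediately instead of waiting for the daily lifecycle run.
rclone cleanup remote-b2:bucket-name
```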
Thank you @ncw. Yeah, I saw those errors causing the retries. I just thought that when rclone retries the upload and copies the files that have changed in the meantime, it would overwrite the previous file rather than create versions.
This is probably a B2 bug then? If the bucket is set to keep only the last version, there shouldn't be any extra versions as far as I'm concerned...
I've contacted Backblaze; according to them the lifecycle rules are applied once a day, so the extra versions get deleted then. So it's not a big problem, but it's still somewhat unfortunate that the behaviour differs from the B2 native API...