Objects synced to b2 gone after one hour

What is the problem you are having with rclone?

I'm trying to sync local files to B2. The sync runs successfully, but after about one hour (±5 min; I measured), new files on the remote are gone and ones that were deleted are restored. I checked this by running lsl before the sync, immediately after the sync, and one hour later, and then doing a 3-way diff of the listings.
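
Roughly, the check looked like this (the output file names are just illustrative; the sync command from below was run between the first two listings, the third listing was taken about an hour after that, and a diff of the last two shows the files that vanished):

rclone lsl b2-rclone:mybucket > before-sync.txt
rclone lsl b2-rclone:mybucket > after-sync.txt
rclone lsl b2-rclone:mybucket > one-hour-later.txt
diff after-sync.txt one-hour-later.txt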

I'm up to date on my bill, so my only theories are that rclone isn't finalizing the sync uploads correctly, that it's using a cache for the second lsl (and the files never actually got uploaded), or that B2 is actually losing them. I also verified that my file names and paths aren't too long.

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0
- os/version: Microsoft Windows 10 Pro 2009 (64 bit)
- os/kernel: 10.0.19043.1526 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.17.2
- go/linking: dynamic
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Backblaze B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone.exe sync --exclude "System Volume Information/**" --exclude "$RECYCLE.BIN/**"  --exclude DumpStack.log --exclude DumpStack.log.tmp --max-size=500K --modify-window 10s  d:\ b2-rclone:mybucket

The rclone config contents with secrets removed.

[b2-rclone]
type = b2
account = ...
key = ...

A log from the command with the -vv flag

Apologies for all the filtering. The full sync log ended up being 20 MB, and none of it looked relevant to my problem. I included exactly what I filtered out in the grep pipeline, though.

$ cat 2022-02-24-02-sync |
    grep -v 'Size and modification time the same' |
    grep -v 'Excluded$' | grep -v 'Unchanged skipping$' |
    grep -v 'Deleted$' | grep -v 'sha1 differ$' |
    grep -v '(replaced existing)$' |
    grep -v '(new)$' |
    grep -v 'sha1 = .* OK$' |
    grep -v ' Modification times differ by' |
    grep -v 'sha1 =.*B2 bucket' |
    grep -v 'sha1 =.*Local file'

2022/02/24 00:34:45 DEBUG : rclone: Version "v1.57.0" starting with parameters ["C:\\rclone\\rclone.exe" "sync" "-vv" "--exclude" "System Volume Information/**" "--exclude" "$RECYCLE.BIN/**" "--exclude" "DumpStack.log" "--exclude" "DumpStack.log.tmp" "--max-size=50K" "--modify-window" "10s" "d:\\" "b2-rclone:mybucket"]
2022/02/24 00:34:45 DEBUG : Creating backend with remote "d:\\"
2022/02/24 00:34:45 DEBUG : Using config file from "C:\\Users\\megawolf\\AppData\\Roaming\\rclone\\rclone.conf"
2022/02/24 00:34:45 DEBUG : fs cache: renaming cache item "d:\\" to be canonical "//?/d:/"
2022/02/24 00:34:45 DEBUG : Creating backend with remote "b2-rclone:mybucket"
2022/02/24 00:35:18 DEBUG : B2 bucket mybucket: Waiting for checks to finish
2022/02/24 00:35:18 DEBUG : B2 bucket mybucket: Waiting for transfers to finish
2022/02/24 00:35:27 DEBUG : Waiting for deletions to finish
2022/02/24 00:35:29 INFO  :
Transferred:        1.146 MiB / 1.146 MiB, 100%, 35.817 KiB/s, ETA 0s
Checks:             32155 / 32155, 100%
Deleted:              159 (files), 0 (dirs)
Transferred:           79 / 79, 100%
Elapsed time:        43.8s

2022/02/24 00:35:29 DEBUG : 21 go routines active

That's a strange problem... From your log it doesn't look like anything went wrong.

Can you

  • see if it is still happening?
  • see if you can make a reproducer that I could run locally?

I'd be surprised if this was the case - B2 is a heavily used backend.

Rclone doesn't cache lsl, assuming you aren't using the cache backend.

Rclone checks that the files are uploaded properly, so I'd be really surprised if they didn't get uploaded...
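
If you want to verify the uploads independently of sync, something like the following would compare the local files against the bucket by size and hash (same filters as your sync command; this is just a sketch):

rclone check --one-way --exclude "System Volume Information/**" --exclude "$RECYCLE.BIN/**" --exclude DumpStack.log --exclude DumpStack.log.tmp --max-size=500K d:\ b2-rclone:mybucket

--one-way only checks that the source files are present and matching in the destination, which is what matters here.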

I've never heard of that either. I guess it might be a one-off glitch?

You'd get an error on upload if they were too long.

Hello and welcome to the forum,

rclone sync and rclone lsl are not daemons; they do not run in the background and do not cache anything.
So if rclone sync successfully uploaded the files and then the newly synced files are gone,
I would focus on the B2 website and look at a log file there.

After the rclone sync, which objects go missing?
--- source files that did not exist in the dest, so copied for the first time?
--- source files that differ from an existing dest file, so the source overwrites the dest?

If versioning is enabled, you have to look at that too: what did B2 do with the existing dest file?

This should be easy to replicate (a rough sketch follows the steps):

  1. in a local directory on the source machine:
    --- modify a file that already exists in the dest; just changing the modtime timestamp is enough.
    --- create a new file in the same dir.
  2. using that same dir as the source, run rclone sync -vv; -vv will output rclone debug info.
  3. post the full debug log; we really need to see a debug log.
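
Something like this, as a rough sketch (the repro paths are just placeholders; rclone touch bumps the modtime of an existing file and creates the file if it does not exist yet):

rclone touch d:\repro\existing-file.txt
rclone touch d:\repro\brand-new-file.txt
rclone sync -vv --log-file=sync.log d:\repro b2-rclone:mybucket/repro
rclone lsl b2-rclone:mybucket/repro

Then run the lsl again an hour later and compare the two listings.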

That gives me an idea: it could be a lifecycle rule you've set up on the B2 bucket. They can do things to objects after a certain time.
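
If you have the B2 command line tool installed, something along these lines should print the bucket's settings, including any lifecycle rules (the exact command name is an assumption on my part and differs between b2 CLI versions, so check b2 --help first):

b2 get-bucket mybucket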

Good idea, but I'm not sure it applies to the OP's case,
as I assume the OP's issue is not a one-time thing.

"Lifecycle rules are applied once a day. The smallest number of days you can set in a rule is 1. Rules will be applied at the next daily run after that number of days has passed. "

It's set to "Keep prior versions for this number of days: 60." I also noticed my config doesn't have --b2-hard-delete, but that shouldn't be an issue.
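
If it helps, listing with --b2-versions should show what B2 is keeping as old versions (the path is just an example):

rclone lsl --b2-versions b2-rclone:mybucket/some/dir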

Any thoughts on these?

Well, this is embarrassing. I think I figured it out. I realized I have another machine periodically syncing an old version of the data. I'm still confirming, but rclone and B2 are fine; I just forgot about an old cron job.

I'm glad it isn't B2 or rclone losing your objects, so I'm happy to receive your report 🙂
