Advice for Migrating from cache to mount for b2

hello all, I am on rclone v1.56.0 on Windows 2019

I am sure I am just misreading the valid file name information in the documentation, but I could use some pointers.

as has been pointed out to me, the entire cache concept is deprecated. I have a setup where I have been doing backups from Windows to B2 for a number of years using cache (a large number of files with a small delta, trying to minimize actual API calls)

so it SEEMS like the logical answer is to migrate to mount:

very straightforward:
set RCLONE_CONFIG=C:\rclone\rclone.conf
set RCLONE_EXCLUDE_FROM=c:\rclone\exclude.rclone
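(The contents of exclude.rclone are not shown in this thread; a purely hypothetical example of what such a filter file can look like, with illustrative patterns only:)

```
# Hypothetical contents of c:\rclone\exclude.rclone -- one rclone
# filter pattern per line; any path matching a pattern is excluded.
*.tmp
~$*
Thumbs.db
**/Temp/**
```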

so from:

rclone copy "M:\FinanceOperations" cache:"FinanceOperations/" -vv -P

to mounting:

rclone mount b2:/ c:\b2 --vfs-cache-mode full --cache-dir c:\b2cache --vfs-cache-max-age 96h --fast-list --vfs-read-chunk-size=128M --vfs-read-chunk-size-limit=2048M --buffer-size=512M --max-read-ahead=512M --transfers=16 --checkers=8 -v -P

and doing a copy

rclone copy "M:\FinanceOperations" c:\b2\XXXXX\FinanceOperations\ -vv -P

EDIT: M: drive is a network drive letter on a Windows server

what I am running into is all sorts of files and folders with names that are not fully compliant (for Windows, it seems; usually created on Macs many years ago) that cache didn't seem to care about

as an example:

ERROR : Client Correspondence/Contracts-Purchase Orders/Client Contracts/2016/XXXX/XXXXXXNegotiation Materials/XXXXX Bonus Calculator.4.29.16.xlsx: Failed to copy: The request could not be performed because of an I/O device error.

here is what it looks like when using cache:

DEBUG : Client Correspondence/Contracts-Purchase Orders/Client Contracts/2016/XXXXX/XXXXXNegotiation Materials/XXXXX Bonus Calculator.4.29.16.xlsx: Sizes identical
DEBUG : Client Correspondence/Contracts-Purchase Orders/Client Contracts/2016/XXXXX/XXXXXNegotiation Materials/XXXXX Bonus Calculator.4.29.16.xlsx: Unchanged skipping

rclone lsf "b2:/XXXXXX/Accounting/Client Correspondence/Contracts-Purchase Orders/Client Contracts/2016/XXXXXXNegotiation Materials/" works fine and shows the files there, so it can handle this fine via cache but not via mount, if that makes sense

the actual folder name has a character I know means it was created on a Mac; it looks like a filled-in *. I don't even know the ASCII code for it


does nothing on a mount....

please post

  • config file, redact id/secret/password/token
  • debug log

Why not just do a copy directly to B2? Why use rclone to mount and then rclone to copy into that mount? If you are using --fast-list, it should already be minimizing API calls.

I know you said

but I am skeptical that this would help over a direct call. Small deltas shouldn't matter. You could maybe use --no-check-dest to minimize calls (never mind, that will transfer everything). Or play with --no-traverse, but I suspect that won't help.

  • windows itself has no problem with a file/folder with a name of XXXXXNegotiation

  • for what it is worth, not sure the double use of rclone is the best way:
    using rclone copy to an rclone mount

  • as @jwink3101 suggests, perhaps just
    rclone copy M:\FinanceOperations b2:
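Sketching that suggestion out in full, under the assumption that the bucket layout matches the mount-based copy earlier in the thread (XXXXX stands in for the redacted bucket name, as elsewhere):

```shell
:: Direct copy to B2, no mount in between.
:: Flags carried over from the commands above; XXXXX is the
:: redacted bucket name used elsewhere in this thread.
set RCLONE_CONFIG=C:\rclone\rclone.conf
set RCLONE_EXCLUDE_FROM=c:\rclone\exclude.rclone

rclone copy "M:\FinanceOperations" b2:XXXXX/FinanceOperations ^
  --fast-list --transfers 16 --checkers 8 -vv -P
```

On bucket-based remotes like B2, --fast-list trades memory for fewer listing transactions, which is the part that matters for API cost here.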

I guess the assumption is the mount cache would be faster and have fewer API calls to compare files to see if they are newer. Is that a faulty assumption? I guess I have to test, but with that many files I'm concerned about crazy B2 API charges, which can add up in my experience.

how many files?
what is the total size of all files?

I guess this is part of the question - about using rsync or robocopy - I just trust rclone the most :slight_smile:

Total size is about 3 TB, maybe 100-200k files, but very few daily changes

not sure your use case, but it seems that api calls/costs are a concern.

i use cloud storage mostly for backup. veeam backup files and other stuff.
i use a combination of:

  • wasabi, an s3 clone, for hot storage. $6.00/TB/month. no charge for api calls or downloads.
  • aws s3 glacier deep archive, for cold storage at $1.00/TB/month

if that sounds interesting, check out this post.
where i have an identical source and dest with 1,000,000 files. wasabi took 33 seconds to sync.

just saw this post, similar to your issue of removing the cache and having issues with file names.
not sure if that issue is the same as your issue.

If you can handle the 90 day object minimum and the extra $1/month, maybe move to Wasabi? Free API calls and download.

if you upload veeam backup files, and make a request, wasabi will change the period from 90 days to 30 days.

It’s a possibility but I’m first trying to figure out why this works in cache but not in mount

boy this sure does sound like what's going on - I guess I am waiting for the next build :slight_smile:

trying to interpolate whacky characters that cache mode doesn't seem to have a problem with

I didn't want to confuse Rclone fails to make cache dir if file/folder name has illegal characters on Windows · Issue #5360 · rclone/rclone · GitHub

I just tested rclone-v1.57.0-beta.5640.2cefae51a-windows-amd64

mounting as a drive letter with --network-mode seems to have fixed this problem!
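For anyone landing here later, a sketch of what the working invocation looks like, assuming the flags from the earlier mount command are kept and only --network-mode is added:

```shell
:: Same mount as before, with --network-mode added (Windows only).
:: --network-mode presents the mount as a network drive via WinFsp,
:: which is what appears to avoid the illegal-filename errors above.
rclone mount b2:/ c:\b2 --network-mode ^
  --vfs-cache-mode full --cache-dir c:\b2cache --vfs-cache-max-age 96h ^
  --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 2048M ^
  --buffer-size 512M --transfers 16 --checkers 8 -v
```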

initial delta comparison for a 10,000 file test took 1 min 11s; after caching, the next comparison took 5 seconds and did not have any issues with any files (the first run copied the 22 files it had problems with)

second test on a folder with 20,000 files worked without errors - 1m 51s initial comparison - 2nd comparison from cache took 8 seconds

Edit: I stand corrected; when doing larger copies, the whacky filename problems returned:

2021/08/20 19:31:11 ERROR : PRODUCTION/Production - 2016/RNL/UF TV Tag- Gummy and Fizzy/RL Tag Wrap 6316/RL Tag Talent Contracts & Invoices/RL Tag Talent Recap 6316.numbers: Failed to copy: The request could not be performed because of an I/O device error.