Is there a way to overcome backend compatibility problems like
Failed to copy: failed to make directory: Number of subfolders in folder is limited to 25000 (Error 403)
More specifically, I'm trying to back up a Proxmox Backup Server datastore to OpenDrive, but I'm running into the "Maximum Files Per Folder" limit of 25k.
Is there a solution or a workaround?
I'd like to see a backend that sits on top of another filesystem and automatically remaps folder/file names according to a definable strategy, so sync can be used without tracking renames, similar to the renaming capability in crypt.
For example, a hashing strategy which places all files beginning with 'aa' into a folder named 'aa'.
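A tiny sketch of what such a sharding rule could look like; the helper name and the two-character prefix length are my own assumptions for illustration, not an existing rclone feature:

```shell
# Hypothetical sharding rule: the shard folder is the first two characters
# of the file name, so a file "aabbcc..." lands in a folder named "aa".
shard_dir_for() {
    local name
    name=$(basename "$1")
    printf '%s\n' "${name:0:2}"
}

shard_dir_for "aabbcc0123.bin"   # prints "aa"
```

A remapping backend would only need to apply such a pure function on the way in and strip the prefix folder again on listing, which is why no rename tracking would be required.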
This is a generic problem about capabilities, not a specific error, so config changes can't solve it.
2025/08/12 05:02:34 ERROR : .chunks/916d/916d153c66745827ebbad7897eccba2f0bd8adb05f1b2f94b8777a9cb646db9c: Failed to copy: failed to make directory: Number of subfolders in folder is limited to 25000 (Error 403)
Nothing like this exists in rclone today, and as far as I am aware there are no plans to add anything similar.
You will have to find a workaround outside rclone.
One option I can think of (without changing your destination remote) is to use a backup program like restic. It is well integrated with rclone and stores files (often grouped together) in a sharded directory structure.
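As a sketch of that integration: restic can store its repository on any rclone remote via its built-in rclone backend. The remote name `opendrive:`, the repository path, and the password handling below are illustrative assumptions:

```shell
# Sketch only: back up the PBS datastore into a restic repository stored
# on an rclone remote. "opendrive:" and the paths are placeholder names.
export RESTIC_PASSWORD='use-a-real-secret'

# One-time repository initialisation on the rclone remote.
restic -r rclone:opendrive:restic-repo init

# Back up the datastore; restic consolidates many small files into larger
# pack files, so the per-folder file-count limit on the remote is never hit.
restic -r rclone:opendrive:restic-repo backup /path/to/pbs-datastore
```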
Also have a look at the latest PBS, which supports S3 as a destination natively.
The problem here, I think, is that you cannot combine it with sync or copy. It does all transformations "in place", so you would have to create a copy of the whole source somewhere first anyway.
A Proxmox Backup Server (PBS) datastore is a rather tricky thing to deal with. It can contain millions of files, and I would not waste time trying to back it up directly to consumer-grade cloud storage like OneDrive, Dropbox, OpenDrive etc. These services are simply not designed to store and handle huge numbers of files. The maximum number of files per directory is one limit; I am quite sure they have an overall file-count limit too. And any backup will require listing and comparing all these files… Good luck here with throttling :)
The two solutions I see are either proper S3 storage (which is directly supported by PBS) or some backup software which can not only spread files across multiple directories but also consolidate them into chunks, like restic… but anything similar will do. I would probably go with rustic, which is restic on steroids. It allows chunks up to 4 GB, for example, so hundreds if not thousands of PBS datastore files will fit into one file stored in the cloud. I use it personally for most of my backup needs and I am very happy with it.
I’m not planning on using convmv. It works for copy and sync too.
I’m sure I could use convmv to fix the few I have already uploaded, but they can be blown away if need be.
There isn’t a maximum total-files limit for OpenDrive, so that’s not an issue.
There isn’t a problem with throttling either, since everything is slow there and I’m patient. The source couldn’t handle great speeds anyway, so it’s a good match, and PBS has a feature that greatly reduces duplicate data and transfer bandwidth.
None of those S3 providers offers unlimited data at a decent rate compared with OpenDrive.
Since rclone doesn’t seem to allow moving files across directory boundaries, I’ve decided to split the .chunks dir into a lot of small pieces and symlink those back into the .chunks dir. Then I can tell rclone to ignore the .chunks dir and sync the split dirs instead. PBS seems not to mind symlinks in place of the hex folder names, and the layout is static. Thanks to this idea: Create Chunkstore on filesystems not supporting over 65k hard links per directory
# only local
function relocate_chunks_local_link() {
    # Expects ${local_base_dir} to be set by the caller.
    pbs_chunks_dir="${local_base_dir}/.chunks"
    new_chunks_dir="${local_base_dir}/.chunks_real"
    snapshot_dir="${local_base_dir}/.snapshots"
    create_btrfs_snapshot # create a btrfs snapshot before beginning
    let cnt=0
    for dir in "${pbs_chunks_dir}"/*; do
        let cnt+=1
        # echo ${dir}
        local hex=$(sed -E 's/^.*(....)$/\1/' <<< "${dir}")   # last 4 chars: the hex folder name
        local pre=$(sed -E 's/^(..).*$/\1/' <<< "${hex}")     # first 2 hex digits: the shard
        local new_chunks_subdir="${new_chunks_dir}/${pre}"
        local new_chunks_hexdir="${new_chunks_subdir}/${hex}"
        # printf "%s: %d - %s\n" "$pre" "0x${hex}" "${hex}"
        mkdir -p "${new_chunks_subdir}"
        mv -uf "${dir}" "${new_chunks_subdir}"                # move the hex folder into its shard
        ln -s "${new_chunks_hexdir}" "${pbs_chunks_dir}"      # symlink it back under .chunks
        # [ $cnt -gt 3 ] && break
    done
}
I ran this and it works a treat. I did something similar to move the remote files via an rclone mount.
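For reference, the subsequent sync step could look something like the sketch below; the remote name, destination path, and local path are placeholders, not the exact command I used:

```shell
# Sync the datastore root but skip the original .chunks directory, which now
# holds only symlinks; the sharded .chunks_real tree is synced in its place.
# "opendrive:pbs-datastore" and /path/to/datastore are placeholder names.
rclone sync /path/to/datastore opendrive:pbs-datastore \
    --exclude "/.chunks/**"
```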