Advanced folder and file name remapping

Is there a way to overcome backend compatibility problems like

    Failed to copy: failed to make directory: Number of subfolders in folder is limited to 25000 (Error 403)

More specifically, I'm trying to back up a Proxmox Backup Server datastore to OpenDrive, but I'm running into the "Maximum Files Per Folder" limit of 25k.
Is there a solution or a workaround?
I'd like to see a backend which can sit on top of a filesystem and automatically remap folder/file names according to a definable strategy, so sync can be used without tracking renames. Similar to the crypt renaming capability.
For example, a hashing strategy which places all files beginning with 'aa' into a folder named 'aa'.
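The sharding strategy described above can be sketched as a tiny shell helper. This is purely illustrative; `shard_path` is a hypothetical name, not an existing rclone feature:

```shell
# Map a file name to a sharded destination path whose first directory
# level is the file name's first two characters (e.g. 'aa').
shard_path() {
  local name="$1"
  local prefix="${name:0:2}"   # first two characters select the shard
  printf '%s/%s\n' "${prefix}" "${name}"
}

shard_path "aabbccdd.bin"   # prints: aa/aabbccdd.bin
```

With at most 256 shards for hex-named files, no single directory would ever approach the 25,000-entry limit.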

rclone --version
rclone v1.70.3

  • os/version: debian 12.11 (64 bit)
  • os/kernel: 6.8.12-13-pve (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.24.4
  • go/linking: static
  • go/tags: none

Opendrive

Could be copy or sync

rclone copy '/mnt/datastore/.snapshots/@rclone-ExtBackup-ec91611c4af75e2d9f453bede187b311-9f036d03e16ffa1316984a167d7214df-1754321668' 'backup-enc-opendrive:' -M --log-file='/tmp/rclone-ExtBackup-ec91611c4af75e2d9f453bede187b311-9f036d03e16ffa1316984a167d7214df-1754321668.log' --exclude '/.~lock.*' --exclude '/.snapshots' --log-level INFO --metadata --create-empty-src-dirs --multi-thread-streams 32 --multi-thread-write-buffer-size 4Mi

This is a generic question about capabilities, not a specific error, so configs can't help solve it.

2025/08/12 05:02:34 ERROR : .chunks/916d/916d153c66745827ebbad7897eccba2f0bd8adb05f1b2f94b8777a9cb646db9c: Failed to copy: failed to make directory: Number of subfolders in folder is limited to 25000 (Error 403)

Nothing like this exists in rclone today, and as far as I am aware there are no plans to add anything similar.

You will have to find a workaround outside rclone.

One option I can think of (without changing your destination remote) is to use a backup program like restic. It is well integrated with rclone and stores files (often grouped together) in a sharded directory structure.
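For illustration, restic can use an existing rclone remote directly through its rclone backend; the repository path below is an assumption based on the remote name used earlier in this thread:

```shell
# Initialise a restic repository on the rclone remote (repo path is hypothetical).
# restic shards its pack files into data/00 .. data/ff subdirectories,
# so no single directory accumulates an unbounded number of entries.
restic -r rclone:backup-enc-opendrive:restic-repo init

# Back up the datastore through the same remote.
restic -r rclone:backup-enc-opendrive:restic-repo backup /mnt/datastore
```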

Also have a look at the latest PBS, which supports S3 as a destination natively.

Is there any way this could help?

rclone convmv --name-transform command=/path/to/my/program on the file names.

And what would be the reverse transformation for restore?

The --name-transform flag is also available in sync, copy, and move.

I think a regex might work, but I'm struggling a bit with the syntax. Are capture groups supported, as in extended regex?

The problem here, I think, is that you cannot combine it with sync or copy. It does all transformations "in place", so either way you would have to create a copy of the whole source somewhere first.

I'm not sure I understand. I don't need it to transform in place; the source must remain unchanged.

I just need to break up the huge .chunks dir into smaller subdirs on the destination.

That way I won’t hit the opendrive Maximum Files Per Folder limit.

I’m getting some success with this:

rclone convmv backup-enc-opendrive:.chunks --name-transform 'dir,regex=^(..)/${1}:${1}' --dry-run -vv -M --dump=filters

But how can I get it to insert a subdirectory in place of the `:`?

Shoot!

Note that --name-transform may not add path separators / to the name. This will cause an error.

I wonder why this limitation exists?

convmv is a path-name renaming tool. It does not copy or move files anywhere.

I think you are going to need to write a script to handle the rename+copy.

  1. run rclone mount and have your script work on that, or
  2. run rclone rcd and have your script work on that.
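A minimal sketch of such a rename+copy script, using the sharding scheme discussed earlier in the thread. The helper names are hypothetical and the rclone invocation is only echoed (a dry run); drop the `echo` once the mapping looks right:

```shell
# Map a PBS chunk directory name (e.g. 'ab12') to a sharded
# destination path ('ab/ab12').
sharded_dest() {
  local chunk="$1"
  printf '%s/%s\n' "${chunk:0:2}" "${chunk}"
}

# Build (and here merely print) the rclone command that copies one
# chunk directory to its sharded location on the remote.
copy_chunk_dir() {
  local src_base="$1" chunk="$2" remote="$3"
  local dest
  dest=$(sharded_dest "${chunk}")
  echo rclone copy "${src_base}/.chunks/${chunk}" "${remote}:.chunks/${dest}"
}

copy_chunk_dir /mnt/datastore ab12 backup-enc-opendrive
# prints: rclone copy /mnt/datastore/.chunks/ab12 backup-enc-opendrive:.chunks/ab/ab12
```

Looping `copy_chunk_dir` over `"${src_base}/.chunks"/*` would cover the whole datastore.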

A Proxmox Backup Server (PBS) datastore is a rather tricky thing to deal with. It can contain millions of files, and I would not waste time trying to back it up directly to consumer-grade cloud storage like OneDrive, Dropbox, OpenDrive etc. These services are simply not designed to store and handle huge numbers of files. The maximum number of files per directory is one limit; I am fairly sure they have an overall file-count limit too. Then any backup will require listing and comparing all these files… Good luck here with throttling :)

The two solutions I see are either proper S3 storage (which is directly supported by PBS) or some backup software which can not only spread files across multiple directories but also consolidate them into chunks, like restic… but anything similar will do. I would probably go with rustic, which is restic on steroids. It allows chunks up to 4 GB, for example (so hundreds if not thousands of PBS datastore files will fit into one file stored in the cloud). I use it personally for most of my backup needs and I am very happy with it.

I'm not planning on using convmv; --name-transform works for copy and sync too.
I'm sure I can use convmv to fix the few I have already uploaded, but they can be blown away if need be.
There isn't an overall maximum-files limit on OpenDrive, so that's not an issue.
There isn't a problem with throttling either, since everything is slow there and I'm patient. The source couldn't handle great speeds anyway, so it's a good match, and PBS has a feature that greatly reduces duplicate data and transfer bandwidth.
None of those S3 providers offer unlimited data at a decent rate compared with OpenDrive.

Since rclone doesn't seem to allow moving files across directory boundaries during a transform, I've decided to split the .chunks dir into a lot of small pieces and softlink those back into the .chunks dir. Then I can tell rclone to ignore the .chunks dir and sync the split dirs instead. PBS seems not to mind softlinks in place of the hex folder names, and the layout is static. Thanks to this idea: Create Chunkstore on filesystems not supporting over 65k hard links per directory

Thx for sharing.

Unlimited and cheap storage does not exist. But that is another story :) covered extensively on this forum.

# only local
function relocate_chunks_local_link() {
  local pbs_chunks_dir="${local_base_dir}/.chunks"
  local new_chunks_dir="${local_base_dir}/.chunks_real"
  local snapshot_dir="${local_base_dir}/.snapshots"

  create_btrfs_snapshot # create a btrfs snapshot before beginning
  local cnt=0
  for dir in "${pbs_chunks_dir}"/*; do
    ((cnt++))
    local hex="${dir: -4}"                 # last 4 chars: the hex chunk dir name, e.g. 'ab12'
    local pre="${hex:0:2}"                 # first 2 hex chars select the shard, e.g. 'ab'
    local new_chunks_subdir="${new_chunks_dir}/${pre}"
    local new_chunks_hexdir="${new_chunks_subdir}/${hex}"
    mkdir -p "${new_chunks_subdir}"
    mv -uf "${dir}" "${new_chunks_subdir}"           # relocate the chunk dir into its shard
    ln -s "${new_chunks_hexdir}" "${pbs_chunks_dir}" # symlink it back so PBS still finds it
#   [ "$cnt" -gt 3 ] && break                        # uncomment to test on a few dirs first
  done
}
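For anyone who wants to try the idea safely first, here is a self-contained dry run of the same mv+symlink logic against a throwaway directory tree (the paths and fake chunk names are made up for the demo):

```shell
#!/usr/bin/env bash
# Demo of the relocate-and-symlink idea on a temporary tree.
set -eu

base=$(mktemp -d)
mkdir -p "${base}/.chunks/ab12" "${base}/.chunks/cd34"
mkdir -p "${base}/.chunks_real"

for dir in "${base}/.chunks"/*; do
  hex="${dir: -4}"   # e.g. 'ab12'
  pre="${hex:0:2}"   # shard 'ab'
  mkdir -p "${base}/.chunks_real/${pre}"
  mv "${dir}" "${base}/.chunks_real/${pre}/"
  ln -s "${base}/.chunks_real/${pre}/${hex}" "${base}/.chunks/${hex}"
done

# .chunks/ab12 is now a symlink into the sharded tree.
ls -l "${base}/.chunks"
```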

I ran this and it works a treat. I did something similar to move the remote files with a rclone mount.

