S3 (OVHcloud/other): (leading) slash as part of a path not working properly

What is the problem you are having with rclone?

Due to some software quirks, some of our systems use a slash (/) in their object names (mostly at the beginning).

My final goal is to get a list of changed files and transfer them with `copy --files-from-raw`, but that does not work (no files are found).

This results in outputs like this:

# rclone lsd --max-depth 2 backup-target:/s3-backup-4xuwupw9/
           0 2000-01-01 01:00:00        -1 /
           0 2000-01-01 01:00:00        -1 public
           0 2000-01-01 01:00:00        -1 /PROD

As you can see, there is a directory PROD below the slash directory.

I am able to list the files inside that directory with the following command:

rclone ls backup-target:/s3-backup-4xuwupw9/\/PROD

I am also able to use lsf to export a list:

# rclone lsf --absolute --recursive --files-only backup-target:/s3-backup-4xuwupw9/
/PROD/Rank_2025_07_14.csv
/last-successful-backup
/public/wa-manual.pdf

If I run the same command without `--absolute` the output looks like this:

# rclone lsf --recursive --files-only backup-target:/s3-backup-4xuwupw9/
/PROD/Rank_2025_07_14.csv
last-successful-backup
public/wa-manual.pdf

As you can see, the first filename still begins with a slash, because the slash is part of the directory name.

If I now use either output as input for `copy` with `--files-from` or `--files-from-raw`, the files from the slash folder are not respected and therefore not copied over.

Run the command 'rclone version' and share the full output of the command.

rclone v1.72.0

  • os/version: debian 12.12 (64 bit)
  • os/kernel: 6.1.0-39-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.24.11
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

OVHcloud S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy -vv --dump filters --files-from files_to_copy.txt backup-target:/s3-backup-4xuwupw9/ backup-target:/s3-backup-4xuwupw9/2025-12-09/
rclone copy -vv --dump filters --files-from-raw files_to_copy.txt backup-target:/s3-backup-4xuwupw9/ backup-target:/s3-backup-4xuwupw9/2025-12-09/

The rclone config contents with secrets removed.

[backup-target]
type = s3
provider = OVHcloud
endpoint = s3.de.cloud.ovh.net
region = de
access_key_id = XXX
secret_access_key = XXX
### Double check the config for sensitive info before posting publicly

A log from the command with the -vv flag

2025/12/09 12:45:31 DEBUG : Setting --config "/tmp/rclone.conf" from environment variable RCLONE_CONFIG="/tmp/rclone.conf"
2025/12/09 12:45:31 NOTICE: Automatically setting -vv as --dump is enabled
--- start filters ---
--- File filter rules ---
--- Directory filter rules ---
--- end filters ---
2025/12/09 12:45:31 DEBUG : rclone: Version "v1.72.0" starting with parameters ["rclone" "copy" "-vv" "--dump" "filters" "--files-from-raw" "files_to_copy.txt" "backup-target:/s3-backup-4xuwupw9/" "backup-target:/s3-backup-4xuwupw9/2025-12-09"]
2025/12/09 12:45:31 DEBUG : Creating backend with remote "backup-target:/s3-backup-4xuwupw9/"
2025/12/09 12:45:31 DEBUG : Using config file from "/tmp/rclone.conf"
2025/12/09 12:45:31 DEBUG : fs cache: renaming cache item "backup-target:/s3-backup-4xuwupw9/" to be canonical "backup-target:s3-backup-4xuwupw9"
2025/12/09 12:45:31 DEBUG : Creating backend with remote "backup-target:/s3-backup-4xuwupw9/2025-12-09"
2025/12/09 12:45:31 DEBUG : fs cache: renaming cache item "backup-target:/s3-backup-4xuwupw9/2025-12-09" to be canonical "backup-target:s3-backup-4xuwupw9"
2025/12/09 12:45:31 DEBUG : /: Excluded
2025/12/09 12:45:31 DEBUG : last-successful-backup: Need to transfer - File not found at Destination
2025/12/09 12:45:31 DEBUG : public/wa-manual.pdf: Need to transfer - File not found at Destination
2025/12/09 12:45:31 DEBUG : S3 bucket s3-backup-4xuwupw9 path 2025-12-09: Waiting for checks to finish
2025/12/09 12:45:31 DEBUG : S3 bucket s3-backup-4xuwupw9 path 2025-12-09: Waiting for transfers to finish
2025/12/09 12:45:31 DEBUG : last-successful-backup: size = 11 OK
2025/12/09 12:45:31 DEBUG : last-successful-backup: md5 = f9380fcd9118d567707d2e73c2ed1d94 OK
2025/12/09 12:45:31 INFO  : last-successful-backup: Copied (server-side copy)
2025/12/09 12:45:31 DEBUG : public/wa-manual.pdf: size = 1754070 OK
2025/12/09 12:45:31 DEBUG : public/wa-manual.pdf: md5 = 34ba932fee0e239fa60f8a7a3740898a OK
2025/12/09 12:45:31 INFO  : public/wa-manual.pdf: Copied (server-side copy)
2025/12/09 12:45:31 INFO  : 
Transferred:   	    1.673 MiB / 1.673 MiB, 100%, 0 B/s, ETA -
Checks:                 0 / 0, -, Listed 318
Transferred:            2 / 2, 100%
Server Side Copies:     2 @ 1.673 MiB
Elapsed time:         0.4s

2025/12/09 12:45:31 DEBUG : 13 go routines active

As you can see, the file inside the slash directory is not shown in the log, but the slash directory itself is listed as "Excluded".

If this is not an issue but intended behavior, it would be nice if someone could give me advice on how to copy those files. Unfortunately, copying or syncing without --files-from is not an option, as it would take days to complete. I therefore implemented an alternative based on this wiki entry: Big syncs with millions of files · rclone/rclone Wiki · GitHub
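For context, the approach from that wiki entry boils down to diffing two sorted listings. A minimal sketch, assuming `previous.txt` holds the listing saved from the last run (file names here are placeholders, not from the wiki):

```shell
# Take a fresh recursive listing and sort it so it can be compared.
rclone lsf --recursive --files-only backup-target:/s3-backup-4xuwupw9/ | sort > current.txt

# comm -13 suppresses lines unique to the first file and lines common to
# both, leaving only lines unique to current.txt, i.e. newly added files.
comm -13 previous.txt current.txt > files_to_copy.txt

# Keep the fresh listing as the baseline for the next run.
mv current.txt previous.txt
```

This avoids a full sync: only the names in files_to_copy.txt need to be passed to `copy --files-from-raw`.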

Thank you in advance for your help

Bernhard

Hey,

Replying to my own question:
I think I found a workaround. In my case it is possible to disable the absolute mode for lsf and then replace the slash at the beginning of each affected path with the "special" full-width slash ／ (U+FF0F) that rclone uses to encode slashes within names (documented here: Overview of cloud storage systems):

sed 's/^\//／/' < files_to_copy.txt > files_to_copy.corrected.txt

But this alone did not solve the issue.
I also had to add the following parameter to the copy command:
--s3-encoding "InvalidUtf8,Dot"
NB: the default value of this parameter is Slash,InvalidUtf8,Dot, i.e. the Slash encoding has to be removed.
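Putting the steps together, a sketch of the full workaround (bucket path and file names as used earlier in this thread; the assumption is that ／ is the full-width slash U+FF0F from rclone's encoding table):

```shell
# 1. Export the file list without --absolute, so a leading "/" only
#    remains where it is actually part of the directory name.
rclone lsf --recursive --files-only backup-target:/s3-backup-4xuwupw9/ > files_to_copy.txt

# 2. Replace a leading "/" with the full-width slash "／" (U+FF0F).
sed 's/^\//／/' < files_to_copy.txt > files_to_copy.corrected.txt

# 3. Copy with the Slash encoding removed from the default
#    "Slash,InvalidUtf8,Dot".
rclone copy -vv --s3-encoding "InvalidUtf8,Dot" \
  --files-from-raw files_to_copy.corrected.txt \
  backup-target:/s3-backup-4xuwupw9/ \
  backup-target:/s3-backup-4xuwupw9/2025-12-09/
```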

I really hope this will help someone in the future.

Best regards,
Bernhard


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.