I am looking into rclone's handling of duplicate names and haven't been able to find a common approach across commands or backends.
As an example, I have looked into rclone cat in this situation:
```
> echo "Hello Duplicate File!" > ./testfolder1/hello
> echo "Hello Duplicate Folder!" > ./testfolder2/hello/duplicates
> rclone copy ./testfolder1 remote: -v --stats=0
2023/02/01 12:02:38 INFO : hello: Copied (new)
> rclone copy ./testfolder2 remote: -v --stats=0
2023/02/01 12:02:44 INFO : hello/duplicates: Copied (new)
> rclone lsl remote:
       23 2023-02-01 12:02:29.539000000 hello
       25 2023-02-01 12:02:32.135000000 hello/duplicates
> rclone cat remote:hello   # What is the expected result?
```
The above is possible on remotes that allow duplicate names (e.g. Google Drive) and on bucket/object-based remotes (e.g. S3).
Interestingly, I get different results on Google Drive (Hello Duplicate Folder!) and S3 (Hello Duplicate File!), and neither warns that it is presenting only one of two possible answers.
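As far as I can tell, a plain listing does show both candidates, which is what makes the silent pick surprising. Roughly this is what I would expect to see (output shape sketched from memory and trimmed, so the exact fields may well differ):

```
> rclone lsjson remote:
[
  {"Path":"hello","Name":"hello","Size":23,"IsDir":false},
  {"Path":"hello","Name":"hello","Size":-1,"IsDir":true}
]
```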
So here is my question: what is the expected output of rclone cat in situations where the path refers to
- a file and a folder (having the same name), such as rclone cat remote:hello above
- multiple folders (having the same name)
- multiple files (having the same name)
and should any of these result in a warning or error?
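For completeness, the closest I have found to asking rclone directly what a single path resolves to is lsjson with --stat, which (as I read the docs) returns one JSON blob for the item pointed to, so presumably it has to make the same silent choice:

```
> rclone lsjson --stat remote:hello
```

If that choice were at least consistent across backends, or accompanied by a notice that other candidates exist, that would already help.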