Sort files by modified date after renaming a folder

What is the problem you are having with rclone?

How can I make Explorer show the creation date correctly?
Every time I rename a folder, all the files inside it get their modified date set to the current date, so they all end up with the same modified/creation date and I can't sort them.

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.0

  • os/version: Microsoft Windows 10 Pro 22H2 (64 bit)
  • os/kernel: 10.0.19045.3693 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.21.4
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)


C:\rclone\rclone.exe mount JMFCloud:jmf-cloud Y:\Cloud --vfs-case-insensitive --vfs-cache-mode full --vfs-write-back 10s --vfs-cache-max-age 300s --vfs-cache-poll-interval 60s --cache-dir %temp% --network-mode --dir-cache-time 5s --use-server-modtime --no-console

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.


[JMFCloud]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = sa-east-1
location_constraint = sa-east-1
storage_class = INTELLIGENT_TIERING
env_auth = true

Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

![untitled|690x442](upload://jqipRLo51Np9VO6BMDRipU7cZCu.jpeg)

welcome to the forum,

can you please post an rclone debug log that shows the problem?

How do I do that?

It's not exactly a problem that's occurring; I just want a way to sort the files by creation or modified date. Every time I rename a folder, all the files inside it change their creation date to the current date.
Look at the screenshot here:

--log-level=DEBUG --log-file=c:\path\to\rclone.log
change the path to match your system
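For example, the logging flags slot into your existing mount command like this (the log path below is just an illustration, point it wherever you like):

```shell
C:\rclone\rclone.exe mount JMFCloud:jmf-cloud Y:\Cloud ^
  --vfs-cache-mode full ^
  --log-level=DEBUG --log-file=C:\rclone\rclone.log
```

Reproduce the folder rename while the mount is running, then post the log file.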

ok, i was able to replicate your issue.
and i think i know what is going on.

you are using --use-server-modtime with an s3 remote.
it is not possible to truly rename a folder with s3 remotes.

let's say the folder is named first and you rename it to second

  1. rclone creates a new folder named second
  2. rclone copies the files from first to second
  3. rclone deletes first and the files inside it.

if you look at a debug log, you will see that.

2024/01/08 16:19:09 DEBUG : /test6: Rename: newPath="/test7"
2024/01/08 16:19:09 DEBUG : test6/file.ext: md5 = 9afcb2d16863f2df14342a4143c7e45d OK
2024/01/08 16:19:09 INFO  : test6/file.ext: Copied (server-side copy) to: test7/file.ext
2024/01/08 16:19:09 INFO  : test6/file.ext: Deleted

so when rclone creates the new file using --use-server-modtime, the file's time is set to the upload/creation time.
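To make the sequence concrete, here is a toy simulation (not rclone code, just an illustration of the copy-then-delete behaviour, with made-up timestamps): each object is copied under the new key, which stamps a fresh LastModified, and the original is deleted.

```python
# Toy model of a "folder rename" on S3: keys and timestamps are invented.
bucket = {
    "first/file1.txt": {"data": b"a", "last_modified": 1000.0},
    "first/file2.txt": {"data": b"b", "last_modified": 1001.0},
}

def rename_prefix(bucket, old, new, now):
    """Server-side copy every object under `old` to `new`, then delete the original."""
    for key in [k for k in bucket if k.startswith(old + "/")]:
        new_key = new + "/" + key[len(old) + 1:]
        obj = bucket.pop(key)
        # the copy is a brand-new object, so it gets a fresh LastModified
        bucket[new_key] = {"data": obj["data"], "last_modified": now}

rename_prefix(bucket, "first", "second", now=2000.0)
print(sorted(bucket))                                  # ['second/file1.txt', 'second/file2.txt']
print({o["last_modified"] for o in bucket.values()})   # {2000.0}
```

Both files survive the rename, but they now share a single server modtime, which is exactly what you are seeing in Explorer.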

There are no directories in S3 storage, only objects; by convention, parts of their names are displayed as folders, e.g.:

Three objects with the following names:

first/file1.txt
first/file2.txt
first/file3.txt

can be presented as:

└── first
    ├── file1.txt
    ├── file2.txt
    └── file3.txt

But in S3 "reality" there is no object named first

When the "folder" is renamed, what really happens is that the three objects' names are changed to:

second/file1.txt
second/file2.txt
second/file3.txt

And as @asdffdsa explained, by using --use-server-modtime you are asking rclone to use the server's modified time instead of the object metadata. After this operation, all three objects have the same modtime, as expected.

So is there a solution? If I remove --use-server-modtime from the command, will that fix it? Or won't that work?

Edit:
I tested this: removing --use-server-modtime does the trick and I can see all the files' creation dates, but folders with a lot of files take too long to load. It's not very practical.

It is your choice which is more important in this case - speed or modtime.

BTW - you can change your cache settings to improve the situation. Your existing settings use very short times.

--vfs-cache-max-age 300s --vfs-cache-poll-interval 60s --dir-cache-time 5s
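For instance, rclone accepts human-readable durations, so longer values might look like this (illustrative numbers only - tune them to how often your files actually change):

```shell
--vfs-cache-max-age 24h --vfs-cache-poll-interval 5m --dir-cache-time 24h
```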

Will this cache time make the folders load faster the second time I open a folder?
Will it only take longer the first time?

Yes. The second time, everything will already be cached... unless, as with your settings, things expire after a few seconds.

Myself, I set the cache max age and dir cache time to many days - not hours or seconds :) But it all depends on how you use it.

You can also add

      --vfs-refresh                            Refreshes the directory cache recursively in the background on start

It will preload the whole dir cache when the mount starts.

without --use-server-modtime, rclone has to make an additional api call for each and every file.
so keep in mind that, with aws, there is a financial cost to api calls....
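A rough sketch of why that adds up (hypothetical numbers, not rclone internals): the bucket listing already includes each object's LastModified, but the accurate mtime lives in per-object metadata, which costs one extra HEAD request per file:

```python
# Toy model: reading modtimes for 100 objects with and without
# --use-server-modtime (this simulates the request pattern only).
head_calls = 0

objects = {
    f"docs/file{i}.txt": {"last_modified": 2000.0 + i, "meta_mtime": 1000.0 + i}
    for i in range(100)
}

def modtime(key, use_server_modtime):
    global head_calls
    if use_server_modtime:
        # LastModified came back with the LIST response: no extra call
        return objects[key]["last_modified"]
    head_calls += 1  # one HEAD request to read the object's metadata
    return objects[key]["meta_mtime"]

for key in objects:
    modtime(key, use_server_modtime=True)
print(head_calls)   # 0

for key in objects:
    modtime(key, use_server_modtime=False)
print(head_calls)   # 100
```

At scale, that per-file difference is what shows up on the AWS bill.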

https://rclone.org/s3/#reducing-costs

Do you guys think my parameters are OK?

"C:\rclone\rclone.exe mount JMFCloud:jmf-cloud Y:\Cloud --vfs-case-insensitive --vfs-cache-mode full --vfs-write-back 10s --vfs-cache-max-age 300s --vfs-cache-poll-interval 60s --cache-dir %temp% --network-mode --dir-cache-time 80000s --use-server-modtime --no-console"

My situation is this: I have a small office where 5 different people access these files through mounts on their machines. It is synced and fast; if someone changes something, it automatically changes for everyone. But I don't know if all those parameters are good and essential for speed and costs.

I see that removing --use-server-modtime will increase the time to load the files and the number of API calls, but it will show the original creation time of the files, even if I rename a folder or move a file.

If you guys could share the best setup for my situation, I would appreciate it.

only you can decide about that tradeoff.
the more api calls, the more cost.
you can monitor that at aws s3 website.

or choose another s3 provider, that does not charge for api calls, such as wasabi or idrive.
or a provider that supports something that is close to real-time changenotify such as gdrive, onedrive, or dropbox.

that means rclone will not notice changes at aws for 80000s, which is not what you want.
you would need to set that to a smaller value, which would use more api calls.

fwiw, when i set up a small office, i prefer to keep the files local
and use the cloud for backups.

About keeping the files local: I need to use rclone because here in my country we have a law requiring a log of the files - who created them, who deleted them, and things like that. Before that law, I used that solution of local files with cloud backup.

ok, but what is wrong with using the operating system for that?
and how do you plan to handle file locking, preventing multiple users from accessing the same file at the same time, etc...

but if you must use cloud based, nextcloud might work for you.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.