IDrive e2 is duplicating newly uploaded files because of versioning

What is the problem you are having with rclone?

New files manually uploaded to IDrive e2 by copy-pasting them in the file explorer (onto an rclone mount) are counted twice against storage space, because an "old version" is generated as well, even though the file didn't exist before.

If I use the iDrive e2 website (webapp) to upload, this doesn't happen, so it seems related to rclone.

I want versioning, but there's no reason for two versions of a file that was never modified to exist. Think of big untouched media files: I can't afford to use double the space, nor should I need to run cleanup all the time.
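
For reference, this is roughly how I check for the duplicate (the bucket and file names are just from my test, and I'm assuming the S3 backend's --s3-versions listing flag and cleanup-hidden command here):

rclone ls idrive:bucktest --s3-versions

rclone backend cleanup-hidden idrive:bucktest

The second command is what I mean by having to "run cleanup all the time".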

Run the command 'rclone version' and share the full output of the command.

rclone v1.63.1
- os/version: opensuse-tumbleweed (64 bit)
- os/kernel: 6.4.12-1-default (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.0
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

IDrive e2 (S3) (trying the free plan before subscribing)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --allow-other idrive: /media/idrive/ --transfers 8 --poll-interval 0

Then I'm manually uploading files by Ctrl+C → Ctrl+V in file explorer (Dolphin).

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[idrive]
type = s3
provider = IDrive
access_key_id = ***
secret_access_key = ***
endpoint = ****.***.idrivee2-2.com
chunk_size = 10Mi
upload_concurrency = 16
directory_markers = true

A log from the command that you were trying to run with the -vv flag

2023/09/14 03:58:40 INFO  : bucktest/test.txt: Copied (new)

Can you try it without your OS file explorer?

Does

cp localfile /media/idrive/

create versions too?

If not, then it is probably a problem with your file explorer - the way it handles copy operations.

You could try to fix it by using:

--vfs-write-back duration  Time to writeback files after last use when using cache (default 5s)

and increase its value.
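
For example, something along these lines (the 30s value is only an illustration; since the flag's description says it applies "when using cache", I've included --vfs-cache-mode writes as well):

rclone mount --allow-other idrive: /media/idrive/ --transfers 8 --vfs-cache-mode writes --vfs-write-back 30s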

BTW, --poll-interval 0 does nothing here, as the S3 remote does not support polling.

There's no issue when using cp. Weird, somehow it really seems to be Dolphin's fault.

Thanks, but I don't use VFS file caching and don't intend to.

I know, I set it just to prevent rclone from logging this:

2023/09/14 02:42:45 INFO : S3 root: poll-interval is not supported by this remote

Maybe rclone could detect when --poll-interval isn't explicitly set by the user and in that case not log this message. Or change its default to 0 when polling is unsupported by the remote in use.

There you have it, then. That made me believe the problem is your file browser - maybe it uses some staged copy operation, etc. If cp works, then clearly it is not a mount issue.

Sure, it's your call. But if you use the mount for write operations, then without --vfs-cache-mode of at least "writes" you will experience many surprises - there are plenty of posts on this forum about it. Simply put, a lot of programs assume they are writing to a well-behaved POSIX filesystem, and a mount without the cache is very far from that. It is OK to run with the VFS cache off for read-only use, but for writes it is a lottery.
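
For example, something like this (same paths as your original command, just with the cache enabled; the cache directory is only an illustration):

rclone mount --allow-other idrive: /media/idrive/ --transfers 8 --vfs-cache-mode writes --cache-dir /path/to/cache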

Maybe it would make sense to move this message to DEBUG level. Very easy change if you want to give it a go.

I used to use the cache, but I had a feeling it was draining my drives' lifespan unnecessarily. So I use tmpfs and was placing the cache there, but then I was unable to upload files larger than the available space in tmpfs, because rclone needs to make a copy in the cache before uploading... Not sure if all of this makes sense, but that's basically why I avoid using the cache. I'd be happy to be wrong if the VFS cache fixes this Dolphin issue.

Difficult to comment, as I don't know your workload, but SSD drives in most cases last much longer than people think. I am a heavy user, I think, and I use about 1% of my drive's rated writes per year. Even if I used 5%, the drive would last 20 years... I stopped paying attention to this.

You can test it easily.

Here is another example (there are many) of a program getting confused when writing to an uncached mount:

Without any cache you have to test your software carefully and, as in your case, stop using that particular file browser - or try to get it fixed with its authors.

I just tried with --vfs-cache-mode writes --cache-dir /tmp/rcache/ --vfs-cache-max-size 4G and it fixed the issue, thanks.
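
For reference, the full mount command now looks roughly like this (same flags as above; the values are just what I picked for testing):

rclone mount --allow-other idrive: /media/idrive/ --transfers 8 --vfs-cache-mode writes --cache-dir /tmp/rcache/ --vfs-cache-max-size 4G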

Would I have issues if I try to upload a file >4GB?

You should not... but I would test it. E.g. set it to something small like 10M and test with some bigger file(s).
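
Something along these lines, for example (paths and sizes are only illustrative):

rclone mount --allow-other idrive: /media/idrive/ --vfs-cache-mode writes --cache-dir /tmp/rcache/ --vfs-cache-max-size 10M

Then copy a file of, say, a few hundred MB into the mount and see whether the upload succeeds and whether the cache directory grows past the limit.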

Actually, I am not sure whether the --vfs-cache-max-size flag has any effect in writes mode.

I just tried, and --vfs-cache-max-size didn't have any effect: a file larger than the max size was still copied to the cache folder. So the upload would still fail if there isn't enough space in the cache folder for the copy.

So for giant files I guess I should avoid pasting via the file manager and use rclone copy instead.
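
Something like this, I suppose (the path and bucket are just an example):

rclone copy /path/to/big-file.mkv idrive:bucktest/ --progress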

Good to know - thanks for testing.

Yes this is the way.
