Readahead to RAM on iDrive e2 remote for Plex streaming

Hi, I'm using an iDrive e2 remote - rclone (docker plugin) - Plex server (in docker) to stream movies, series, and music, mostly to the local network (where the home server with the docker Plex server is). The problem is that if the movie bitrate is over about 15 Mbit/s, the stream stops every 10-50 seconds to buffer.

I'm using the parameters --buffer-size=512M --vfs-fast-fingerprint --vfs-cache-mode=minimal --vfs-read-chunk-size=16Mi, but it doesn't actually seem to matter; the behaviour is the same either way, since --buffer-size doesn't mean the buffer will be filled ahead of the read position (readahead).
The remote is not fast, but it is fast enough to stream a high-bitrate movie if the streaming is more or less continuous. But it seems Plex requests a chunk, then rclone does a handshake with the remote and requests a chunk, then again and again, and that takes time, so the overall streaming bandwidth is lower than it should be.
What should I do? I don't want to use a disk cache; a fraction of RAM (e.g. 1 GB) is enough to store minutes of a movie.
I'm thinking about setting up a ramdrive and pointing the VFS cache at it, but can't I directly tell rclone to store the VFS cache in RAM without creating a ramdrive separately?
Or should I just set bigger VFS chunks? As I remember, rclone fetches the next chunk only when something is requested from it, which is already too late.
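For concreteness, the ramdrive workaround I have in mind would look something like this; the paths, the 1G size, and the remote name are just examples, not my real setup:

```shell
# 1 GiB RAM-backed directory (needs root; size is an example)
mkdir -p /mnt/rclone-ramcache
mount -t tmpfs -o size=1g tmpfs /mnt/rclone-ramcache

# Point the VFS cache at it; keep the cache max below the tmpfs size
rclone mount idrive_crypt: /mnt/media \
  --vfs-cache-mode full \
  --cache-dir /mnt/rclone-ramcache \
  --vfs-cache-max-size 900M
```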

Any hints or experiences with this? For all similar setups I see gdrive as the remote, which is much faster, and with that I had no problem (but they doubled the monthly fee, so I replaced gdrive).

Assuming the file stays open:

rclone mount

I'd remove --buffer-size, and you'd want --vfs-cache-mode=full for --vfs-read-ahead to work.

That really relates to API calls and nothing more. If you want to get a 1 GB file and have a 256M chunk size, it takes 4 'chunks' / API requests to get that 1 GB.

If you have it at 16M, that's quite a chatty number of API hits.

Generally, I'd leave things at default unless I have a good reason to change it.
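To put numbers on it (a straight sequential read, ignoring that by default rclone doubles the chunk size as the read progresses):

```shell
# One ranged GET per --vfs-read-chunk-size window of a 1 GiB file
FILE_MIB=1024
for CHUNK_MIB in 256 16; do
  echo "${CHUNK_MIB} MiB chunks -> $((FILE_MIB / CHUNK_MIB)) API requests"
done
```

So 16M chunks mean 64 requests where 256M would need only 4.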


--vfs-read-ahead did not change anything on its own. Now I'm trying to put the VFS cache on a ramdisk (tmpfs), but I can't, as --cache-dir is not allowed in the docker-compose yml (since rclone runs as a docker plugin); it's probably allowed in the plugin settings, which are very hard to change.
But it seems no setting has any effect on the buffering problem. Nothing makes it better or worse.

Using --vfs-cache-mode=minimal, I'm not sure rclone even uses the VFS file cache for a read-only media file.
You have not posted the rclone version, the remote config, the exact command, or a debug log, so it is very hard to know what is going on.

FWIW, as a test, try --vfs-cache-mode=full --buffer-size=0 and have the cache stored on disk.

I was curious how iDrive would stream a video, as it is something I was discussing with a fellow rclone forum member.

This was the exact command:
rclone mount izork_crypt: b:\rclone\mount\izork_crypt

Using VLC, playback started in less than three seconds; there was no buffering, and skipping was instant.

This was the exact config:

type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
no_check_bucket = true
server_side_encryption = aws:kms
endpoint =

type = crypt
remote = izorkitnow:zorkitnow/store
password = XXX
password2 = XXX

Did more testing, same setup as above.
Playback started in 9 seconds; there were two very minor buffering issues, and skipping was instant.

I did; I removed the buffer-size setting and set VFS cache mode to full, and nothing changed.

All? That isn't even a setting.

If you'd like help, please fill out the actual help and support template, as saying something doesn't work without sharing that template makes this really frustrating and tough for all parties.

As I see it (though I don't know what these data rates like MiB/s are; I only know Mbit/s or MB/s), the iDrive e2 servers are just too slow to stream high-bitrate movies without stops. Sometimes it's faster, but mostly not. It seems I need to look for another cloud storage provider.

We are just volunteers, and you are making this way too complicated.
So either post the requested information or no one can help you.

I have proven that iDrive e2 is very fast; it can handle a 74 Mb/s video stream without using --vfs-cache-mode, i.e. without the rclone VFS file cache.

Your transfer speed experience depends on so many factors that it is difficult to determine where the issue is - clearly it is not rclone. As @asdffdsa's tests show, it is not iDrive per se. Of course you can try any other provider you wish - maybe something will work for you, maybe not. But before changing provider I would try moving the data to another iDrive location. They have quite a few in the US, Europe and Asia - sometimes network geography is not the same as physical geography. The problem might be as simple as your internet provider having an issue with a specific iDrive hosting centre. I remember somebody on this forum moving data from iDrive Frankfurt to iDrive Paris, despite being based in Germany, to solve problems.

You have to find by experimenting what works best for you. There is no single best solution for all.

If I list the regions, there is no Paris for me. I'm using Frankfurt as the region for my bucket, which is the closest geographically, and in data links too. Now I've created a bucket in the Ireland region, uploaded the same files to both buckets, and downloaded them: Frankfurt 1.5 MiB/s, Ireland 6 MiB/s. The upload was significantly faster on both regions, but the max was 10 MiB/s.

If I find the best region for me, how can I move my data to it from the other bucket? Moving with rclone, do I need twice the space to do it?

You are right - maybe it was London, I do not remember exactly. Below are all iDrive locations.

If you use rclone move you will never use more space - apart from maybe the space taken by the few files currently being transferred.

Doubling only happens if you use rclone copy - but even then, if you are happy with the new destination you can delete the old data very quickly. Assuming you are on a pay-as-you-go plan, you only pay for the short period when the data is present on both source and destination.
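A sketch of the two options (remote and bucket names are made up); with move, source objects are deleted as each transfer completes, so usage never doubles:

```shell
# Move between buckets in different regions: no doubled storage
rclone move idrive-frankfurt:mybucket idrive-ireland:mybucket --progress

# Or copy, verify, then delete the source once you're happy
rclone copy idrive-frankfurt:mybucket idrive-ireland:mybucket --progress
rclone purge idrive-frankfurt:mybucket
```

Note that with two different endpoints the data still flows through the machine running rclone.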

I would also suggest investigating the iDrive migration tool - FAQs about IDrive® e2 Cloud data migration
I have not tried it myself yet, but they support any S3-compatible storage, so in theory it should work with their own buckets. If it does, then you do not have to use rclone or your home internet connection at all.

That's what I was thinking too. I'll try the migration tool.
I've now tested 3 regions by uploading the same file. Frankfurt is the slowest (although the closest in every way from Hungary); however, it is faster now than on workdays: 4.974 MiB/s down, 6.404 MiB/s up.
The fastest is Ireland: 8.459 MiB/s down, 10.624 MiB/s up.
In the middle is London: 5.996 MiB/s down, 10.365 MiB/s up.

After using iDrive e2 for nearly one year, I've never seen data rates above 25 MiB/s; I've only seen around 70 with Google Drive. My 1 Gbit down / 300 Mbit up optical connection shouldn't be the problem.

I'll test again tomorrow, and decide.

Things are heavily network-geography dependent.

E.g. testing from a computer in DK, I can get 20 MiB/s download from London but only 10 from Frankfurt.

You can try to increase default S3 transfer values, e.g.:

--multi-thread-streams 8
--s3-upload-concurrency 8
--s3-chunk-size 16M

These should give you some noticeable improvement.
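For example, a download speed test with those values raised (remote, bucket and file names are placeholders):

```shell
rclone copy idrive-ireland:mybucket/bigfile.mkv /tmp/ \
  --multi-thread-streams 8 \
  --s3-upload-concurrency 8 \
  --s3-chunk-size 16M \
  --progress
```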

Thanks, it really improves both download and upload speeds on all regions; now I could reach 29 down and 26 up with Ireland.

Will such settings become the default for iDrive e2 remotes, or will they always have to be set manually?

Nope - not default, so you have to add them to your mount command.

Defaults are extremely conservative IMO - but they have to work out of the box on all systems, including the likes of a Raspberry Pi with 512MB RAM.

You can keep increasing these values to improve performance, but at some point it is diminishing returns. And they come at the cost of RAM usage.
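Back-of-the-envelope for the RAM cost, assuming each in-flight upload chunk is buffered in memory (the numbers below are the values suggested above plus the default --transfers of 4):

```shell
CHUNK_MIB=16; CONCURRENCY=8; TRANSFERS=4
echo "up to ~$((CHUNK_MIB * CONCURRENCY * TRANSFERS)) MiB of upload buffers"
```

Roughly half a gigabyte just for upload buffers at those settings - already the whole RAM of the small boxes the defaults must support.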

Sadly, I can't use them for the most problematic mount:

Creating volume "plex_filmvol" with rclone driver
ERROR: create plex_filmvol: VolumeDriver.Create: unsupported backend option "multi-thread-streams"

It was a good try though.

Some of these can be set in the advanced config, for example chunk size and upload concurrency; however, those settings are named without the s3 prefix and saved in the config without it.
But what if I write them manually into the config file, following the format I see there? For example:

type = s3
provider = IDrive
access_key_id = keyid
secret_access_key = secret
acl = private
endpoint =
s3_chunk_size = 16Mi
s3_upload_concurrency = 8
multi_thread_streams = 8

How can I check, in a specific running rclone instance, which parameters are actually in use, to see whether this works or these params are just ignored?

Here only the backend flags are stated as usable in the config, but multi_thread_streams is a global flag.
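One way to inspect a running instance, assuming it was started with the remote control enabled (--rc), is rclone's rc API; options/get dumps the option values currently in effect:

```shell
# Query a running instance started with --rc
# (add --rc-addr / --rc-user / --rc-pass if you set them on the mount)
rclone rc options/get
```

Whether the docker volume plugin exposes the rc port is another question; I'm not sure it does by default.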