I experimented with using HTTP and created a “fake” directory listing. That actually did work; however, it seems to only work with files relative to that directory – if I put a fully qualified http:// URL in the list, it ignores it. I wonder if the HTTP backend could be updated to support fully qualified URLs – or could a new backend be added that supports a basic list of URLs?
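For reference, the kind of “fake” directory listing that works today is just an HTML page of relative links – a minimal sketch (the file names are made-up examples):

```html
<!-- index.html served by any web server.
     rclone's http backend scrapes the <a href> names from a page like
     this and fetches each one relative to the page's own URL. -->
<html>
<body>
<a href="file1.bin">file1.bin</a>
<a href="file2.bin">file2.bin</a>
</body>
</html>
```

A fully qualified `href` (e.g. `https://other-host.example/file3.bin`) in the same page is what gets ignored, as described above.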
I have a very big file list – around 300 files, for example – and every link is different. I want to mount them with caching off and the buffer size set to off. I have been researching this for a month.
I had a look at doing that – it is harder than I thought, as internally the http backend just keeps names, not URLs, because it assumes all URLs are relative to the current one.
Here is an idea for you...
You could make an index page with relative URLs, then add a redirect in the webserver for each of those relative URLs. You can do this with a .htaccess file in Apache, for example. That would work and require no changes to rclone. So something like this:
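A minimal sketch of that idea, assuming Apache with mod_rewrite enabled – the file names and target URLs here are placeholders, not real ones:

```apache
# .htaccess – map each relative name that rclone requests
# to the real, fully qualified download URL via a redirect.
RewriteEngine On
RewriteRule ^file1\.bin$ https://example.com/real/path/file1.bin [R=302,L]
RewriteRule ^file2\.bin$ https://other.example.org/download/abc123 [R=302,L]
```

The index page then only needs to list `file1.bin`, `file2.bin`, etc. as relative links; when rclone fetches them, Apache issues the redirect and the client follows it to the real host.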
I tried it. The problem is the links – they probably go through a lot of redirects, and rclone does not understand them.
Also, I found that a 200-300 GB file mounted successfully, but my links are not working. I need to trace the network traffic – maybe the server does not support the HEAD method.
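One way to check the HEAD theory is with curl – a sketch, with a placeholder URL to substitute for one of the problem links. `-I` sends a HEAD request and `-L` follows redirects, so an error or a non-2xx final status suggests the server does not handle HEAD well:

```shell
# Send a HEAD request, follow any redirects, and print only the
# final HTTP status code (e.g. 200 means HEAD works end to end).
curl -sIL -o /dev/null -w '%{http_code}\n' 'https://example.com/your/link'
```

Running the same command with `-X GET` instead of `-I` and comparing the status codes will show whether it is specifically HEAD that the server rejects.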