HTTP Fake Lists Not Showing in rclone

What is the problem you are having with rclone?

I experimented with using HTTP and created a "fake" directory listing. That actually did work; however, it seems to only work with files relative to that directory. If I put a fully qualified http:// URL in the list, it ignores it. I wonder if HTTP could be updated to support fully qualified URLs, or whether a new source could be added that supports a basic list of URLs?

I have a very big file list, for example 300 files, and every link is different. I want to mount with cache off and buffer size off. I have been researching this for a month.

Run the command 'rclone version' and share the full output of the command.

I am using the latest version, on Windows/amd64.


That's correct.

If it isn't under the current directory it will be ignored. That's how rclone simulates a file system.
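This is not rclone's actual code, but the relative check behaves roughly like this sketch in Python (the base URL is a placeholder):

```python
from urllib.parse import urljoin

# Assumption: placeholder base URL, standing in for the remote's "url" setting.
BASE = "https://example.com/list/"

def kept_by_relative_check(href: str) -> bool:
    """Resolve a link against the base and keep it only if it
    still points underneath the base directory."""
    resolved = urljoin(BASE, href)
    return resolved.startswith(BASE)

print(kept_by_relative_check("file1.jpg"))                    # True: relative, kept
print(kept_by_relative_check("https://elsewhere.net/a.jpg"))  # False: fully qualified, ignored
```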

It sounds like you want a new feature where rclone reads a list of URLs from a file, or maybe a flag for rclone to ignore the relative check?

Probably a flag for rclone to ignore the relative check. I want to mount with the VFS.

I had a look at doing that. It is harder than I thought, as internally the http backend just keeps names, not URLs, because it assumes all URLs are relative to the current one.

Here is an idea for you...

You could make an index page with relative URLs, then add a redirect in the webserver for all of those relative URLs. You can do this with a .htaccess file in Apache, for example. That would work and require no changes to rclone. So something like this:

The .htaccess file is something like (the target URLs here are placeholders):

Redirect /file1.jpg https://realserver.example.com/file1.jpg
Redirect /file2.jpg https://realserver.example.com/file2.jpg

Then your index file is something like:

<a href="file1.jpg">file1.jpg</a>
<a href="file2.jpg">file2.jpg</a>
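If you are not running Apache, the same redirect idea can be sketched with Python's standard library. Everything below (file names, port, target URLs) is a placeholder sketch under those assumptions, not a tested setup:

```python
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

# Placeholder mapping of relative names to the real, absolute URLs.
REDIRECTS = {
    "/file1.jpg": "https://realserver.example.com/file1.jpg",
    "/file2.jpg": "https://realserver.example.com/file2.jpg",
}

# The index page of relative links that rclone will parse.
INDEX = "".join(
    '<a href="{0}">{0}</a>\n'.format(p.lstrip("/")) for p in REDIRECTS
).encode()

class RedirectHandler(BaseHTTPRequestHandler):
    def _redirect(self):
        # Bounce a relative name to its real, absolute URL.
        self.send_response(302)
        self.send_header("Location", REDIRECTS[self.path])
        self.end_headers()

    def do_GET(self):
        if self.path == "/":
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.send_header("Content-Length", str(len(INDEX)))
            self.end_headers()
            self.wfile.write(INDEX)
        elif self.path in REDIRECTS:
            self._redirect()
        else:
            self.send_error(404)

    def do_HEAD(self):
        # rclone's HEAD requests get the same redirect, with no body.
        if self.path in REDIRECTS:
            self._redirect()
        else:
            self.send_error(404)

def serve(port: int = 8080) -> ThreadingHTTPServer:
    """Start the redirect server in a background thread and return it."""
    srv = ThreadingHTTPServer(("127.0.0.1", port), RedirectHandler)
    threading.Thread(target=srv.serve_forever, daemon=True).start()
    return srv
```

The http remote would then point at url = http://127.0.0.1:8080/ (or wherever you host it).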

Would that work for you?

I tried a lot of files; some files are working, but my links are not working.

When I mount, I think it is using cache and buffer.

rclone mount test:/ /dani --vfs-cache-mode off --multi-thread-streams 30 --low-level-retries 2 --retries 2 --vfs-read-chunk-size 16M --buffer-size 0 --max-backlog 20000 --contimeout 9s --fast-list --no-traverse --no-modtime --read-only --log-level INFO --stats 1m

[test]
type = http
url = my website example

This is the file: http://myfileurl

The index.html is something like this:

<tr class="file">
	<a href="/">
		<svg width="1.5em" height="1em" version="1.1" viewBox="0 0 265 323"><use xlink:href="#file"></use></svg>
		<span class="name"></span>
	</a>
</tr>


Redirect / https://myfileurl

@ncw So how can I fix that?

I sent my file in a DM. Can you try it for me?

Maybe the links have a different structure, for example a GET method. I am waiting for your reply.

I tried to download your files but they seem to be 100 GB? I don't have the bandwidth to download files that big.

Try with test: first.

First step: does rclone lsf test: show the files? If this doesn't work, then your HTML is not working the way rclone expects.

Second step: does rclone -P copy test:file1.jpg /tmp/ copy the file? If not, run with -vv --dump headers and post the output.

I tried. The problem is the links: they probably have a lot of redirects, and rclone does not understand them.
Also, I found that a 200-300 GB file mounted successfully, but my links are not working. I need to trace the network; maybe the HEAD method is not supported.

You can disable HEAD with the no_head flag as a test.

I think you will need HEAD requests to work to make the VFS work though.

Yes, I activated no_head, but I can't access the file; it won't open. With HEAD it can access it and get the details, because that includes the file size.

With no_head active:

File size 0

How can a browser see the file size? I think it is the same solution. So do you have any solution? Maybe it needs coding.

I think you can't use no-head with mount.

You will have to make the HEAD requests work.
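To illustrate the point: the file size comes from the Content-Length header of a HEAD response, which is also how a browser can show a size without downloading the body. A self-contained sketch (the payload size is made up):

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

PAYLOAD = b"x" * 1234  # stand-in for a real file

class SizeHandler(BaseHTTPRequestHandler):
    def do_HEAD(self):
        # Headers only: enough for a client to learn the size.
        self.send_response(200)
        self.send_header("Content-Length", str(len(PAYLOAD)))
        self.end_headers()

    def do_GET(self):
        self.do_HEAD()
        self.wfile.write(PAYLOAD)

srv = ThreadingHTTPServer(("127.0.0.1", 0), SizeHandler)
threading.Thread(target=srv.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", srv.server_address[1])
conn.request("HEAD", "/file1.jpg")
resp = conn.getresponse()
resp.read()  # a HEAD response has no body
print(resp.getheader("Content-Length"))  # prints 1234
srv.shutdown()
```

If the real server answers HEAD with a redirect or an error instead of a size, that is exactly the case where the mount cannot work out the file size.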

I tried all the actions. Do you have any news? Thank you.

And I sent my list domain and the original link.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.