Using the rclone HTTP endpoint (operations/uploadfile) for client-side chunking with the Content-Range header doesn't work

What is the problem you are having with rclone?

We are using the rclone HTTP endpoint (operations/uploadfile) for client-side chunking with the Content-Range header.
We are passing the whole file data (from fs.createReadStream) range by range, with a call something like:

  const config = {
    headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Range': 'bytes=855638016-859832319'
    }
  };
  axios.post('operations/uploadfile', chunkData, config);
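As an aside (not the cause of the issue here, since rclone ignores the header entirely): the `bytes=start-end` form above is the syntax of the Range *request* header; per RFC 7233 the Content-Range header is written `bytes start-end/total`. A spec-correct value could be built with a small helper like this (the function name is illustrative):

```javascript
// Build an RFC 7233 Content-Range value: "bytes first-last/completeLength".
// (The "bytes=first-last" form belongs to the Range request header instead.)
function contentRange(start, end, total) {
  return `bytes ${start}-${end}/${total}`;
}

console.log(contentRange(855638016, 859832319, 1023410176));
// -> bytes 855638016-859832319/1023410176
```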

The problem is that the response is 200 OK for every chunk, but the actual files are never uploaded to the remote.

Run the command 'rclone version' and share the full output of the command.

We are using the bitnami/rclone:1.62.2 image.

Which cloud storage system are you using? (eg Google Drive)

Azure blob storage

I think operations/uploadfile is expecting the whole file and doesn't support upload chunking.
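For anyone landing here: a minimal sketch of how a whole-file call to this endpoint can be shaped, assuming an rcd listening on localhost:5572 (the address, filename, and helper function below are illustrative, not from this thread). If I'm reading the rc docs right, operations/uploadfile takes a multipart/form-data upload with fs and remote as parameters:

```javascript
// Build the operations/uploadfile URL from fs/remote parameters.
// buildUploadUrl is a hypothetical helper for this sketch.
function buildUploadUrl(rcBase, fsName, remote) {
  const url = new URL('operations/uploadfile', rcBase);
  url.searchParams.set('fs', fsName);
  url.searchParams.set('remote', remote);
  return url.toString();
}

const url = buildUploadUrl('http://localhost:5572/', 'azure:', 'path/tmp');
console.log(url);
// -> http://localhost:5572/operations/uploadfile?fs=azure%3A&remote=path%2Ftmp

// The whole file then goes up in one multipart/form-data request,
// e.g. with Node 18+ global fetch/FormData:
//   const form = new FormData();
//   form.set('file', new Blob([await fs.promises.readFile('bigfile.bin')]), 'bigfile.bin');
//   await fetch(url, { method: 'POST', body: form });
```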

Thanks for quick response @ncw .
We are having a timeout issue on really large uploads (anything taking 5+ minutes) from the client to rclone, not from rclone to storage.

So we thought of using Content-Range in the headers, since it is allowed in the config, and it would also let us support pause/resume etc. later.

Is there any other way to avoid the timeout issue and still achieve this?

might be related to this:

Try the latest beta.

This should be controlled by these flags I think

  --rc-server-read-timeout Duration    Timeout for server reading data (default 1h0m0s)
  --rc-server-write-timeout Duration   Timeout for server writing data (default 1h0m0s)

How are you doing the upload? It might be your client timing out or rclone timing out - you should see something in the rclone log with -vv which will help us to decide.

Not a bad thought. I think it is of low probability with Azure blob storage as the backend, but I could be wrong!

@kapitainsky Thanks for the suggestion.

@ncw We are using only the HTTP endpoints.
Are -vv and --rc-server-read-timeout only to be set from the command line?
Is there a latest docker image we could use?
Is there any other way we can get logs while using HTTP?

Our example call for each content range:

  const chunkData = fs.createReadStream((file as ReadStream).path, { start: rangeStart, end: rangeEnd });

  const config = {
    headers: {
      'Content-Type': 'application/octet-stream',
      'Content-Range': 'bytes=1019215872-1023410175'
    },
    params: {
      fs: 'azure:',
      remote: 'path/tmp'
    },
    onUploadProgress: onUploadProgress
  };
  axios.post('operations/uploadfile', chunkData, config);

You put those on the rclone rcd instance you are running.

You can use the beta tag on rclone/rclone:beta

Content-Range won't work - it will be ignored.

@ncw Ok thanks. Tried using this command
docker run --rm -p 5572:5572 -e RCLONE_CONFIG_AZURE_TYPE=azureblob -e RCLONE_AZUREBLOB_ACCOUNT=name -e RCLONE_AZUREBLOB_KEY=key rclone/rclone:beta rcd --rc-serve --rc-no-auth --rc-addr :5572 --rc-allow-origin 'http://localhost:8000' -vv --rc-server-read-timeout 1h

No issues as such in the logs... not sure why the actual file is not uploaded, though.

But yes, Content-Range is what we wanted. Will it be supported in the future?

There was a PR which didn't get merged to add the tus uploader which has similar features.

In general though rclone can't support content ranges as cloud providers don't support writing in the middle of blocks.

We could do a simpler multipart upload where you send a file in chunks in sequence. There are some providers which work exactly like this (Google Drive, for example).

So yes, we could support content range in theory, but we'd rely on you uploading the parts in sequence.
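The sequential scheme described above could be sketched client-side like this: split the file into fixed-size contiguous ranges and send them strictly in order (the chunk size, range math, and function name are illustrative; rclone does not support this today):

```javascript
// Split a file of `total` bytes into contiguous { start, end } ranges of
// up to `chunkSize` bytes each, to be uploaded strictly in order.
function chunkRanges(total, chunkSize) {
  const ranges = [];
  for (let start = 0; start < total; start += chunkSize) {
    const end = Math.min(start + chunkSize, total) - 1; // inclusive end offset
    ranges.push({ start, end });
  }
  return ranges;
}

console.log(chunkRanges(10, 4));
// -> [ { start: 0, end: 3 }, { start: 4, end: 7 }, { start: 8, end: 9 } ]

// Each range would then become one request, sent only after the previous
// one succeeds - e.g. with a Content-Range header of `bytes start-end/total`.
```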


If you deploy rclone serving HTTP in a web app on Azure, the Azure load balancer times out the upload at ~5 minutes. I am not sure if deploying rclone remotely to manage/decouple clients from multiple backend providers is a common pattern. @ncw from your reply it seems like this is something rclone doesn't support.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.