Hi,
is it possible to continue a download process that was somehow interrupted before?
yes, run the command again.
I tried it and found that the interrupted download would not continue; instead a new download started from scratch. The old .partial file was deleted and a new one was created. Maybe there is a special way to use "rclone copy"?
that is how rclone works.
I think this function is necessary for big file downloads. I’m not sure if the created .partial file can be used to remember the interrupted position.
most cloud providers do not offer that, so rclone does not offer that.
Couldn't agree more with you. It is an old issue tracked here:
You are more than welcome to work on this.
--s3-leave-parts-on-error
i tested that a long time ago and tried again just now.
the flag does work, the parts are left in the bucket.
however, the next time rclone is run, it will not resume; it will re-upload all the parts over again.
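for reference, a hedged sketch of the kind of command used in that test; the remote name s3: and the bucket/path are placeholders, not from this thread:

rclone copy --s3-leave-parts-on-error --progress bigfile.bin s3:bucket/path

after an interrupted run, rclone backend list-multipart-uploads s3:bucket shows the parts that were left behind.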
what is interesting is that the following works: s3browser can resume the upload of just the remaining 100 parts.
There are bits and pieces available for some remotes (like the S3 example you gave), including local, where rclone supports sparse files. But as I understand it, what is missing is a unified "resumer" interface which could make resume operations possible where available; effectively, different parts of rclone's internals have to understand what "resume" is. There are multiple open issues on GitHub, but so far they have only resulted in years-long discussions :) The lack of it is not only painful for large transfers over poor internet connections but in some cases also creates unwanted garbage left behind. This is the dark side of chunker, for example :) An interrupted transfer can leave a huge amount of invisible chunks, and rclone has no mechanism to clean them up. I even posted in Howto
how to clean them manually.
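For completeness, a hedged sketch of what the manual cleanup can look like under chunker's default naming scheme (*.rclone_chunk.###); "underlying:path" is a placeholder for the remote that chunker wraps, not the chunker remote itself:

rclone lsf --include "*.rclone_chunk.*" underlying:path
rclone delete --dry-run --include "*.rclone_chunk.*" underlying:path

The first command lists chunk files, the second previews deletion (drop --dry-run to actually delete). Be careful: this pattern also matches chunks belonging to healthy composite files, so only remove chunks whose base name you know belongs to an interrupted transfer.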
that is a nice howto but a bit confusing for a newbie.
i would re-edit your first post to mention the default scheme only, not your custom scheme.
then delete the second post.
For downloads you can use rclone mount and then software like ddrescue to copy the file in small chunks. Note that ddrescue keeps a mapfile (basically the progress of the download). You can't resume uploading a file this way.
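A minimal sketch of that workflow, assuming a Linux box; the remote name, mount point, and file names are placeholders:

rclone mount remote: /mnt/remote &
ddrescue /mnt/remote/huge-file.iso ./huge-file.iso huge-file.map

If the transfer is interrupted, re-running the same ddrescue command reads huge-file.map and continues from where it stopped instead of starting over.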
interesting, did not know that about ddrescue.
but not sure about the added benefit of ddrescue?
rclone mount already does chunked downloads.
if you try to download a file, rclone mount will figure out which chunks are not in the cache and download only those chunks on-the-fly.
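for example, a minimal sketch of such a mount with the VFS cache turned on (remote name and mount point are placeholders):

rclone mount remote: /mnt/remote --vfs-cache-mode full

with --vfs-cache-mode full, partially read files are stored in the cache as sparse files, so a repeated read of the same file only fetches the ranges that are still missing.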
That is true. I never use(d) the cache, so I did not think of it. While the rclone VFS cache is easier for big-ish files, I still use ddrescue for really huge files (over 50 GB, depending on connection reliability). Both do the job fine on a few-gigabyte file. A disadvantage of the ddrescue way is that it can only do one file at a time (you can of course run multiple processes at the same time).
This is actually quite a good workaround for resuming downloads. Use rclone mount! The only caveat is that the cache expiry has to be set large enough not to lose already-downloaded chunks too early.
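A sketch of the relevant knobs; the flags are real rclone flags, the values are just example numbers:

rclone mount remote: /mnt/remote --vfs-cache-mode full --vfs-cache-max-age 168h --vfs-cache-max-size 100G

Here cached data is kept for up to a week (168h) and the cache may grow to 100 GiB before rclone starts evicting the oldest files.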
Never thought about it. Very clever life hack. Good thinking @minesheep
I do not think ddrescue is needed. Any program that reads the whole file will do.
But I think there is one caveat: it will only work with remotes that support reading object ranges (S3, OneDrive, GDrive). I am sure not all remotes are capable of that.
Thankfully all remotes are capable of Range requests! There is too much rclone functionality that relies on it.
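A quick, hedged way to see a ranged read in action is rclone cat with its --offset and --count flags (the path is a placeholder):

rclone cat --offset 1048576 --count 4096 remote:path/file.bin > slice.bin

On a remote with Range support this transfers only the requested 4 KiB slice starting at 1 MiB, not the whole object.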
I tried this:
rclone copy --checksum --progress --delete-during --metadata --fast-list --inplace OneDrive:something.mp4 G:\cloudfolder -P
It cannot resume a download which was interrupted. Is this the correct usage?
correct. as i stated up above, that is how rclone works.
Note that because of the chunking of the mount, it does not verify data integrity. You can run
rclone hashsum sha256 remote:filename
and
sha256sum filename_of_downloaded_file
and compare them to manually verify the integrity of the downloaded file (sha256sum is a terminal command, not an rclone subcommand).
(I didn't post this earlier to avoid bumping the topic to the top, but now that it got a new post, why not.)
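With the file from the earlier command, a hedged sketch of that check; rclone hashsum also works on local paths, which avoids needing sha256sum on Windows:

rclone hashsum sha256 OneDrive:something.mp4
rclone hashsum sha256 G:\cloudfolder\something.mp4

If the two SHA-256 hashes match, the downloaded copy is intact.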
Do you mean one should use "rclone mount" and then a conventional copy command to download a big file? Is it a foreground or background mount?
that could resume downloads that are interrupted, but then rclone cannot verify the file transfer using checksums.
as for foreground or background, it would not matter, tho it might be a good idea to run the mount in the background.
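a hedged sketch of the background variant; --daemon is a real rclone flag but does not work on Windows (run the mount in its own terminal there), and the mount point and target path are placeholders:

rclone mount OneDrive: /mnt/onedrive --vfs-cache-mode full --daemon
cp /mnt/onedrive/something.mp4 ~/Downloads/

if cp is interrupted, running it again re-reads the file and the VFS cache serves the chunks that were already fetched, so only the missing ranges are downloaded.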