Copy local to remote: 0 progress after 1h

What is the problem you are having with rclone?

Copying a single file of 3.5TB from local to an Oracle Cloud bucket (s3) doesn't work.

Run the command 'rclone version' and share the full output of the command.

rclone v1.45

  • os/arch: linux/amd64
  • go version: go1.11.6

Which cloud storage system are you using? (eg Google Drive)

oracle cloud

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy -vv *****************************B4B.vbk oracle:DIR-PATH/DIR-PATH/ -P

The rclone config contents with secrets removed.

[mapeamento-cloud]
type = s3
provider = Other
env_auth = false
region = *****************
endpoint = ***************

A log from the command with the -vv flag

rclone copy -vv *****************************B4B.vbk oracle:DIR-PATH/DIR-PATH/ -P
2023/06/13 15:03:08 DEBUG : rclone: Version "v1.45" starting with parameters ["rclone" "copy" "-vv" "******************************4B.vbk" "oracle:DIR-PATH/DIR-PATH/" "-P"]
2023/06/13 15:03:08 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/06/13 15:03:08 DEBUG : pacer: Reducing sleep to 0s
2023-06-13 15:03:08 DEBUG : ********************************4B.vbk: Couldn't find file - need to transfer
Transferred:             0 / 3.518 TBytes, 0%, 0 Bytes/s, ETA -
Errors:                 0
Checks:                 0 / 0, -
Transferred:            0 / 1, 0%
Elapsed time:   1h9m57.3s
Transferring:
 *     *************************4B.vbk:  0% /3.518T, 0/s, -

Update rclone and re-test.
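
On most Linux machines the quickest way is the official install script (this is the documented one-liner; confirm afterwards with rclone version):

curl https://rclone.org/install.sh | sudo bash
rclone version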

hi, ok
I will try and come back here to report.

You'll want to change to the dedicated Oracle storage backend; I'm assuming that on that ancient version, you didn't see it.
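
For reference, a rough sketch of what the dedicated backend's config looks like on recent versions (the option names here are from memory and the values are placeholders, so run rclone config and follow the prompts rather than copying this verbatim):

[oracle-native]
type = oracleobjectstorage
provider = user_principal_auth
namespace = <your-object-storage-namespace>
compartment = <your-compartment-ocid>
region = <your-region>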

I ran into that issue with large .vbk files.

With s3 remotes, rclone has to calculate the MD5 hash of the source file before the upload starts.
For large objects, calculating this hash can take some time, so the addition of this hash can be disabled with --s3-disable-checksum. This will mean that these objects do not have an MD5 checksum.
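
In your case that would be something like this (the file name is just a placeholder for your redacted .vbk):

rclone copy /path/to/FILE.vbk oracle:DIR-PATH/DIR-PATH/ --s3-disable-checksum -P -vv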

I don't understand what you mean.
I can move small files, like .txt, but this 3.5TB one doesn't work.

LOL, I will try. I assumed it is bad practice to disable this feature?

Transferred: 989.996 MiB / 3.518 TiB, 0%, 54.427 MiB/s, ETA 18h49m24s
Transferred: 0 / 1, 0%
Elapsed time: 19.0s
Transferring:

So that's it. Thanks for taking this problem off my mind. It's not fully resolved, but now I know why it happens. I LOVE YOU


Well, it is bad practice, and all the more so given the source file is a .vbk.
I have never disabled that feature, even on a super slow, old windoz server with ReFS soft raid.

Recent versions of rclone can still verify the file transfer using --s3-use-multipart-etag,
though I'm not sure that works with Oracle.

You can test that using something like this, with a file larger than 5MiB in size.
rclone copy ./file_20MiB.ext wasabi01:zork --s3-upload-cutoff=5Mi -vv
and you would see
DEBUG : file.ext: Multipart upload Etag: 0aaed9e7647684db13a56542622ed560-4 OK

I will try, but out of curiosity, do you know how long it takes to check 1TB with md5?

Do you run it on a Raspberry Pi? Or on a 32-core server with 1TB of RAM? I think you understand there is no answer to your question.

that depends on your local machine.

You can do a test, something like this:
take a file of size 10GiB; let's call it file.ext

  1. rclone md5sum file.ext
    or
  2. rclone copy file.ext remote: -vv, then check the debug log for the time spent calculating the md5

take that time duration, from 1 or 2, and multiply by 100;
that is the approximate time to md5 the 1TiB file.

note: the larger the test file, the more accurate the prediction.
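
For example, with made-up numbers: if rclone md5sum on the 10GiB file takes 40 seconds, the 1TiB file should take roughly 40s x 100 ≈ 67 minutes, since 1TiB is about 102 times 10GiB.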

I think I provided it in my last post.


For TB/PB-size files it is all very complex IMHO - this is what I mean by there being no answer. If they sit on an NVMe NAS with a 40Gbit network, it will be different than if they are on spinning-rust HDDs with a 1Gbit network, whatever the 10GB test shows. And as usual when people deal with these file sizes, I would assume the 10GB test file might just sit in the local RAM cache.

Of course your approach is right in principle - test with something smaller to get a feeling for what to expect.

I will try tomorrow. This VM has just 4GB of RAM, but we use a VMware solution; depending on the result, we can give this machine more resources.

thanks for your help <3

Memory is not really a problem with rclone, as I could upload a 10TiB file using a VM with just 128MiB of memory.
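
That works because rclone streams multipart uploads in chunks, so peak memory per transfer is roughly --s3-chunk-size times --s3-upload-concurrency. If you ever need to cap it, something like this (values purely illustrative; rclone raises the chunk size on its own for huge files so the part count stays under the S3 limit):

rclone copy file.ext remote:bucket --s3-chunk-size 16Mi --s3-upload-concurrency 2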

welcome. let us know the results of your testing.


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.