Upload (copy) stalls for InternetArchive remote

What is the problem you are having with rclone?

When I upload files to Internet Archive, rclone stalls for a long time after it has uploaded the whole file to the server.
This is confirmed by the Internet Archive's history page.

Run the command 'rclone version' and share the full output of the command.

$ rclone version
rclone v1.59.0-beta.1.deab868
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.11.0-1028-oracle (aarch64)
- os/type: linux
- os/arch: arm64
- go/version: go1.18.1
- go/linking: dynamic
- go/tags: none

Note that I'm using a custom build based on deab86867c236c466e5785ebfbcbfdddf6d1e60b with the following patch:

$ git diff
diff --git a/fs/operations/operations.go b/fs/operations/operations.go
index 2807657..9320d76 100644
--- a/fs/operations/operations.go
+++ b/fs/operations/operations.go
@@ -53,6 +53,10 @@ import (
 //
 // If an error is returned it will return equal as false
 func CheckHashes(ctx context.Context, src fs.ObjectInfo, dst fs.Object) (equal bool, ht hash.Type, err error) {
+       ci := fs.GetConfig(ctx)
+       if ci.IgnoreChecksum {
+               return true, hash.None, nil
+       }
        common := src.Fs().Hashes().Overlap(dst.Fs().Hashes())
        // fs.Debugf(nil, "Shared hashes: %v", common)
        if common.Count() == 0 {
@@ -520,7 +524,7 @@ func Copy(ctx context.Context, f fs.Fs, dst fs.Object, remote string, src fs.Obj
        }
 
        // Verify hashes are the same after transfer - ignoring blank hashes
-       if hashType != hash.None {
+       if false {
                // checkHashes has logged and counted errors
                equal, _, srcSum, dstSum, _ := checkHashes(ctx, src, dst, hashType)
                if !equal {

Even without it, the results are the same; the patch is included here for reference.

Which cloud storage system are you using? (eg Google Drive)

Internet Archive (the backend is not in a release yet).

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy --transfers=4 -P --order-by size,descending --ignore-checksum queue/ bdmv-all:squashfs/ --order-by size

The rclone config contents with secrets removed.

[ia-bdmv]
type = internetarchive
access_key_id = 
secret_access_key = 
wait_archive = 0

[bdmv-all]
type = union
upstreams = ia-bdmv:squashfslibrary01:ro ia-bdmv:squashfslibrary02:ro ia-bdmv:squashfslibrary03:ro ia-bdmv:squashfslibrary04:ro ia-bdmv:squashfslibrary05 ia-bdmv:squashfslibrary06 ia-bdmv:squashfslibrary07:ro ia-bdmv:squashfslibrary08:ro ia-bdmv:squashfslibrary09:ro ia-bdmv:squashfslibrary10:ro
create_policy = lfs

A log from the command with the -vv flag

:man_facepalming: Sorry, I didn't know I needed a log with -vv - I'll paste one once I can run this again (in 4-5 hours or so, maybe).

$ rclone copy --transfers=4 -P --order-by size,descending --ignore-checksum queue/ bdmv-all:squashfs/ --order-by size
2022-05-05 04:01:24 NOTICE: Time may be set wrong - time from "s3.us.archive.org" is 10m50.095792121s different from this computer
Transferred:       82.615 GiB / 173.365 GiB, 48%, 10.330 MiB/s, ETA 2h29m56s
Transferred:            0 / 7, 0%
Elapsed time:      18m2.4s
Transferring:
 *                             weh/s1/disc1.sqfs: 76% /30.883Gi, 10.330Mi/s, 12m1s
 *                             weh/s1/disc2.sqfs:100% /20.414Gi, 0/s, 0s
 *                      edens/zero/s1/disc3.sqfs:100% /19.300Gi, 0/s, 0s
 *                      edens/zero/s1/disc2.sqfs:100% /19.301Gi, 0/s, 0s

Thanks for testing this :slight_smile:

Can you make this an issue on GitHub? I'll tag the author of the ia backend there and we can work out what the problem is. If you could attach a log with -vv --dump headers at the point rclone locks up, that would be very useful.

We should also talk about your patches to ignore checksums - using --ignore-checksum should mean that rclone doesn't even attempt to read checksums, so I think the patch shouldn't be needed - did you find otherwise?

Thanks for the reply.

Let me open an issue on GitHub.

I'm sure I have --ignore-checksum in the options, but it still stalls for me, hence the patch.

Edit: opened at Upload (copy) stalls for InternetArchive remote · Issue #6150 · rclone/rclone · GitHub


Is there any possibility that something on the rclone side (not Internet Archive) is causing this, other than checksums?
Actually, I have a private channel with Lesmiscore (who wrote the Internet Archive support), but he says "IAS3 should respond without waiting, like how the ia command works".

Edit:
I've tried adding --dump headers, but it causes an OOM on the system, making it stop completely (even though it has 10 GB of memory!).

(I'm not sure if I'm allowed to write here, but) I asked @gippeumgwa to apply the following patch:

diff --git a/backend/internetarchive/internetarchive.go b/backend/internetarchive/internetarchive.go
index ce994c2..411f1c6 100644
--- a/backend/internetarchive/internetarchive.go
+++ b/backend/internetarchive/internetarchive.go
@@ -177,6 +177,7 @@ func (f *Fs) Features() *fs.Features {
 
 // Hashes returns type of hashes supported by IA
 func (f *Fs) Hashes() hash.Set {
+       return hash.Set(hash.None)
        return hash.NewHashSet(hash.MD5, hash.SHA1, hash.CRC32)
 }
 
@@ -277,6 +278,8 @@ func (o *Object) Fs() fs.Info {
 
 // Hash returns the hash value presented by IA
 func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
+        return "", hash.ErrUnsupported
+
        if ty == hash.MD5 {
                return o.md5, nil
        }
@@ -861,6 +864,7 @@ func (f *Fs) waitFileUpload(ctx context.Context, reqPath string, newHashes map[h
                modTime: time.Unix(0, 0),
                size:    -1,
        }
+       return ret, nil
 
        if f.opt.WaitArchive == 0 {
                // user doesn't want to poll, let's not

Now waiting for a reply.

Of course you are - nice to have you on the forum!

I try to keep discussion either on the forum or on an issue though, otherwise you end up saying everything twice!

I'd love to keep this discussion here.
@gippeumgwa, okay?


Let's continue here, as gippeumgwa agreed.

@ncw So about the symptom: I guess some check in rclone (which is invoked even with --ignore-checksum?) is causing the long stall after upload.
As the config explicitly disables waiting after upload (and I confirmed the upload check isn't running), it must be waiting for something outside the backend code.

I'm not sure, but there is a way to find out.

If you run rclone until it stalls and then kill -QUIT the rclone process, it will produce a very long backtrace - can you paste that here, attach it as a file, or put it on pastebin so we can see it?

That will show exactly what rclone is waiting on.

Are you uploading very big files? Unfortunately --dump causes the bodies to be buffered in memory.

A log with just -vv would be helpful.

When using --ignore-checksum, rclone shouldn't call Hash() unless the source has a different modtime to the destination and rclone is trying to work out whether it needs to upload it. I don't think this should happen for ia, since with wait_archive = 0 Precision is fs.ModTimeNotSupported.
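
For illustration, what I mean looks roughly like this - a sketch only, not the actual backend source, and the non-zero fallback value is just a placeholder:

// With wait_archive = 0 the backend reports that it cannot reliably track
// modification times, so rclone falls back to size-only comparison and
// shouldn't need to call Hash() to decide whether to upload.
func (f *Fs) Precision() time.Duration {
        if f.opt.WaitArchive == 0 {
                return fs.ModTimeNotSupported
        }
        return time.Second // placeholder; the real backend's value may differ
}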

The log with -vv will show for sure.

Presumably, if rclone is calling Hash and we told rclone not to wait with wait_archive = 0, it should return either a hash if it has one right now, or an empty string meaning it doesn't have one?

Okay, running an unmodified rclone to check.

Yes, a lot of big files.

Thanks.

Actually it should be an empty string, but with the current code it returns the old value if one previously existed (of course, it'll be empty if there wasn't one).
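
For reference, the convention being discussed would look roughly like this - a sketch of the intended behaviour, not the current source; the point is that Hash only hands back a cached value and never computes one on demand:

// Sketch of the intended behaviour (hypothetical): return whatever value
// is already cached for the object, which may legitimately be "" right
// after an upload with wait_archive = 0; callers treat an empty string
// as "no hash available right now".
func (o *Object) Hash(ctx context.Context, ty hash.Type) (string, error) {
        if ty == hash.MD5 {
                return o.md5, nil // "" until the server has reported one
        }
        // SHA1 and CRC32 would be returned from their cached fields the
        // same way; anything else is unsupported.
        return "", hash.ErrUnsupported
}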

Thank you very much for the helpful debugging tips.

Here's the log after it started to stall:

2022/05/06 09:39:11 DEBUG : rclone: Version "v1.59.0-beta.6115.a3d430789" starting with parameters ["/home/manriki/.local/rclone-v1.59.0-beta.6115.a3d430789-linux-arm64/rclone" "copy" "--transfers=4" "-P" "--order-by" "size,descending" "--ignore-checksum" "--exclude" ".*" "queue/precure/tropical-rouge/movie-ring.sqfs" "bdmv-all:squashfs/" "-vv"]
2022/05/06 09:39:11 DEBUG : Creating backend with remote "queue/precure/tropical-rouge/movie-ring.sqfs"
2022/05/06 09:39:11 DEBUG : Using config file from "/home/manriki/.config/rclone/rclone.conf"
2022/05/06 09:39:11 DEBUG : fs cache: adding new entry for parent of "queue/precure/tropical-rouge/movie-ring.sqfs", "/home/manriki/sqfs/queue/precure/tropical-rouge"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "bdmv-all:squashfs/"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary03"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary04"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary06"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary05"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary08"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary09b"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary07"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary01"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary02"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary10b"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary08/squashfs"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary05/squashfs"
2022/05/06 09:39:11 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary01/squashfs"
2022/05/06 09:39:12 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary04/squashfs"
2022/05/06 09:39:12 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary10b/squashfs"
2022/05/06 09:39:12 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary06/squashfs"
2022/05/06 09:39:12 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary07/squashfs"
2022/05/06 09:39:12 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary02/squashfs"
2022/05/06 09:39:12 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary09b/squashfs"
2022/05/06 09:39:12 DEBUG : Creating backend with remote "ia-bdmv:squashfslibrary03/squashfs"
2022/05/06 09:39:12 DEBUG : union root 'squashfs/': actionPolicy = *policy.EpAll, createPolicy = *policy.Lfs, searchPolicy = *policy.FF
2022-05-06 09:39:12 DEBUG : movie-ring.sqfs: Need to transfer - File not found at Destination
2022-05-06 10:07:48 NOTICE: Time may be set wrong - time from "s3.us.archive.org" is 28m35.36078711s different from this computer
Transferred:       38.877 GiB / 38.877 GiB, 100%, 9.820 MiB/s, ETA 0s
Transferred:            0 / 1, 0%
Elapsed time:     28m50.2s
Transferring:
 *                               movie-ring.sqfs:100% /38.877Gi, 9.820Mi/s, 0sSIGQUIT: quit
PC=0x7bdec m=0 sigcode=0

goroutine 0 [idle]:
runtime.futex()
	runtime/sys_linux_arm64.s:596 +0x1c
runtime.futexsleep(0x0?, 0x0?, 0xffffd3544cd8?)
	runtime/os_linux.go:66 +0x2c
runtime.notesleep(0x27fa548)
	runtime/lock_futex.go:159 +0x94
runtime.mPark(...)
	runtime/proc.go:1449
runtime.stopm()
	runtime/proc.go:2228 +0x88
runtime.findrunnable()
	runtime/proc.go:2804 +0x964
runtime.schedule()
	runtime/proc.go:3187 +0x238
runtime.park_m(0xe9ef576?)
	runtime/proc.go:3336 +0x144
runtime.mcall()
	runtime/asm_arm64.s:186 +0x54

goroutine 1 [syscall]:
syscall.Syscall(0x3f, 0x9, 0x40008c8000, 0x8000)
	syscall/asm_linux_arm64.s:9 +0x10
syscall.read(0x4000a72180?, {0x40008c8000?, 0x400077cbe8?, 0x14fc8?})
	syscall/zsyscall_linux_arm64.go:696 +0x48
syscall.Read(...)
	syscall/syscall_unix.go:188
internal/poll.ignoringEINTRIO(...)
	internal/poll/fd_unix.go:794
internal/poll.(*FD).Read(0x4000a72180?, {0x40008c8000?, 0x8000?, 0x8000?})
	internal/poll/fd_unix.go:163 +0x210
os.(*File).read(...)
	os/file_posix.go:31
os.(*File).Read(0x40003c0178, {0x40008c8000?, 0x8000?, 0x8000?})
	os/file.go:119 +0x68
github.com/rclone/rclone/backend/local.fadviseReadCloser.Read({0x40008c6900?, {0x1ab7ee8?, 0x40003c0178?}}, {0x40008c8000?, 0x1?, 0x40008c1610?})
	github.com/rclone/rclone/backend/local/fadvise_unix.go:157 +0x48
io.copyBuffer({0x1aafc20, 0x40009d65b8}, {0xffff7d711350, 0x40009d65a0}, {0x0, 0x0, 0x0})
	io/io.go:426 +0x168
io.Copy(...)
	io/io.go:385
github.com/rclone/rclone/fs/hash.StreamTypes({0xffff7d711350, 0x40009d65a0}, 0x0?)
	github.com/rclone/rclone/fs/hash/hash.go:118 +0x234
github.com/rclone/rclone/backend/local.(*Object).Hash(0x4000d68720, {0x4000055130?, 0x400077d038?}, 0x1)
	github.com/rclone/rclone/backend/local/local.go:906 +0x638
github.com/rclone/rclone/backend/internetarchive.(*Fs).getHashes(0x4000c0e1e0?, {0x1abee88, 0x4000118000}, {0xffff7d4e2490, 0x4000d68720})
	github.com/rclone/rclone/backend/internetarchive/internetarchive.go:788 +0x1a8
github.com/rclone/rclone/backend/internetarchive.(*Object).Update(0x400049eb60, {0x1abee88?, 0x4000118000}, {0x1aae060, 0x4000eca100}, {0xffff7d4e2490, 0x4000d68720}, {0x83ae26f4?, 0x400077d408?, 0x1c290?})
	github.com/rclone/rclone/backend/internetarchive/internetarchive.go:715 +0x7d8
github.com/rclone/rclone/backend/internetarchive.(*Fs).Put(0x400062c790, {0x1abee88, 0x4000118000}, {0x1aae060, 0x4000eca100}, {0xffff7d4e2490?, 0x4000d68720?}, {0x4000367780, 0x1, 0x1})
	github.com/rclone/rclone/backend/internetarchive/internetarchive.go:442 +0x168
github.com/rclone/rclone/backend/union/upstream.(*Fs).Put(0x40003c2080, {0x1abee88?, 0x4000118000?}, {0x1aae060?, 0x4000eca100?}, {0xffff7d4e2490?, 0x4000d68720?}, {0x4000367780?, 0x4000abd558?, 0xf7cd3c?})
	github.com/rclone/rclone/backend/union/upstream/upstream.go:171 +0x68
github.com/rclone/rclone/backend/union.(*Fs).put(0x40009ae2a0, {0x1abee88?, 0x4000118000}, {0x1aae060, 0x4000eca100}, {0xffff7d4e2490, 0x4000d68720}, 0x0, {0x4000367780?, 0x1, ...})
	github.com/rclone/rclone/backend/union/union.go:467 +0x230
github.com/rclone/rclone/backend/union.(*Fs).Put(0x4000eca100?, {0x1abee88, 0x4000118000}, {0x1aae060, 0x4000eca100}, {0xffff7d4e2490?, 0x4000d68720?}, {0x4000367780, 0x1, 0x1})
	github.com/rclone/rclone/backend/union/union.go:518 +0xfc
github.com/rclone/rclone/fs/operations.Copy({0x1abee88, 0x4000118000}, {0x1acf7f8, 0x40009ae2a0}, {0x0?, 0x0?}, {0xffffd3545619, 0xf}, {0x1ace7c8?, 0x4000d68720})
	github.com/rclone/rclone/fs/operations/operations.go:472 +0x1304
github.com/rclone/rclone/fs/operations.moveOrCopyFile({0x1abee88, 0x4000118000}, {0x1acf7f8, 0x40009ae2a0}, {0x1ace758?, 0x400058ea20?}, {0xffffd3545619, 0xf}, {0xffffd3545619, 0xf}, ...)
	github.com/rclone/rclone/fs/operations/operations.go:1894 +0x734
github.com/rclone/rclone/fs/operations.CopyFile(...)
	github.com/rclone/rclone/fs/operations/operations.go:1916
github.com/rclone/rclone/cmd/copy.glob..func1.1()
	github.com/rclone/rclone/cmd/copy/copy.go:88 +0x98
github.com/rclone/rclone/cmd.Run(0x1, 0x1, 0x2?, 0x400077dd20)
	github.com/rclone/rclone/cmd/cmd.go:255 +0x10c
github.com/rclone/rclone/cmd/copy.glob..func1(0x27dc960?, {0x40007f9c20, 0x2, 0xa})
	github.com/rclone/rclone/cmd/copy/copy.go:84 +0xa0
github.com/spf13/cobra.(*Command).execute(0x27dc960, {0x40007f9ae0, 0xa, 0xa})
	github.com/spf13/cobra@v1.4.0/command.go:860 +0x4c4
github.com/spf13/cobra.(*Command).ExecuteC(0x27d80e0)
	github.com/spf13/cobra@v1.4.0/command.go:974 +0x360
github.com/spf13/cobra.(*Command).Execute(...)
	github.com/spf13/cobra@v1.4.0/command.go:902
github.com/rclone/rclone/cmd.Main()
	github.com/rclone/rclone/cmd/cmd.go:553 +0x78
main.main()
	github.com/rclone/rclone/rclone.go:14 +0x20

goroutine 8 [select]:
go.opencensus.io/stats/view.(*worker).start(0x40000d2000)
	go.opencensus.io@v0.23.0/stats/view/worker.go:276 +0x88
created by go.opencensus.io/stats/view.init.0
	go.opencensus.io@v0.23.0/stats/view/worker.go:34 +0xa8

goroutine 112 [chan receive]:
github.com/rclone/rclone/backend/local.(*fadvise).worker(0x4000ea8ac0)
	github.com/rclone/rclone/backend/local/fadvise_unix.go:115 +0x3c
created by github.com/rclone/rclone/backend/local.newFadvise
	github.com/rclone/rclone/backend/local/fadvise_unix.go:75 +0x130

goroutine 133 [select]:
github.com/rclone/rclone/cmd.startProgress.func3()
	github.com/rclone/rclone/cmd/progress.go:57 +0xe0
created by github.com/rclone/rclone/cmd.startProgress
	github.com/rclone/rclone/cmd/progress.go:49 +0x190

goroutine 111 [select]:
github.com/rclone/rclone/fs/accounting.(*StatsInfo).averageLoop(0x4000003520)
	github.com/rclone/rclone/fs/accounting/stats.go:312 +0x104
created by github.com/rclone/rclone/fs/accounting.(*StatsInfo).startAverageLoop.func1
	github.com/rclone/rclone/fs/accounting/stats.go:341 +0x78

goroutine 528 [chan receive]:
github.com/rclone/rclone/backend/local.(*fadvise).worker(0x40008c6900)
	github.com/rclone/rclone/backend/local/fadvise_unix.go:115 +0x3c
created by github.com/rclone/rclone/backend/local.newFadvise
	github.com/rclone/rclone/backend/local/fadvise_unix.go:75 +0x130

goroutine 177 [select]:
github.com/rclone/rclone/fs/accounting.(*Account).averageLoop(0x4000eca100)
	github.com/rclone/rclone/fs/accounting/accounting.go:212 +0xb8
created by github.com/rclone/rclone/fs/accounting.newAccountSizeName
	github.com/rclone/rclone/fs/accounting/accounting.go:110 +0x3f8

r0      0x27fa548
r1      0x80
r2      0x0
r3      0x0
r4      0x0
r5      0x0
r6      0x1
r7      0x7
r8      0x62
r9      0x1
r10     0x0
r11     0x3
r12     0x0
r13     0x0
r14     0x1
r15     0x1
r16     0xffffd3535270
r17     0xffffd3544ce0
r18     0x0
r19     0x8
r20     0xffffd3544c90
r21     0x27fa400
r22     0x4000004000
r23     0x1772300
r24     0x4000b052f8
r25     0xf2edf6a68a8c2da7
r26     0x1777dc0
r27     0x27c03e8
r28     0x27f9b20
r29     0xffffd3544c48
lr      0x4399c
sp      0xffffd3544c50
pc      0x7bdec
fault   0x0

I didn't know ObjectInfo.Hash from the local fs would actually calculate hashes!
That's why it's bypassing --ignore-checksum!

Oh, I see - it is the local hashes which are taking ages to calculate, not the Internet Archive ones.

There is no point calculating them here, is there? So the backend should just not calculate them if wait_archive = 0, and all should be well.
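
To make that concrete, the guard could look something like this - a hypothetical sketch against the call chain in the backtrace above (Update → getHashes → local Object.Hash), not necessarily how the eventual fix is implemented:

// Hypothetical sketch only: skip computing source hashes when the user
// has asked not to wait for the archive. Hashing a local source means
// reading the whole file again, which is what made big uploads appear
// to stall after the transfer finished.
func (f *Fs) getHashes(ctx context.Context, src fs.ObjectInfo) map[hash.Type]string {
        if f.opt.WaitArchive == 0 {
                return nil
        }
        sums := map[hash.Type]string{}
        for _, ty := range []hash.Type{hash.MD5, hash.SHA1, hash.CRC32} {
                sum, err := src.Hash(ctx, ty) // for local files this re-reads the data
                if err == nil && sum != "" {
                        sums[ty] = sum
                }
        }
        return sums
}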

It is intended to be used as one piece of evidence to detect file changes...
Or, since I know how to add custom metadata to a file at upload time, should the backend use that instead?

Here's a quick test to see how metadata changes:

first: python3 ~/ia upload lesmi-rclone-test /bin/bash -H 'x-amz-filemeta-testcode:hellothisisoldfile'

[screenshot: item metadata after the first upload, showing testcode: hellothisisoldfile]

second: python3 ~/ia upload lesmi-rclone-test /bin/sh -H 'x-amz-filemeta-testcode:boysnextdoor' --remote-name bash

[screenshot: item metadata after the second upload, showing testcode: boysnextdoor]

(notice the difference in testcode)

Custom metadata seems usable for this purpose. There's no point in using checksums now; let's use it.
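
To illustrate the idea (purely a sketch: the metadata key name is made up, authentication headers are omitted, and this is not what the backend actually does), writing a custom tracking value at upload time over the same IAS3-style interface the ia commands above use might look like:

package main

import (
        "fmt"
        "net/http"
        "os"
)

func main() {
        f, err := os.Open("movie-ring.sqfs")
        if err != nil {
                panic(err)
        }
        defer f.Close()

        st, err := f.Stat()
        if err != nil {
                panic(err)
        }

        // PUT the file to the item, attaching a custom file-level metadata
        // header - the same mechanism as -H 'x-amz-filemeta-...' in the ia
        // examples above. The key name is hypothetical; auth headers omitted.
        req, err := http.NewRequest(http.MethodPut,
                "https://s3.us.archive.org/lesmi-rclone-test/movie-ring.sqfs", f)
        if err != nil {
                panic(err)
        }
        req.ContentLength = st.Size()
        req.Header.Set("x-amz-filemeta-rclone-update-track", "upload-2022-05-06T09:39:11Z")

        resp, err := http.DefaultClient.Do(req)
        if err != nil {
                panic(err)
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
}

On the next sync the backend could read that value back from the item's metadata and compare it, instead of relying on checksums that may not exist yet.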

Edit: backend/internetarchive: fix uploads can take very long time by Lesmiscore · Pull Request #6156 · rclone/rclone · GitHub

I have merged @Lesmiscore's changes into the beta - it should be ready in 15-30 minutes. Thank you.


Thank you very much!
Much appreciated.

