Running rclone in single-threaded mode

What is the problem you are having with rclone?

I'm trying to run rclone on the iOS app iSH, which "emulates x86 instructions".
iSH is known to have certain concurrency bugs, such as git clone stalling while resolving deltas (ish-app/ish issue #943 on GitHub), so it is recommended to run git in single-threaded mode via global flags.
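For reference, those global flags look something like this (pack.threads and index.threads are standard git configuration keys; setting them to 1 forces the packing and index code to run single-threaded):

git config --global pack.threads 1
git config --global index.threads 1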

I wonder if the same thing is possible with rclone. rclone on iSH sometimes works well, but randomly crashes after some time of operation, and I would like to try a single-threaded mode to see whether it works better. When I run certain commands with the -i flag, they do not seem to crash.

I have seen that flags like --multi-thread-streams=N exist, but that only governs the use of multiple streams when downloading large files. I am wondering whether a similar single-threaded mode exists for rclone.

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: alpine 3.14.3
  • os/kernel: 4.20.69-ish (i686)
  • os/type: linux
  • os/arch: 386
  • go/version: go1.17.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone check -P --size-only [my-remote-name]: /mnt/[my-directory-name]

The rclone config contents with secrets removed.

[my-remote-name]
type = drive
client_id = [redacted]
client_secret = [redacted]
scope = drive
token = {"access_token":"redacted","token_type":"Bearer","refresh_token":"redacted","expiry":"2022-01-19T16:29:19.519009Z"}

A log from the command with the -vv flag

There's no rclone option to limit OS threads, as that's buried way down in the Go runtime.

You can reduce --transfers and --checkers to 1 each.
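With the original command, that would look something like this (--transfers and --checkers are standard rclone flags; the remote and path are the placeholders from above):

rclone check -P --size-only --transfers 1 --checkers 1 [my-remote-name]: /mnt/[my-directory-name]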

What you want is to tell the Go runtime to use only one thread... Go threads (as set with --transfers/--checkers) are not OS threads; the Go runtime multiplexes them onto OS threads.

I think if you set the environment variable export GOMAXPROCS=1, that should make Go run with only one OS thread active at once. As I understand the docs, there may be other inactive OS threads though, so this may not work depending on exactly what the bug in iSH is!

It would probably be worth setting the environment variable export GODEBUG=asyncpreemptoff=1 to turn off preemptive scheduling of the Go threads, as that is a likely source of bugs in iSH (there have been quite a few bugs in the Linux kernel with this!).
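Putting both together, a test run might look like this (GOMAXPROCS and GODEBUG are standard Go runtime environment variables; the remote and path are your placeholders from above):

export GOMAXPROCS=1
export GODEBUG=asyncpreemptoff=1
rclone check -P --size-only [my-remote-name]: /mnt/[my-directory-name]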

Let me know if any of those help!


I thought that controlled processes, not threads:

felix@gemini:/$  export GOMAXPROCS=1
felix@gemini:/$ rclone size DB:

and

felix@gemini:~$ cat /proc/94703/status | grep Threads
Threads:	6

Setting GODEBUG=asyncpreemptoff=1 has fixed the issue. Thanks a lot! I mainly use rclone for Obsidian vault synchronization on iOS. It is great that I can use great existing CLI tools in a walled garden like iOS.
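For anyone else on iSH, one way to make the setting persist across sessions (a sketch, assuming iSH's default ash shell reads ~/.profile on login) is:

echo 'export GODEBUG=asyncpreemptoff=1' >> ~/.profile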

It isn't a very well-named variable, but yes, it controls the number of threads the Go runtime starts. (In the Linux world a thread is pretty much a new process, just with more memory sharing going on, so I expect that is where the confusion came from!)

I think what is happening here is that when the Go runtime is blocked on a syscall it starts a new thread regardless of GOMAXPROCS - a thread which will die once the syscall has returned.

So GOMAXPROCS regulates the number of active threads - there may be other threads waiting on syscalls.
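One way to watch this while rclone is running (a sketch; pidof comes from BusyBox on Alpine, so it should be present in iSH, and it assumes a single rclone process):

grep Threads /proc/$(pidof rclone)/status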

I think that is right - the Go runtime is complicated!

Great!

This is, technically speaking, a bug in iSH, so you should probably report it upstream if you can.

That seems pretty close, and I'd say good enough, as the 2nd thread does things here and there but not all 6.

felix@gemini:~$ top -H -p 921752
top - 08:23:07 up 16:15,  4 users,  load average: 1.99, 2.08, 1.89
Threads:   6 total,   0 running,   6 sleeping,   0 stopped,   0 zombie
%Cpu(s):  6.4 us,  1.8 sy,  1.6 ni, 88.4 id,  1.1 wa,  0.0 hi,  0.8 si,  0.0 st
MiB Mem :  31992.6 total,    255.7 free,   6430.7 used,  25306.2 buff/cache
MiB Swap:   8192.0 total,   6846.0 free,   1346.0 used.  25100.4 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 921755 felix     20   0  746780  46276  28300 S   7.3   0.1   0:01.53 rclone
 921757 felix     20   0  746780  46276  28300 S   5.0   0.1   0:02.50 rclone
 921753 felix     20   0  746780  46276  28300 S   0.7   0.1   0:00.19 rclone
 921752 felix     20   0  746780  46276  28300 S   0.0   0.1   0:00.01 rclone
 921754 felix     20   0  746780  46276  28300 S   0.0   0.1   0:00.00 rclone
 921756 felix     20   0  746780  46276  28300 S   0.0   0.1   0:00.00 rclone

Whereas you get this without it:

%Cpu(s):  2.5 us,  1.8 sy,  2.5 ni, 91.1 id,  1.2 wa,  0.0 hi,  1.0 si,  0.0 st
MiB Mem :  31992.6 total,    237.8 free,   6328.2 used,  25426.6 buff/cache
MiB Swap:   8192.0 total,   6845.5 free,   1346.5 used.  25202.8 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
 922005 felix     20   0  748828  46448  28896 S   5.3   0.1   0:00.36 rclone
 922003 felix     20   0  748828  46448  28896 S   3.0   0.1   0:00.18 rclone
 922000 felix     20   0  748828  46448  28896 S   2.7   0.1   0:00.27 rclone
 921996 felix     20   0  748828  46448  28896 S   2.0   0.1   0:00.40 rclone
 922006 felix     20   0  748828  46448  28896 S   2.0   0.1   0:00.22 rclone
 921999 felix     20   0  748828  46448  28896 S   1.7   0.1   0:00.23 rclone
 921995 felix     20   0  748828  46448  28896 S   1.3   0.1   0:00.21 rclone
 921998 felix     20   0  748828  46448  28896 S   1.3   0.1   0:00.37 rclone
 921993 felix     20   0  748828  46448  28896 S   1.0   0.1   0:00.12 rclone
 921994 felix     20   0  748828  46448  28896 S   0.7   0.1   0:00.24 rclone
 921997 felix     20   0  748828  46448  28896 S   0.7   0.1   0:00.17 rclone
 922004 felix     20   0  748828  46448  28896 S   0.7   0.1   0:00.22 rclone
 922001 felix     20   0  748828  46448  28896 S   0.3   0.1   0:00.46 rclone
 922010 felix     20   0  748828  46448  28896 S   0.3   0.1   0:00.26 rclone
 921992 felix     20   0  748828  46448  28896 S   0.0   0.1   0:00.01 rclone
 922002 felix     20   0  748828  46448  28896 S   0.0   0.1   0:00.00 rclone
 922007 felix     20   0  748828  46448  28896 S   0.0   0.1   0:00.00 rclone
 922008 felix     20   0  748828  46448  28896 S   0.0   0.1   0:00.18 rclone

Interesting analysis. I seem to remember reading that goroutines can swap between OS threads, which might mean threads that were used for blocked syscalls become active threads later. If that is the case, then Go is keeping those threads in a thread pool, which makes sense too.

That other thread could also be some kind of master thread. I don't know enough about the Go runtime internals to be sure!

I've never done that thread analysis before, so it was neat to learn and look at as well!

Always love the questions and learning something new from them.

