Listing from the command line slower than for others and the Web UI

What is the problem you are having with rclone?

rclone ls (or any listing command) runs noticeably slower for me than the Web UI or any tutorial I have seen on YouTube. For other YouTubers it looks near-instant, but for me the command takes 2-4 seconds. I have only tested with Google Drive, and I am not sure whether this is expected; if it is normal, sorry for wasting your time. My Google Drive is about 90% full (of 15 GB), in case that matters. Any advice on making this faster is welcome. At a time I am only listing about 2-10 files, with at most one or two nested folders. In those 2-4 seconds I am listing a single folder (about three levels below the root) that contains only 1-2 files.

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.1

  • os/version: Microsoft Windows 10 Pro 21H2 (64 bit)
  • os/kernel: 10.0.19044.2006 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.18.5
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

./rclone.exe ls gcloud:/emulation/saves

The rclone config contents with secrets removed.

type = drive
client_id =
client_secret = mysecret
scope = drive
token = {"access_token":"longtoken"}

A log from the command with the -vv flag

2022/09/16 17:22:11 DEBUG : rclone: Version "v1.59.1" starting with parameters ["C:\\Programming\\repositories\\CloudSync\\windows\\rclone.exe" "ls" "gcloud:/emulation/saves" "-vv" "--log-file=file.log"]
2022/09/16 17:22:11 DEBUG : Creating backend with remote "gcloud:/emulation/saves"
2022/09/16 17:22:11 DEBUG : Using config file from "C:\\Users\\matth\\AppData\\Roaming\\rclone\\rclone.conf"
2022/09/16 17:22:11 DEBUG : Google drive root 'emulation/saves': 'root_folder_id = 0AMD5PyLxHOt8Uk9PVA' - save this in the config to speed up startup
2022/09/16 17:22:11 DEBUG : fs cache: renaming cache item "gcloud:/emulation/saves" to be canonical "gcloud:emulation/saves"
2022/09/16 17:22:12 DEBUG : 4 go routines active

Where's the error? That took 1 second.

You can add that to your rclone.conf to speed up startup.
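For reference, with the root_folder_id from the debug log added, the remote's section in rclone.conf would look something like this (secrets redacted as in the config above):

```
[gcloud]
type = drive
client_id =
client_secret = mysecret
scope = drive
token = {"access_token":"longtoken"}
root_folder_id = 0AMD5PyLxHOt8Uk9PVA
```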

There is no error, it's just slow. I added it but saw no difference in speed: still 2-4 seconds per run, while the Web UI is instant.

I'm not sure what you mean by the WebUI.

Your log file is 1 second so that's all I can see.

This is the Web UI (GUI) I mean.

Trying it again over and over right now, I timed it: it's 2-4 seconds, most of the time 3 seconds or longer.

In this video, at the timestamp, you can see how much faster this user's command runs.

You'd have to share the log file with that timing, as the one you shared is 1 second.

The rclone Web UI runs the same command, so it would take the same time; nothing is different there.

The timestamp shows the timing from before, but OK, here is another one. I can try to send a screen recording or something instead.
file.log (886 Bytes)

The last run here was 3 seconds.

Edit: sometimes I see 1 sec (it's more like 1.7 sec), but most of the time it's 2.5 sec or more.

If you are running a command, the timing will generally be variable, since it depends on how many files there are and how Google responds; nothing is cached.

There's no file output or anything, so I can't tell what you are listing.

etexter@seraphim Downloads % rclone ls GD: -vv
2022/09/16 21:27:32 DEBUG : rclone: Version "v1.59.1" starting with parameters ["rclone" "ls" "GD:" "-vv"]
2022/09/16 21:27:32 DEBUG : Creating backend with remote "GD:"
2022/09/16 21:27:32 DEBUG : Using config file from "/Users/etexter/.config/rclone/rclone.conf"
2022/09/16 21:27:32 DEBUG : Google drive root '': 'root_folder_id = 0AGoj85v3xeadUk9PVA' - save this in the config to speed up startup
2022/09/16 21:27:33 DEBUG : AppTestScript: No export formats found for "application/"
2022/09/16 21:27:33 DEBUG : Untitled project: No export formats found for "application/"
       -1 Joeisms.docx
      327 hosts
536870912 sparse_file
       -1 test.xlsx
     8739 test.xlsx
       -1 Copy of Bell Curve - II.xlsx
       -1 Copy of Bell Curve - II.xlsx
       -1 Untitled spreadsheet.xlsx
      278 nothosts
      278 hostsagain
        0 test.tar
       -1 TestDoc.docx
      184 testcopy
   306520 RDManager
1504953150 jellyfish-400-mbps-4k-uhd-hevc-10bit.mkv
      375 crypt/rs88l6p51j4kp0g9if3j294ts0
       -1 test/testsheet.xlsx
      316 crypt/fns8mdipv34944sh8cv8kqpntc/fns8mdipv34944sh8cv8kqpntc/hosts
      364 crypt/vgiitg6j8vuehta15ci1ig135g/vgiitg6j8vuehta15ci1ig135g/rs88l6p51j4kp0g9if3j294ts0
      364 crypt/fns8mdipv34944sh8cv8kqpntc/vgiitg6j8vuehta15ci1ig135g/rs88l6p51j4kp0g9if3j294ts0
       -1 appsheet/data/SimpleInventory-1001251045/Items.xlsx
        0 appsheet/data/SimpleInventory-1001251045/empty.txt
     9112 appsheet/data/SimpleInventory-1001251045/Items_Images/item-567.Image.211512.png
    13181 appsheet/data/SimpleInventory-1001251045/Items_Images/item-345.Image.211436.png
    32409 appsheet/data/SimpleInventory-1001251045/Items_Images/item-123.Image.211420.png
2022/09/16 21:27:34 DEBUG : 20 go routines active

For a small number of files, I generally get times of 1-2 seconds.

I understand there is some variability in the timing, but I can see people in the tutorials getting almost real-time feedback when they run this. Maybe the Web UI caches, but it's pretty much instant; I am trying to make a screen recording.

Also, my output is:

C:\Programming\repositories\CloudSync\windows>rclone.exe ls gcloud:emulation/saves -vv
2022/09/16 18:31:44 DEBUG : rclone: Version "v1.59.1" starting with parameters ["rclone.exe" "ls" "gcloud:emulation/saves" "-vv"]
2022/09/16 18:31:44 DEBUG : Creating backend with remote "gcloud:emulation/saves"
2022/09/16 18:31:44 DEBUG : Using config file from "C:\\Users\\matth\\AppData\\Roaming\\rclone\\rclone.conf"
2022/09/16 18:31:45 DEBUG : Google drive root 'emulation/saves': 'root_folder_id = 0AMD5PyLxHOt8Uk9PVA' - save this in the config to speed up startup
  2744320 0100d12014fc2000/savedata1.enc
    81920 0100d12014fc2000/savedata0.enc
2022/09/16 18:31:49 DEBUG : 4 go routines active


There are only 2 files in that folder.

You'd want to add that to your rclone.conf to speed up startup.

I added the line and I still get over 2 sec; some runs are around 1.6 sec, but nothing under or close to 1 sec.

I am working on the video of the GUI.

Here it is with the GUI. You run:

rclone rcd --rc-web-gui

Then it launches your browser. With the browser's inspection tools you can see each command takes less than 1 second; most quick, small folders take about 200-400 ms, while on the command line each run takes 1.7-4 seconds. I don't think it caches until you have run a listing at least once, so I killed the server and ran through again to get uncached times. With the cache it is pretty much instant. 300 ms is a lot better than 1.7-3.5 seconds or more. I am not sure why it is like this (I haven't fully read the source code to figure out why). It seems the web side is just running commands against rclone.exe remotely (hence "rc"); I can't imagine why it would be faster.

There's also this thread, How to speed-up opening folders/traversing directories? - #2 by Daniel_Krajnik, which says it's precached. But in my testing the web side was still faster: I would change the folder structure on Google Drive, open the folder in the GUI explorer, and within 300 ms it showed the updated contents.
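My current guess is that the GUI just POSTs to the HTTP remote-control API of the long-running rclone rcd process, so each listing reuses an already-initialized backend instead of paying process startup and config/token loading every time. A rough sketch of that kind of call (the endpoint and parameters are from the rc docs; localhost:5572 is the default address, and this assumes the daemon was started with --rc-no-auth, which is for local testing only):

```python
import json
import urllib.request

RC_URL = "http://localhost:5572"  # rclone rcd's default listen address


def rc_payload(fs, remote):
    """JSON body for an operations/list call: fs is the remote name,
    remote is the path within it."""
    return json.dumps({"fs": fs, "remote": remote}).encode()


def rc_list(fs, remote):
    """POST to the running rcd daemon. The daemon keeps the backend warm,
    so there is no per-call startup or token-loading cost.
    Assumes rcd was started with --rc-no-auth (local testing only)."""
    req = urllib.request.Request(
        RC_URL + "/operations/list",
        data=rc_payload(fs, remote),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["list"]


# e.g. rc_list("gcloud:", "emulation/saves") while rcd is running
```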


I'm trying to figure out the first part, as rclone run directly against a remote, without any cache or mount, is a fresh call every time. Nothing will ever be cached.

Comparing that to a mount or the Web UI won't tell you much, as the mount and the Web UI cache things, so you really can't compare the two.

Though it seems the first run in the Web UI isn't cached: I can unplug the internet, or change the file remotely, and within 300 ms it will report the correct file, so it can't be cached until the next identical run (maybe).

Is there a way to cache whatever initialization is happening and just have it ping the server already logged in and everything? I am trying to do this on a Steam Deck and would prefer not to keep a long-running service in the background if possible. Is there any way to do this with a cache or mount that would be faster but not eat battery life?

Most folks use mounts:

rclone mount

That does caching for files.
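A minimal example of a mount command (the drive letter, cache mode, and directory cache time here are illustrative choices, not requirements):

```
rclone mount gcloud: X: --vfs-cache-mode writes --dir-cache-time 30m
```

The mount keeps a directory cache, so repeated listings of the same folder don't have to go back to Google each time.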

Hi Matthew,

It is probably something that makes rclone slow to start or to establish the first connection, since it is fast once started and connected (as seen in the web GUI). My best guess is something related to your antivirus, firewall, proxy, or DNS.

You may try to rule out slow rclone startup (due to antivirus etc.) by testing the speed of:

rclone version
rclone config show

Try to establish a best-case baseline by listing a local folder:

rclone lsd .

Try testing another cloud provider:

rclone lsd onedrive:

Try testing from another computer on the same LAN.

Try testing with your computer connected to another LAN/router.

If this doesn't help, then you can try tracing the communication with:

rclone lsd gcloud: --dump headers -vv
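To take stopwatch error out of these comparisons, you could also time repeated runs with a small harness; this is just an illustrative sketch (the function name is mine), and you would pass it your actual rclone command:

```python
import subprocess
import sys
import time


def time_command(args, runs=5):
    """Run a command `runs` times and return each run's wall-clock seconds,
    so min/median/max can be compared across machines and setups."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(args, capture_output=True)
        timings.append(time.perf_counter() - start)
    return timings


# e.g. time_command(["rclone", "lsd", "gcloud:"]) on a machine with rclone;
# [sys.executable, "-c", "pass"] times a no-op process as a sanity check.
```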

Hello. Any suggestions for loosening my protection to see if I can get it faster? I turned off Windows Defender but didn't see a difference.

Local folder is instant.

--dump headers is useful; I guess this makes sense, as it's doing 4 HTTP requests.
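Back of the envelope, that request count lines up with the observed times: if the requests run sequentially and each HTTPS round trip to Drive costs a noticeable fraction of a second, a nested path adds up fast. A toy model (purely my assumption, not rclone internals):

```python
def estimated_list_time(round_trips, rtt_s=0.5):
    """Toy model: sequential HTTP requests dominate, so total time is
    roughly round trips times per-request latency (0.5 s is a guess,
    not a measurement)."""
    return round_trips * rtt_s


# 4 sequential requests at ~0.5 s each would give about 2 s, in the
# same ballpark as the 2-4 s runs above.
```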

I set up OneDrive, and I get about the same speed as Google Drive.

The other thing I found reading Google Drive's API docs is that you can read by folder ID:

rclone.exe ls gcloud: --drive-root-folder-id=<folder id>

This command is really fast, around 450-600 ms, which is what I expect (I wonder if sync and copy are faster with the root ID too), and the OneDrive ID works as well. I can run the normal way once and cache the IDs. I assume these folder IDs don't change unless the user changes them? Do you know if there is a way to detect the provider from the remote name?

Edit: found it:

rclone listremotes --long

Is it possible that folder IDs change even when the user doesn't change them (rename, move, etc.)? Also, is there a way to use lsf on a folder path and get the folder ID of the folder itself, or do I need to check from the parent?
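For the lsf part: rclone lsf has a --format option where i is the object ID, so listing the parent with --dirs-only --format pi gives each child folder's name and ID (separated by ; by default). A sketch of how the IDs could be collected and cached; the function names are mine, and the subprocess call assumes rclone is on PATH:

```python
import subprocess


def parse_lsf_ids(text, sep=";"):
    """Parse `rclone lsf --dirs-only --format pi` output into a
    {folder name: folder ID} mapping."""
    ids = {}
    for line in text.splitlines():
        if not line:
            continue
        path, _, folder_id = line.partition(sep)
        ids[path.rstrip("/")] = folder_id
    return ids


def child_folder_ids(remote_path):
    """List the immediate child folders of remote_path with their IDs.
    remote_path is e.g. "gcloud:emulation"; assumes rclone is on PATH."""
    out = subprocess.run(
        ["rclone", "lsf", remote_path, "--dirs-only", "--format", "pi"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_lsf_ids(out)
```

The result could then be stored keyed by remote path and reused with --drive-root-folder-id, as described above.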

I am confused, didn't you follow the first suggestion you got?

What is your redacted output from:

rclone config show gcloud:
type = drive
client_id =
client_secret = ......
scope = drive
token = {"......"}
root_folder_id = 0AMD5PyLxHOt8Uk9PVA

I have it, but it made no difference. What I meant in my last post is that I found the folder ID of the specific folder I wanted to ls, and that seems to skip a few HTTP requests spent looking up the folder ID before getting the files, so it is much faster. I get a 3x speed-up on listing compared to before, about 2x on download, and 3x on upload. Now my code looks up the folder ID of the folder I want to upload to and caches it in a DB until I need it next time for a specific remote path. Too bad folder IDs only work on 6 cloud services.