rclone ls (or any listing command) runs pretty slowly compared to the web UI or any tutorials I have seen on YouTube. I've seen it run near-instant for YouTubers, but for me the command takes 2-4 seconds. I have only tested with Google Drive, and I am not sure if this is expected; if it's normal, sorry for wasting your time. My Google Drive is 90% full (of 15 GB), though I'm not sure whether that causes any issues. Any advice on making this faster would be welcome. At a time I am only listing about 2-10 files with at most 1 or 2 nested folders; in those 2-4 seconds I am only listing a folder (about 3 levels deep from root) containing 1-2 files.
Run the command 'rclone version' and share the full output of the command.
rclone v1.59.1
os/version: Microsoft Windows 10 Pro 21H2 (64 bit)
os/kernel: 10.0.19044.2006 (x86_64)
os/type: windows
os/arch: amd64
go/version: go1.18.5
go/linking: static
go/tags: cmount
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
2022/09/16 17:22:11 DEBUG : rclone: Version "v1.59.1" starting with parameters ["C:\\Programming\\repositories\\CloudSync\\windows\\rclone.exe" "ls" "gcloud:/emulation/saves" "-vv" "--log-file=file.log"]
2022/09/16 17:22:11 DEBUG : Creating backend with remote "gcloud:/emulation/saves"
2022/09/16 17:22:11 DEBUG : Using config file from "C:\\Users\\matth\\AppData\\Roaming\\rclone\\rclone.conf"
2022/09/16 17:22:11 DEBUG : Google drive root 'emulation/saves': 'root_folder_id = 0AMD5PyLxHOt8Uk9PVA' - save this in the config to speed up startup
2022/09/16 17:22:11 DEBUG : fs cache: renaming cache item "gcloud:/emulation/saves" to be canonical "gcloud:emulation/saves"
2022/09/16 17:22:12 DEBUG : 4 go routines active
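For reference, the "save this in the config to speed up startup" hint above means adding the printed ID under the remote's section in rclone.conf, next to its existing keys, roughly like this:

[gcloud]
type = drive
root_folder_id = 0AMD5PyLxHOt8Uk9PVA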
I understand there is some variability in timing, but I can see people getting almost real-time feedback in the tutorials. Maybe the web UI caches, but it's pretty much instant; I am trying to make a screen recording to show it.
Also, my output is:
C:\Programming\repositories\CloudSync\windows>rclone.exe ls gcloud:emulation/saves -vv
2022/09/16 18:31:44 DEBUG : rclone: Version "v1.59.1" starting with parameters ["rclone.exe" "ls" "gcloud:emulation/saves" "-vv"]
2022/09/16 18:31:44 DEBUG : Creating backend with remote "gcloud:emulation/saves"
2022/09/16 18:31:44 DEBUG : Using config file from "C:\\Users\\matth\\AppData\\Roaming\\rclone\\rclone.conf"
2022/09/16 18:31:45 DEBUG : Google drive root 'emulation/saves': 'root_folder_id = 0AMD5PyLxHOt8Uk9PVA' - save this in the config to speed up startup
2744320 0100d12014fc2000/savedata1.enc
81920 0100d12014fc2000/savedata0.enc
2022/09/16 18:31:49 DEBUG : 4 go routines active
C:\Programming\repositories\CloudSync\windows>
then it launches your browser. With the browser's inspection tools you can see each command runs in less than 1 second; most small folders take about 200-400 ms, while on the command line each takes 1.7-4 seconds to run. I don't think it caches until you run a command at least once, so I killed the server and ran through again to get non-cached times. With the cache it is pretty much instant. 300 ms is a lot better than 1.7-3.5 seconds or more. I am not sure why it is like this (I haven't fully read the source code to figure out why). It seems the web UI is just running commands remotely against rclone.exe (hence RC), so I can't imagine why it is faster.
I'm trying to figure out the first part, as rclone run directly against a remote, without any cache or mount, is a fresh call every time. Nothing will ever be cached.
Comparing that to a mount or the WebUI won't do much, as the mount and the WebUI are caching things, so you really can't compare the two.
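That said, the WebUI is really just a long-running rc server, which is why repeat calls are fast: the process stays up with its connections already established. You can reproduce the same behaviour from the command line with something like this (assuming the default rc address and no auth; adjust as needed):

rclone rcd --rc-no-auth
rclone rc operations/list fs=gcloud: remote=emulation/saves

Each rc call reuses the running process instead of paying the startup and connection cost again.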
Though it seems the first run in the web UI isn't cached either. I can unplug the internet or change the file remotely and in under 300 ms it will report the correct file, so it can't be cached until the next identical run (maybe).
Is there a way to cache whatever initialization is happening and just have it ping the server already logged in and everything? I am trying to do stuff with the Steam Deck and would prefer not to keep a long-running service in the background if possible; is there any way I can do this with a cache or mount that would be faster but not eat battery life?
It is probably something that makes rclone slow to start or establish the first connection, since it is fast once started and connected (as seen in the web GUI). My best guess is something related to your antivirus, firewall, proxy, or DNS.
You may try to rule out slow rclone startup (due to antivirus etc.) by testing the speed of:
rclone version
rclone config show
Try to establish a best-case baseline by listing a local folder:
rclone lsd .
Try testing another cloud provider:
rclone lsd onedrive:
Try testing from another computer on the same LAN.
Try testing when your computer is connected to another LAN/router.
If this doesn't help, then you can try tracing the communication with something like:
rclone lsd gcloud: -vv --dump headers
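On Windows, PowerShell's built-in Measure-Command is a convenient way to time each of these, for example:

Measure-Command { rclone version }
Measure-Command { rclone lsd gcloud: }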
Hello. Any suggestions on loosening my protection to see if I can get it faster? I turned off Windows Defender but didn't see a difference.
Local folder is instant.
Dump headers is useful. I guess this makes sense; it's doing 4 HTTP requests.
I set up OneDrive, and I get about the same speed as with Google Drive.
The other thing I found out reading Google Drive's API docs is that you can read by folder ID:
rclone.exe ls gcloud: --drive-root-folder-id=<folder id>
This command is really fast, around 450-600 ms, which is what I expect (I wonder if sync and copy are faster with the root ID too), and the OneDrive ID works as well. I can run the normal way once and cache the IDs; I assume these folder IDs don't change unless the user changes them? Do you know if there is a way to detect the provider from the remote name?
Edit: found it
rclone listremotes --long
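That prints each remote's name together with its backend type, something like:

gcloud:   drive
onedrive: onedrive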
Is it possible folder IDs will change even if the user doesn't change them (rename, move, etc.)? Also, is there a way to use lsf on a folder path and get the folder ID of the folder itself, or do I need to check from the parent?
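(Reading the lsf docs, the i format character looks like it prints IDs, so listing from the parent with something like

rclone.exe lsf --dirs-only --format "pi" gcloud:emulation

should show each subfolder's path and ID; I haven't found a way to get a folder's own ID without going through its parent.)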
I have it, but it made no difference. What I meant in my last post is that I found the folder ID of the specific folder I wanted to ls, and that seems to skip a few HTTP requests spent looking up the folder ID before fetching the files, so it is much faster. I get a 3x speed-up on listing compared to before, and about 2x on download and 3x on upload. Now my code looks up the folder ID of the folder I want to upload to and caches it in a DB until I need it next time for a specific remote path. Too bad folder IDs only work on 6 cloud services.
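For anyone doing the same: the ID for a given folder can also be fetched once with lsjson (on Drive the ID field comes back in the JSON) and then passed in on later runs, roughly like this (the local path is just an example from my setup):

rclone.exe lsjson --dirs-only gcloud:emulation
rclone.exe sync C:\saves gcloud: --drive-root-folder-id=<ID of saves from the JSON above>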