Rclone cache panic error in latest beta

Sorry, I only have mobile access at the moment; please see the attached log.

The cache info age is set to 1 hour, which may help trigger the buggy rclone move operation.
Upload speed was limited to 800 kB/s, but not via the command line.
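For context, the "cache info" setting is the cache remote's info_age; a minimal rclone.conf sketch of the kind of setup being described (remote names are placeholders for illustration, not my exact config):

```ini
# Sketch of a cache remote wrapping Google Drive.
# "gdrive" and "GDCache" are placeholder names.
[gdrive]
type = drive

[GDCache]
type = cache
remote = gdrive:
info_age = 1h
```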

There is definitely no I/O error on the server's local file system, and no read/write access error on the local file. It only ever happens when doing the operation via the rclone cache remote.

If I do the same operation via rclone directly, not through the cache, everything finishes without error. But then the file isn't available in the cache even after its metadata info age has passed; it only shows up once I manually purge the cache.

The last command is in the bottom screenshot.

Am I correct to understand that the local file ladybird was deleted because the rclone move operation completed, AND the file that was transferred via the rclone cache to my GDrive was also deleted because the cache expired?
I can't seem to find the file either locally on my server or in GDrive…
Is it true, then, that the cache should be read-only and shouldn't offer the user any operation other than read access to files?

EDIT: it seems that I was one version behind on the beta; it was 1.39-something. I just updated rclone to v1.39-259-g4924ac2fβ-linux-amd64.
And rclone lsd GDCache: doesn't throw a panic error, so I will try rclone move via the cache again and will report if I encounter the same bug…

EDIT2: I can confirm there is a bug. See the second and third screenshots, where I tried the same command via the rclone cache remote and the non-cache remote on the latest beta…
So what is the correct and safest way to do rclone move, then?
My use case is like this: I have GDrive and GDCache wrapping GDrive for Plex. Sometimes I upload files to GDrive from this server, where I can use rclone move either via the cache or directly. Other times I have to upload from another server via the GDrive web UI or Windows Google File Stream.
I thought that setting the cache info age to 1 hour and uploading files via rclone move to GDCache: would do the trick, but there is the error below…
If I upload via rclone move to GDrive directly, the file isn't available in GDCache: until I manually purge the cache and delete the cache files…
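To make the two upload paths concrete, this is roughly what I mean (a sketch; remote names and paths are examples only, not my exact commands):

```shell
# Path 1: move through the cache remote, so the cache sees the new file.
# This is the path that triggers the panic below.
rclone move /local/media GDCache:media -v

# Path 2: move to the Drive remote directly, bypassing the cache.
# This works, but GDCache: won't show the file until the cache is purged.
rclone move /local/media GDrive:media -v
```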

@okcomp can you check your version with rclone -V?

Looking at the trace, that was run on an old build. By the looks of it, plain 1.39, if not even 1.38.

Make sure you're copying the rclone binary to a location in your PATH.

Never mind, I read your edits more carefully.

From the look of it, the move command seemed to work. I don't understand exactly what the issue is.

The 2nd screenshot still has the panic, which suggests it was run on the same older version.

@remus.bunduc

[detached from 31380.move]
root@jupiter:~# rclone -V
rclone v1.39
- os/arch: linux/amd64
- go version: go1.9.2
root@jupiter:~# rclone version
rclone v1.39
- os/arch: linux/amd64
- go version: go1.9.2
root@jupiter:~#

The binary used was always the correct latest beta. The second and third screenshots use the latest beta version mentioned above; the first screenshot used a beta one version older than that.

Don't you see the problem? File operations via the rclone cache remote (GDCache:) throw the panic error again, which had been resolved in an older beta a few weeks ago.
Please look at the first and second screenshots for this.

So when I use the rclone cache mount for Plex, I can't do file operations via rclone on GDCache: because this triggers the panic error. See the first screenshot at the bottom for the example command I used.
Thus I have to use rclone move to GDrive1:, bypassing the cache, for any file operations such as rclone move. This is the third screenshot.
But when I do this, the file isn't available via the rclone cache mount for Plex until I reboot my server, which restarts the systemd rclone mount unit, which runs --cache-db-purge at the start of the mount. Then I have to delete the cache folder manually as well.

I had thought that setting the cache info age to a shorter time would fix that problem, that it would invalidate the cache and force rclone to fetch the latest file and folder listing from Google's servers, but it doesn't…
I guess this is related to the panic error that seems to happen when rclone does file and folder operations via GDCache: (rclone on the Google Drive cache remote).
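For reference, the two cache flags involved here, as I understand them, are --cache-info-age (how long cached listings are trusted) and --cache-db-purge (wipe the cache database at startup). A hedged sketch of the kind of mount command being discussed (the mount point and remote name are examples):

```shell
# Mount the cache-wrapped remote for Plex. Listings older than 1h
# should be invalidated and re-fetched from Drive.
rclone mount GDCache: /mnt/media --cache-info-age 1h --allow-other

# The heavy hammer: start the mount with a fresh cache database.
# rclone mount GDCache: /mnt/media --cache-db-purge --allow-other
```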

It's clear from the output: the rclone in your PATH is the vanilla 1.39 without all the beta updates (including the one with the fix for your panic).

This is what you should have:
bunduc-mac:rclone bunduc$ ./rclone -V
rclone v1.39-259-g4924ac2f-master
- os/arch: darwin/amd64
- go version: go1.10

Do a which rclone, get the full path, and replace the rclone binary at that location.
@okcomp

Where do you see that it was an old version of rclone?
I have only one version of rclone, installed via

curl https://rclone.org/install.sh | sudo bash -s beta

Your version: rclone v1.39
A beta version (mine as an example): rclone v1.39-259-g4924ac2f-master

I see. I'm not sure about that script; perhaps it's bugged? You can get the binary for your OS directly from here: https://beta.rclone.org/v1.39-259-g4924ac2f/

@remus.bunduc

Thank you very much for this. You are right: I have multiple versions of rclone. It seems Ubuntu prioritizes the rclone in /usr/sbin over the one in /usr/bin/rclone, which is the path used when installing via rclone.deb or the install script.

I must have manually compiled rclone to /usr/sbin in the past, and that copy kept being prioritized by Ubuntu.
I will test rclone move via GDCache: again.
Apologies for wasting your time…

[detached from 31380.move]
root@jupiter:~# which rclone
/usr/sbin/rclone
root@jupiter:~# /usr/bin/rclone -V
rclone v1.39-259-g4924ac2fβ
- os/arch: linux/amd64
- go version: go1.10
root@jupiter:~# which rclone
/usr/sbin/rclone
root@jupiter:~# rclone -V
rclone v1.39
- os/arch: linux/amd64
- go version: go1.9.2
root@jupiter:~# rm /usr/sbin/rclone
root@jupiter:~# ln -s /usr/bin/rclone /usr/sbin/rclone
root@jupiter:~# rclone -V
rclone v1.39-259-g4924ac2fβ
- os/arch: linux/amd64
- go version: go1.10
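The PATH shadowing shown above can be reproduced without rclone installed at all; a sketch with two fake binaries standing in for /usr/sbin and /usr/bin (all names and version strings here are made up):

```shell
# Two throwaway dirs stand in for /usr/sbin and /usr/bin.
tmp=$(mktemp -d)
mkdir -p "$tmp/sbin" "$tmp/bin"

# A stale copy in "sbin" and the fresh beta in "bin".
printf '#!/bin/sh\necho stale-1.39\n'    > "$tmp/sbin/rclone"
printf '#!/bin/sh\necho beta-1.39-259\n' > "$tmp/bin/rclone"
chmod +x "$tmp/sbin/rclone" "$tmp/bin/rclone"

# sbin comes first in PATH, so the stale copy shadows the beta:
before=$(PATH="$tmp/sbin:$tmp/bin" rclone -V)

# Removing the stale copy lets lookup fall through to the beta:
rm "$tmp/sbin/rclone"
after=$(PATH="$tmp/sbin:$tmp/bin" rclone -V)

echo "$before -> $after"
```

Deleting the stale copy (or symlinking it to the real one, as in the transcript above) makes lookup fall through to the correct binary.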

No problem. Glad we got it sorted

Thank you. I can confirm that the latest beta does solve the panic error, and file and folder operations via GDCache: are OK.
As a bonus, Plex plays and scans much better. I can see from the debug log that the cached info files are correctly invalidated once they pass the configured cache info age.

That's by design: the information expires at that time and is refreshed from the source.

Once remote API polling is solid and known to be working you could set that to a week or more.

What I meant was that it's working in the latest beta. It wasn't working for me before: the cached info was retained way past the configured age, and it never tried to invalidate the cache and refresh from GDrive.

Yeah I currently experience something similar where changes aren’t being polled from Google drive with one of the recent betas.

That's great to know; I'm not the only one…
I thought I was getting shadow-banned by Google, because sometimes a file I uploaded, via the rclone cache or outside rclone, wasn't available via the cache. It also happens sort of randomly to recent files that I uploaded or modified.