Mount: Create drive BEFORE vfs fully processed?

What is the problem you are having with rclone?

I have a remote FTPS drive that I'm mounting locally to a drive letter, so that I can easily back up stuff to it with another service (actually, a downloader that pulls large amounts of data). The mount works fine and I can use it very nicely - since the drive is accessed over FTP, it is quite slow (regardless of whether I use --network-mode or not). This isn't a problem, except that the VFS cache fills up much faster than it empties, given the rate of the source downloader.

If I stop and start rclone, it will continue to empty the VFS cache to the FTP server as intended - BUT the mount does not complete until that finishes. That means the mounted volume doesn't appear until the VFS cache is entirely empty, which can take hours.

Is there a way to get the volume to show up at the beginning of the process instead of the end, so I can continue to use it?

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.0

  • os/version: Microsoft Windows 10 Pro N 22H2 (64 bit)
  • os/kernel: 10.0.19045.3448 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.21.1
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Remote FTPS hard drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount Berlin: X: --vfs-cache-mode full --transfers=10 -vv --vfs-cache-max-size 500G --vfs-cache-max-age 5000h

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

type = ftp
host = XXX
user = XXX
port = XXX
pass = XXX
explicit_tls = true
no_check_certificate = true

That's why we ask for a log, as that's a common misconception.

The way it works is that it has to check each item against the remote before it starts, or you have potential data loss/corruption/etc.

When using slower remotes, the number of files it has to check is the slow point, as it has to validate each and every one. The size of each file is irrelevant; the number of files in the cache at startup is the challenge.

The issue is documented here:

It's not a simple fix, as there's no easy way to combat it, which is why the issue has been open for so long.

Generally, if you have a slow remote and a large number of items in the cache, you will see a lengthy startup process for the mount, so don't stop it :slight_smile:

That was fast! Fair enough - so it's syncing the remote with the VFS? In that case, what is the difference between these two types of log entries:

2023/09/20 18:18:47 DEBUG : [remote_path]/img_20220406_074840.jpg: vfs cache: truncate to size=1971299 (not needed as size correct)
2023/09/20 18:18:47 DEBUG : [remote_path]/img_20220406_074840.jpg: vfs cache: setting modification time to 2022-04-06 02:18:40 -0700 PDT
2023/09/20 18:18:47 INFO  : [remote_path]/img_20220406_074840.jpg: vfs cache: queuing for upload in 5s
2023/09/20 18:18:47 DEBUG : [remote_path]: Added virtual directory entry vAddFile: "img_20220406_074840.jpg"
2023/09/20 18:18:47 DEBUG : ftp://[remote URL]: dial("tcp","---------")
2023/09/20 18:18:47 DEBUG : ftp://[remote URL]: > dial: conn=*tls.Conn, err=<nil>
2023/09/20 18:18:47 DEBUG : ftp://[remote URL]: dial("tcp","----------")
2023/09/20 18:18:47 DEBUG : ftp://[remote URL]: > dial: conn=*tls.Conn, err=<nil>
2023/09/20 18:18:50 DEBUG : ftp://[remote URL]: dial("tcp","------------")
2023/09/20 18:18:50 DEBUG : ftp://[remote URL]: > dial: conn=*tls.Conn, err=<nil>
2023/09/20 18:18:52 DEBUG : [remote_path]/img_20220406_074840.jpg: vfs cache: starting upload

and

2023/09/20 18:28:42 DEBUG : [remote_path]/img_20220406_075423.jpg.lusuwup4.partial: renamed to: [remote_path]/img_20220406_075423.jpg
2023/09/20 18:28:42 INFO  : [remote_path]/img_20220406_075423.jpg: Copied (new)
2023/09/20 18:28:42 DEBUG : [remote_path]/img_20220406_075423.jpg: vfs cache: fingerprint now "3810541"
2023/09/20 18:28:42 INFO  : [remote_path]/img_20220406_075423.jpg: vfs cache: upload succeeded try #2

?

Thanks,
R

Say you have 1000 files in the cache.

It has to check each one of those files, one by one, against the remote.

FTP is already a very slow remote.

A full log file would show the count of items to be validated.

Heh, it's been going on since before I made my first post and it's still scrolling (with -vv). Judging by the filenames and the order they're going in, it's less than 1/4 of the way through. The scrollback buffer on my terminal isn't big enough to see the beginning, and there are no lines with counts of items anywhere in between... it's just a lot of the above two types of entries repeating.

However, on an earlier invocation that completed after a few hundred items, I DID see something like "items remaining: x, Items processed: y, (size: z)" - did I omit a flag that would show me that? Should I interrupt it?

R

This would be the line to look for:

2023/08/15 00:43:39 INFO  : vfs cache: cleaned: objects 24122 (was 24122) in use 0, to upload 0, uploading 0, total size 999.799Gi (was 999.799Gi)

as an example. The "to upload" value is the key number.

Yeah... that's the one I saw before, but it's definitely NOT in the current set of things going by.

Here's a picture (sorry, it's running on a different machine from where I'm creating this):
