Feature request - buffering file list?

Would it be possible to add an option so that rclone acts not on a list of files fetched from the cloud storage,
but on a list of files buffered on local disk (or in a database), saved after the first scan and updated automatically during the transfer?

If the transfer is interrupted, rclone would not have to rescan the cloud; it could just load the list that was already saved and updated.

Does the --files-from option help?
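A minimal sketch of how --files-from is used (the paths and remote names here are hypothetical, for illustration only):

```shell
# --files-from takes a plain-text file with one remote-relative path per line.
cat > files.txt <<'EOF'
docs/report.pdf
photos/2017/img001.jpg
EOF

# rclone then only considers the listed files (not run here):
#   rclone copy --files-from files.txt amazoncloud: gdrive:

wc -l < files.txt   # the list has two entries
```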


Thanks, but how do I create such a list database in rclone?

I have, for example, 100,000 files on Amazon Cloud Drive
and need to transfer them to Google Drive, but halfway through, after 50,001 files, the connection broke.
Now rclone re-checks all the files, which is time consuming because it checks against the cloud.
So it would be better to do something like this:

rclone --scan-to-local-file ls.txt amazoncloud:

If this ls.txt then has such content:

then running rclone copy --source-as-list=ls.txt amazoncloud: gdrive:
would automatically update this ls.txt,
so after a successful copy of file1.txt
ls.txt would look like this:
(file1.txt was copied, so it is removed from ls.txt).
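For illustration only (these filenames are hypothetical, since the original listing was not shown), ls.txt might start as:

```
file1.txt
file2.txt
file3.txt
```

and after file1.txt is successfully copied it would shrink to:

```
file2.txt
file3.txt
```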

Now if the connection breaks or something else happens, rclone would not have to rescan the clouds; it would only use this ls.txt as a reference point for what to copy next.

You can use rclone ls to generate the file list. You’ll need to munge the output a bit…
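One way that munging could look (a sketch only, using a sample line rather than a live remote; rclone ls prints a size column before each path):

```shell
# rclone ls prints "<size> <path>" per line; strip the leading size column
# with sed so that --files-from gets bare paths (internal spaces preserved).
sample='   4096 photos/2017/my file.jpg'   # hypothetical rclone ls output line
echo "$sample" | sed 's/^ *[0-9][0-9]* //'
# prints: photos/2017/my file.jpg

# Against a real remote the full pipeline would be (not run here):
#   rclone ls amazoncloud: | sed 's/^ *[0-9][0-9]* //' > ls.txt
#   rclone copy --files-from ls.txt amazoncloud: gdrive:
```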

And will it automatically resume from the correct point in the list where it stopped?

I think that if you wait for the database cache to be implemented, this problem goes away, because the sync will be MUCH faster when based off the cache rather than actually querying the providers themselves.

I’ve personally started to use the mounts themselves to sync rather than going directly to a provider because that mount does in fact have a cache. I will warn you though that while it is much faster to ‘check’ because of the cache, there are currently some bugs around retries related to the mounts as well. So YMMV.

Do you know in what milestone, or when, this database cache is planned?

That is issue #711.