Updated to rclone 1.50 - elapsed timer no longer works

Hi,

Does anyone have any idea why this would have stopped working after updating to the new 1.50 release?

Elapsed stays at '0' ... previous version counted just fine.

From the other stats it looks like it hasn't done anything yet ...

Are you using --fast-list? If so, it's normal behaviour for the timer not to start counting until rclone starts getting listings back (which all come at once, after some delay, when using --fast-list).
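If you want to watch this yourself, run the job with live stats - something like the below (remote and path are just placeholders):

    rclone sync C:\data gdrive:data --fast-list --progress --stats 10s

With --fast-list you'll see the Elapsed counter sit at 0 until rclone has the full recursive listing back and actually starts checking/transferring.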

Would this explain what you are seeing?

What operation are you doing?

The elapsed time now counts the time spent transferring files, to make the transfer rate calculations look sensible.

So I would guess you are not transferring any files?

Thanks for the replies.

It's a Windows .bat script and includes the '--fast-list' flag.

With previous versions, the Elapsed time counter would always be counting ... then, usually after 5-10 mins (depending on the size of the job), the transfers would begin.

It was useful as I could see how much time had elapsed before the transfers commenced.

When using fast-list, I can't remember that being the case in previous versions - not relatively recent ones, anyway. I believe the timer has started from the point where the listings complete.

But if you don't use fast-list then the timer will start (almost) immediately since the first listings come back fast and they are done piecemeal instead of all up front.
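To illustrate the difference (gcrypt: and the path are placeholders), compare these two:

    rclone sync C:\data gcrypt:data -P --fast-list
    rclone sync C:\data gcrypt:data -P

The first builds the complete recursive listing up front before anything moves; the second starts transferring almost as soon as the first directory listing comes back.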

No guarantees my memory is perfect though...

Does it really take you 5-10 minutes to fast-list? That is a very long time for fast-list...
On Gdrive, a fast-list of ca. 84,000 files and ca. 4,100 folders takes me 50-60 seconds.
It will of course vary depending on file tree complexity, the provider and other factors - but it does seem abnormally long to me.

Sometimes --fast-list takes almost 15 mins ... but that is on a volume containing more than 200TB.

Without the --fast-list flag, the timer does indeed start immediately - as does the upload. I honestly have no idea which version of rclone I installed 1.50 over.

Now I'm actually wondering why I even added the --fast-list flag in the first place.

I know what it does, but shouldn't the following ensure that no duplicates are uploaded:

--ignore-case --size-only --checkers 8 --transfers 8 --max-transfer 750G
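The full command is along these lines - the remote and paths here are placeholders for my real setup:

    rclone copy "D:\backup" gcrypt:backup --fast-list --ignore-case --size-only --checkers 8 --transfers 8 --max-transfer 750G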

Oh, well on 200TB that might indeed be normal. My archive is only about 3.5TB.
Taking 10-15 times longer to list almost 60 times more data does not seem unreasonable :slight_smile:

You probably added fast-list because, without it, the full drive listing may take 10-15x longer :stuck_out_tongue: so that could be hours in your case...

On Team Drives specifically there seems to be a bug (or limitation? we aren't sure yet) where listings may not be perfect and can lag a little behind (not detecting very recent changes). Regular listing should not be vulnerable to this. So that's a tradeoff you have to make... fast-list is great in general, but if it has to be perfectly accurate and the location changes frequently, it might not be appropriate (for now at least; Nick is investigating this issue).

One of the most common causes of dupes is multiple concurrent uploads to overlapping locations, so try to avoid that. Ideally, one process does all your uploading.
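If it's all driven from one .bat anyway, a crude way to guarantee a single uploader is a lock in the script itself - a rough sketch, with placeholder paths and remote:

    @echo off
    rem mkdir is atomic, so only one instance can grab the lock at a time
    set LOCKDIR=%TEMP%\rclone-upload.lock
    mkdir "%LOCKDIR%" 2>nul || (
        echo Another upload is already running - exiting.
        exit /b 1
    )
    rclone sync "D:\backup" gcrypt:backup --size-only --transfers 8
    rmdir "%LOCKDIR%"

(If the script dies mid-run you'd have to delete the lock directory by hand.)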

Interesting point about Shared Drives. I noticed the lag ... particularly when moving from Shared > GDrive. For some reason, the Shared Drive listing is always a little laggy and 'behind' what is actually on the drive at any given point.

As I only run a single instance at a time for backup - always to the main gcrypt and always to the same location - I think I will omit --fast-list as it saves a considerable amount of time per job.

No issues using it with Shared Drives as I only ever send to an empty Shared Drive, so the send starts immediately.

Can't see any potential downside to omitting --fast-list, now that I think about it...

It really depends on how much you are trying to list.

If you're transferring something to an empty folder, the listing will be "instant" with or without fast-list.
It's on tasks like syncing an entire drive that it really starts to matter, because that literally requires every file and folder to be listed (at least on the source side; usually fewer on the destination).

I think to be on the safe side, I'll use --fast-list on all folders that have more than 100,000 files.

Most don't, so the instant start is preferable to that pesky 10-minute delay :slight_smile:
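So the .bat ends up along these lines (folder and remote names are placeholders):

    rem Small folders: instant start, skip --fast-list
    rclone sync "D:\documents" gcrypt:documents --size-only
    rem Huge folders (100,000+ files): worth the up-front listing wait
    rclone sync "D:\archive" gcrypt:archive --size-only --fast-list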

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.