Issues with file copies pausing

Hi All,

Newbie to all things Rclone.
I seem to have gotten it working and I can upload files, which is great!
However, I'm having the issues below.

Anything you can suggest for me to try? My upload is only 35 Mbps, which isn't a lot, however it's good enough for me :slight_smile:

Pausing issues when copying to the rclone folder on Windows.
Rclone is mounted on Linux with a Samba share to Windows for Plex server write access and Sonarr/Radarr.
Using plexdrive for the Plex libraries.

I'm still learning rclone, however I'm not sure what I'm doing wrong.
Attached is a photo of the rclone output and what Windows is doing.

I'm mounting rclone with the following command:
rclone --vfs-cache-mode writes mount --cache-chunk-size 10M --buffer-size 0M --cache-chunk-clean-interval 15m --allow-non-empty --allow-other --umask=0 GDrive: ~/mnt/GDrive -v

Using GSuite
Setup is:
Plex server - Ubuntu VM for rclone and plexdrive
Google Drive shared to Windows with a Samba share so I can easily upload all content stored on local disks.
plexdrive mounted to the Plex PC for Plex to scan the libraries
Sonarr and Radarr used for downloading (this issue also seems to happen when they auto-copy files, as it freezes/gets stuck)

I think it makes sense :frowning:

Config file
[GDrive]
type = drive
client_id = ****
client_secret = *******
scope = drive
token = ***

(The speed below is while it is slowing down; when it's going, it's quick. Then it will pause and sit at zero, kicking back off again after a few minutes.)

The goal would be to use rclone copy or move and go directly to the remote rather than writing to the mount first, as that's going to be much slower.

The process for the mount when using the cache mode is that it copies to the local disk first, then uploads to the remote (one file at a time), and this does not happen in parallel.

If you use rclone copy/move you can run multiple files at the same time.
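
Roughly speaking, that would look something like this (the paths here are just placeholders, not taken from your setup):

rclone move /local/path/to/media GDrive:Media --transfers 4 --progress

That writes straight to the remote, runs up to 4 files in parallel (--transfers) and shows live stats (--progress).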

@Animosity022 Ah I see, how would I go about setting this up?

It took me a little while to get to this stage.
I have used Rclone Browser to upload large amounts of files.

Are you able to help me change my config?

Thank you very much

Unfortunately, I personally don't use Windows and have never touched rclone browser.

@thestigma is a Windows guy and might be able to add some input :slight_smile:

As Animosity points out - the apparent spiking in speed and "pausing" is likely due to the write cache on the mount.
The cache likely isn't actually slowing you down much unless you're working with particularly slow HDDs and very large files. It just appears that way because of how Windows writes to the cache rather than directly to the cloud.

The phenomenon is basically this: Windows copies to the cache (fast), and rclone then starts uploading the file in a transfer (slower). Then Windows copies more files to the cache and rclone starts more transfers. By default on a mount (and I don't think this can be changed currently) you have 4 concurrent upload transfers, so once rclone is already busy with 4 files it will tell Windows to pause the copy until an upload slot opens up (causing the Windows transfer to apparently halt for no reason), and then the rest fill in over time. This is why it seems to start and stop - but the pausing is just the local copy giving rclone the time it needs to actually finish its current transfers.

If you want to look at your actual transfer progress, use something like --stats 10s in rclone and it will report periodically, giving you a much better idea of what is going on. It is also worth noting that the apparent pausing in the Windows copy dialog isn't actually slowing things down. Rclone is working at full capacity in the background while this is happening, and your bandwidth is going to be the limiting factor here.
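
For example, appended to the mount command you posted (only --stats 10s is new, everything else as you already have it):

rclone --vfs-cache-mode writes mount --cache-chunk-size 10M --buffer-size 0M --cache-chunk-clean-interval 15m --allow-non-empty --allow-other --umask=0 GDrive: ~/mnt/GDrive -v --stats 10s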

Setting up a system that does not copy via the cache is certainly an option - but it is unlikely to be much faster in practice. You will only save as much time as it takes to copy the first file locally to the cache (likely only a few seconds to a minute depending on file size). I can help you set up an alternative if you want - but I don't think it will actually "solve" anything here.

If you want to speed up uploads you will probably see more practical gains from adding these two lines in your config for your Gdrive:
upload_cutoff = 128M
chunk_size = 128M

The default is only 8MB, and assuming you can afford to use up to 4x128 = 512MB of memory during uploads (only), this will give you far better utilization of bandwidth (much less TCP saw-toothing). If you have less memory, set any lower number as long as it is a multiple of 8 - 64MB is quite adequate too. Returns diminish the higher you go; I have seen little to no benefit over 128-256MB, so don't go overboard and assume more is necessarily better.
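
In other words, your [GDrive] section in rclone.conf would end up looking roughly like this (with your real client_id, client_secret and token of course):

[GDrive]
type = drive
client_id = ****
client_secret = *******
scope = drive
token = ***
upload_cutoff = 128M
chunk_size = 128M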

Oh, and in relation to Sonarr/Radarr, make sure to set this software to download to a temporary local folder before moving the finished files to the final destination. A temp folder for incomplete files is a pretty common option to have in torrent software, so look around.

In general you want to avoid saving "in-progress" files directly to the cloud destination (whether using cache or not), because these files tend to get opened and closed many times during the process - and this can lead to them being transferred to and from the cloud dozens of times, which is obviously extremely inefficient and slow. Torrents are the classic example, but video renders, "scratch disk" files and similar things can sometimes have the same basic issue. This is entirely dependent on how the specific software handles its files.

Hi!

Thank you very much for the info!

Sonarr/Radarr download using NZBGet, which moves everything to a completed folder; Sonarr/Radarr then move this to the share, so I think that's set up okay.

I think I'll leave it the way it is if it's running as intended; it does only appear to be uploading 1 file at a time though. I have added those lines to my file to see if it helps.

The machine has 6GB of RAM so that's all good. I'm going to move the Ubuntu VM to ESXi later on, however for now it's VirtualBox on the Plex server.

I am using Sonarr V3 (as it'll move the files for me if I change the root folder). However, it seems to fail moving the files, as it says it cannot find the moved file at the new destination. So I need to see if this is an rclone issue or a Sonarr issue.

Many thanks for your help!
Any other recommendations for my setup?
The 35 Mbps is the fastest that I can get unfortunately :frowning: I do have 352 down though! So I'm slowly uploading all my stuff to the cloud, with all new stuff going straight in there.

(Photo is of the stats screen)

3.946 MB/sec means you are getting optimal utilization. That works out to roughly 31.6 Mbit/s (x8), which is about 90% of 35 Mbit - about as high as you can hope to get since TCP inherently has some overhead. Looking good.

The mount should be able to do 4 transfers at a time, but that assumes you try to move more than 1 file at a time (which your software may not). If you want to test, just manually transfer more than 4 files and watch the stats. I assume this explains it. Don't worry - for files that have non-trivial transfer times, multiple transfers aren't faster anyway, so this isn't a problem.

About Sonarr - what you just told me sounds very similar to something another user discussed with me in a thread not too long ago. I just can't remember if we solved that or not. I will come back to this topic if I remember or find that thread. In any case I doubt it's rclone actually failing to do the move, but some specific software sometimes does need special considerations to work smoothly with cloud drives.

If you run rclone with -vv (which enables debug output) and capture a log of the event where Radarr fails, then we may be able to say something more about what is happening. It will only really tell us about problems happening in rclone though - assuming the problem is in rclone at all.
For long logs (which debug logs tend to be), you can use --log-file=myrclonelog.txt
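
Combined with your existing mount command, that would look something like this (just swap -v for -vv and add the log file; everything else stays the same):

rclone --vfs-cache-mode writes mount --cache-chunk-size 10M --buffer-size 0M --cache-chunk-clean-interval 15m --allow-non-empty --allow-other --umask=0 GDrive: ~/mnt/GDrive -vv --log-file=myrclonelog.txt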

Hey! Thank you for this.

So I noticed Sonarr says it failed, but it does actually do it, so I think I'll leave it for now.

However, I'm now noticing that it's copying more than one file at a time, causing my bandwidth to be spread across more files and making it take even longer to upload!
Is there a way I can limit this to, say, 1 transfer so it would actually end up faster?

I know there is a --transfers flag, however I don't think this works with mount?
Screenie below of what's happening :frowning: I think I might need to change to rclone move?

All help greatly appreciated!
Amazing what all you guys do with this software

If this image is from a mount then these must be downloads, not uploads?

A mount technically has "unlimited" download transfer slots (it has to, in order to simulate a regular hard drive, so this is not something that could be changed).
By default a mount uses 4 upload transfers at a time. This number could be changed in theory, but I don't think an option for that has been added yet (--transfers exists, but that only takes effect in non-mount operations).

That many uploads shouldn't be possible via a mount - and if those are downloads then you will probably have to see if you can configure the other software not to ask for so many concurrent files, because an rclone mount has no option other than to obey. Rclone doesn't download anything by itself - it only follows the requests of other software.

Having so many transfers shouldn't actually impact the total time it takes to transfer all of them, but I agree that it can be annoying and inconvenient, and I would try to fix it too.

When using Plex and similar software you should go through the settings carefully and especially look out for any options like automatically creating thumbnails, overly frequent scanning of files, and especially "deep analysis" sorts of scans. This is because such options frequently involve having to read the entire file - and it does this to your entire library. That works tolerably well on a local hard drive, but depending on the library size and internet speed it could take days or weeks to complete over the internet, and it should not be done automatically.

This is kind of what it looks like to me honestly - Plex doing some sort of full scan, opening up a bunch of files at the same time. If you need more specific info on Plex, ask Animosity for help. I don't use Plex myself, so I don't know the options well enough.

Hey!

Thank you very much for your response. I have turned all the analysis stuff off, as suggested by the internet regarding API hits!

They are 100% uploads. That's a new TV show that I downloaded; Radarr moved them to the GDrive and then it started uploading them all.

I thought this too, but it seems to be able to upload them all at once. My download is 352 Mbps, so I wouldn't expect to see this speed on a download.

Once a file hits 100% I see the "blah blah file name (copied)" message, Plex picks it up and then I can play it. :slight_smile:

It’s an odd one

A lot of the issues with such features aren't necessarily about the API calls you use - but rather that they need to read the entire file for the operation. For a large library that just isn't viable for most people over a normal internet connection. API calls come more into play in the actual streaming settings in programs like Plex, I think - but Animosity is the expert on this, so I will defer to him regarding Plex :slight_smile:

Regarding the transfers:
Very strange. I'm not sure if --transfers could maybe work on a non-cached mount, but:

  • You DO use the write cache according to your config, and that's definitely 4 upload transfers
  • I can't even see that you attempt to use --transfers anywhere
  • --transfers is 4 by default anyway, even if you don't use a mount

So I just can't make sense of this.

You aren't using the cache backend in here somewhere are you? (a separate caching system from the VFS cache). Cache backend with --cache-writes and/or temp-upload enabled would maybe impact how this works.

--cache-chunk-size 10M - this is a cache-backend (not VFS cache) flag and will do nothing unless you actually use the cache backend, so that is why I ask.

The closest thing to this for the VFS are:
--vfs-read-chunk-size 32M
--vfs-read-chunk-size-limit off
(numbers are only examples not necessarily a recommendation - read the documentation to learn more)
but the VFS inherently works a bit differently than the cache backend, so it's not exactly the same - for example, a much higher chunk size won't make a stream start slowly, unlike with the cache backend. It's mostly useful, I think, for saving some download bandwidth when partially reading non-streamable files - like fetching small chunks for torrent seeding, for example.
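
On a mount those would just go alongside your other flags, for example (numbers again purely illustrative):

rclone mount GDrive: ~/mnt/GDrive --vfs-cache-mode writes --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off -v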

I think that if you want to dig deeper into this issue then I will need to see your full config file (remember to redact passwords and secrets before posting).

Also, I would still recommend double-checking the software settings in whatever program you think is performing the move operation. It may have some sort of limiter setting on the maximum number of files to transfer at once. I'm not familiar enough with Radarr either to tell you outright, sorry.

Hi,

Thank you for your response - I understand it a little more now.

Below is my config file straight from the server. This is the only one I could find; it's called rclone.conf and is located in home/.config.
[GDrive]
type = drive
client_id = **
client_secret = **
scope = drive
token = ***
upload_cutoff = 128M
chunk_size = 128M

I start rclone with: rclone --vfs-cache-mode writes mount --cache-chunk-size 10M --buffer-size 0M --cache-chunk-clean-interval 15m --allow-non-empty --allow-other --umask=0 GDrive: ~/mnt/GDrive -v --stats 10s

However, I understand I can take out --cache-chunk-size 10M as it's not doing anything?
Perhaps this could be throwing rclone off?

I'll remove it and monitor. Is there anything you can see in my config file as to why it would act this way? I have also added --transfers 1 to the start command to see what that does.

I have checked in Radarr and Sonarr and there don't seem to be any max file transfer limits etc., so I don't think that's something I can control at the moment. The odd thing is that if I copy using File Explorer it'll only upload 1 file at a time.

Perhaps I need to blow it all away and start again with a new VM etc. I am using plexdrive for Plex to view its media, so I may remove this and put rclone back on Windows. However, I'm not sure how to view the logs/stats when it's running as a service.

Many thanks
Jon.

Your config shows you use a very straightforward and uncomplicated remote setup. The only way it could be any simpler is if you didn't use the VFS cache (which I do not recommend removing). So in short, I don't see anything that could cause it to behave this way.

You might want to check that you aren't using a very old version - run "rclone version" to see.
You should be using 1.48. Many Linux repositories have very outdated versions, if that's where you downloaded from. I would suggest grabbing the install package straight from this site if you need to update. Updating is uncomplicated, as all configuration files will remain compatible and you literally just have to replace the main files.
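
To check your current version:

rclone version

If it turns out to be old, I believe the official install script from this site is the easiest way to update on Ubuntu (or grab the package manually from the downloads page if you prefer):

curl https://rclone.org/install.sh | sudo bash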

Yes, you can remove --cache-chunk-size. It will not do anything without the cache backend, as the flag is exclusive to it. It won't hurt you either, but I'd remove it to prevent later confusion with the VFS cache.

If you add -vv you get debug output, which will tell you everything that happens (it's a lot of cryptic text though - feel free to post it, but try to keep it short).
--log-file=mylogfile.txt can also be used to dump the output to a file for long logs.

Lastly, it is normal for Windows to transfer files into the cache one at a time (I'm not sure if these internal transfers display as transfers in rclone?), but what should happen is that the cache keeps accepting more files until it has 4 uploads going and then pauses - so you should be getting 4 active uploads pretty quickly.
