Rclone mount service taking too long to restart

rclone v1.55.1
Ubuntu 20.04 LTS server
Google Cloud Storage

RCLONE_LOG_FILE=/var/log/gcloud/rclone.log
RCLONE_CACHE_DIR=/var/cache/gcloud
RCLONE_VFS_CACHE_MODE=full
RCLONE_VFS_CACHE_MAX_AGE=175200h
RCLONE_VFS_CACHE_MAX_SIZE=1024G
RCLONE_VFS_CACHE_POLL_INTERVAL=24h
RCLONE_DIR_CACHE_TIME=12h
RCLONE_POLL_INTERVAL=6h

[Service]
Type=notify
User=root
Group=root
ExecStart=/usr/bin/rclone mount gcloud:bucket /gcloud --umask=0002 --allow-other --default-permissions
TimeoutStartSec=600
ExecStop=/bin/fusermount -uz /gcloud
Restart=on-failure
RestartSec=5

find /var/cache/gcloud/ -type f | wc -l
370600
du -sh /var/cache/gcloud/
32G /var/cache/gcloud/

It ran very well for 3 days, with no errors in the log file.
Then I entered the command "systemctl restart gcloud.service" and it took more than 3 minutes to restart. When I entered the same command again, it took only a few seconds.

Why is the service taking so long after a day in production?
Is it a problem I should worry about?
(I had to add "TimeoutStartSec=600" (10 min) for it to start OK.)
Is this because the cache size is growing, or because it has been running for a long time?
I intend to have a local VFS cache of up to 10 TB. Could that be a problem?

The full debug log file has the exact information on what it is doing.

Each time the help template is deleted, an angel loses its wings. Please help save the angels.

Sorry, I forgot the -vv.

Here is a restart of another service (it had been running for more than 3 days)...

The average file size is 0.09 MB (lots of files), for a total of 55 GB in the cache.

The -vv log:

2021/05/25 00:41:39 DEBUG : Using config file from "/etc/gcloud/rclone.conf"
2021/05/25 00:41:39 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/bin/rclone" "mount" "gcloud:bucket" "/gcloud" "--umask=0002" "--allow-other" "--default-permissions" "-vv"]
2021/05/25 00:41:39 DEBUG : Creating backend with remote "gcloud:bucket"
2021/05/25 00:41:39 INFO : bucket: poll-interval is not supported by this remote
2021/05/25 00:41:39 DEBUG : vfs cache: root is "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:41:39 DEBUG : vfs cache: metadata root is "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:41:39 DEBUG : Creating backend with remote "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:52:10 DEBUG : bucket: Mounting on "/gcloud"

(More than 10 minutes to start.)
It got stuck at the last "Creating backend with remote" line.
A new restart 2 minutes after that took only 13 seconds.

This is probably the reason - rclone scans the cache on startup which means it has to check those 370k files.

Currently it doesn't start up until the cache has finished scanning.

It could probably scan faster too...
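
You can see the cold vs. warm effect yourself by walking the cache tree twice (this just stats every file, so it's only a rough stand-in for what the mount does on startup):

time find /var/cache/gcloud/vfs -type f | wc -l

The first run has to hit the disk for every inode; run it again straight away and it finishes in seconds because the kernel's dentry/inode caches are still warm, which is the same reason your second restart is quick.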

Since only one rclone instance works with this backend, would it work to set RCLONE_DIR_CACHE_TIME=175200h?

Is the second start so fast because the directory cache is still alive within the 12h?

Could that be improved? We wouldn't need to check every file at startup, maybe only the files that haven't been uploaded yet (writeback).

If your goal is data integrity and moving large amounts of data around, I would definitely suggest looking at the union backend or something like mergerfs.

This allows for much more control in moving data when you want and you don't hit issues with items in the cache waiting to be uploaded.

I personally use mergerfs and a rclone move script to move local data to the cloud at scheduled intervals.

I use mergerfs as it allows for hard links, which are a requirement for my use case and which the union backend does not support yet. Depending on what you do, union might be sufficient.

This means my cache is read-only the majority of the time; anything written is done locally and uploaded at intervals.

I'd be pretty scared leaving large chunks of files/data in the cache to upload, but that's me.
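
Something along these lines (paths and flags here are examples, not my exact script; adjust to your layout):

#!/bin/bash
# Move anything on the local branch that hasn't changed for 15 minutes
# up to the remote; the merged mount path never changes.
/usr/bin/rclone move /mnt/local gcloud:bucket \
    --min-age 15m \
    --transfers 4 \
    --log-file /var/log/rclone-move.log -v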

You mean this union backend (https://rclone.org/union)?

I would need something like AWS Storage Gateway (https://aws.amazon.com/pt/storagegateway) but working with any backend: all the data on the storage backend (50 TB) with a big local cache (10 TB and hundreds of thousands of files). Data that is not in the local cache should be retrievable instantly... and I just need one instance using this backend.

mergerfs and the union backend both work on the same principle: they take 2 or more locals/remotes and make them appear as one.

In my example, I take a local disk and my Google Drive remote and combine them into a single mounted directory.

Through the policies, you can configure that to work how you want. My policy always writes to the first item, which is my local disk.

To the OS/application, it doesn't know or care there are multiple things underneath it.

At night, I rclone move my local disk to the cloud and nothing changes from an OS / application perspective as the paths never change.
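
A bare-bones example of the idea (mount points are placeholders; see the mergerfs docs for the full option list):

# Local disk first, rclone mount second; the "ff" (first found)
# create policy makes every new write land on /mnt/local.
mergerfs /mnt/local:/mnt/remote /mnt/merged \
    -o allow_other,use_ino,category.create=ff,cache.files=partial,dropcacheonclose=true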

What are the websites for mergerfs and the union backend?

You already linked the rclone union backend.

mergerfs: https://github.com/trapexit/mergerfs

If I understand it correctly... it would be something like:

/local_mount
/remote_mount (rclone mount)

and then create another mount with the union of /local_mount and /remote_mount?

I have my setup pretty well explained here:

If you want to take a peek and ask any questions.

I see. So you move the files from local to remote.
But is it possible to have the same file both local and remote, and after a long time only in the remote?

Yes, duplicates don't matter as my policy will always read local first and then check the 2nd item (my GD in this case).

You can configure the vfs cache options just like you have so that would act exactly the same as it does now.

The mergerfs is just a layer on top that brings together the local and the cloud remote so I get the best of both worlds.
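
For example, the rclone mount behind the second branch can use exactly the cache settings from your unit, just written as flags (the mount point here is a placeholder):

/usr/bin/rclone mount gcloud:bucket /mnt/remote \
    --allow-other \
    --vfs-cache-mode full \
    --vfs-cache-max-age 175200h \
    --vfs-cache-max-size 1024G \
    --dir-cache-time 12h \
    --poll-interval 6h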

OK, I get it.
But who will do the job of cleaning the local disk (when it is full) and making sure the files are in the remote?

I run a command to upload nightly and have disk space monitoring so I can't say I've hit those issues.

My local disk is a 10TB drive. My rclone cache is a separate 1TB SSD. Rclone manages the space on the cache drive.
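
The scheduling part can be as simple as a cron entry pointing at a script like the sketch above (the script path is hypothetical):

# /etc/cron.d/rclone-move -- nightly upload at 02:00
0 2 * * * root /usr/local/bin/rclone-move.sh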

Coming back to this question...

find /var/cache/gcloud/ -type f | wc -l
551210

2021/05/25 00:41:39 DEBUG : Using config file from "/etc/gcloud/rclone.conf"
2021/05/25 00:41:39 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/bin/rclone" "mount" "gcloud:bucket" "/gcloud" "--umask=0002" "--allow-other" "--default-permissions" "-vv"]
2021/05/25 00:41:39 DEBUG : Creating backend with remote "gcloud:bucket"
2021/05/25 00:41:39 INFO : bucket: poll-interval is not supported by this remote
2021/05/25 00:41:39 DEBUG : vfs cache: root is "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:41:39 DEBUG : vfs cache: metadata root is "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:41:39 DEBUG : Creating backend with remote "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:52:10 DEBUG : bucket: Mounting on "/gcloud"

I just restarted the service now and it took 5 minutes.
After 2 minutes I restarted it again and it took 10 seconds.

Why does it take a lot less time after only a few minutes?
I set RCLONE_DIR_CACHE_TIME=175200h.

If you share the full log, it's in there 🙂

This is the full log. The first time it got stuck on "Creating backend with remote",
and the second time it didn't take so long.

If items are queued for upload, you'd see that in the log.

felix@gemini:~$ rclone mount gcrypt: /home/felix/test --vfs-cache-mode full --transfers 1 --bwlimit 1M -vv
2021/05/27 07:51:06 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2021/05/27 07:51:06 INFO  : Starting bandwidth limiter at 1MBytes/s
2021/05/27 07:51:06 DEBUG : rclone: Version "v1.55.1" starting with parameters ["rclone" "mount" "gcrypt:" "/home/felix/test" "--vfs-cache-mode" "full" "--transfers" "1" "--bwlimit" "1M" "-vv"]
2021/05/27 07:51:06 DEBUG : Creating backend with remote "gcrypt:"
2021/05/27 07:51:07 DEBUG : Creating backend with remote "GD:crypt"
2021/05/27 07:51:07 DEBUG : vfs cache: root is "/home/felix/.cache/rclone/vfs/gcrypt"
2021/05/27 07:51:07 DEBUG : vfs cache: metadata root is "/home/felix/.cache/rclone/vfs/gcrypt"
2021/05/27 07:51:07 DEBUG : Creating backend with remote "/home/felix/.cache/rclone/vfs/gcrypt"
2021/05/27 07:51:07 DEBUG : test1: vfs cache: truncate to size=1504953150
2021/05/27 07:51:07 DEBUG : test1: vfs cache: setting modification time to 2021-05-27 07:50:45.299465571 -0400 EDT
2021/05/27 07:51:07 INFO  : test1: vfs cache: queuing for upload in 5s
2021/05/27 07:51:07 DEBUG : : Added virtual directory entry vAddFile: "test1"
2021/05/27 07:51:07 DEBUG : test2: vfs cache: truncate to size=1504953150
2021/05/27 07:51:07 DEBUG : test2: vfs cache: setting modification time to 2021-05-27 07:50:52.498600658 -0400 EDT
2021/05/27 07:51:07 INFO  : test2: vfs cache: queuing for upload in 5s
2021/05/27 07:51:07 DEBUG : : Added virtual directory entry vAddFile: "test2"
2021/05/27 07:51:07 DEBUG : Encrypted drive 'gcrypt:': Mounting on "/home/felix/test"
2021/05/27 07:51:07 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item test1 not removed, freed 0 bytes
2021/05/27 07:51:07 DEBUG : vfs cache RemoveNotInUse (maxAge=3600000000000, emptyOnly=false): item test2 not removed, freed 0 bytes
2021/05/27 07:51:07 INFO  : vfs cache: cleaned: objects 2 (was 2) in use 2, to upload 2, uploading 0, total size 2.803G (was 2.803G)
2021/05/27 07:51:07 DEBUG : : Root:
2021/05/27 07:51:07 DEBUG : : >Root: node=/, err=<nil>
2021/05/27 07:51:12 DEBUG : test1: vfs cache: starting upload
2021/05/27 07:51:12 DEBUG : test2: vfs cache: delaying writeback as --transfers exceeded
2021/05/27 07:51:13 DEBUG : d7pe9pia5d9a0tud31p8lkdpi0: Sending chunk 0 length 1073741824
^C2021/05/27 07:51:17 INFO  : Signal received: interrupt
2021/05/27 07:51:17 DEBUG : vfs cache: cleaner exiting
2021/05/27 07:51:17 INFO  : Exiting...
felix@gemini:~$