find /var/cache/gcloud/ -type f | wc -l
370600
du -sh /var/cache/gcloud/
32G	/var/cache/gcloud/
It has been working very well for 3 days, with no errors in the log file.
I ran "systemctl restart gcloud.service" and it took more than 3 minutes to restart. Then I ran the same command again and it took only a few seconds.
Why does the service take so long to restart after a day in production?
Is this a problem I should worry about?
(I had to add "TimeoutStartSec=600" (10 min) to the unit for it to start OK.)
Is this because the cache size is growing, or because the service has been running for a long time?
I intend to have a local VFS cache of up to 10 TB. Could that be a problem?
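The TimeoutStartSec workaround mentioned above can live in a systemd drop-in instead of editing the main unit file. A sketch — the unit name is taken from the post; adjust the path to match your setup:

```ini
# /etc/systemd/system/gcloud.service.d/override.conf
[Service]
TimeoutStartSec=600
```

Run `systemctl daemon-reload` afterwards so systemd picks up the drop-in.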
I restarted another instance of the service (more than 3 days of uptime)...
The average file size is 0.09 MB (a lot of files), with a total of 55 GB in the cache.
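For reference, figures like these can be reproduced with a small helper (a sketch using GNU find; the default path is the cache directory from the post):

```shell
# Hypothetical helper: count the files under a directory and report their
# average size, as in the figures above. Defaults to the cache path from
# the post; pass another directory as the first argument.
avg_file_size() {
  find "${1:-/var/cache/gcloud}" -type f -printf '%s\n' 2>/dev/null \
    | awk '{ sum += $1; n++ } END { printf "%d files, avg %.2f MB\n", n, (n ? sum / n : 0) / 1048576 }'
}
```

Usage: `avg_file_size /var/cache/gcloud`.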
the -vv log:
2021/05/25 00:41:39 DEBUG : Using config file from "/etc/gcloud/rclone.conf"
2021/05/25 00:41:39 DEBUG : rclone: Version "v1.55.1" starting with parameters ["/usr/bin/rclone" "mount" "gcloud:bucket" "/gcloud" "--umask=0002" "--allow-other" "--default-permissions" "-vv"]
2021/05/25 00:41:39 DEBUG : Creating backend with remote "gcloud:bucket"
2021/05/25 00:41:39 INFO : bucket: poll-interval is not supported by this remote
2021/05/25 00:41:39 DEBUG : vfs cache: root is "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:41:39 DEBUG : vfs cache: metadata root is "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:41:39 DEBUG : Creating backend with remote "/var/cache/gcloud/vfs/gcloud/bucket"
2021/05/25 00:52:10 DEBUG : bucket: Mounting on "/gcloud"
(More than 10 minutes to start.)
It gets stuck at "Creating backend with remote".
A new restart 2 minutes after that took only 13 seconds.
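One way to test whether the cache size is the culprit. An assumption based on the ~10-minute gap between "Creating backend with remote" and "Mounting" in the log above (not confirmed rclone internals): if the mount has to rescan every cached file on startup, then simply timing a walk of the cache tree should roughly track the startup delay:

```shell
# Hypothetical proxy for the slow start: time a plain walk of the cache
# tree. If this takes minutes cold and seconds warm, the pattern matches
# the slow-then-fast restarts described above (page cache effects).
walk_cache() {
  find "${1:-/var/cache/gcloud/vfs}" -type f 2>/dev/null | wc -l
}
time walk_cache /var/cache/gcloud/vfs
```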
If your goal is data integrity while moving large amounts of data around, I would definitely suggest looking at the union backend or something like mergerfs.
That gives you much more control over when data moves, and you don't hit issues with items sitting in the cache waiting to be uploaded.
I personally use mergerfs and an rclone move script to move local data to the cloud at scheduled intervals.
I use mergerfs because it allows hard links, which are a requirement for my use case and which the union backend does not support yet. Depending on what you do, union might be sufficient.
This means my cache is used only for reading the majority of the time: anything written goes to local disk and is uploaded at intervals.
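A sketch of what such a scheduled upload script might look like. The local path and remote name are assumptions (the remote matches the log above); the rclone flags are real options. The command is echoed here rather than executed — drop the `echo` wrapper and run it from a cron job or systemd timer:

```shell
# Hypothetical scheduled "move local writes to the cloud" script in the
# spirit described above. Adapt paths and remote to your setup.
LOCAL_DIR="${LOCAL_DIR:-/local/media}"   # assumed mergerfs write branch
REMOTE="${REMOTE:-gcloud:bucket}"        # remote name from the log above

move_cmd=(rclone move "$LOCAL_DIR" "$REMOTE"
  --min-age 15m                # leave files still being written alone
  --delete-empty-src-dirs
  --log-level INFO)
echo "would run: ${move_cmd[*]}"
```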
I'd be pretty scared leaving large chunks of data sitting in the cache waiting to be uploaded, but that's me.
I need something like AWS Storage Gateway (https://aws.amazon.com/pt/storagegateway) that works with any backend: all data lives on the storage backend (50 TB), with a big local cache (10 TB and a huge number of small files). Data that is not in the local cache doesn't need to be retrieved instantly... I just need one instance with this backend.
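For a large read-mostly cache in front of a remote, the relevant rclone knobs might look like this. The flags are real rclone mount options and the mount arguments match the log above, but the cache sizing and age are just the figures from the post plus assumptions — a sketch, not a recommendation. The command is echoed rather than executed:

```shell
# Hypothetical mount invocation for a big local VFS cache (10 TB target
# from the post). Remove the echo to actually mount.
mount_cmd=(rclone mount gcloud:bucket /gcloud
  --allow-other --umask=0002 --default-permissions
  --vfs-cache-mode full             # cache reads and writes on local disk
  --cache-dir /var/cache/gcloud
  --vfs-cache-max-size 10T          # cap the cache at the 10 TB target
  --vfs-cache-max-age 720h)         # evict files untouched for 30 days
echo "would run: ${mount_cmd[*]}"
```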