Refresh Docker volume pointing to a dropped rclone mount

Hi,
I have configured rclone with systemd to mount Google Drive to a local folder on my Ubuntu 16 server, and several Docker containers point to that folder as a bind volume. The problem is that the mount sometimes drops and comes back up, but the containers can no longer see it.
If I attach to a container and try to enter the mount I get ls: cannot access 'cloud': Transport endpoint is not connected, but outside the containers the mount works fine.
How can I force Docker to refresh the volume when the mount drops?
My unit file:
[Unit]
Description=rclone Service
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Environment=RCLONE_CONFIG=/home/**USERNAME**/.config/rclone/rclone.conf

ExecStart=/usr/bin/rclone mount gdcrypt: /home/**USERNAME**/mediabox/gd \
--config=/home/**USERNAME**/.config/rclone/rclone.conf \
--allow-other \
--buffer-size 1G \
--dir-cache-time 96h \
--log-level INFO \
--log-file /home/**USERNAME**/logs/rclone.log \
--umask 002 \
--user-agent rcloneapp \
--fast-list \
--drive-chunk-size 64M \
--vfs-read-chunk-size 32M \
--vfs-read-chunk-size-limit off \
--vfs-cache-mode writes 
ExecStop=/bin/fusermount -uz /home/**USERNAME**/mediabox/gd
Restart=always
RestartSec=1s
User=**USERNAME**
Group=**USERNAME**

[Install]
WantedBy=multi-user.target

My rclone.conf:

[gd]
type = drive
scope = drive
token = 
client_id = 
client_secret = 

[gdcrypt]
type = crypt
remote = gd:crypt
filename_encryption = standard
directory_name_encryption = true
password = 
password2 = 

One of the containers pointing to the mount:

mediabox-sonarr:
    image: linuxserver/sonarr
    environment:
      - ...
    volumes:
      - /home/**USERNAME**/mediabox/gd:/cloud
      - ...
    ports:
      - ...

Personally, I'd recommend not putting the data directly onto a remote mount; there are too many issues to deal with. Any of the devs or mods can correct me on this if they like.

What I would recommend if you're short on local storage and need to put data on a remote mount: use mergerfs and rclone move to write the data to a local disk first, then move/upload it to the cloud at an interval (nightly, for example). @Animosity022 has a GitHub repo of scripts he uses to accomplish this. I can confirm this works very well. https://github.com/animosity22/homescripts
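Those scripts are the real reference, but as a rough illustration, the nightly "move to cloud" step could be sketched like this. The local path, remote name, and log path here are assumptions based on the config above, not anything from Animosity022's repo:

```shell
#!/usr/bin/env bash
# Hypothetical nightly mover: new files land on a local disk first, and this
# script moves anything older than a day up to the crypt remote.
# LOCAL_DIR, REMOTE and LOG are assumptions -- adjust to your setup.

LOCAL_DIR="/home/**USERNAME**/mediabox/local"
REMOTE="gdcrypt:"
LOG="/home/**USERNAME**/logs/rclone-move.log"

upload() {
  # --min-age 1d skips files still being written today;
  # --delete-empty-src-dirs cleans up the local tree afterwards.
  rclone move "$LOCAL_DIR" "$REMOTE" \
    --min-age 1d \
    --delete-empty-src-dirs \
    --log-file "$LOG" \
    --log-level INFO
}

# Only run when invoked with "run", so the file can be sourced safely.
if [ "${1:-}" = "run" ]; then
  upload
fi
```

You would then schedule it from cron, e.g. `0 3 * * * /path/to/move.sh run` for a nightly 3 AM upload.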

Please note, I have had issues with Radarr/Sonarr and some other containers failing when their data is mapped to a host folder on a mergerfs volume. For example, Radarr/Sonarr use SQLite, and mergerfs with direct_io enabled does not support mmap, which SQLite uses. So personally, I simply map Radarr/Sonarr's /config directory to a local directory (very small footprint), and put everything else that eats up the space on the mergerfs.
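In compose terms, that split might look something like this (the host paths here are hypothetical, just to show the idea):

```yaml
mediabox-sonarr:
    image: linuxserver/sonarr
    volumes:
      # /config holds Sonarr's SQLite database -- keep it on a local disk
      - /home/**USERNAME**/docker/sonarr/config:/config
      # bulk media that eats the space can live on the mergerfs volume
      - /mnt/mergerfs/media:/cloud
```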

Thanks for the answer.
At the moment I am testing pointing Sonarr/Radarr directly to the mount, but if in the end I'm not happy with the result I'll definitely check out those scripts.
This question suggested changing the bind propagation of the Docker volumes, and I am testing whether that resolves most of the problems; "unfortunately" the mount has stayed up for the last 24 hours.
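For anyone following along, the bind-propagation change can be written with compose's long volume syntax (this needs a version 3.2+ compose file; the service name and path below just mirror the snippet above). With `rslave` propagation, a mount that is re-established on the host propagates into the already-running container:

```yaml
mediabox-sonarr:
    image: linuxserver/sonarr
    volumes:
      - type: bind
        source: /home/**USERNAME**/mediabox/gd
        target: /cloud
        bind:
          # re-mounts on the host propagate into the container
          propagation: rslave
```

The short-syntax equivalent is `- /home/**USERNAME**/mediabox/gd:/cloud:rslave`. Note this only works if the host mount point lives under a shared mount, which is the default on systemd hosts.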