What is the problem you are having with rclone?
The rclone data mount is not accessible on the host
Run the command 'rclone version' and share the full output of the command.
rclone v1.56.0
- os/version: alpine 3.14.0 (64 bit)
- os/kernel: 5.16.10-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.16.6
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
command: "mount gd:Cloud /data --allow-other --daemon --log-file /log/rclone.log --log-level DEBUG --poll-interval 10s --umask 002 --drive-pacer-min-sleep 10ms --drive-pacer-burst 200 --vfs-cache-mode full --cache-dir /cache --vfs-cache-max-size 250G --vfs-cache-max-age 5000h --vfs-cache-poll-interval 5m --bwlimit-file 32M"
The rclone config contents with secrets removed.
[gd]
type = drive
client_id = -Obfuscated-
client_secret = -Obfuscated-
scope = drive.readonly
token = -obfuscated-
team_drive =
root_folder_id = -obfuscated-
A log from the command with the -vv flag
rclone: Version "v1.56.0" starting with parameters ["rclone" "mount" "gd:Cloud" "/data" "--allow-other" "--daemon" "--log-file" "/log/rclone.log" "--log-level" "DEBUG" "--poll-interval" "10s" "--umask" "002" "--drive-pacer-min-sleep" "10ms" "--drive-pacer-burst" "200" "--vfs-cache-mode" "full" "--cache-dir" "/cache" "--vfs-cache-max-size" "250G" "--vfs-cache-max-age" "5000h" "--vfs-cache-poll-interval" "5m" "--bwlimit-file" "32M"]
2022/02/21 14:56:36 DEBUG : Creating backend with remote "gd:Cloud"
2022/02/21 14:56:36 DEBUG : Using config file from "/config/rclone/rclone.conf"
2022/02/21 14:56:36 DEBUG : gd: detected overridden config - adding "{cHldw}" suffix to name
2022/02/21 14:56:36 DEBUG : fs cache: renaming cache item "gd:Cloud" to be canonical "gd{cHldw}:Cloud"
2022/02/21 14:56:36 DEBUG : vfs cache: root is "/cache/vfs/gd{cHldw}/Cloud"
2022/02/21 14:56:36 DEBUG : vfs cache: metadata root is "/cache/vfs/gd{cHldw}/Cloud"
2022/02/21 14:56:36 DEBUG : Creating backend with remote "/cache/vfs/gd{cHldw}/Cloud"
2022/02/21 14:56:36 DEBUG : Google drive root 'Cloud': Mounting on "/data"
2022/02/21 14:56:36 INFO : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2022/02/21 14:56:36 DEBUG : : Root:
2022/02/21 14:56:36 DEBUG : : >Root: node=/, err=<nil>
rclone docker-compose.yml
version: "3.7"
services:
rclone:
image: rclone/rclone
container_name: rclone_vfs
restart: unless-stopped
environment:
- PUID=1002
- PGID=1004
volumes:
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
- /DATA/DockerData/Plex/data:/data:rshared
- /DATA/DockerData/Plex/rclone/config:/config/rclone
- /DATA/DockerData/Plex/rclone/log:/log
- /DATA/DockerData/Plex/rclone/cache:/cache
- /etc/fuse.conf:/etc/fuse.conf
privileged: true
cap_add:
- SYS_ADMIN
devices:
- /dev/fuse
security_opt:
- apparmor:unconfined
command: "mount gd:Cloud /data --allow-other --daemon --log-file /log/rclone.log --log-level DEBUG --poll-interval 10s --umask 002 --drive-pacer-min-sleep 10ms --drive-pacer-burst 200 --vfs-cache-mode full --cache-dir /cache --vfs-cache-max-size 250G --vfs-cache-max-age 5000h --vfs-cache-poll-interval 5m --bwlimit-file 32M"
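One thing I am now suspicious of (my own assumption, not confirmed by the log above): --daemon forks rclone into the background, so the process Docker started as PID 1 exits, Docker treats the container as stopped, and with restart: unless-stopped it will restart in a loop; any mount made in the exiting container's namespace would be torn down with it. The foreground variant I intend to test is the same command with --daemon dropped:

```yaml
# Same command as above minus --daemon, so rclone stays running as PID 1
command: "mount gd:Cloud /data --allow-other --log-file /log/rclone.log --log-level DEBUG --poll-interval 10s --umask 002 --drive-pacer-min-sleep 10ms --drive-pacer-burst 200 --vfs-cache-mode full --cache-dir /cache --vfs-cache-max-size 250G --vfs-cache-max-age 5000h --vfs-cache-poll-interval 5m --bwlimit-file 32M"
```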
Additional information -
The PUID/PGID values match the user I would like rclone to run under and the user I create the container as
I have also tried setting the PUID/PGID to 0/0 and launching the container as root, with the same result
I have also tried setting the PUID/PGID to 1000/1000 and launching the container as that user, with the same result
Adding privileged: true makes no difference
I have verified that the data from Google is mounted at /data inside the container
I have verified that all volumes are mounting properly aside from one: /DATA/DockerData/Plex/data
I have tried both :shared and :rshared on this mount; the end result is the same
I do have rclone installed on the host system, which includes its fuse dependency
I have edited /etc/fuse.conf and added "user_allow_other" to it
I have run chmod 777 on the /DATA/DockerData/Plex/data directory, with the result remaining the same
The host is running Arch Linux with kernel 5.16.10-arch1-1
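To narrow down whether this is a mount-propagation problem, these are the checks I run (a diagnostic sketch using the paths and container name from my compose file; the --make-rshared step is my assumption based on the kernel's shared-subtrees documentation, untested here):

```shell
# What the container sees at the mount point
docker exec rclone_vfs ls /data
# What the host sees at the bind source (empty output = propagation failing)
ls /DATA/DockerData/Plex/data
# Propagation mode of the mount containing the bind source
findmnt -o TARGET,PROPAGATION --target /DATA/DockerData/Plex/data
# If PROPAGATION reports "private", the host tree may need to be shared
# before :rshared can propagate the container's mount back out (assumption):
# sudo mount --make-rshared /
```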
I do have a working rclone mount on this system that does not use docker-compose; its deployment script is below.
#! /bin/bash
######## Variables ########
#### Static Variables (Do Not Change!) #####
# Gets current location of scripts
target_PWD="$(readlink -f .)"
#### Dynamic Variables ####
# Docker Network Name
dNET=biNET
# User that will have access to files
Username=terry
# Name of group for file access
GroupName=bimedia
# Docker Command
dCMD=create
# TimeZone
tZone=America/Chicago
# root directory specified for all databases (recommended on SSD) (no trailing forwardslash)
rdbDir=/opt/blueiris
# root directory specified for heavy IO
rioDir=/DATA/blueiris
# rclone cloud name
rcN='gdrive:Terry/BackUps/BlueIris'
# Your preferred PUID (run "id youruser" to find your uid/gid)
prefPUID=1000
# Your Preferred GUID
prefGUID=1000
# Rclone preferred PUID (rclone must be run as root)
RcprefPUID=1000
# Rclone Preferred GUID
RcprefGUID=1000
# Some more commonly edited rclone settings
rcloneBufferSize=100M
rcloneCacheDir=${rioDir}
rcloneVfsCacheSize=100G
docker ${dCMD} --name rclone-vfs_BI \
--cap-add SYS_ADMIN \
--device /dev/fuse \
--security-opt apparmor:unconfined \
--network=${dNET} \
-e PUID=${RcprefPUID} \
-e GUID=${RcprefGUID} \
-e TZ=${tZone} \
-v ${rcloneCacheDir}/cache:/cache \
-v ${rdbDir}/config/rclonevfs:/config \
-v ${rioDir}/Cloud:/data:shared \
rclone/rclone mount ${rcN} /data \
--cache-dir /cache/rclone-vfs \
--config /config/rclone.conf \
--allow-other \
--allow-non-empty \
--buffer-size ${rcloneBufferSize} \
--fast-list \
--timeout 1h \
--tpslimit 4 \
--umask 002 \
--vfs-cache-mode writes \
--vfs-cache-max-size ${rcloneVfsCacheSize} \
--vfs-read-chunk-size-limit 500M \
--vfs-read-chunk-size 100M \
--rc \
--rc-addr='localhost:15491' \
--log-file /config/rclone-vfs.log \
--log-level INFO
I am in the process of converting all of my containers to docker-compose, since I find groups of containers inside one .yml file much easier to manage in the long run. Any assistance would be greatly appreciated.