I'm setting up rclone v1.48.0 with Google Drive on two servers, Debian 9 and 10; call them A and B. A holds about 5TB of data that I'm currently uploading, while B is my new home server, which will (hopefully) replace A in the long term and is kept in sync.
On A I just run all the mounts as root and copy from the original source /home/user/downloads/subfolders to /home/plex/subfolders, from where they are uploaded.
On B I started out almost the same, just using /mnt/google/subfolders and fuse-mounting them to /home/user/downloads/subfolders.
Since I download via rtorrent/ruTorrent, have content unpacked automatically, and then serve it via Plex, I have to make sure the user rtorrent runs as has permission to write.
I now changed the owner of the upload folder to that user and edited the systemd unit to run the mount as that user (I chose a test directory containing just the empty file "mountcheck"). The file that was already there disappeared. In a second test I created the same file as the user in the upload folder; it gets uploaded, but again does not appear in the encrypted Gdrive folder.
Mount-log:
2019/08/30 19:29:34 ERROR : Encrypted drive 'dec-auto:/': Statfs failed: failed to read disk usage: permission denied
Upload log:
30.08.2019 18:59:01 RCLONE UPLOAD STARTED
2019/08/30 18:59:14 NOTICE: Encrypted drive 'auto-gd:/': --checksum is in use but the source and destination have no hashes in common; falling back to --size-only
30.08.2019 18:59:14 RCLONE UPLOAD ENDED
if pidof -o %PPID -x "upload-auto.cron"; then
    exit 1
fi
LOGFILE="/home/scripts/logs/upload-auto.cron.log"
FROM="/mnt/google/auto/"
TO="auto-gd:/"
# CHECK FOR FILES IN THE FROM FOLDER THAT ARE OLDER THAN 5 MINUTES
if find "$FROM" -type f -mmin +5 | read -r; then
    echo "$(date "+%d.%m.%Y %T") RCLONE UPLOAD STARTED" | tee -a "$LOGFILE"
    # MOVE FILES OLDER THAN 5 MINUTES
    rclone move "$FROM" "$TO" -c --no-traverse --transfers=300 --checkers=300 --delete-after --min-age 5m --log-file="$LOGFILE"
    echo "$(date "+%d.%m.%Y %T") RCLONE UPLOAD ENDED" | tee -a "$LOGFILE"
fi
exit 0
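The age check driving that script can be exercised locally without touching any remote. A minimal sketch against a throwaway directory (the file names are made up for illustration):

```shell
# Demonstrate the "only act on files older than N minutes" check
# used by the upload script, against a throwaway directory.
tmp=$(mktemp -d)
touch -d "10 minutes ago" "$tmp/old.mkv"   # old enough to be picked up
touch "$tmp/fresh.mkv"                     # too new, must be skipped
# -mmin +5 matches files last modified more than 5 minutes ago
old_files=$(find "$tmp" -type f -mmin +5)
echo "$old_files"
rm -rf "$tmp"
```

Only old.mkv should be printed; a still-downloading file younger than the threshold is left alone, which is the whole point of the `--min-age` / `-mmin` pairing.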
I wonder why the files disappeared in the first place, since I've already uploaded 3TB+ this way.
And also why recreating the file doesn't make it reappear.
It's a little hard to tell without seeing your config, but could it be that you are uploading files to a drive through a non-crypted remote, but then reading them back through a crypted one?
If so, the remote is trying to decrypt files that aren't encrypted in the first place, usually resulting in such a garbled mess that the filesystem just ignores them completely and they seem to disappear. I've already come across a few threads where users had that issue and were confused by it.
If you have both encrypted and unencrypted files on the same drive, you'd want to separate the two into different folders and use 2 different remotes (one with crypt and one without) to access them.
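To illustrate (the remote and folder names here are made up, and credentials are elided): one plain drive remote for unencrypted data and one crypt remote pointed at a dedicated subfolder, so each path is only ever read back the same way it was written:

```
[gd]
type = drive
scope = drive
# client_id, client_secret, token as usual

[gd-crypt]
type = crypt
remote = gd:/encrypted
filename_encryption = standard
directory_name_encryption = true
# password, password2 as usual
```

Unencrypted files then live outside gd:/encrypted and are accessed through gd:, while everything written through gd-crypt: lands in gd:/encrypted and is only ever read back through gd-crypt:.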
It's also very possible this is user-permissions related based on the changes you said you made, but I'm a noob when it comes to those on Linux, so you'd want to ask @Animosity022 for input on that.
No, they both have the same setup with the encrypted and decrypted rclone remotes. As I said, A runs fully as root, and when I change B to the required user the files seem to disappear. On B I can see all the other files owned by root and can access them in Plex. But I need the user to be able to write the torrents and seed them back.
For the initial setup I used https://hoarding.me/rclone/ because it was actually the only guide I found. Now I'm trying to move as much as possible to systemd units and adapt them to my setup, like the user permissions.
logfile="/home/scripts/logs/fuse-mount-auto.cron.log"
if pidof -o %PPID -x "fuse-auto.cron"; then
    echo "$(date "+%d.%m.%Y %T") EXIT: fuse-auto.cron already running." | tee -a "$logfile"
    exit 1
fi
if [[ -f "/home/user/downloads/auto/mountcheck" ]]; then
    echo "$(date "+%d.%m.%Y %T") INFO: Check successful, fuse mounted." | tee -a "$logfile"
    exit 0
else
    echo "$(date "+%d.%m.%Y %T") ERROR: Drive not mounted, remount in progress." | tee -a "$logfile"
    # Unmount before remounting
    fusermount -uz /home/user/downloads/auto | tee -a "$logfile"
    /usr/bin/unionfs-fuse -o cow,allow_other /mnt/google/auto=RW:/mnt/google/auto-gd=RO /home/user/downloads/auto
    if [[ -f "/home/user/downloads/auto/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Remount successful." | tee -a "$logfile"
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: Remount failed." | tee -a "$logfile"
    fi
fi
exit 0
And well, as said, the file in the directory disappeared after changing from root to user.
Yes, definitely a permissions issue then. I pinged Animosity for you, so chances are he'll pop in here and give some advice soon, as he is very active on the forum.
That's quite the overly complex setup you've got going on with the number of remotes.
You'd probably want to peel back the problem a bit and try to get a small use case / test setup to reproduce your issue and work with that config.
In general, neither plexdrive nor rclone has any concept of users or permissions. If you mount something, it runs as that user and is supplied whatever permissions you give it when you mount it. You can't change them afterwards, as cloud storage has no concept of Linux permissions.
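In practice that means the ownership shown on the mount is decided at mount time. A hedged sketch (the uid/gid 1000 values and paths are assumptions for illustration; check the real values with `id anon`):

```shell
# Run the mount as root but present the files as owned by the
# service user, so a non-root process (e.g. rtorrent) can write.
# --allow-other lets users other than the mounting one see the mount.
rclone mount auto-gd:/ /mnt/google/auto-gd \
    --allow-other \
    --uid 1000 \
    --gid 1000 \
    --umask 002
```

Nothing on the cloud side changes; these flags only control what the FUSE layer reports locally.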
Take 1 file and upload it to a remote.
Validate you can see that file on the remote
Validate you can see that file on the mount
If that does not work, share the workflow and what you did.
So I retested it again.
I'm only concentrating on my test folder "auto" for uploads and "auto-gd" where the decrypted files show up. I have them in a fuse mount, but I also tested without it.
All mount-related things run as root (plexdrive mount, rclone mount, upload script); the service that needs to write to it runs as a user, let's call it anon. Therefore I either chmod the upload folder to 777 or chown it to anon. Either way, a file created by anon is uploaded and shows up encrypted on the Gdrive website, but it does not show up in the auto-gd folder (which is of course owned by root).
I added "-o allow_other" to the plexdrive.service and changed the rclone mount service back to root. Those files are above.
And the rclone config looks like this for the folders in question:
[gd]
type = drive
client_id = client
client_secret = secret
scope = drive
token = x
[dec-auto]
type = crypt
remote = /mnt/plexdrive/auto-gd
filename_encryption = standard
directory_name_encryption = true
password = 1
password2 = 2
[auto-gd]
type = crypt
remote = gd:/auto-gd
filename_encryption = standard
directory_name_encryption = true
password = 1
password2 = 2
I can't check whether I see it on the server's filesystem, since I don't have gd: mounted anywhere, only the folders. I can see the file in the Gdrive overview, just as I can see the other (encrypted) files there. What do you want me to test exactly? Should I repeat it in a new folder? Or in the existing auto-gd folder? Encrypted or decrypted? Or should I just mount gd: somewhere and check there?
EDIT: OK, it looks like it isn't syncing at all anymore. When refreshing Plex, nothing added in the last 2 days shows up there. The files aren't in the Plex container either. I'll need to check that after work.
ls -al /mnt/google/auto-gd
total 0
-rw-r--r-- 1 root root 0 Aug 28 12:57 mountcheck
-rw-r--r-- 1 root root 0 Aug 31 09:44 test
That looks to me like something along the route just stopped syncing entirely.
I wonder whether files uploaded as user "anon" then appear as owned by "root", and if so, whether rtorrent can still pick them up with just the read permission for others and seed them back.
I also wonder if I couldn't just leave plexdrive out entirely; there are so many different setups I'm reading through that I feel there is no "golden way to go".
The other thing is the encryption, and whether it slows down streaming so much that I can't flawlessly watch common 4K movies. I couldn't test that extensively yet, since the only 4K movie still on my watchlist doesn't get synced. But I guess crypt is recommended when using Gdrive?
Anyway, is there any idea what causes the sync problem in the first place, how to resolve it, and how to keep it from happening again?
I use rclone + crypt and stream without any issues.
If you want to focus on a single item rather than multiple things, we can probably get to the issue.
As was suggested, can you take 1 file, walk it through the process, run those commands with rclone, and take plexdrive out of the mix?
Take 1 file.
Copy it up with rclone
Verify it's on the remote with rclone ls
Verify it's on the mount, and share the mount command; it must mount the same remote you used in the previous steps.
If you can share that, we can step through the issue.
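The steps above could look roughly like this (the local file, the debug folder, and the mountpoint are made-up names for illustration; auto-gd: is the crypt remote from the config posted earlier):

```shell
# 1. Copy one file up through the crypt remote (no mount involved):
rclone copy /tmp/testfile.txt auto-gd:/debug

# 2. Verify it arrived on the remote:
rclone ls auto-gd:/debug

# 3. Mount the SAME remote and verify the file is visible there too:
mkdir -p /mnt/check
rclone mount auto-gd:/ /mnt/check --allow-other &
sleep 5
ls -al /mnt/check/debug
```

If step 2 shows the file but step 3 does not, the problem is in the mount (or its caching) rather than in the upload.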
I've thrown out plexdrive after reading some more explanations, and configured it so there is only 1 remote per folder, without the dec-auto, dec-.. and so on folders. I also already tested a new torrent and made sure it downloads, appears, gets indexed in Plex, and stays seeding even after a service restart.
Next I will fight mount options and stutter-free 4K streaming, but for that I guess I'll open a new topic if needed.