Yeah, I don’t use samba at all so not sure I can help there.
Yeah, no worries. The annoying part is the lack of information in the logs: if I copy the same files via Samba to the server's local storage, they copy fine with no issues; the connection only drops when copying to the rclone share via Samba.
Been trying to google the issue, and it seems I'm either the only one having it or the only one who has attempted this.
Putting layers on top of things makes it more complex and harder to maintain and troubleshoot.
Samba is great for local sharing but, from what I gather, poor at cloud-backed sharing, as it expects responses to be fast and cloud storage isn't fast.
You'd probably be better served just putting rclone on each device and using that directly, as mounts poll for changes every minute by default and it would be easier to maintain/support.
Just my two cents.
Going back to webdav… Can you elaborate here? What do you mean by 'writes start failing'? Do you see failures in the rclone logs? Is the WebDAV Windows mount hanging? What symptom are you seeing when using WebDAV (without Samba on top)?
webdav should be more resilient to latency.
I used TeraCopy to do the transfer rather than Explorer, and it reports errors about being unable to trim the file; this happens in the last 5% of the transfer every time.
When the transfer fails this shows up in the samba log.
[2019/01/09 17:22:21.000626, 2] ../source3/smbd/close.c:788(close_normal_file)
  localadmin closed file rclone_issue/LocalTest/TestFile 1.mp4 (numopen=2) NT_STATUS_OK
[2019/01/09 17:24:04.255213, 2] ../source3/smbd/close.c:788(close_normal_file)
  localadmin closed file rclone_mount/RemoteTest/TestFile 1.mp4 (numopen=1) NT_STATUS_CONNECTION_ABORTED
[2019/01/09 17:24:04.255324, 3] ../source3/smbd/smb2_server.c:3097(smbd_smb2_request_error_ex)
  smbd_smb2_request_error_ex: smbd_smb2_request_error_ex: idx status[NT_STATUS_CONNECTION_ABORTED] || at ../source3/smbd/smb2_close.c:109
[2019/01/09 17:24:04.255859, 2] ../source3/smbd/service.c:1098(close_cnum)
  uni-pc (ipv4:192.168.0.58:61768) closed connection to service localadmin
[2019/01/09 17:24:04.255952, 2] ../source3/smbd/service.c:1098(close_cnum)
  uni-pc (ipv4:192.168.0.58:61768) closed connection to service mnt
For some reason Samba aborts the connection when finishing one file and starting the next; it usually handles the first two or three fine, then fails on the next one.
Not sure what 'trim the file' means, but reading about TeraCopy, it will wait until the end of the transfer to report errors that may have occurred along the way. Mixing tools on tools on tools is going to make this really hard to debug: you won't know whether TeraCopy, rclone, Windows, or something else is the source of your problem.
Sure, I understand, but Windows Explorer gives poor error messages; all it tells me is that the server isn't accessible and asks me to check that I'm connected to the network.
When trying to debug the samba issue, I am using only explorer.
Can you use the rclone move command to push your files to gdrive? Use the mount strictly for reading.
I could, yes, but that would mean running it from cron, and in some cases the next run may start before the previous one has finished. So that isn't ideal.
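For what it's worth, overlapping cron runs are usually prevented by wrapping the move in flock. A minimal sketch, where the staging path /mnt/local, the remote name gcrypt:, and the log path are all assumptions to adapt:

```shell
#!/bin/sh
# Hypothetical upload wrapper: flock -n exits immediately if another
# instance already holds the lock, so overlapping cron runs never stack up.
exec /usr/bin/flock -n /tmp/rclone_upload.lock \
    /usr/bin/rclone move /mnt/local gcrypt:media \
        --log-file /var/log/rclone_upload.log \
        --log-level INFO
```

A crontab entry such as `*/30 * * * * /usr/local/bin/upload_cloud.sh` is then safe even when one run overflows its time slot: the next invocation simply bails out instead of running a second move in parallel.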
It just isn't working out for me, as others seem to be able to do what I am trying to do with no issues.
Just to piggyback on this thread, my current setup involves:
Local Storage attached to a Windows PC
Windows PC running Plex Media Server
Windows PC sharing Local Storage via Samba as Network Storage
Vero 4K+ accessing Network Storage via Samba
Apple TV 4K accessing Network Storage via Kodi via Samba
iPad / iPhone accessing Network Storage via Kodi via Samba
I have begun uploading my Media to a Google Drive Unlimited account and would like to set up an rclone mount, with Plex Media Server scanning the mount and serving the files it finds. I would also like this mount to be accessible by the Kodi players.
I’ve seen some issues with sharing rclone mount via Samba so I am looking for any advice as to how to accomplish this. I am also willing to consider a different architecture as long as it can meet my objectives. Thanks for the help!
Can you post the rclone version you are using and your configuration files for both rclone and Samba? You should also post your logs from rclone and Samba; for Samba, level 3 logging should be enabled to aid in debugging your issue.
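For reference, a typical way to turn that on in smb.conf (the log path here is illustrative; merge with your existing [global] section):

```ini
[global]
   # Per-client log files make it easier to isolate one machine's traffic
   log file = /var/log/samba/log.%m
   max log size = 1000
   # Level 3 logs every SMB operation; drop back down once done debugging
   log level = 3
```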
I don't currently use Plex; I'm using NextPVR instead. But if I can get this working reliably I thought I would switch to Plex, as it does seem like a good product.
Just use Samba to serve up Animosity's vfs mount + mergerfs + cron-based move with a lock file script. See his VFS sweet spot post on this forum, and then point Samba at the final merged directory path. I have that running on CentOS 7, with Sonarr and the like running on Windows pointed at it, and it works great.
I’ve never used or heard of unionfs or mergerfs until recently, didn’t really understand how to use unionfs, although found a better page of info for mergerfs and it does sound really good.
Whilst I could do this, I am confused as to what paths I would use. For example, in NextPVR I can set the path my recordings go to, and also set up media folders. So I guess I would use the local hard drive for where my recordings go, and the mergerfs folder for the media folder; that's fine.
You mentioned having a cron-based move with a lock script. I checked the post you mentioned and didn't realise that was possible, but what if the cron job runs whilst a recording is in progress? Surely the script will try to move the file whilst it is still being written to? Is there a way to prevent that?
I asked Animosity about that in his vfs post. It will see the modification happening and not move the file, so it just gets moved the next time cron runs, once the modifications are done.
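In rclone terms, the usual safeguard is the --min-age filter, which skips anything modified within the given window. A sketch, with the local path and remote name assumed rather than taken from anyone's actual setup:

```shell
# Files touched within the last 15 minutes (e.g. an in-progress
# recording) are skipped and picked up on a later cron run.
rclone move /mnt/local gcrypt:media --min-age 15m
```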
Oh I see, that is pretty good then!
A quick google suggests that they should be put in /etc/systemd/system. Is this where you put the files to mount the drives and so on?
Can I use /etc/fstab to mount the rclone drive?
I personally use a service file:
felix@gemini:/etc/systemd/system$ cat gmedia-rclone.service
[Unit]
Description=RClone Service
PartOf=gmedia.service
RequiresMountsFor=/data

[Service]
Type=notify
Environment=RCLONE_CONFIG=/data/rclone/rclone.conf
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --bind 192.168.1.30 \
   --buffer-size 256M \
   --dir-cache-time 72h \
   --drive-chunk-size 32M \
   --log-level INFO \
   --log-file /home/felix/logs/rclone.log \
   --timeout 1h \
   --umask 002 \
   --vfs-read-chunk-size 128M \
   --vfs-read-chunk-size-limit off \
   --rc
ExecStop=/bin/fusermount -uz /GD
Restart=on-failure
User=felix
Group=felix

[Install]
WantedBy=gmedia.service
What advantages does a service file have over having the mount points setup in /etc/fstab?
As I have multiple local drives, I thought I would create a local mergerfs mount that gathers them all up, and then a second mergerfs mount that combines the local mergerfs mount and the rclone mount.
The reason for that is I can then use your upload_cloud cron file to copy/move all the data from the single local mount to the cloud, instead of needing a separate cron job for each drive.
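That two-tier layout might look roughly like the following. The mount points, branch order, and create policies are illustrative assumptions, not a tested recipe:

```shell
# Tier 1: pool the local disks into a single view; category.create=mfs
# writes new files to whichever branch has the most free space
mergerfs /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/local \
    -o defaults,allow_other,use_ino,category.create=mfs

# Tier 2: merge the pooled local storage with the rclone mount;
# category.create=ff sends new files to the first branch (local),
# so the cloud branch is effectively read-only for new writes
mergerfs /mnt/local:/GD /gmedia \
    -o defaults,allow_other,use_ino,category.create=ff
```

With this arrangement the upload cron only ever needs to watch /mnt/local, and everything (local and cloud) appears under the single /gmedia path for NextPVR or Plex to scan.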
It’s really up to your personal choice and how you like to do things.
I don't like using /etc/fstab; I prefer the service file. You can also use a systemd mount file, as there are two options in systemd for getting it working.
Mergerfs would be a great solution for that, as it combines drives well and presents a single mount point. It also gives you some options in terms of how writes are distributed across the branches.