Rclone mount alternative (Kodi + dmdsoftware + STRM)

I use rclone to move files from my VPS to ACD & gDrive in sync (in case one goes down) and syncing a selection of all my media to a local server. As many of you, I’ve been playing around with various settings to get a nice streaming setup via rclone mount with either ACD or gDrive. My own results have varied greatly & I haven’t found a truly reliable setup.

The alternative that has been working very reliably for me is ddurdle's XBMC/Kodi Cloud Service Plugins with both ACD & gDrive (http://dmdsoftware.net/): save STRM placeholder files and have Kodi index them. My results have been:

  • almost instant video playback
  • no buffering issues or playback errors
  • seeking works
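For anyone unfamiliar with the STRM trick: a .strm file is just a one-line text file containing the URL Kodi should play; point a Kodi library source at a folder of them and it indexes them like local media. A minimal sketch of generating placeholders (the plugin:// URL format below is an assumption; check the dmdsoftware add-on's documentation for the exact scheme it expects):

```shell
#!/bin/sh
# Sketch: one .strm placeholder per cloud media path. The plugin:// URL
# format below is an assumption -- check the add-on's docs for the real scheme.
OUT_DIR="strm_library"

# Sample paths; in practice this list could come from `rclone lsf remote:Media`.
printf '%s\n' \
  'Movies/Example Movie (2010)/Example Movie (2010).mkv' \
  'TV/Example Show/S01E01.mkv' |
while IFS= read -r path; do
    out="$OUT_DIR/${path%.*}.strm"
    mkdir -p "$(dirname "$out")"
    # Each .strm contains a single line: the URL Kodi resolves and plays.
    printf 'plugin://plugin.video.clouddrive/?path=%s\n' "$path" > "$out"
done
```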

Anyone else given this method a try?


you don’t use any encryption of your files, right?

Nope, otherwise this method wouldn’t work.

How do you handle updating the streams as you add media? Is this a manual process?

Right now I have to do it manually, but I'm expecting an update; there's a GitHub issue open with the author requesting this feature:
https://github.com/ddurdle/Amazon-Cloud-Drive-for-KODI/issues/27

Aren't you worried about account suspension due to illegal content?

I'm using rclone mount and rclone move (encfs encrypted) to keep all my data on Amazon Cloud Drive, and I'm keeping a copy of my entire encrypted library on unlimited Google Drive.
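A rough sketch of that move-then-mirror flow (the remote names "acd:" and "gdrive:" and the paths are placeholders for your own configured rclone remotes; the commands are echoed as a dry run, so remove "echo" to run them for real):

```shell
#!/bin/sh
# Sketch of the move-then-mirror flow described above. Remote names and
# paths are placeholders; commands are echoed as a dry run -- remove
# "echo" to execute them.
LOCAL_ENC="/srv/encfs-encrypted"   # encfs ciphertext directory on the VPS

# 1. Move finished (already-encrypted) files off the VPS to Amazon Cloud Drive.
echo rclone move "$LOCAL_ENC" acd:media --transfers 4 --checkers 8

# 2. Keep a second copy on Google Drive in case ACD goes down.
echo rclone copy acd:media gdrive:media --transfers 4
```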

My results

  • playback on average starts within 3 to 10 seconds (in rare cases 10 to 20 seconds, but that's mostly when internet peering to Amazon is a bit slower or I have 10+ simultaneous streams running)
  • average upload to ACD is between 40 and 60 MB/s
  • no buffering issues
  • seeks work

My Plex server settings
Transcoder temp directory: /dev/shm (i.e. transcode files go to RAM; /dev/shm is typically limited to 50% of available RAM)
Transcoder default throttle buffer: 300 (this is the key setting: lower values may cause more buffering issues)


Not too worried. I got all my stuff on ACD & gDrive. I know lots of people who don’t encrypt. I’ve heard of some occasional cases when they have been blocked from sharing.

Also, I’ve read on reddit that lots of people who encrypt & store large amounts of data run into problems. Some suspect that because encryption prevents deduplication, the big companies may have a bigger issue with those users than with any illegal content you might be storing.

Well, my main worry is that once you go over 100TB as an Amazon Drive consumer, they may just check your hashes for any illegal content and auto-ban you.

P.S. I'm tempted to buy one more Amazon Drive account and just keep unencrypted content there.

I’m nowhere near 100TB yet, and since I’ve been an excessive spender on Amazon for over a decade, I’d be surprised if they found it worthwhile to piss off a loyal customer. Either way, I’m hedging my bets, which is why I keep copies on gDrive.

How do you encrypt your files with rclone so that the drive add-on can still read them?

I have a VM running Ubuntu. Rclone mounts the encrypted Google and ACD remotes, and the mounts are then exposed to Kodi via Samba (CIFS shares). An FTP server also runs in this VM, giving access to all cloud resources under Linux in the LAN.

It runs nearly perfectly. Kodi scans the directories of the CIFS-shared unionfs, which gives instant access to all files, even those not yet uploaded (moved) to the cloud (that happens in the background).

So the VM itself has Google + Amazon (as backup) mounted, and unionfs gives me a writable system with local HDD space (an attached 250 GB virtual HDD buffers uploads). Local data from the unionfs is continuously uploaded after arriving in the VM over CIFS (only files older than X hours, so the copy to the VM is guaranteed complete before uploading). A second script continuously copies Google to Amazon (for HA) with the help of a cloud server instance.
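That unionfs + delayed-upload idea can be sketched roughly like this (the paths, the remote name "gdrive-crypt:", and the 6-hour threshold are illustrative; commands are echoed as a dry run, so remove "echo" to execute):

```shell
#!/bin/sh
# Sketch of the unionfs + delayed-upload flow described above. Paths,
# the remote name "gdrive-crypt:", and the 6h threshold are illustrative.
# Commands are echoed as a dry run; remove "echo" to execute.
LOCAL="/data/local"        # writable branch (the 250 GB buffer disk)
CLOUD="/data/cloud-mount"  # rclone mount of the encrypted remote
UNION="/data/union"        # exported to Kodi over Samba/CIFS

# Merge the branches: writes land on LOCAL, reads fall through to CLOUD.
echo unionfs-fuse -o cow "$LOCAL"=RW:"$CLOUD"=RO "$UNION"

# Only move files older than 6 hours, so a copy still arriving over
# CIFS is never uploaded half-finished.
echo rclone move "$LOCAL" gdrive-crypt:media --min-age 6h
```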

In this setup Kodi runs very well (Full HD x264 is converted to H.265, so big files are no problem anymore and bandwidth use is very low).

Kodi's DB runs in my environment under MySQL (advancedsettings.xml). This setup, and the shares, are distributed to all Kodi devices in my LAN, sharing the complete setup with all the maniacs in my family… 8)
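For anyone searching for that shared-DB bit: it lives in userdata/advancedsettings.xml on each device, and a minimal fragment looks roughly like this (host and credentials are placeholders for your own MySQL server):

```xml
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.10</host> <!-- placeholder: your MySQL server -->
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
  <musicdatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </musicdatabase>
</advancedsettings>
```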

External devices can access the resources under Kodi via the FTP solution (port forwarding to the VM on the router). Don't miss the opportunity to watch all shares with Kodi on an Android smartphone/device; good for business trips. 8)

Local uploads to the cloud go through my home bandwidth; syncing between the clouds runs on a server in the cloud (Scaleway, very cheap, 200 Mbit connection).

Thanks to the new Windows mount in rclone I can now also sort uploaded data directly under Windows (a little bit easier than doing it under Linux… 8)).

This setup also gives me the option of taking the VM with me and giving access to all shares in another LAN environment, or even moving it to the cloud in the future. But I feel much safer when encryption/decryption is done in the LAN on my personally owned devices (the VM). 8)

The VM itself is also encrypted; shutting down the VM secures the whole setup (passwords/scripts/transfers/logs/external access/user accounts/…).

cheers

FoGBaV

I have a very similar setup. The only difference from what you describe is that I use WebDAV over HTTPS on nginx instead of FTP. I also have encryption on MySQL and expose the database to the outside, so I get watched statuses and everything as if I were at home. It's been working great for a long time. I'll bring my Fire Stick to a hotel and plug it in; it "just works".


Hello @calisro. I tried WebDAV and ownCloud in the past, giving full access to the described infrastructure over ownCloud, but it was kind of slow, and worst of all, it had a 4 GB limit (WebDAV).

I never got the setup to pass the 4 GB limit, and then decided to try it with FTP.

cheers

FoGBaV

If you want to try it again, don't use ownCloud. Mine streams at full bandwidth with no size limits. Just run your own nginx WebDAV server. If you want my config, just ask.

Perfect, I'll try nginx. Thanks for the offer; I'll contact you if problems appear. Thank you very much!

This is the setup I was trying to figure out, but I failed hard on the WebDAV and SQL parts.
I settled on Emby and it worked for a bit, but now it's broken with the newest update and transcoding doesn't work at all (i5).
I'm going to have to wipe my server and take another stab at this setup with MySQL.

I’m a complete newb, so my understanding of Linux is very minimal at the moment.

Hi there. I am interested in playing with this concept. Would you mind sharing your nginx webdav config so I can give this a shot? Thanks!

So, I'm running nginx compiled with WebDAV support (among other things that are not required for this).

$ cat run
./configure --with-http_dav_module --add-module=nginx-dav-ext-module-master --with-http_ssl_module --sbin-path=/usr/local/sbin --add-module=Nginx-limit-traffic-rate-module --with-http_geoip_module --add-module=nginx-rtmp-module-master --with-http_slice_module

default.conf

ssl_certificate /redacted/fullchain.pem;
ssl_certificate_key /redacted/privkey.pem;

ssl_session_timeout 5m;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES128-SHA256:DHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES256-GCM-SHA384:AES128-GCM-SHA256:AES256-SHA256:AES128-SHA256:AES256-SHA:AES128-SHA:DES-CBC3-SHA:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!MD5:!PSK:!RC4";
# Removing DH broke xbmc: !kEDH


ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_dhparam /usr/local/nginx/conf/cert/dhparam.pem;
#ssl_stapling on;

auth_basic "Authorized users only";
auth_basic_user_file /usr/local/nginx/conf/.passwordbasic;

server {
   listen 7443 ssl;
   server_name redacted;
   error_page 497  https://$host:$server_port$request_uri;

   if ($allowedtoconnect = no) {
        return 444;
   }

   location / {
            root /var/www;
            index index.php index.html;
            autoindex on;
            create_full_put_path  on;
            dav_access user:rw group:rw all:r;
            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_ext_methods PROPFIND OPTIONS;

            # doesn't seem to work for kodi
            expires -1;
            #keepalive_timeout 180s;
 
   }

   location ~ \~$       { access_log off; log_not_found off; deny all; }
   location ~ \.recycle { access_log off; log_not_found off; deny all; }
   location ~ \.unionfs { access_log off; log_not_found off; deny all; }
   location ~ \.nomedia { access_log off; log_not_found off; deny all; }
}

My /var/www is actually a bind mount to another directory where I keep my media, called /data/Media.
/data/Media is a unionfs on both my local media (/data/Media1) and my cloud media which is a rclone mount (/data/Media2).
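That layout, sketched as commands (echoed as a dry run; the remote name "acd-crypt:" is a placeholder for the actual rclone remote, the paths are from the post):

```shell
#!/bin/sh
# Sketch of the directory layout described above; echoed as a dry run,
# remove "echo" to execute. "acd-crypt:" is a placeholder remote name.
echo rclone mount acd-crypt:Media /data/Media2 --read-only
echo unionfs-fuse -o cow /data/Media1=RW:/data/Media2=RO /data/Media
# nginx's /var/www is a bind mount onto the union:
echo mount --bind /data/Media /var/www
```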

I keep stuff locally that I know I'll watch soon, then delete it outside of the union in the /data/Media1 area so it stays on /data/Media2. The remote cloud contains all the local media AND a larger set of media; it is a superset of my local copy.

Also, I run an HTTPS server; you can do either. For testing you can try plain HTTP, but I'd highly recommend HTTPS for security. You can get free certs from letsencrypt.org; they work fine with Kodi. I've had to modify the cipher strings, though, to get a full green on the security review sites and still have Kodi connect.
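For reference, a sketch of getting a free Let's Encrypt cert with certbot for a setup like this (the domain is a placeholder; commands are echoed as a dry run, so remove "echo" to execute):

```shell
#!/bin/sh
# Sketch: obtain a free cert via certbot's webroot mode, pointing at the
# directory nginx already serves. Domain is a placeholder; commands are
# echoed as a dry run. Certs land under /etc/letsencrypt/live/<domain>/
# as fullchain.pem and privkey.pem, which ssl_certificate can point at.
DOMAIN="media.example.com"

echo certbot certonly --webroot -w /var/www -d "$DOMAIN"
# Renewal (typically from cron):
echo certbot renew --quiet
```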