Rclone xtream codes backend

Hi there. Looking for a new backend type for the xtream codes api. This is a common setup for iptv services, but their UIs are generally junk and I’d like to be able to use rclone to abstract away the iptv service and interact with the files via mount. This way, it can be added to things like Plex. This would be especially useful for vod services offered by the provider.

Fortunately there is already a go module for it.

By the nature of the service, the xtream remote would be read only and would need some mechanism to create and keep the directory structure updated. The json returned by the api would need to be parsed and a directory structure abstracted from it.

I’d be happy to assist in this, as I have a little Go knowledge, though obviously not as much as the creator.


Other projects to look at:

Trying to figure out the rclone connection, as that doesn't seem to be cloud storage at all.

To watch free tv through rclone?

Paid.
But the connection here is that IPTV services generally have terrible interfaces. Nothing like that of Plex.

Rclone would step in as a middleman, allowing these services to be used inside Plex/emby/jellyfin/etc by abstracting away the delivery of the content and presenting it in a way Plex is used to.

I don't get it.
What does IPTV have to do with rclone?
You are looking at the wrong tool here.

I appreciate your perspective on the Rclone and Xtream Codes setup. Let’s drill down into the technical fit here:

Rclone really shines as a Swiss Army knife for remote file systems, and that’s precisely where it intersects with Xtream Codes. Xtream Codes, at its core, is an API for accessing remote files. Rclone can bridge this gap effectively.

By integrating Xtream Codes as a backend in Rclone, we can leverage Rclone’s robust toolkit for handling remote files. This means we could make Xtream Codes’ remote files accessible as if they were part of a native Rclone remote. This integration could open up possibilities like mounting these files seamlessly, or using Rclone’s array of features for interacting with these files more efficiently.

The value here lies in using Rclone’s flexibility and strength with remote file systems to make Xtream Codes’ files more accessible and easier to manage.

@ncw I have a little Go development experience, and if you’ll guide me on what you need from me to make the xtream api available, I’d love to contribute, even financially.

For a start have a look at:


I guess you are thinking that the Xtream backend would show the existing data as a file system. Looking at the help for the Go package you posted I see Series and Episodes, so rclone could present a directory full of Series and within each of those would be Episodes. Though I see categories in the API so we could do something with that.

Is that the kind of thing you mean? How many series would a user get to see at once? If that was too many (say > 1000) then presenting a directory with 1000 directories in it to the user probably isn't a good UI so it might need sub directories.

That would be your first job - decide how to show the existing streams within a file system structure. Streams can appear in multiple places in the file system structure so a romcom might appear in both romance and comedy sections. You'd want to have a different file for each resolution supported (SD, HD, 4k) I guess.
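If the flat list really is huge (the "> 1000 series" case above), one option is to bucket series into one-letter sub directories. This is purely an illustration of the idea, not anything the xtream API prescribes; a minimal Go sketch:

package main

import (
	"fmt"
	"strings"
	"unicode"
)

// bucketFor picks a one-letter sub directory for a series name, so a flat
// list of thousands of series becomes series/A/..., series/B/..., etc.
// The layout itself is an assumption, not something the API defines.
func bucketFor(name string) string {
	for _, r := range name {
		switch {
		case unicode.IsLetter(r):
			return strings.ToUpper(string(r))
		case unicode.IsDigit(r):
			return "0-9"
		}
	}
	return "#"
}

func main() {
	for _, s := range []string{"Example Series 1", "another show", "24"} {
		fmt.Printf("series/%s/%s\n", bucketFor(s), s)
	}
}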

I'm guessing this would be read only so you wouldn't be able to write new things.

It is relatively easy to write a read only backend, but they are difficult to test.

If you want to have a go at this, then I'm happy to help.

If you want to hire me to implement this, and you are working on behalf of a company, you might be interested in taking out a support contract; I could do the implementation under the cover of that contract.

I see Series and Episodes, so rclone could present a directory full of Series and within each of those would be Episodes.

Yes, and I have sample json returns to work with too.

I see categories in the API so we could do something with that.

Potentially, but categories are completely made up by the maintainer. It would be less important, but still a feature to be considered, nonetheless.

That would be your first job - decide how to show the existing streams within a file system structure

I had a working proof of concept that did this with a set of directories/placeholder files on my local machine, but the basic idea is:

  1. Get VOD:
  • this returns a long json of videos, sample here: getv - Pastebin.com
  • A directory can be abstracted from the name, so "name": "Example Video 1" would render a directory with the same name, with the intent of loading that very same video file as a virtual file in that directory (see the sketch after this list).
  2. Get Series:
  • Series are a bit harder, but the idea is the same.
  • get a list of available series: gets - Pastebin.com
  • a directory name is abstracted from the name. I have never seen an exact duplicate name, but if that is an issue, we can append the series_id to the series top level directory, guaranteeing it to be unique.
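To make step 1 concrete, here is a minimal Go sketch of turning the get_vod_streams json into the movies/ layout. The struct field tags are assumptions based on the sample json linked above; adjust them to the real payload.

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"path"
)

// vodStream mirrors just the fields needed from the VOD listing.
// Field names (name, stream_id, container_extension) are assumed from the sample.
type vodStream struct {
	Name               string `json:"name"`
	StreamID           int    `json:"stream_id"`
	ContainerExtension string `json:"container_extension"`
}

func main() {
	sample := `[{"name":"Example Video 1","stream_id":97675,"container_extension":"mkv"}]`

	var streams []vodStream
	if err := json.Unmarshal([]byte(sample), &streams); err != nil {
		log.Fatal(err)
	}

	// One api call yields the whole movies/ tree: a directory per stream
	// name, containing a single virtual file named after the stream.
	for _, s := range streams {
		dir := path.Join("movies", s.Name)
		file := path.Join(dir, fmt.Sprintf("%s.%s", s.Name, s.ContainerExtension))
		fmt.Println(dir)
		fmt.Println(file)
	}
}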

Now we are left with

.
β”œβ”€β”€ movies
β”‚   β”œβ”€β”€ Example Video 1
β”‚   β”‚   └── Example Video 1.mkv
β”‚   β”œβ”€β”€ Example Video 2
β”‚   β”‚   └── Example Video 2.mkv
β”‚   └── Example Video 3
β”‚       └── Example Video 3.mkv
└── series
    β”œβ”€β”€ Example Series 1
    β”œβ”€β”€ Example Series 2
    └── Example Series 3

As you can see, the movies directory and its sub dirs can be constructed with 1 api call. The series directory and 1 level below it (the series names) can also be constructed with 1 api call.

Note: at this point, querying for the vod file's detailed info is unnecessary. That's metadata, like plex or emby would generate anyway.

The file's actual location can be derived as a simple http link:

http://<server:port>/movie/<username>/<password>/<streamid>.<extension>

We now know that movies/Example Video 1/Example Video 1.mkv is found at

http://example.com:80/movie/exampleuser/examplepass/97675.mkv
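A throwaway Go sketch of that derivation, using the example values above (the same template reappears further down for series, so the path segment is a parameter):

package main

import (
	"fmt"
	"net/url"
)

// streamURL builds the playback link for a stream. kind is "movie" for VOD
// (and "series" for episodes, which use the same shape later in this post).
func streamURL(base, kind, user, pass string, streamID int, ext string) string {
	return fmt.Sprintf("%s/%s/%s/%s/%d.%s",
		base, kind, url.PathEscape(user), url.PathEscape(pass), streamID, ext)
}

func main() {
	fmt.Println(streamURL("http://example.com:80", "movie", "exampleuser", "examplepass", 97675, "mkv"))
}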

For series, we need to continue down the path and make some subsequent queries for each show. If I traverse Example Series 1, an api call would be made to get the series info. The child directories and all files of Example Series 1 can be derived from this 1 api call. Here are the returned results: seriesinfo - Pastebin.com

  • the naming convention used by the api is odd. episodes["1"] is actually season 1 (see the decoding sketch just after this list).
  • episodes["1"][0] is the first episode of season 1
  • episodes["2"][0] is the first episode of season 2
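Here is a Go sketch of decoding that shape. Field names are assumptions based on the seriesinfo sample, and the sample string below is trimmed to the bits that matter:

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"sort"
	"strconv"
)

// episode holds the fields needed to name and link an episode file.
// Field names are assumed from the get_series_info sample.
type episode struct {
	ID                 string `json:"id"`
	EpisodeNum         int    `json:"episode_num"`
	Title              string `json:"title"`
	ContainerExtension string `json:"container_extension"`
}

type seriesInfo struct {
	// Keyed by season number as a string: episodes["1"] is season 1.
	Episodes map[string][]episode `json:"episodes"`
}

func main() {
	sample := `{"episodes":{"1":[{"id":"83550","episode_num":1,"title":"First Episode Title","container_extension":"mkv"}]}}`

	var si seriesInfo
	if err := json.Unmarshal([]byte(sample), &si); err != nil {
		log.Fatal(err)
	}

	// Sort the season keys numerically so "10" does not sort before "2".
	seasons := make([]int, 0, len(si.Episodes))
	for k := range si.Episodes {
		if n, err := strconv.Atoi(k); err == nil {
			seasons = append(seasons, n)
		}
	}
	sort.Ints(seasons)

	for _, s := range seasons {
		for _, ep := range si.Episodes[strconv.Itoa(s)] {
			fmt.Printf("series/Example Series 1/Season.%d/Example Series 1 - S%02dE%02d - %s.%s (stream id %s)\n",
				s, s, ep.EpisodeNum, ep.Title, ep.ContainerExtension, ep.ID)
		}
	}
}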

Now, with this api call, you've built out the entire directory structure of Example Series 1:

.
β”œβ”€β”€ movies
β”‚   β”œβ”€β”€ Example Video 1
β”‚   β”‚   └── Example Video 1.mkv
β”‚   β”œβ”€β”€ Example Video 2
β”‚   β”‚   └── Example Video 2.mkv
β”‚   └── Example Video 3
β”‚       └── Example Video 3.mkv
└── series
    β”œβ”€β”€ Example Series 1
    β”‚   β”œβ”€β”€ Season.1
    β”‚   β”‚   └── Example Series 1 - S01E01 - First Episode Title.mkv
    β”‚   └── Season.2
    β”‚       └── Example Series 1 - S02E01 - Second Season First Episode.mkv
    β”œβ”€β”€ Example Series 2
    └── Example Series 3

As with VOD, a series file's actual location can be derived as a simple http link:

http://<server:port>/series/<username>/<password>/<streamid>.<extension>

We now know that s01e01 of Example Series 1 is found at:

http://example.com:80/series/exampleuser/examplepass/83550.mkv

These api servers have basic api rate limiting, although it is undocumented and varies by host. Getting the series child directories should be done only on demand and then cached for a specified time, like we all know and love rclone to do already with other remotes. A limit should certainly be placed on making these calls, since apps like plex or emby might scan the series directory and use a lot of queries very quickly.

For example, one host of mine during testing limited me after about 15 api calls in 15 seconds. I had to build in a tick-tock timer to prevent the api calls from exceeding approximately 2 per 5-second interval. When I set the series api calls to about 1 every 5 seconds it ran perfectly for about 20 minutes (I was querying about 200 shows; the throttle is sketched below). I see rclone's tpslimit and tpslimit-burst being very useful here. The good thing here is that a long cache time is ok, since (in my experience) once the file is set in the xcodes api, it doesn't get changed often. And if it does, typically, the stream ids all stay the same.
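The tick-tock timer amounted to something like this. It's a standalone sketch; in an actual backend this job would belong to rclone's pacer and --tpslimit rather than a hand-rolled ticker:

package main

import (
	"fmt"
	"time"
)

// At most one series-info call every 5 seconds, matching the limit that
// worked against my test host. The shows slice is just placeholder data.
func main() {
	shows := []string{"Example Series 1", "Example Series 2", "Example Series 3"}

	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()

	for _, show := range shows {
		<-ticker.C // wait for the next 5-second slot
		fmt.Println("fetching series info for", show)
		// ... call get_series_info for this show here ...
	}
}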

I'm guessing this would be read only

You are correct.


That's a dump of info, sorry for the long wall of text. Admittedly, I need to study up on the code to see how this all plays out, but I wanted to get answers over to you as quickly as possible. What do you think?

Looks plausible!

Here are some notes

  • rclone mount doesn't work very well if it doesn't know how long (in bytes) the files are. So ideally we'd be able to find out exactly how long each "*.mkv" is (one option is sketched after these notes).
  • If the file sizes are unknown then you end up with a similar situation to /proc/self/mounts - this appears as a 0 length file, which a lot of applications will not deal well with, but it is still perfectly readable.
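One way to get an exact size per file would be a HEAD request against the derived link and reading Content-Length. Whether a given xtream host answers HEAD (or Range) requests is an assumption that would need testing per provider; a minimal sketch using the earlier example URL:

package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Hypothetical URL matching the earlier example.
	resp, err := http.Head("http://example.com:80/movie/exampleuser/examplepass/97675.mkv")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	if resp.ContentLength >= 0 {
		fmt.Println("size in bytes:", resp.ContentLength)
	} else {
		fmt.Println("size unknown; the mount would see a 0 length file")
	}
}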

Rclone is very good at rate limiting! It has an internal library called pacer which helps with this sort of thing.
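For illustration, the Call pattern backends use with lib/pacer looks roughly like this. The constructor options and exact signatures should be checked against the current source, and the URL is the hypothetical example from earlier:

package main

import (
	"fmt"
	"net/http"

	"github.com/rclone/rclone/lib/pacer"
)

func main() {
	// pacer.New() with defaults; real backends tune MinSleep/MaxSleep/
	// DecayConstant and construct the pacer so --tpslimit is honoured.
	p := pacer.New()

	var resp *http.Response
	err := p.Call(func() (bool, error) {
		var err error
		resp, err = http.Get("http://example.com:80/player_api.php")
		if err != nil {
			return true, err // network error: let the pacer back off and retry
		}
		if resp.StatusCode == http.StatusTooManyRequests {
			resp.Body.Close()
			return true, fmt.Errorf("rate limited: %s", resp.Status)
		}
		return false, nil
	})
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}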

This backend sounds similar to the Google Photos backend which has a similar scheme for laying out things.

For a read only backend you need to implement directory listings and Open - check out the http backend to see exactly what needs to be implemented.
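As a rough orientation only (not the full interface, and the Fs/Object fields and helpers here are placeholders of my own), the two calls that carry most of the weight in a read only backend look something like this; the http backend has the authoritative versions:

package xtream

import (
	"context"
	"io"
	"net/http"

	"github.com/rclone/rclone/fs"
)

type Fs struct {
	name string
	root string
}

type Object struct {
	fs     *Fs
	remote string
	url    string // derived http://server/movie/user/pass/id.ext link
	size   int64
}

// List returns the entries for dir, built from the cached VOD/series json.
func (f *Fs) List(ctx context.Context, dir string) (fs.DirEntries, error) {
	entries := fs.DirEntries{}
	// ... translate the parsed json for dir into directory and *Object entries ...
	return entries, nil
}

// Open streams the object by fetching its derived URL.
func (o *Object) Open(ctx context.Context, options ...fs.OpenOption) (io.ReadCloser, error) {
	req, err := http.NewRequestWithContext(ctx, http.MethodGet, o.url, nil)
	if err != nil {
		return nil, err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	return resp.Body, nil
}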

The listing data could be cached by the backend, or you could leave it to the VFS layer to cache it.
