Motuz, a new web UI for rclone

All,

We have started a project to build a web interface for rclone. As more and more of our life sciences researchers need to move terabytes of data back and forth between our on-premises POSIX file system and the cloud, we decided to seed-fund an OSS project. We named it Motuz after we could not find any commercial multi-cloud solution with features comparable to rclone's. Motuz became installable this Saturday and should reach alpha status within the next week or so.

The stack is a React frontend talking to a Python/Flask REST API, which steers the underlying machinery of Celery, RabbitMQ, Postgres, Docker, and rclone. We welcome pull requests and any other contributions!
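To make that plumbing a bit more concrete, here is a minimal, hedged sketch of the kind of work such a stack delegates to a worker: the Flask API would enqueue a job, and a Celery task would shell out to rclone. The function names below are hypothetical illustrations, not actual Motuz code.

```python
import subprocess

def build_rclone_copy(src, dst, bwlimit=None):
    """Assemble an rclone copy command line (hypothetical helper).
    --bwlimit caps the bandwidth rclone may use."""
    cmd = ["rclone", "copy", src, dst, "--progress"]
    if bwlimit:
        cmd += ["--bwlimit", bwlimit]
    return cmd

def run_copy_job(src, dst, bwlimit=None):
    """Roughly what a Celery task body might do:
    run rclone and report its exit code back to the API."""
    return subprocess.call(build_rclone_copy(src, dst, bwlimit))
```

Keeping the long-running rclone process in a Celery worker, rather than in the Flask request handler, is what lets the REST API return immediately while the transfer runs in the background.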

Thanks
Dirk


Cool project. I can only hope for a Windows release. :wink:

The web UI is much inspired by a great tool named Globus Online, which itself was inspired by Norton Commander, a tool developed not long after punch cards became obsolete. (Hint: NC was released the same year Ferris Bueller made his odometer improvements.) Unfortunately, Globus does not support rclone.

Motuz will initially ship with a config UI for the three big cloud providers, but hopefully we can grow that list quickly.

Motuz is for large-scale data movement, so we want to be sure the user really intends to kick off a copy job by asking them to confirm the action.

Ideally, Motuz should be installed on physical hardware. As our Internet pipe is limited to 10 Gbit/s, we plan to throttle Motuz at ca. 600 MB/s.
Motuz will plug into the underlying Linux authentication using PAM and will inherit user-ID mapping as configured in nsswitch (local, LDAP, etc.).
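For context on that throttle number, a quick back-of-the-envelope conversion (assuming decimal units) shows how much of the pipe the cap actually claims:

```python
# Convert the 600 MB/s cap into link units (bytes -> bits, MB -> Gb)
cap_mb_per_s = 600
cap_gbit_per_s = cap_mb_per_s * 8 / 1000
# 4.8 Gb/s -- just under half of the 10 Gb/s pipe,
# leaving the remainder for other campus traffic
```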

In our case we use Kerberos authentication against Active Directory, plus an OpenLDAP infrastructure that has a subset of users and groups replicated from AD using ad2openldap.

We have now entered the beta phase with some early users and are getting positive feedback; 358 MB/s upload to S3 isn't too bad.

Quote User1:

I’ve found a conveniently placed “test” directory with 96 1 GB files on the POSIX fs. To give the system just a little bit of stress, I’m running two copy jobs on the same source files to two different S3 buckets. Everything is performing admirably!
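As a rough sanity check on those numbers (assuming decimal units and a sustained rate), one pass over that test set at the reported upload speed works out to under five minutes:

```python
# Estimate time for one copy pass over the test directory
files = 96
file_size_gb = 1            # 1 GB each, per the quote above
throughput_mb_s = 358       # observed upload rate to S3
total_mb = files * file_size_gb * 1000
seconds = total_mb / throughput_mb_s   # roughly 268 s, about 4.5 minutes
```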
