Python HTTP POST to rcd examples

What is the problem you are having with rclone?

Need help writing a Python controller for multiple rcd minions.
Looking for examples of using Python, HTTP POST, and rcd.
Curious about the formatting and sequencing of authorization, copy, and sync commands.
Other language examples are OK if Python examples are not available.

What is your rclone version (output from rclone version)

latest

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux - CentOS 7

Which cloud storage system are you using? (eg Google Drive)

All supported. Primarily AWS S3, Google Cloud Storage, Azure Blob

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Unclear. I'm looking for examples.

The rclone config contents with secrets removed.

N/A

A log from the command with the -vv flag

under development

I keep meaning to write a Python interface module for this.

You should be able to use the requests library to POST to the RC quite easily.

Something like this example:

https://requests.kennethreitz.org/en/master/user/quickstart/#more-complicated-post-requests
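For reference, a minimal sketch of that against rcd (assuming rcd is listening on the default --rc-addr of localhost:5572; the parameters are placeholders):

import requests

# rcd listens on localhost:5572 by default (change with --rc-addr)
RCD_URL = "http://localhost:5572"

# rc/noop is a built-in test endpoint that echoes its parameters back
resp = requests.post(f"{RCD_URL}/rc/noop", json={"hello": "world"})
resp.raise_for_status()
print(resp.json())  # -> {'hello': 'world'}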


OK, so I have a question. What's the basic sequence of events? For example, say I want to call rcd via HTTP and have rcd do the equivalent of "rclone copy remo1:/dir1 remo2:/dir2". Is that one HTTP request to rcd, or multiple? I guess I'm missing how the authentication works. Say, for example, I'm using a simple username and password: username=user, password=pass.

Thx

That is one call, unless you set the _async flag, in which case you poll the job for completion.

You set the user and password with --rc-user and --rc-pass, then use the requests auth facility to pass them in.
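Putting those together, a rough sketch (remote names and credentials are placeholders; error handling omitted):

import time
import requests
from requests.auth import HTTPBasicAuth

RCD_URL = "http://localhost:5572"
AUTH = HTTPBasicAuth("user", "pass")  # must match --rc-user / --rc-pass

# Start the copy asynchronously; rcd returns a job id immediately
resp = requests.post(
    f"{RCD_URL}/sync/copy",
    json={"srcFs": "remo1:/dir1", "dstFs": "remo2:/dir2", "_async": True},
    auth=AUTH,
)
resp.raise_for_status()
jobid = resp.json()["jobid"]

# Poll job/status until the job reports it has finished
while True:
    status = requests.post(
        f"{RCD_URL}/job/status", json={"jobid": jobid}, auth=AUTH
    ).json()
    if status["finished"]:
        break
    time.sleep(1)

print("success" if status["success"] else "failed: " + status["error"])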

That makes sense. I'll give it a try

It's working ok so far with some basic noop tests. Will dig into specific calls for operations later.

Thx

So, I've got Python/HTTP/rc/rcd working with basic commands. I'm not seeing how to set options the way I would from the rclone CLI, though. Specifically, I want to use --dry-run to generate a list of items to copy or sync. For example, here is a curl command to do a copy. Can we add --dry-run to this?

curl -u user:pass -X POST 'http://172.16.3.101:61002/sync/copy?dstFs=r1:tmp&srcFs=r1:usr'

For context, my intent is to get the dry-run list, then put the items in a task queue and distribute the copy (or sync) operations across a set of nodes. More specifically, my storage system runs k8s on the storage cluster. I'm planning to run rcd on each node (in a Docker container) and distribute the copy (or sync) operations across those "rcd nodes". The project is here -- https://github.com/cohsk/athena-alpine-rclone. The specific Python script that acts as a controller (currently partially done / under construction) is here -- https://github.com/cohsk/athena-alpine-rclone/blob/master/gru/gru.py

thx

Great

You can set (global) options using https://rclone.org/rc/#options-set

It isn't particularly convenient and it is a global setting - both things I'd like to change at some point.
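For example, to wrap --dry-run around the curl call above, something like this should work from Python (using the {"main": {"DryRun": true}} shape from the options/set docs; address and credentials are taken from your earlier example):

import requests
from requests.auth import HTTPBasicAuth

RCD_URL = "http://172.16.3.101:61002"
AUTH = HTTPBasicAuth("user", "pass")

# Turn on --dry-run globally for this rcd instance
requests.post(
    f"{RCD_URL}/options/set", json={"main": {"DryRun": True}}, auth=AUTH
).raise_for_status()

# Equivalent of the earlier curl call, now running as a dry run
resp = requests.post(
    f"{RCD_URL}/sync/copy", json={"srcFs": "r1:usr", "dstFs": "r1:tmp"}, auth=AUTH
)
print(resp.json())

# Turn it back off afterwards, since the setting affects every call
requests.post(
    f"{RCD_URL}/options/set", json={"main": {"DryRun": False}}, auth=AUTH
).raise_for_status()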

If you could distribute directories to sync across the nodes that would work well with rclone.

Nick, distributing directories sounds like the way to go. So, figuring out an algorithm to evenly distribute the directories seems to be the trick to doing this. The program needs to load balance based on transactional (small file) and data (large file) throughput.
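Roughly, I'm thinking a greedy longest-processing-time assignment: sort directories by estimated cost, then always hand the next one to the least-loaded node. A sketch (the byte vs. file-count weights are made-up placeholders I'd have to tune):

import heapq

def distribute(dirs, num_nodes, byte_weight=1.0, file_weight=1_000_000):
    # dirs is a list of (name, total_bytes, num_files) tuples;
    # keep a min-heap of (accumulated_cost, node_index)
    heap = [(0.0, n) for n in range(num_nodes)]
    heapq.heapify(heap)
    assignments = {n: [] for n in range(num_nodes)}

    cost = lambda d: d[1] * byte_weight + d[2] * file_weight
    # Largest jobs first keeps the tail of the schedule balanced
    for d in sorted(dirs, key=cost, reverse=True):
        node_cost, node = heapq.heappop(heap)
        assignments[node].append(d[0])
        heapq.heappush(heap, (node_cost + cost(d), node))
    return assignments

# Hypothetical stats gathered from a dry run
stats = [("dirA", 50e9, 120), ("dirB", 2e9, 80_000), ("dirC", 10e9, 5_000)]
print(distribute(stats, num_nodes=2))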

Does rclone have tools onboard to do a "dry run" to gather statistics so that a balanced set of directories can be dispatched to worker nodes?

Thx

You can use rclone check - maybe with the --size-only flag so it doesn't spend ages doing checksums - see the docs; it can output lots of stuff to files for you.

Or you can do rclone sync --dry-run and parse the logs.
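For the log-parsing route, a rough sketch (the exact NOTICE wording differs between rclone versions, so verify the pattern against your own -vv output before relying on it):

import re
import subprocess

# Run a dry-run sync and capture the log (rclone logs to stderr)
proc = subprocess.run(
    ["rclone", "sync", "--dry-run", "remo1:/dir1", "remo2:/dir2"],
    capture_output=True, text=True,
)

# Dry-run skips show up as NOTICE lines naming each affected path
pattern = re.compile(r"NOTICE: (.+?): .*--dry-run")
to_copy = []
for line in proc.stderr.splitlines():
    match = pattern.search(line)
    if match:
        to_copy.append(match.group(1))
print(to_copy)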

Got it. I'll check it out.
