Can rclone copy crypted files from one remote to another without decrypting?

I may have to change cloud providers and I am trying to think of the best way to migrate the data. I do not have access to high speed bandwidth at the moment.

I understand that rclone copy cannot server-side copy from one remote to another and that a "middle man" is required.

I was thinking I could purchase a VPS and run rclone copy thus using the bandwidth between the VPS and the new cloud provider.

My question: Can rclone copy the crypted data intact from one remote to another? I do not want to download the data, decrypt it, re-encrypt it and then upload it again. Also, this method involves exposing my config file, with its passwords, to the VPS, which is not a good solution.

Assuming I am using the same crypt password and salt on both remotes, can rclone copy the encrypted files over as-is using the copy command (or another command)? This would be preferred as it uses the API of both remotes.

Also, is it possible to set up rclone on the VPS and issue commands to it remotely, so that the config file stays safe on my computer?

If not, can anyone suggest another solution? Webdav is available on both remotes; however, webdav will not update file times, so this solution will not work.

hello,

that is documented at
https://rclone.org/crypt/#backing-up-a-crypted-remote

if the dest remote is google, you can use a free/cheap google vm.

the more details you provide, the better suggestions we can offer...

the problem with webdav is that, in most cases, it does not support file verification using hashes.

another option might be to run rclone serve, perhaps sftp, on the source or dest.
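
a rough sketch of that idea (the address, port, user and password here are just placeholders), run on a machine that already holds the config:

# expose the source remote over sftp; the vps then only needs these sftp credentials, not the source provider config or the crypt password
rclone serve sftp oldRemote: --addr :2022 --user demo --pass somesecret

# on the vps, configure an sftp remote (call it srcSftp:) pointing at that host and port, then
rclone sync srcSftp: newRemote: --progress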

not sure that is possible,
just create a set of client ids, and after the transfer, delete them.
depending on the backend, create a set of permissions/policies to lock that client id to read-only access to one bucket/folder.

Okay, so let me check my understanding.
I can create the client ids, transfer the data, and later revoke and recreate the ids after the transfer; this is a good solution.

My main concern is protecting the crypt password and not exposing it in memory or on the HDD of a VPS server I do not control. Looking at the backing-up-a-crypted-remote doc you provided, can I rclone sync the underlying (non-crypt) remotes, i.e. the encrypted data, and thus not expose the crypt remote or the crypt password?

If so I think this would be the complete solution!

that is correct, no need to create a crypt remote so no need to expose the crypt password(s).

really, just a normal transfer of a set of files from source to dest.
that the files happen to be crypted does not come into play.
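
as a rough sketch, with placeholder names, where oldProvider: and newProvider: are the underlying, non-crypt remotes that hold the encrypted files:

# copies the encrypted blobs as-is; no crypt remote and no crypt password involved
rclone sync oldProvider:path newProvider:path --progress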

Wow, excellent. So to confirm: as long as I set up a crypt remote pointing at the new remote with the same password as the old crypt, I should be able to decrypt the files?

Thank you very much for providing that link, I have been trying to think through this problem with all the various possibilities for a day now, you were, again, extremely helpful!

The remote host I am evaluating is Koofr. I have started the rclone sync and I see one potential problem. The directory timestamps are not preserved; they are all getting a timestamp of 1969-12-31 19:00:00. When I look on the source remote, the timestamps are correct. The times on the actual files appear to have been preserved correctly, however.

Will this cause a problem when I go to sync the local data with the newly copied remote? Does rclone look at the directory timestamps and try to sync them again, or does it just look at the files?

correct. this can be tested (rough command sketch after the list):

  • create a remote for koofr
  • create a folder on that remote
  • copy a crypted file from the source remote to that folder.
  • create a new remote. to prevent typo mistakes, use rclone config and c) Copy remote to create a new remote based on the current crypt remote
  • you can edit that new remote to point to that folder on koofr
  • rclone ls on that new crypt remote
  • live long and prosper
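
in rough commands that could look like this (koofr:, oldProvider:, the folder name, the file name and newCryptTest: are all placeholders; the new crypt remote itself is created interactively in rclone config):

# make a test folder on koofr and copy one encrypted file into it, as-is
rclone mkdir koofr:crypt-test
rclone copy oldProvider:path/to/crypted/SOMEFILE.bin koofr:crypt-test

# in rclone config, use c) Copy remote to clone the existing crypt remote,
# then edit the clone so that remote = koofr:crypt-test

# if the passwords match, this should list the file decrypted
rclone ls newCryptTest: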

yes, that is how rclone works; it has been discussed in the forum and on github.

rclone saves that as metadata, which is good, as, based on the docs, rclone does support modtime on koofr.

https://www.dailyrazor.com/blog/december-31-1969-all-you-need-to-know-about-12-31-1969-in-30-secs/

this will not cause a problem.

rclone does not. rclone just looks at the files.
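
if you want to eyeball the file times on the new remote, something like this lists modtime, size and path (remote name and path are placeholders):

rclone lsf newRemote:path --format "tsp" --recursive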

We are on the same page, this is exactly what I was thinking.

Live long and prosper friend, and thank you.

sure,

koofr looks very expensive.
does it have something other providers lack?

The documentation also describes using rclone check to verify the integrity of the files.

I tried running this and I got an error:
ERROR : No common hash found - not using a hash for checks

If I look at my remotes, one supports SHA1 and one supports MD5. I assume that is what this error is telling me? If rclone can't check hash validity, is it just checking file size and modtime in this case?

It would be a very good thing to check hashes. Once I am finished with the local transfer, can I set up a "local filesystem" remote, which according to this table (Overview of cloud storage systems) supports all hashes, and check MD5 hashes locally against the new remote? Will this work?

you did not post the command or debug log.....
what are you trying to check, local and crypt remote or what?

if you want to check local against crypt remote use rclone cryptcheck
if you want to check remotes using different hashes, try rclone check --download
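
rough examples of both (remote names and paths are placeholders; cryptRemote: has to be the crypt remote itself):

# local plaintext against a crypt remote: re-encrypts each local file (using the stored nonce) and compares checksums with the underlying crypted file
rclone cryptcheck /local/files cryptRemote:path

# two remotes with no common hash: downloads the data from both sides and compares it on the fly
rclone check oldRemote: newRemote: --download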

using debug output would show that rclone is checking hashes from local to remote.

DEBUG : kdbx.20210903.173212.7z: Need to transfer - File not found at Destination
DEBUG : kdbx.20210903.173212.7z: md5 = c486ec39bf00689e3e4702fd602e2e47 OK
INFO  : kdbx.20210903.173212.7z: Copied (new)

sorry, the command I was trying to run was:

rclone check oldRemote: newRemote:
Here is what I see when I run the command with debug:

rclone check oldRemote: newRemote: --log-level "DEBUG"
2021/09/04 15:55:59 DEBUG : rclone: Version "v1.56.0" starting with parameters ["rclone" "check" "oldRemote:" "newRemote:" "--log-level" "DEBUG"]
2021/09/04 15:55:59 DEBUG : Creating backend with remote "oldRemote:"
2021/09/04 15:56:01 DEBUG : Creating backend with remote "newRemote:"
2021-09-04 15:56:01 ERROR : No common hash found - not using a hash for checks
2021-09-04 15:56:01 DEBUG : koofr:4b5ac43e-d3a2-1421-f398-b71d36342a: Waiting for checks to finish
2021-09-04 15:56:02 DEBUG : File 1: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 2: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 3: OK - could not check hash

So in this case, the old remote only supports SHA1 and the new remote supports MD5. I assume this is what I am seeing with these errors? Rclone is reporting that it cannot check the hash from the old remote against the new remote because the remotes use different hashes? Is this correct?

When I am finished transferring, I want to check the local MD5 hash against the newRemote MD5 hash. From your post it seems I do not have to make a local remote and do a:

rclone check localRemote: newRemote:

I can just do an:

rclone sync --log-level "DEBUG"

and see if it is checking the md5 hashes at this time. I do not need to set up a local filesystem remote and do this:

rclone check localRemote: newRemote:

because rclone will show me the hashes with just a regular rclone sync:

rclone sync /local/files newRemote:

Is my understanding correct?

sorry, getting confused, hard to follow what is going on...
and post the config file, redact id/secret/token

what are you posting about?
two remotes using different hash types
or
local to remote
or
what

Okay sorry for the confusion. I tried to follow the documentation and run a rclone check from my old cloud provider to my new cloud provider:

rclone check oldRemote: newRemote:

and I got the error which I posted above.

I think this error is because the oldRemote supports SHA1 and the newRemote supports MD5. I wanted to confirm if this is correct or if there is some other problem with the hashes on the newRemote.

The second thing I thought of is: if I can't check hashes from one remote to another, I would like to check hashes against a third source of data, which is my local filesystem. According to the documentation, the local filesystem supports all hashes.

So even though I can't check the hashes from oldRemote to newRemote, because I have a 3rd copy of the data locally, I should be able to check the hashes by doing a simple:

rclone sync /local/files newRemote: --log-level "DEBUG"

because during a regular sync hashes are checked and can be seen in the debug log. Is this correct?

if you have a local copy of the data, then why use the oldRemote?

not correct, a debug log would show that.
by default, rclone compares source and dest using modtime and size.
if rclone transfers a file, then rclone will check the hash.

to check files, rclone check

okay, so in this case, because my two remotes do not have the same hash method, there is no way to check the hash?

If only modtime and size are checked, then isn't data corruption possible? Also, I am currently transferring the encrypted files. If a file is corrupted during the transfer, then perhaps there would be a problem when crypt tries to decode it; is there a way to check this using some sort of crypt operation? Or is there any option to check hashes with an rclone sync operation?

i am getting confused, you are posting about several different remote combinations.

local, cloud, crypt, same hash method, different hash method, or what.

what type is the source, what type is the dest?

please ask just one question per post about just one remote combination.
let's get that fully answered and then move onto the next question/combo

Okay,

I currently have data in the following 3 locations:

  1. Old cloud provider
  2. New cloud provider
  3. Local data on PC

I am currently migrating data from my old cloud provider to my new cloud provider directly:

rclone sync oldRemote: newRemote:

oldRemote supports SHA1 hashes.
newRemote supports MD5 hashes.

When I run rclone check oldRemote: newRemote: as per the documentation, I get an error, which I reported above. I believe this is because the two remotes use two different hash methods, and therefore no hash check can happen between them.

Errors:

2021-09-04 15:56:01 ERROR : No common hash found - not using a hash for checks
2021-09-04 15:56:01 DEBUG : koofr:4b5ac43e-d3a2-1421-f398-b71d36342a: Waiting for checks to finish
2021-09-04 15:56:02 DEBUG : File 1: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 2: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 3: OK - could not check hash

I will stop here to see if we are in agreement. If so, then the next part is an idea I had for a next step.

in that case, as i posted above, try rclone check --download with a debug log.
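
something along these lines (the log file name is just an example):

rclone check oldRemote: newRemote: --download --log-level DEBUG --log-file check.log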

Okay, so I don't think I can use --download due to the data size. I did have another idea; however, it is a totally separate operation, which I think is what caused the confusion, but I may be able to accomplish the goal this way.

I still have a local copy of the data, totally separate from the remotes. My idea was to set up a local filesystem remote and do an rclone check against the new remote:

rclone check localFileSystem: newRemote:

The documentation states that the local file system remote has all hashing capabilities so it should be able to check the local file MD5 against the newRemote MD5 for each file. Even though the source is different, the files are the same because it is just another copy of the data. So the hashes should also be the same and I should be able to check the local MD5 hash against the new remote.

Does this sound right?

about newRemote:, does that contain crypted data files?

if it does, then you need to use rclone cryptcheck localFileSystem: newRemote:
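
as a sketch, assuming you create a crypt remote (call it newCrypt:) on top of newRemote: with the same password(s) as the old crypt:

# compares the plain local files against the crypted files on newRemote: via the crypt layer
rclone cryptcheck /local/files newCrypt: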

would be helpful to post the config file, redact id/secret/token/password