I may have to change cloud providers and I am trying to think of the best way to migrate the data. I do not have access to high speed bandwidth at the moment.
I understand that rclone copy cannot server-side copy from one remote to another and that a "middle man" is required.
I was thinking I could purchase a VPS and run rclone copy thus using the bandwidth between the VPS and the new cloud provider.
My question: Can rclone copy the crypted data intact from one remote to another? I do not want to download the data, decrypt it, re-encrypt it, and then upload it again. This method would also mean exposing my config file, with its passwords, to the VPS, which is not a good solution.
Assuming I am using the same crypt password and salt on both remotes, can rclone copy the encrypted files over as-is using the copy command (or another command)? This would be preferred, as it uses the API of both remotes.
Also, is it possible to set up rclone on the VPS and issue commands to it remotely, so the config file stays safe on my computer?
If not, can anyone suggest another solution? WebDAV is available on both remotes; however, WebDAV will not update file times, so that will not work.
if the dest remote is google, you can use a free/cheap google vm.
the more details you provide the better suggestion we can offer...
the problem with webdav is that, in most cases, it does not support file verification using hashes.
another option might be to run rclone serve, perhaps sftp, on the source or dest.
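for example, a rough sketch of serving one side over sftp; the address, user and pass here are just placeholders:
rclone serve sftp newRemote: --addr :2022 --user demo --pass supersecret
then the machine doing the transfer only needs an sftp remote pointed at that address, not your full config.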
not sure that is possible.
just create a set of client ids, and after the transfer, delete them.
depending on the backend, create a set of permissions/policies to lock that client id to read-only access to one bucket/folder.
Okay, so let me check my understanding.
I can create the client ids and transfer the data, and later revoke and recreate the ids after the transfer; this is a good solution.
My main concern is protecting the crypt password and not exposing it in memory or on the HDD of a VPS server I do not control. Looking at the backup crypted remote doc you provided, can I rclone sync the underlying remotes directly (i.e., the already-encrypted data) and thus never expose the crypt remote or the crypt password?
If so I think this would be the complete solution!
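To make my idea concrete (the remote and folder names here are just placeholders): the config on the VPS would contain only the two base remotes, and I would run something like
rclone sync oldRemote:encryptedFolder newRemote:encryptedFolder
so the crypt remotes that wrap those folders, and the crypt password, never leave my own machine.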
Wow, excellent. So to confirm: as long as I set up a crypt remote pointing at the new remote with the same password as the old crypt, I should be able to decrypt the files?
Thank you very much for providing that link, I have been trying to think through this problem with all the various possibilities for a day now, you were, again, extremely helpful!
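Just to write down what I think the config on my own machine would end up looking like (remote names, paths, and the obscured values are placeholders; the point is that both crypt sections carry the same obscured password and salt):
[oldCrypt]
type = crypt
remote = oldRemote:encryptedFolder
password = <same obscured password>
password2 = <same obscured salt>

[newCrypt]
type = crypt
remote = newRemote:encryptedFolder
password = <same obscured password>
password2 = <same obscured salt>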
The remote host I am evaluating is Koofr. I have started the rclone sync and I see one potential problem: the directories' timestamps are not preserved. They are all getting a timestamp of 1969-12-31 19:00:00. When I look at the source remote, the timestamps are correct. The times on the actual files, however, appear to have been preserved correctly.
Will this cause a problem when I go to sync the local data with the newly copied remote? Does rclone look at the directory timestamps and try to sync them again, or does it just look at the files?
The documentation also describes using rclone check to verify the integrity of the files.
I tried running this and got an error:
ERROR : No common hash found - not using a hash for checks
If I look at my remotes, one supports SHA1 and the other supports MD5. I assume that is what this error is telling me? If rclone can't check hash validity, is it just checking file size and mod time in this case?
It would be a very good thing to check hashes. Once I am finished with the local transfer, can I set up a "local filesystem" remote, which according to this table (Overview of cloud storage systems) supports all hashes, and check MD5 hashes locally against the new remote? Will this work?
you did not post the command or debug log.....
what are you trying to check, local and crypt remote or what?
if you want to check local against crypt remote use rclone cryptcheck
if you want to check remotes using different hashes, try rclone check --download
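rough examples, names and paths are placeholders:
rclone cryptcheck /path/to/local/data newCrypt:
rclone check oldRemote: newRemote: --download
cryptcheck works out what the encrypted hashes should be and compares them against the crypt remote, so it needs the plain data on one side. --download pulls the files from both sides and compares the contents, so it works with no common hash but costs a full download.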
using debug output would show that rclone is checking hashes from local to remote.
DEBUG : kdbx.20210903.173212.7z: Need to transfer - File not found at Destination
DEBUG : kdbx.20210903.173212.7z: md5 = c486ec39bf00689e3e4702fd602e2e47 OK
INFO : kdbx.20210903.173212.7z: Copied (new)
rclone check oldRemote: newRemote:
Here is what I see when I run the command with debug:
rclone check oldRemote: newRemote: --log-level "DEBUG"
2021/09/04 15:55:59 DEBUG : rclone: Version "v1.56.0" starting with parameters ["rclone" "check" "oldRemote:" "newRemote:" "--log-level" "DEBUG"]
2021/09/04 15:55:59 DEBUG : Creating backend with remote "oldRemote:"
2021/09/04 15:56:01 DEBUG : Creating backend with remote "newRemote:"
2021-09-04 15:56:01 ERROR : No common hash found - not using a hash for checks
2021-09-04 15:56:01 DEBUG : koofr:4b5ac43e-d3a2-1421-f398-b71d36342a: Waiting for checks to finish
2021-09-04 15:56:02 DEBUG : File 1: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 2: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 3: OK - could not check hash
So in this case, the old remote only supports SHA1 and the new remote supports MD5, and I assume that is what I am seeing with these errors? Rclone is reporting that it cannot check the hash from the old remote against the new remote because the two remotes use different hash types. Is this correct?
When I am finished transferring, I want to check the local MD5 hash against the newRemote MD5 hash. From your post, it seems I do not have to make a local remote and do a:
rclone check localRemote: newRemote:
I can just do a:
rclone sync --log-level "DEBUG"
and see if it is checking the MD5 hashes at that time. I do not need to set up a local filesystem remote and do this:
rclone check localRemote: newRemote:
because rclone will show me the hashes during a regular rclone sync.
Okay, sorry for the confusion. I tried to follow the documentation and run an rclone check from my old cloud provider to my new cloud provider:
rclone check oldRemote: newRemote:
and I got the error which I posted above.
I think this error is because the oldRemote supports SHA1 and the newRemote supports MD5. I wanted to confirm if this is correct or if there is some other problem with the hashes on the newRemote.
The second thing I thought of is: if I can't check hashes from one remote to another, I would like to check hashes against a third source of data, which is my local filesystem. According to the documentation, the local filesystem supports all hashes.
So even though I can't check the hashes from oldRemote to newRemote, because I have a third copy of the data locally, I should be able to check the hashes by doing a simple check against that local copy.
if you have a local copy of the data, then why use the oldRemote?
not correct, a debug log would show that.
by default, rclone compares source and dest using modtime and size.
if rclone transfers a file, then rclone will check the hash.
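as a side note, if both sides do share a hash type, you can force hash comparison instead of modtime/size, for example:
rclone sync /path/to/local/data newRemote:data --checksum
that will not help between your two remotes, since they have no common hash, but local to koofr both sides can do md5.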
Okay, so in this case, because my two remotes do not have the same hash method, there is no way to check the hash?
If only modtime and size are checked, then isn't data corruption possible? Also, I am currently transferring the encrypted files. If a file is corrupted during the transfer, then perhaps there would be a problem when crypt tries to decode it. Is there a way to check for this using some sort of crypt operation? Or is there an option to check hashes with an rclone sync operation?
When I run rclone check oldRemote: newRemote: as per the documentation, I get the error I reported above. I believe this is because the two remotes use two different hash methods, and therefore no hash check can happen between them.
Errors:
2021-09-04 15:56:01 ERROR : No common hash found - not using a hash for checks
2021-09-04 15:56:01 DEBUG : koofr:4b5ac43e-d3a2-1421-f398-b71d36342a: Waiting for checks to finish
2021-09-04 15:56:02 DEBUG : File 1: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 2: OK - could not check hash
2021-09-04 15:56:02 DEBUG : File 3: OK - could not check hash
I will stop here to see if we are in agreement. If so, then the next part is an idea I had for a next step.
Okay, so I don't think I can use --download due to the data size. I did have another idea; it is a totally separate operation, which I think is what caused the confusion, but I may be able to accomplish the goal this way.
I still have a local copy of the data, totally separate from the remotes. My idea was to set up a local filesystem remote and run an rclone check against the new remote:
rclone check localFileSystem: newRemote:
The documentation states that the local filesystem remote supports all hash types, so it should be able to check the local file's MD5 against the newRemote MD5 for each file. Even though the source is different, the files are the same because it is just another copy of the data, so the hashes should also match, and I should be able to verify the local MD5 hashes against the new remote.
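Concretely, what I have in mind is something like this (the local path and folder name are just placeholders):
rclone check /path/to/local/data newRemote:data
Since the local filesystem can compute MD5 and the new remote reports MD5, this should compare a real hash for every file rather than falling back to size and mod time.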