Copying (cp) into a mount where files already exist starts the upload again

What is the problem you are having with rclone?

I have a directory on my drive:

rclone copy dir encrypted:/

Yet when I cp dir /path/to/encrypted/mount/ it seems to start from the beginning and re-upload everything. Why is this happening? Does it have to do with the encryption?

What is your rclone version (output from rclone version)

rclone v1.52.0-008-g8774381e-beta

- os/arch: linux/amd64

- go version: go1.14.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy/sync file encrypted:/

as well as:

rclone mount --default-permissions --no-modtime --dir-cache-time 1m --cache-chunk-path=/tmp/dircache --cache-db-path=/tmp/dircache --cache-tmp-upload-path=/tmp/dircache encrypted: /foo --allow-other --allow-non-empty --cache-chunk-clean-interval=5m --gid=33 --uid=33 --umask=0000 --log-level DEBUG --log-file /tmp/somelog.log --vfs-cache-mode writes --vfs-cache-max-age=1m --vfs-cache-max-size 100M --vfs-cache-poll-interval 30s

The rclone config contents with secrets removed.

[encrypted]
type = crypt
remote = backup:encrypted
filename_encryption = off
directory_name_encryption = false
password = 

[backup]
type = drive
scope = drive
service_account_file = /root/.config/rclone/accounts/1.json
team_drive =
chunk_size = 128M

A log from the command with the -vv flag

N/A

I'm not quite following what you are asking.

If you run a cp command, it will overwrite things already on the destination.

Example with one file and a -i to show it's actually overwriting it.

felix@gemini:~$ cp /etc/hosts . -i
cp: overwrite './hosts'? y
felix@gemini:~$ ls -al hosts
-rw-r--r-- 1 felix felix 413 Jun  3 20:05 hosts
felix@gemini:~$ cp /etc/hosts . -i
cp: overwrite './hosts'? y
felix@gemini:~$

That has nothing to do with encryption or even rclone as that's just standard unix. What are you trying to do?

is there a reason you are using that cache remote?
https://rclone.org/cache/#status
"recommend only using the cache backend if you find you can't work without it"

Awkward. Damn. I was wanting to speed up the process.

Sorry, did not see this, let me try!

If you are using cp and overwriting everything, I'm not sure why you are doing that.

If you want to move a lot of files it's better to use rclone copy and write to the remote rather than the mount.
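A minimal sketch of that, assuming the [encrypted] remote from the config above; the source path and the "backup" folder name are hypothetical:

```shell
# Sketch, not a drop-in command: /path/to/dir and "backup" are made up.
# Unlike cp into the mount, rclone copy compares size and mod-time and
# skips files that already match on the destination.
rclone copy /path/to/dir encrypted:backup --progress
```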

What are you trying to do?

I do not want to install the rclone application inside a docker container. I just want to mount a drive and give the container that volume path. Writing directly is very slow, so I was writing inside the container and then copying to the volume, but then it writes the whole thing again every time I run a cp command inside the container.

I get foo.zip: WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes when writing to this cache drive unfortunately!

cp overwrites the destination as we've covered.

If you use cp, it's going to overwrite each time.

Why are you using the cp command? Are you trying to make a source and destination the same? What is your "goal"? What are you trying to accomplish with the cp command?

If you are using something else that is seeking, you need to add that flag. Without knowing what you are running, it's hard to guess why that pops up.
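The overwrite behaviour is easy to demonstrate with plain unix tools, no rclone involved: cp rewrites the whole destination file on every run, even when source and destination are already identical, which is why copying into the mount re-uploads.

```shell
# Demonstration: the second cp rewrites the file even though nothing
# changed, which shows up as a newer modification time.
tmp=$(mktemp -d)
echo "hello" > "$tmp/src"
cp "$tmp/src" "$tmp/dst"
first=$(stat -c %Y "$tmp/dst")    # mtime after the first copy (GNU stat)
sleep 1
cp "$tmp/src" "$tmp/dst"          # same content, but cp rewrites anyway
second=$(stat -c %Y "$tmp/dst")   # mtime has moved forward
rm -rf "$tmp"
```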

Yes I am using the mounted volume as a "backup" so I am just copying files to it.

When writing directly with the mount command:

/usr/bin/rclone mount --default-permissions --no-modtime --dir-cache-time 1m --cache-chunk-path=/tmp/dircache --cache-db-path=/tmp/dircache --cache-tmp-upload-path=/tmp/dircache encrypted: /downloads2 --allow-other --allow-non-empty --cache-chunk-clean-interval=5m --gid=33 --uid=33 --umask=0000 --log-level DEBUG --log-file /tmp/somelog.log --vfs-cache-mode writes --vfs-cache-max-age=1m --vfs-cache-max-size 100M --vfs-cache-poll-interval 30s

I get speeds of about 30KB/s (I am on a gigabit line)

This is a chunk of my log https://pastebin.com/FWNGhFT9

I am then mounting the docker container on /downloads2
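For reference, the setup being described might be sketched like this; the image name is hypothetical and the mount flags are trimmed for brevity:

```shell
# Sketch: rclone runs on the host only; the container just sees the
# mountpoint as an ordinary bind-mounted volume.
rclone mount encrypted: /downloads2 --allow-other --vfs-cache-mode writes &

# --allow-other lets processes with a different uid (e.g. inside the
# container) access the FUSE mount.
docker run -v /downloads2:/downloads2 some-image
```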

That's a poor way to do it, as using a mount for copying a ton of files is just going to be slow.

If you want to back up something, it's better to use rclone copy to remote:backup or wherever you want to put it.

The mount is not a good fit for what you are trying to do.


Yeah I know :frowning: I just don't want to install rclone in the container!

Because? It simplifies everything you are trying to do rather than making this very slow.

In any event, for backing something up, if you keep running 'cp', it's going to keep overwriting everything.

You have --no-modtime set for some reason, so files aren't going to have mod-times either.

By setting --dir-cache-time to 1m on Google Drive, you lose any cached directory information, so it's going to slow things down a lot.

The --cache-* flags are all for the cache backend which, if your rclone.conf is correct, you aren't using, so they can be removed.

--allow-non-empty is bad 99% of the time, as you mount over an existing directory and hide things.

There isn't much else to tune and if you use cp, it'll overwrite each time.
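Putting that together, a trimmed version of the mount command might look like this; a sketch, not tested, and the longer dir-cache-time value is just a suggestion:

```shell
# Sketch: the --cache-* flags, --allow-non-empty and --no-modtime are
# dropped, and --dir-cache-time is raised so directory listings stay
# cached instead of expiring every minute.
rclone mount encrypted: /downloads2 \
  --allow-other \
  --dir-cache-time 96h \
  --gid=33 --uid=33 --umask=0000 \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 100M \
  --log-level DEBUG --log-file /tmp/somelog.log
```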


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.