If you type 'rclone config' in the terminal you will see how you named your remote (if you named it 'crypt', then the above is correct).
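For a quick check without opening the config menu, rclone can also list the remote names directly (a minimal sketch; the example output assumes remotes named 'nextcloud' and 'crypt'):

# Print every configured remote name, each followed by a colon
rclone listremotes
# Example output for this setup:
# crypt:
# nextcloud: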
If you want to copy files from your local PC to the remote you have to use the following format:
rclone copy /path/to/local/folder crypt:/path/on/remote/
You can then add whatever additional parameters you want, like -vv, etc.
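For example (a sketch; the paths are placeholders, and --dry-run only shows what would be transferred without copying anything):

# Verbose dry run: logs what would be copied, but moves no data
rclone copy -vv --dry-run /path/to/local/folder crypt:/path/on/remote/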
You don't have to unmount every time you copy or move files; you can leave it mounted. It seems your remote is named 'crypt', so in all cases you have to use it like this:
rclone copy /path/to/local/folder crypt:/path/on/remote/
If you don't specify 'crypt:' in the destination it won't work and nothing will be copied to your remote. Try it like that.
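To confirm the files actually arrived on the remote, you can list the destination afterwards (same placeholder path as above):

# List files (with sizes) under the destination path on the remote
rclone ls crypt:/path/on/remote/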
When I don't unmount, I get the error: "Error opening storage cache. Is there another rclone running on the same remote? failed to open a cache connection to"
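If you do need to unmount first, on Linux the usual way is via fusermount (a sketch; /path/to/files is assumed to be your mount point):

# Unmount the rclone FUSE mount
fusermount -u /path/to/files
# If the mount point is busy, a lazy unmount detaches it anyway
fusermount -uz /path/to/files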
Sure, I'll try.
(Sorry for my bad English.)
I only have a 50 up / 50 down internet connection (maybe other settings are better for me).
I want to use the mount for Plex.
[nextcloud]
type = webdav
url = nextcloud path
vendor = nextcloud
user = USER
pass = PASSWORD
[crypt]
type = crypt
remote = cache:/crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD
password2 = PASSWORD
MOUNT:
rclone mount --allow-other --allow-non-empty crypt: /path/to/files
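For a Plex-oriented mount, a common variant runs in the background and caches writes (a sketch, not from this thread; the flags are real rclone options, but the values and paths are assumptions):

# Background mount with write caching, suited to a media server
rclone mount crypt: /path/to/files \
  --allow-other \
  --daemon \
  --vfs-cache-mode writes \
  --buffer-size 64M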
One more question:
If I copy a few files and the process takes a long time, how can I find out how far along the transfer is after I have closed the SSH connection?
I started the process with -P and -vv.
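One common approach (a sketch; the log path and directories are assumptions) is to send output to a log file and start the copy so it survives the SSH session, e.g. with nohup or inside screen/tmux:

# Detach the copy from the terminal and log its output
nohup rclone copy /local/path crypt:/remote/path -v --log-file=/tmp/rclone.log &
# After reconnecting, follow the log to see how far it has got
tail -f /tmp/rclone.log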
Probably not, but it does depend somewhat on your setup and devices. The majority of people do not use cache from what I can see (I don't) and get better results without it.
How can I get Radarr to copy the finished data directly to the cloud, for example? Or can you just set the mount path in Radarr?
Then the copying process would definitely overfill the RAM for me again.
Is unionfs or mergerfs with a copy script maybe the better solution?
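For reference, a minimal mergerfs setup looks something like this (a sketch with assumed paths: /local/media is the local staging branch, /mnt/crypt is the rclone mount, and /mnt/union is what Plex/Radarr would point at):

# Merge the local folder and the cloud mount into one view;
# new files land on the first branch (/local/media)
mergerfs /local/media:/mnt/crypt /mnt/union \
  -o rw,allow_other,func.getattr=newest,category.create=ff

A scheduled rclone move can then push finished files from /local/media to crypt: later, which avoids buffering uploads in RAM.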
OK, mergerfs is working well!
At the moment I'm copying 4 files to my cloud.
(rclone copy --ignore-existing --bwlimit=3M -vv -P /local/path crypt:/path/path)
It has already transferred 160 GB, but the files are only ~129 GB together.
What could be the problem?
That's only a snippet of the log; based on that single line, it had a retry because you had a network blip or something similar occur.
The purpose of including the log is to help you faster and with less back and forth, as we can see things like the version and the command you ran, plus all the output, so we can diagnose and give you an answer.
The less information provided, the more guessing and back and forth happens.
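To capture a complete log for the next run, something like this works (a sketch; the log path is an assumption):

# Same copy as before, but with the full debug output saved to a file
rclone copy --ignore-existing --bwlimit=3M -vv --log-file=/tmp/rclone-copy.log /local/path crypt:/path/path

The full log records the rclone version, the exact command, and every retry, which is what makes diagnosing things like the extra ~30 GB of transferred data quick.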