Rclone copy fills RAM (tmp dir full) on Synology

What is the problem you are having with rclone?

I am trying to copy files and rclone says "no space left on device" every time.
rclone writes everything into RAM, so the tmp dir fills up.

What is your rclone version (output from rclone version)

v1.51.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Running a Synology DS918+ with 8 GB RAM

Which cloud storage system are you using? (eg Google Drive)

Nextcloud

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy --ignore-existing -vv -P /sourcefolder /pathofthemount

Can I change the tmp dir?
I already tried rclone copy --ignore-existing --cache-tmp-upload-path /pathtoalocalfolder -P -vv /source /mountedfolder

This appears to be copying from local to local? Did you forget the remote:?
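On the tmp dir question: you can point rclone's cache data at a directory on a big volume with the global --cache-dir flag instead of leaving it in the default location. A rough sketch (remote: and the /volume1/rclone-cache path are placeholders, adjust them to your setup):

rclone copy --ignore-existing -vv -P --cache-dir /volume1/rclone-cache /sourcefolder remote:path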

I mounted my drive at /home/USER/Nextcloud/files

And I used the command
rclone copy --ignore-existing -P -vv /local/files/example /home/USER/Nextcloud/files

Can you give me an example for remote:?

rclone copy --ignore-existing -P -vv /local/files/example crypt:/path/
And do I have to unmount before?

Hi,

If you type 'rclone config' in the terminal you will see how you named your remote (if you named it 'crypt' then the above is correct).
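Alternatively, rclone listremotes prints just the remote names, something like this (the names here are only an example, yours will match your config):

rclone listremotes
crypt:
nextcloud: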

If you want to copy files from your local PC to the remote you have to use the following format:
rclone copy /path/to/local/folder crypt:/path/on/remote/

You can then add whatever additional parameters you want, like -vv etc.
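For example, with the flags you were already using:

rclone copy --ignore-existing -P -vv /path/to/local/folder crypt:/path/on/remote/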

OK, now it's working.
I didn't use remote: (every start is difficult)

I unmounted the remote, copied a file and mounted my remote again,
but now I can only see the one file I copied before and not the others.

That's how I mounted it:
rclone mount --allow-other --allow-non-empty crypt: /local/path &

You don't have to unmount every time you copy or move files; you can leave it mounted. It seems your remote is named 'crypt', so in all cases you have to use it like this:
rclone copy /path/to/local/folder crypt:/path/on/remote/

If you don't specify 'crypt:' in the destination it won't work and it won't copy it to your remote. Try it like that.

When I don't unmount, I get the error "Error opening storage cache. Is there another rclone running on the same remote? failed to open a cache connection to"

Can you provide us with the rclone config file (just delete your crypt password and any sensitive data)?

Also can you provide your mount settings for the service you are running?

Sure, I'll try.
(Sorry for my bad English.)
I only have a 50 up and 50 down internet connection (maybe other settings are better for me).
I want to use the mount for Plex.

[nextcloud]
type = webdav
url = nextcloud path
vendor = nextcloud
user = USER
pass = PASSWORD

[cache]
type = cache
remote = nextcloud:/nextcloud
plex_username = USER
plex_password = PASSWORD
chunk_size = 5M
info_age = 1d
chunk_total_size = 1G

[crypt]
type = crypt
remote = cache:/crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD
password2 = PASSWORD

MOUNT:
rclone mount --allow-other --allow-non-empty crypt:/path/to/files

One more question:
if I copy a few files and the process takes a long time, how can I find out how far along the transfer is after I have closed the SSH connection?
I started the process with -P and -vv.

Do I need the cache at all?

Probably not, but it does depend somewhat on your setup and devices. The majority of people do not use the cache backend from what I can see (I don't) and get better results without it.
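If you do drop the cache remote, the crypt remote would point directly at the webdav remote instead of cache:. Based on the config you posted, that would look roughly like this (double-check that the path matches where your encrypted data actually lives):

[crypt]
type = crypt
remote = nextcloud:/nextcloud/crypt
filename_encryption = standard
directory_name_encryption = true
password = PASSWORD
password2 = PASSWORD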

OK, thank you.
One last question.

How can I get Radarr to copy the finished data directly to the cloud, for example? Or can you just set the mount path in Radarr?
Then the copying process would definitely overfill the RAM again for me.

Is unionfs or mergerfs with a copy script maybe the better solution?

I don't write directly to my cloud drive.

I use a local drive along with mergerfs to combine the two, write locally first and upload at night, as that works a lot better for me.
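A rough sketch of that kind of setup (all paths, the branch names and the schedule are placeholders; crypt: is the remote from your config):

# combine a local folder and the rclone mount into one view; the local branch is listed
# first and category.create=ff makes new files land on it
mergerfs -o allow_other,category.create=ff /volume1/media-local:/volume1/media-cloud /volume1/media

# run something like this from a nightly cron job to push the local files to the remote
rclone move /volume1/media-local crypt:/path --bwlimit=3M -v --log-file=/volume1/rclone-move.log --delete-empty-src-dirs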

OK, maybe I can install mergerfs on my Synology.
Without the cache it works better.

OK, mergerfs is working well! :slight_smile:
At the moment I'm copying 4 files to my cloud.
(rclone copy --ignore-existing --bwlimit=3M -vv -P /local/path crypt:/path/path)
It has already transferred 160 GB but the files are only ~129 GB together.
What could be the problem?

You'd need to provide a debug log to diagnose the issue.

-vv says

write: broken pipe - low level retry 1/10

That's only a snippet of the log. Based on that single line, it did a retry because you had a network blip or something similar occur.

The purpose of including the log is to help you faster and with less back and forth, as we can see things like the version, the command you ran and all the output, so we can diagnose and give you an answer.

The less information provided, the more guessing and back and forth happens.

How can I see the log?

I only used -vv (still copying)
read: connection reset by peer)
2020-05-01 07:25:02 DEBUG : pacer: low level retry 1/1
broken pipe)
read: connection reset by peer - low level retry 1/10
write tcp XX.XX.XX.XX:42344->XX.XX.XX.XX:443: write: broken pipe - low level retry 1/10
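To capture a full log you can read later (which also answers your earlier question about checking progress after closing the SSH session), you could run the copy detached with nohup and write the output to a file with --log-file. A sketch with placeholder paths, reusing your flags:

nohup rclone copy --ignore-existing --bwlimit=3M -vv --log-file=/volume1/rclone-copy.log /local/path crypt:/path/path &

# later, from a new SSH session, follow the log:
tail -f /volume1/rclone-copy.log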