Mount for Nextcloud data and big files

A bit of context first. I am trying to set up a Nextcloud server and use rclone mount for its storage.
Basically, I point Nextcloud's data dir at the rclone mount and keep the appdata_xxxx dir (the one inside the data dir) on the server itself.

I use an S3-compatible service as the rclone backend.

I am now running some tests before opening it up to my family and friends. They will use it through the web interface and the desktop client, and the desktop client uses WebDAV to connect to the Nextcloud server.

Luckily, rclone can handle WebDAV as well, so I set up another rclone remote and use rclone copy to test the speed and the limits.

What is the problem you are having with rclone?

rclone copy of a >5 GB file fails. It reaches 100%, then keeps going to 120-130% and beyond without limit. I have to stop the transfer with Ctrl+C, and the funny part is that if I wait long enough, the file does eventually show up in Nextcloud. In the meantime, I only see a Test.ocTransferId*.part file in the directory tree.
I'm 100% sure this is related to rclone and file size, because I first tried putting the Nextcloud data folder on the local filesystem and it works fine!
And with the Nextcloud data folder on the rclone mount, I can rclone copy files up to 5 GB without problems: 4.5 GB is OK, 5.15 GB is not. The test files are created with dd if=/dev/zero of=Test2 bs=1k count=4800000 and dd if=/dev/zero of=Test bs=1k count=5200000
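As an aside for anyone reproducing this: dd's bs=1k means 1024-byte blocks, and rclone's size suffixes are binary (so "5G" in --s3-upload-cutoff means 5 GiB). A quick sketch of the arithmetic; whether the larger file crosses a provider's "5 GB" single-upload limit depends on whether that limit is counted in decimal GB or binary GiB:

```shell
# sizes of the two dd test files (bs=1k is 1024-byte blocks)
small=$((4800000 * 1024))            # 4915200000 bytes, roughly 4.58 GiB
big=$((5200000 * 1024))              # 5324800000 bytes, roughly 4.96 GiB
# rclone size suffixes are binary, so "5G" is 5 GiB:
cutoff=$((5 * 1024 * 1024 * 1024))   # 5368709120 bytes
echo "small=$small big=$big cutoff=$cutoff"
# prints small=4915200000 big=5324800000 cutoff=5368709120
```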

This is the command I use: rclone copy -P -c Test text-next:Photos/ (the Test file is on the server). The upload doesn't take more than 10 minutes.

Run the command 'rclone version' and share the full output of the command.

rclone v1.50.2

  • os/arch: linux/amd64
  • go version: go1.13.6

Which cloud storage system are you using? (eg Google Drive)

S3 and WebDAV

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy -P -c Test text-next:Photos/

The rclone config contents with secrets removed.

[S3]
type = s3
provider = Other
env_auth = false
access_key_id = xxxx
secret_access_key = xxxx
endpoint = s3.pub1.infomaniak.cloud
acl = private
bucket_acl = private
force_path_style = true
v2_auth = true


[text-next]
type = webdav
url = https://test-next.xxxx.eu/remote.php/dav/files/ulysse132/
vendor = nextcloud
user = ulysse132
pass = xxxx

This may be related to the way I mount the S3 bucket. Here is the systemd unit that performs the mount:

[Unit]
Description=rclone
After=network-online.target

[Service]
Type=simple
Environment=MOUNT_DIR=/mnt/Test-Next
ExecStart=/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --uid 33 \
        --gid 33 \
        --allow-other \
        --umask 0007 \
        --attr-timeout 2h \
        --dir-cache-time 2h \
        --poll-interval 30s \
        --vfs-cache-mode full \
        --no-modtime \
        --s3-upload-cutoff 5G \
        S3:test-next "${MOUNT_DIR}"

ExecStop=/bin/fusermount -u "${MOUNT_DIR}"

#Restart info
Restart=always
RestartSec=10

User=root
Group=root

[Install]
WantedBy=default.target

I have already tried adding:
--timeout 10m
--cache-tmp-upload-path /root/tmp
--cache-tmp-wait-time 10m
--cache-writes
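One variant that might be worth testing (an assumption on my part, not a confirmed fix): rclone's default for --s3-upload-cutoff is 200M, and files larger than the cutoff are sent as multipart uploads, so dropping the cutoff back to the default would force multipart for everything over 200M instead of attempting a single PUT of a near-5 GiB object. A sketch of the changed ExecStart line:

```shell
# hypothetical variant of the ExecStart line above: use rclone's default
# cutoff so anything over 200M goes up as a multipart upload
/usr/bin/rclone mount \
        --config=/root/.config/rclone/rclone.conf \
        --uid 33 --gid 33 --allow-other --umask 0007 \
        --attr-timeout 2h --dir-cache-time 2h --poll-interval 30s \
        --vfs-cache-mode full --no-modtime \
        --s3-upload-cutoff 200M \
        S3:test-next /mnt/Test-Next
```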

Any ideas on how to upload files larger than 5 GB?

hello and welcome to the forum,

that version of rclone is many years old.

the easiest way to get the latest stable release is
https://rclone.org/downloads/#script-download-and-install

so update to the latest stable and test again

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.