Hetzner Storage Box > JottaCloud | Local copies issue

What is the problem you are having with rclone?

When I use the command:
rclone move --order-by name --delete-empty-src-dirs --fast-list -vv -P /mnt/sb-files/media/.transfer/ jotta-encrypted:/media/

It seems to me that the content is first cached on my local SSD, which is extremely limited. For example, transferring 27 GB leads to a 27 GB increase in local disk usage. Is this necessary, or is there a way to upload directly to JottaCloud?

Side Question:
I was a little sloppy when setting up the JottaCloud remote and didn't set up a subfolder there, so I mount the root directory - why is that regarded as inadvisable?

What is your rclone version (output from rclone version)

rclone v1.51.0

  • os/arch: linux/amd64
  • go version: go1.13.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04

Which cloud storage system are you using? (eg Google Drive)

Hetzner Storage Box which is being migrated to JottaCloud

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone move --order-by name --delete-empty-src-dirs --fast-list -vv -P /mnt/sb-files/media/.transfer/ jotta-encrypted:/media/

Thanks for any help.


Please see:


This is described here:


Thanks Animosity, I had actually read that doc, but I was in a frenzy all day yesterday trying to figure this stuff out - I must have forgotten.

  1. Concerning the mount point, I suppose I then have nothing to worry about (using default settings), correct?

  2. I am somewhat new to this, so
    a) Would I use --checksum when moving or mounting to improve things?
    b) Is there a way to determine the MD5 Checksum before transferring (and store it) to make the moving process smoother?

edit: would using:

rclone md5sum remote:path [flags]

on the Storage Box before moving the files stop the caching?

By the way, thanks so much for helping out the way you do here. Your posts and GitHub account have been a great help to me!


Unfortunately no; to upload to Jotta, rclone has to use the MD5 hash during the upload.

Unfortunately, nope. rclone generates the hash as it needs it to upload to JottaCloud. That would require a change in the backend to support external hashes, I would think.

Thanks. Kinds words are always very much appreciated!

Thanks again for your help.

Thus, my only solution would be to change my move command to place the cache directory somewhere with enough space, like this - correct?

rclone move --order-by name --delete-empty-src-dirs --cache-tmp-upload-path /mnt/sb-files/cache --fast-list -vv -P /mnt/sb-files/media/.transfer/ jotta-encrypted:/media/


You'd set TMPDIR to wherever you want the hash files to be calculated.

Something like:

export TMPDIR=/path/with/space

and then run your move command; you should see the files appear for calculation in the TMPDIR you defined.

Sorry, I am a noob again now. Where would I set that?

TMPDIR=/tmp/somedir && export TMPDIR && rclone move --order-by name --delete-empty-src-dirs --cache-tmp-upload-path /mnt/sb-files/cache --fast-list -vv -P /mnt/sb-files/media/.transfer/ jotta-encrypted:/media/

Would that be it?

You'd just change /tmp/somedir to be where you want it to be on your server though. Otherwise, that would be fine.

An example script that I use (with a custom rclone.conf) is:

felix@gemini:/opt/rclone/scripts$ cat upload_cloud
#!/bin/bash

# Exit if another instance is already running
if [[ "$(pidof -x "$(basename "$0")" -o %PPID)" ]]; then exit; fi

# Move older local files to the cloud
/usr/bin/rclone move /local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs --fast-list --max-transfer 700G
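A script along those lines can then be scheduled to run nightly. This crontab entry is only an illustration (the run time, cache path, and script path are assumptions, not from the thread), showing how TMPDIR could be exported for the run as discussed above:

```shell
# Illustrative crontab entry: run the upload script at 02:00 each night,
# with TMPDIR pointed at a disk that has room for the staged hash files
0 2 * * * TMPDIR=/mnt/sb-files/cache /opt/rclone/scripts/upload_cloud
```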

Perfect, it's working! Thanks a lot.

I suppose there is no way of circumventing this except changing the provider, correct? I guess that also means I will always need enough space in the cache directory for the transfer to go through.

Your --max-transfer 700G flag - would that be suitable if, say, I set aside a 700 GB cache directory and had the transfer split intelligently over two nights when I have 1.3 TB of files to transfer?

I don't use JottaCloud; I use Google Drive, which has a daily upload limit of 750 GB, so I stay under that. That parameter just tells rclone to stop uploading after 700 GB.
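As a rough sanity check on the two-nights idea (the sizes below are illustrative assumptions, not figures from this thread), ceiling division tells you how many capped runs a backlog needs:

```shell
# Illustrative planning arithmetic: how many nightly runs a backlog
# needs when each run stops at a --max-transfer cap (values assumed).
total_gb=1300      # backlog to move (~1.3 TB)
cap_gb=700         # per-run --max-transfer limit

# ceiling division: runs needed to drain the backlog
runs=$(( (total_gb + cap_gb - 1) / cap_gb ))
echo "$runs"
```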

The files in the TMPDIR should only be transient and there while they upload. Once uploaded, they go away so depending on your number of transfers, you may only have 4 files in there at a time by default.
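Following that logic, a back-of-the-envelope bound on peak TMPDIR usage is the number of parallel transfers times the largest file, since at most that many staged files exist at once (the transfer count matches rclone's default of 4; the file size is an assumption for illustration):

```shell
# Worst-case TMPDIR footprint: with N parallel transfers, at most N
# staged files sit in TMPDIR at any moment (rclone defaults to 4).
transfers=4          # rclone --transfers default
largest_file_gb=12   # assumed size of the largest single file

peak_gb=$(( transfers * largest_file_gb ))
echo "$peak_gb"
```

So the cache directory needs far less than the full 700 GB per run in the steady state, as long as it can hold a handful of the largest files.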

I think the specific case of needing to use a temporary file to upload a crypted file to Jottacloud should be fixed in the latest beta if you want to give it a go.

Thanks so much for your help!

Thanks ncw,

I am testing it from a different server and you seem to be right. That makes the whole thing a lot easier. Is there anything I should know about concerning the beta?

The beta is shortly to become the next release version so I think it should be reliable.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.