Rclone working with limited hard disk space?

What is the problem you are having with rclone?

Not really a problem; I'm trying to automate a workflow and am seeking advice.

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.11.0-1029-gcp (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: static
  • go/tags: none
Yes

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --daemon remote: local
cp local/1.zip  ..
cd ..
unzip  1.zip
mv 1/  toBeEncrypted/.
rclone -q copy toBeEncrypted/ remoteEncryted:

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste  log here

First of all, thank you all for making Rclone available.
I am trying to encrypt some of my files, but I need to download and unzip them first. Since my computer has limited disk space, the files can't all be downloaded at once. I wonder if anyone with Rclone experience can share a few tips for achieving the same goal as the workflow above.
Thank you.

hello and welcome to the forum,

i know with .7z 7zip files, from a rclone mount, i can extract a single file at a time.
so perhaps in a loop, as sketched below:

  1. unzip a single file from 1.zip to toBeEncrypted/.
  2. rclone -q copy toBeEncrypted/ remoteEncryted:
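
as a rough sketch of those two steps — assuming the mount is at local/, that the member names have no spaces, and using rclone move instead of copy so the local copy is deleted after upload (remote names are the ones from your original post):

for f in $(unzip -Z1 local/1.zip)
do
  # skip directory entries in the zip listing
  case "$f" in */) continue ;; esac
  # recreate the member's folder, then extract just that one member
  mkdir -p "toBeEncrypted/$(dirname "$f")"
  unzip -p local/1.zip "$f" > "toBeEncrypted/$f"
  # upload it and remove the local copy to keep disk usage low
  rclone -q move toBeEncrypted/ remoteEncryted:
done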

Yes, I'm trying to do the loop; there are a lot of zip files.

Can you please explain a bit more about "from a rclone mount, i can extract a single file at a time", with an example?

as a test, this should work
unzip -p local/1.zip somefile > toBeEncrypted/somefile

Thank you for the quick reply.
Can you please help me understand the loop, based on your command?

I'm trying to copy all files from remote: personal (where the personal folder contains 1.zip, 2.zip, 3.zip, ... 1000.zip),
then unzip them,
then move all the unzipped folders to remoteEncrypted:

The problem I'm facing: I can't download all the files locally, since my computer doesn't have enough hard disk space.

not sure of your use case,

for each .zip:

  1. download it
  2. process it
  3. delete it
  4. goto step 1

what is wrong with that?

my command extracts a single file from .zip, and moves it to the dir toBeEncrypted
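
as a rough sketch of those four steps — assuming the zips live in remote:personal (as described above), that the file names have no spaces, and using a hypothetical local scratch dir work/:

for z in $(rclone lsf remote:personal --include "*.zip")
do
  rclone copy "remote:personal/$z" work/          # 1. download one zip
  unzip "work/$z" -d toBeEncrypted/               # 2. process it
  rclone -q move toBeEncrypted/ remoteEncryted:   # 3. upload the extracted files and delete them locally
  rm "work/$z"                                    # 4. delete the zip, then repeat with the next one
done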

Nothing wrong at all, but I'm trying to do all the steps automatically/programmatically.

Can't you just unzip directly to encryptedRemote?

rclone mount --daemon remote: local
rclone mount --daemon remoteEncrypted: encrypted
cd encrypted
for a in ../local/*
do
unzip "$a"
done

(I've never tried two rclone mount at the same time...)

You might want to ensure vfs caching is set low (or disabled) so it doesn't fill up your local disk; caching won't help much here, especially on reads.
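
For example (the exact values here are only illustrative), the source mount can run with caching off entirely, and if the destination mount does need write caching, its size can be capped:

# source mount: reads only, no cache needed
rclone mount remote: local --daemon --vfs-cache-mode off
# destination mount: if write caching turns out to be needed, cap how much disk it may use
rclone mount remoteEncrypted: encrypted --daemon --vfs-cache-mode writes --vfs-cache-max-size 1G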

mounting two points is ok.
The code you shared here seems to require downloading all the files locally first, then unzipping them one by one into encryptedRemote.
But my hard disk can't hold all the files locally.

make sure to use a debug log, and look for issues.
many apps will not work well without --vfs-cache-mode

if you use two mounts, then rclone should not be using local disk space.

or
you can try one of my suggestions, which i prefer over using two mounts.

  1. download one zip
  2. process the zip
  3. delete the zip
  4. goto step 1

No it doesn't. It uses the "local" mount (so it reads directly from the remote: service) and writes to the encrypted (remoteEncrypted:) drive with no local storage requirements (beyond whatever caching the mount does).

using an rclone mount for writes without the vfs file cache is, imho, a no-go.

  1. get errors like this
ERROR : file.txt: WriteFileHandle: Truncate: Can't change size without --vfs-cache-mode >= writes
  2. rclone does not calculate the md5
  3. rclone does not create the header x-amz-meta-md5chksum

Just tested; one zip file worked ok while mounting as

rclone mount --daemon --vfs-cache-mode off remoteDisk:re local

my point is still valid, there are critical features missing when using --vfs-cache-mode=off.
that prevents me from trusting it. perhaps that is ok for you.

in addition to the three issues listed above, there is yet another critical feature missing:
if there is a problem with writing,
--- rclone might not notice the problem.
--- even if rclone notices the problem, it will not retry the upload.

so one way or another i would process one zip file at a time.
--- download the zip, unzip, rclone move files to remoteEncryted:, delete the zip
or
--- use the double mount,
for dest mount, to minimize the size of the vfs file cache, use
rclone mount remoteDisk:re local --daemon --vfs-cache-mode=writes --vfs-cache-max-age=10s --vfs-cache-poll-interval=10s

" so one way or another i would process one zip file at a time.
--- download the zip, unzip, rclone move files to remoteEncryted:, delete the zip"

That is my goal as well.
but I can't follow your logic/point with the code in your very first reply.
(I am very new to rclone and coding.)
If you have time, please share more complete code to illustrate your points. Thank you.

basically, i would use @sweh's code, but for the dest mount, use my command.

rclone mount remote: local --daemon
rclone mount remoteEncrypted: encrypted --daemon --vfs-cache-mode=writes --vfs-cache-max-age=10s --vfs-cache-poll-interval=10s
cd encrypted
for a in ../local/*
do
unzip "$a"
done

Hi @asdffdsa,

Can you please share a bit of insight into this mount,

rclone mount remoteEncrypted: encrypted --daemon --vfs-cache-mode=writes --vfs-cache-max-age=10s --vfs-cache-poll-interval=10s

since my local disk is nearly full:


> Filesystem           Size  Used Avail Use% Mounted on
> /dev/root             29G   28G  998M  97% /
> devtmpfs             480M     0  480M   0% /dev
> tmpfs                485M     0  485M   0% /dev/shm
> tmpfs                 97M  936K   96M   1% /run
> tmpfs                5.0M     0  5.0M   0% /run/lock
> tmpfs                485M     0  485M   0% /sys/fs/cgroup

I understand we can't be cheap sometimes (the loop is going through a lot of zip files). I wonder if there is any solution in this particular situation. Thank you in advance.

as per the docs,
--- make sure you created your own gdrive client id.
--- gdrive can be slow when there are a lot of small files.
so if that is the problem, no good solution.

after a file has been unzipped into the local vfs file cache:
--- after 5 seconds, the upload will start.
--- once the upload has completed, the file will be purged after 10 seconds.

so if you stop processing the zip files, the cache should shrink in size quickly.
make sure that is the case.
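
one way to check, assuming the default --cache-dir, is to watch the cache directory and free space while the loop runs:

# size of the vfs file cache (default cache location)
du -sh ~/.cache/rclone/vfs
# overall free space on the root filesystem
df -h /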

maybe try --vfs-cache-max-age=1s --vfs-cache-poll-interval=1s

as for the loop, might need to slow it down by adding a delay,
for example, sleep 60s
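
for example, using @sweh's loop from above, the delay would go inside the loop body (60s is just a starting point to tune):

for a in ../local/*
do
  unzip "$a"
  # pause so the vfs cache has time to upload and purge before the next zip
  sleep 60
done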

or buy a cheap usb flash key

if this is a one-time transfer, rent a cheap vm from google or a seedbox and run rclone on that.
