What is the problem you are having with rclone?
I want to upload quite a few large folders to the cloud using rclone. Since they contain many files, I need to archive them before uploading, but I also want reasonably good random access into the archives, so I can't use tarballs. Here's what I've tried so far:
- Creating zip / 7z / squashfs archives directly in an rclone mount without vfs-cache. This fails because all of these tools need to seek in the output file.
- The same as the first attempt, but with --vfs-cache-mode full or writes. The cache fills my local volume before the archive completes, so no archive is made.
- Piping zip output into rclone rcat, which is supposed to work but doesn't due to Info-ZIP bugs: it crashes on symlinks, and even when the folder contains no symlinks the resulting archives are sometimes corrupt.
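For reference, the attempts above look roughly like this (the remote name, mount point, and paths are placeholders, not my actual setup):

```shell
# Attempts 1/2: archive straight into a mount. Without a VFS cache the
# archivers fail because they seek in the output file; with
# --vfs-cache-mode writes/full the cache fills the local volume first.
rclone mount gdrive:archives /mnt/gdrive --vfs-cache-mode writes &
zip -r /mnt/gdrive/data.zip /path/to/data

# Attempt 3: stream zip to stdout ("-") and upload with rcat. This
# avoids seeking entirely, but Info-ZIP's streamed output is unreliable.
zip -r - /path/to/data | rclone rcat gdrive:archives/data.zip
```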
So I wonder: can rclone mount provide seekability during writes without storing the entire file locally? I think ncw mentioned something along these lines a while back: VFS: File bigger than free space - #7 by ncw
Alternatively, does anyone know of an archiving tool that can stream its output sequentially while still producing archives with good random access?
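For comparison, tar streams cleanly because it writes strictly sequentially, so a pipe like the one below works fine; the problem is that a plain tarball gives none of the random access I'm after (remote name and paths are placeholders):

```shell
# tar never seeks in its output, so it pipes into rcat without issue --
# but the resulting tarball has no index for random access into members
tar -cf - /path/to/data | rclone rcat gdrive:archives/data.tar
```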
What is your rclone version (output from rclone version)
1.55.0
Which OS you are using and how many bits (eg Windows 7, 64 bit)
CentOS 7.9.2009
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
See above
The rclone config contents with secrets removed.
[gdrive]
type = drive
scope = drive
upload_cutoff = 1G
chunk_size = 1G
A log from the command with the -vv flag
Will provide if needed