What is the problem you are having with rclone?
I'm downloading large files (30GB) and then copying them to gdrive. Unfortunately the host that downloads these files hasn't got much space. Its primary volume has 14GB to spare, and a mounted volume has 32GB.
The mounted volume is used mainly to store the download, and the primary volume is where rclone mount should cache the download as it writes to gdrive.
This is the command that I'm using:
mount gdrive: /mnt/gdrive --config=/config/rclone/rclone.conf --allow-other --gid ${PGID} --uid ${PUID} --dir-cache-time 1h --buffer-size 256M --log-level ${LOG_LEVEL} --stats 1m --use-mmap --timeout 1h --drive-chunk-size 64M --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1024M --vfs-cache-mode writes --vfs-cache-max-size 14G
Yes, I know that --vfs-cache-max-size 14G won't work here because the files I upload are much bigger than that, and it looks like rclone can't upload incomplete chunks.
So basically, what are my alternatives? I get 30 MBytes/sec when uploading directly, but with the cache I can reach 65 MBytes/sec. Obviously I have to disable the cache, so what would be the best config to optimize throughput when uploading large files?
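For reference, this is roughly what I'd try for the direct-upload path: moving finished downloads with rclone move instead of writing through the mount, so no VFS cache space is needed at all. This is only a sketch; /mnt/downloads is a placeholder for wherever the 32GB mounted volume is, and the flag values are guesses to tune, not tested numbers.

```shell
# Sketch (untested here): upload the finished download directly,
# skipping the VFS write cache entirely.
# /mnt/downloads is a placeholder for the 32GB mounted volume.
# --drive-chunk-size is the main Drive throughput knob: larger chunks
# mean fewer round trips, at the cost of that much RAM per transfer.
# --transfers is kept low since each file is ~30GB.
rclone move /mnt/downloads gdrive: \
  --config=/config/rclone/rclone.conf \
  --drive-chunk-size 256M \
  --transfers 2 \
  --stats 1m
```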
What is your rclone version (output from rclone version)?
rclone 1.53.
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Debian 10.
Which cloud storage system are you using? (eg Google Drive)
Google Drive.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
mount gdrive: /mnt/gdrive --config=/config/rclone/rclone.conf --allow-other --gid ${PGID} --uid ${PUID} --dir-cache-time 1h --buffer-size 256M --log-level ${LOG_LEVEL} --stats 1m --use-mmap --timeout 1h --drive-chunk-size 64M --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 1024M --vfs-cache-mode writes --vfs-cache-max-size 14G
The rclone config contents with secrets removed.
[gdrive]
type = drive
client_id =
client_secret =
scope = drive
token =
team_drive =
root_folder_id =