I am trying to upload very large files (streamed tarballs) without touching the local SSD. One of the tarballs is ~70GB; if that's too large, I'll find another solution. In any case, even if I split it into smaller files, I don't want to keep any local cache: all in all I'll be creating roughly 100GB/day, which would wear the SSD and also require a lot of free space.
Please advise, thanks!
Run the command 'rclone version' and share the full output of the command.
rclone v1.50.2
- os/arch: linux/amd64
- go version: go1.13.6
Which cloud storage system are you using? (eg Google Drive)
OneDrive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Multiple commands:
tar cvf - /files | rclone rcat OneDrive:/outputfile.tar
Or:
/usr/bin/rclone --vfs-cache-mode writes mount "OneDrive": /mnt/OneDrive --daemon
And then:
tar cvf - /files > /mnt/OneDrive/test
The rclone config contents with secrets removed.
[OneDrive]
type = onedrive
token = {"access_token":"REDACTED","expiry":"REDACTED"}
drive_id = REDACTED
drive_type = personal
should update to the latest stable v1.57.0, as there have been major vfs updates since your very old version.
use the first command, as it uses neither rclone mount nor the vfs file cache.
if you really want to use rclone mount, remove --vfs-cache-mode writes.
some use-cases do not require the vfs file cache, and tar might work without it.
should be easy to do a quick test and see what happens,
and then run rclone check to compare the hashes.
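a quick sketch of that test, using the official install script; paths and filenames are just examples:

# upgrade to the latest stable with the official install script
curl https://rclone.org/install.sh | sudo bash

# mount without the vfs file cache; --vfs-cache-mode defaults to off
rclone mount OneDrive: /mnt/OneDrive --daemon

# stream the tarball straight onto the mount
tar cvf - /files > /mnt/OneDrive/test.tar

keep in mind rclone check compares two sides, so for the hash comparison you would need a local copy of the tarball to check against.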
Perfect, thanks. I will do it. I was simply using whatever Ubuntu 20.04 came with.
I'll upgrade and give it a shot.
To be honest, I'd much rather keep the files uncompressed/untarred, but I want to retain the unix permissions (rwx/owner/group); if there is a better approach that I missed, I'll gladly check it out.
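In case one huge stream turns out to be fragile, the split-into-smaller-files fallback I mentioned would look something like this (just a sketch using GNU split's --filter; chunk size and paths are illustrative):

tar cvf - /files | split -b 8G -d --filter='rclone rcat OneDrive:/backup/$FILE' - part.tar.

split runs the filter once per chunk and exposes the chunk name as $FILE, so each 8GB piece gets uploaded as its own object.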
Thanks again, I'll try the new version and report back.
Package-manager builds of rclone tend to be old/dated and are not recommended, as it's up to the distro maintainer to keep them current and the majority do not; as in your case, it's years old.
I actually don't have much to add, except that I am surprised they said further down that it is doing what they wanted. OneDrive does not support StreamUpload, so it should have to spool. Unless something has changed with OneDrive and the docs have not been updated.
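One way to see what the backend advertises (works on recent rclone versions; output abbreviated):

rclone backend features OneDrive:

and look for something like "StreamUpload": false in the JSON it prints.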
At least with my OneDrive setup, this does spool large files. Sorry @gibsonlp!
My advice: buy a small (~500GB) SSD and use that as your temp directory. And don't use rcat, since it can't retry; copy to the external SSD and then do a regular upload. Obviously not ideal, but at least this way it doesn't wear your internal SSD.
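For example, something like this (paths are illustrative; unlike rcat, rclone copy retries failed transfers, --retries defaults to 3):

tar cvf /mnt/tempssd/backup.tar /files
rclone copy /mnt/tempssd/backup.tar OneDrive:/backups/
rm /mnt/tempssd/backup.tar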
This is not an rclone limitation. It is a OneDrive one. You could also look for a different service.
Thanks.
It's a remote server, but I do have a couple of free 2.5" slots; I can use one of them for an old magnetic drive to be my cache dir, no worries.
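Something like this, I guess (paths are illustrative; --temp-dir is rclone's global location for temporary files and --cache-dir sets where the vfs cache lives when mounting; I'm assuming rcat's spool file honors --temp-dir):

# spool rcat's temporary file on the spinning disk instead of the SSD
tar cvf - /files | rclone rcat --temp-dir /mnt/hdd/rclone-tmp OneDrive:/outputfile.tar

# or, when mounting, keep the vfs write cache on the spinning disk
rclone mount OneDrive: /mnt/OneDrive --vfs-cache-mode writes --cache-dir /mnt/hdd/rclone-cache --daemon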