Cacheless upload to OneDrive

What is the problem you are having with rclone?

I am trying to upload very large files (streamed tarballs) without touching the local SSD. One of the tarballs is ~70GB; if that's too large, I'll find another solution. In any case, even if I split it into smaller files, I do not wish to keep any local cache: all in all I'll be creating roughly 100GB/day, which would wear the SSD and also require a lot of free space.

Please advise, thanks!
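Since splitting into smaller files is an option, one streaming approach I considered: GNU split can chunk the pipe on the fly and hand each chunk to a command, so nothing touches the local disk. A rough sketch (the 10G chunk size and the OneDrive:/backup path are placeholders):

```shell
# Split the tar stream into fixed-size parts so no single upload is ~70GB.
# split generates part names (part-aa, part-ab, ...) in $FILE and runs the
# --filter command once per chunk, feeding the chunk on stdin.
tar cvf - /files \
  | split -b 10G --filter='rclone rcat "OneDrive:/backup/$FILE"' - part-
```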

Run the command 'rclone version' and share the full output of the command.

rclone v1.50.2
- os/arch: linux/amd64
- go version: go1.13.6

Which cloud storage system are you using? (eg Google Drive)

OneDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Multiple commands:

tar cvf - /files | rclone rcat OneDrive:/outputfile.tar

Or:

/usr/bin/rclone --vfs-cache-mode writes mount "OneDrive": /mnt/OneDrive --daemon

And then

tar cvf - /files > /mnt/OneDrive/test

The rclone config contents with secrets removed.

[OneDrive]
type = onedrive
token = {"access_token":"REDACTED","expiry":"REDACTED"}
drive_id = REDACTED
drive_type = personal

A log from the command with the -vv flag

Not sure if relevant?

Thanks in advance!

hello and welcome to the forum

  • you should update to the latest stable, v1.57.0; there have been major vfs updates since your very old version.

  • use the first command, as it does not use rclone mount and does not use the vfs file cache.

  • if you really want to use rclone mount, remove --vfs-cache-mode writes.
    some use-cases do not require the vfs file cache, and tar might work.
    it should be easy to do a quick test and see what happens,
    then run rclone check to compare the hashes.
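in concrete terms, something like this (the mount point and paths are the ones from above; rclone check needs a local copy to compare against, so it only applies where one exists):

```shell
# update to the latest stable release via the official install script
curl https://rclone.org/install.sh | sudo bash

# mount without the vfs file cache (just drop --vfs-cache-mode writes)
rclone mount "OneDrive": /mnt/OneDrive --daemon

# if a local copy exists, compare checksums between local and remote
rclone check /files OneDrive:/files
```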

Thanks!
rclone rcat creates a spool dir in /tmp; is the new version different in this regard?

i think @jwink3101 might have something helpful to comment.......

The version you have is from Nov 2019 and is over 2 years old. If you can update, validate a test and share a debug log, we can see what's going on.

I see no tmp writes at all.

felix@gemini:~/test$ tar cvf - /home/felix/test | rclone rcat DB:test.tar -vvv
tar: Removing leading `/' from member names
/home/felix/test/
/home/felix/test/hosts
/home/felix/test/one
/home/felix/test/two
2022/01/11 10:13:21 DEBUG : Setting --config "/opt/rclone/rclone.conf" from environment variable RCLONE_CONFIG="/opt/rclone/rclone.conf"
2022/01/11 10:13:21 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "rcat" "DB:test.tar" "-vvv"]
2022/01/11 10:13:21 DEBUG : Creating backend with remote "DB:"
2022/01/11 10:13:21 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2022/01/11 10:13:22 DEBUG : Dropbox root '': File to upload is small (10240 bytes), uploading instead of streaming
2022/01/11 10:13:22 DEBUG : test.tar: Uploading chunk 1/1
2022/01/11 10:13:23 DEBUG : test.tar: Uploading chunk 2/1
2022/01/11 10:13:24 DEBUG : Dropbox root '': Adding "/test.tar" to batch
2022/01/11 10:13:25 DEBUG : Dropbox root '': Batch idle for 500ms so committing
2022/01/11 10:13:25 DEBUG : Dropbox root '': Committing sync batch length 1 starting with: /test.tar
2022/01/11 10:13:26 DEBUG : Dropbox root '': Upload batch completed in 149.260302ms
2022/01/11 10:13:26 DEBUG : Dropbox root '': Committed sync batch length 1 starting with: /test.tar
2022/01/11 10:13:26 DEBUG : test.tar: dropbox = 6d37ba65fcf935e762a0c5e1981503d8a4f6054765ac3353e63c1c1a4c7d6da0 OK
2022/01/11 10:13:26 INFO  : test.tar: Copied (new)
2022/01/11 10:13:26 DEBUG : 11 go routines active
2022/01/11 10:13:26 INFO  : Dropbox root '': Commiting uploads - please wait...

Perfect, thanks. I will do that; I was simply using whatever Ubuntu 20.04 came with.
I'll upgrade and give it a shot.

To be honest, I'd much rather keep the files uncompressed/untarred, but I wish to retain unix permissions (rwx/owner/group). If there is a better approach that I missed, I'll gladly check it out.
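For what it's worth, tar already stores each member's mode/owner/group in the archive, so permissions survive the round trip as long as extraction uses -p (and runs as root if ownership matters too). A quick local check:

```shell
# tar records permission bits in the archive; -p on extract restores them
mkdir -p src out
chmod 750 src
tar cf perms.tar src
tar xpf perms.tar -C out        # -p preserves the stored mode bits
stat -c '%a' out/src            # prints 750
```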

Thanks again, I'll try the new version and report back.

Package manager builds are often old/dated and not recommended for rclone, since it's up to each package maintainer to keep them current and the majority do not; as in your case, where it's years old.

I actually don't have much to add, except that I am surprised it was reported as doing what they wanted. OneDrive does not support StreamUpload, so it should have to spool, unless something has changed with OneDrive and the docs have not been updated.

I do use OneDrive but I use rcat sparingly.

Which is why we always ask for a log file, as that would show up in the first lines...

well, i thought that was the case, but the OP has, so far, not posted the debug log.
as i do not use rcat, it's good you can confirm that

Thanks guys.
It is indeed using the spool:

2022/01/11 17:24:28 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "-vv" "rcat" "OneDrive:/test"]
2022/01/11 17:24:28 DEBUG : Creating backend with remote "OneDrive:/"
2022/01/11 17:24:28 DEBUG : Using config file from "/REDUCTED/.config/rclone/rclone.conf"
2022/01/11 17:24:29 DEBUG : fs cache: renaming cache item "OneDrive:/" to be canonical "OneDrive:"
2022/01/11 17:24:30 DEBUG : One drive root '': Target remote doesn't support streaming uploads, creating temporary local FS to spool file
2022/01/11 17:24:30 DEBUG : Creating backend with remote "/tmp/rclone-spool3384328787"

I am obviously not OP but to test it out:

$ echo "testing"|rclone rcat onedrive:test.txt -vv
2022/01/11 10:19:54 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "rcat" "onedrive:test.txt" "-vv"]
2022/01/11 10:19:54 DEBUG : Creating backend with remote "onedrive:"
2022/01/11 10:19:54 DEBUG : Using config file from "/home/jwinokur/.config/rclone/rclone.conf"
2022/01/11 10:19:54 DEBUG : One drive root '': Token expired but no uploads in progress - doing nothing
2022/01/11 10:19:54 DEBUG : onedrive: Loaded invalid token from config file - ignoring
2022/01/11 10:19:55 DEBUG : Saving config "token" in section "onedrive" of the config file
2022/01/11 10:19:55 DEBUG : Keeping previous permissions for config file: -rw-r--r--
2022/01/11 10:19:55 DEBUG : onedrive: Saved new token in config file
2022/01/11 10:19:55 DEBUG : One drive root '': File to upload is small (8 bytes), uploading instead of streaming
2022/01/11 10:19:55 DEBUG : test.txt: Starting multipart upload
2022/01/11 10:19:55 DEBUG : test.txt: Uploading segment 0/8 size 8
2022/01/11 10:19:56 DEBUG : test.txt: sha1 = 9801739daae44ec5293d4e1f53d3f4d2d426d91c OK
2022/01/11 10:19:56 INFO  : test.txt: Copied (new)
2022/01/11 10:19:56 DEBUG : 10 go routines active

I guess that is too small. I could change --streaming-upload-cutoff, but I'll just do it the hard way.
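For reference, the flag-based version of the same test would look something like this (--streaming-upload-cutoff is a real rclone global flag with a default of 100Ki; lowering it should push even a tiny input past the small-file shortcut):

```shell
# force even an 8-byte input down the streaming/spool path instead of
# the small-file upload shortcut by lowering the cutoff
echo "testing" | rclone rcat onedrive:test.txt --streaming-upload-cutoff 0 -vv
```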

$ head -c 100M < /dev/urandom > rand.dat
$ cat rand.dat | rclone rcat onedrive:rand.dat -vv
2022/01/11 10:23:37 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "rcat" "onedrive:rand.dat" "-vv"]
2022/01/11 10:23:37 DEBUG : Creating backend with remote "onedrive:"
2022/01/11 10:23:38 DEBUG : Using config file from "/home/jwinokur/.config/rclone/rclone.conf"
2022/01/11 10:23:39 DEBUG : One drive root '': Target remote doesn't support streaming uploads, creating temporary local FS to spool file
2022/01/11 10:23:39 DEBUG : Creating backend with remote "/tmp/rclone-spool592646558"
2022/01/11 10:23:39 DEBUG : rand.dat: Size and modification time the same (differ by 0s, within tolerance 1s)
2022/01/11 10:23:39 DEBUG : rand.dat: Starting multipart upload
2022/01/11 10:23:39 DEBUG : rand.dat: Uploading segment 0/104857600 size 10485760
2022/01/11 10:23:40 DEBUG : rand.dat: Uploading segment 10485760/104857600 size 10485760
2022/01/11 10:23:41 DEBUG : rand.dat: Uploading segment 20971520/104857600 size 10485760
2022/01/11 10:23:42 DEBUG : rand.dat: Uploading segment 31457280/104857600 size 10485760
2022/01/11 10:23:43 DEBUG : rand.dat: Uploading segment 41943040/104857600 size 10485760
2022/01/11 10:23:44 DEBUG : rand.dat: Uploading segment 52428800/104857600 size 10485760
2022/01/11 10:23:45 DEBUG : rand.dat: Uploading segment 62914560/104857600 size 10485760
2022/01/11 10:24:01 DEBUG : rand.dat: Uploading segment 73400320/104857600 size 10485760
2022/01/11 10:24:02 DEBUG : rand.dat: Uploading segment 83886080/104857600 size 10485760
2022/01/11 10:24:03 DEBUG : rand.dat: Uploading segment 94371840/104857600 size 10485760
2022/01/11 10:24:04 DEBUG : rand.dat: sha1 = cda6fb5498ac1d84936532df3a17f39cddc0c280 OK
2022/01/11 10:24:04 INFO  : rand.dat: Copied (new)
2022/01/11 10:24:04 DEBUG : 8 go routines active

At least with my OneDrive setup, this does spool large files. Sorry @gibsonlp!

My advice: buy a small (~500GB) SSD and use that as your temp directory. And don't use rcat, since it can't retry. Copy to the external SSD and then do a regular upload. Obviously not ideal, but at least this way it doesn't wear your internal SSD.
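A sketch of that workflow, assuming the scratch drive is mounted at /mnt/scratch (path hypothetical); unlike rcat, a regular move can retry failed transfers, and rclone move deletes the local copy once the upload is verified:

```shell
SCRATCH=/mnt/scratch                 # staging drive, not the internal SSD
tar cvf "$SCRATCH/files.tar" /files  # write the tarball to the scratch drive
rclone move "$SCRATCH/files.tar" OneDrive:/backups/ -v
# rclone move removes the local tarball after a successful upload
```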

This is not an rclone limitation. It is a OneDrive one. You could also look for a different service.

Thanks.
It's a remote server, but I do have a couple of free 2.5" slots; I can use one of them for an old magnetic drive to be my cache dir. No worries.
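One way to point the rcat spool at that drive, assuming the spool honors the standard TMPDIR environment variable (Go's default temp location); rclone also has a --temp-dir global flag that may apply here, so a quick -vv run to confirm where the spool lands is worthwhile:

```shell
# redirect temporary files to the magnetic drive (mounted at /mnt/hdd
# here; adjust to the real mount point)
mkdir -p /mnt/hdd/rclone-tmp
tar cvf - /files | TMPDIR=/mnt/hdd/rclone-tmp rclone rcat OneDrive:/outputfile.tar -vv
```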