Copying or synchronizing large files to Dropbox

What is the problem you are having with rclone?

My understanding is that rclone copies and/or synchronizes files with services like Dropbox using their API. If so, Dropbox indicates that there's a file size limit - What is the Dropbox file size limit? - Dropbox Help. Since I have single files larger than 1.5 TB (compressed with LZMA), how can I copy or synchronize these to Dropbox without splitting the files?

Which cloud storage system are you using? (eg Google Drive)

Dropbox

Based on the link you provided, the Dropbox API limit is 375 GB. So it is what it is...

With rclone, the only option available is to use the rclone chunker overlay.
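
For reference, a minimal sketch of such an overlay, assuming "dropbox" is an already configured Dropbox remote (the overlay name, path, and chunk size are just examples):

```
# Create a chunker overlay that transparently splits files on upload.
# "dropbox:big-files" is a hypothetical path on an existing Dropbox remote.
rclone config create dropbox-chunked chunker \
    remote=dropbox:big-files chunk_size=300G   # keep each chunk under the 375 GB API limit

# Copy through the overlay; rclone splits on upload and reassembles on download:
rclone copy /data/BIGfile.xz dropbox-chunked:
```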

However, as rclone does not support resumption of interrupted transfers (at least not yet), I would look into other solutions. With massive files like in your question, the chance of some interruption is not something to ignore. It can even make uploading such huge files close to impossible over slow connections - when an upload takes days, the chance of some hiccup grows with time.

To store such files in cloud storage I would use a backup program like restic or kopia - they split files into chunks, and both allow resuming interrupted transfers: chunks that were already uploaded can be reused.
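
As an illustration, restic can even use rclone itself as its transport; the repository path and file names below are assumptions:

```
# Initialise a restic repository stored on the Dropbox remote via rclone:
restic -r rclone:dropbox:restic-repo init

# Back up the big file; restic splits it into small deduplicated chunks,
# and a re-run after an interruption reuses chunks already uploaded:
restic -r rclone:dropbox:restic-repo backup /data/BIGfile.xz
```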

Thanks @kapitainsky. I'm guessing I'll have to look at other cloud providers, since regardless of whether it's rclone or some other backup program, the bottleneck will be the API. Does anyone have suggestions?

Not with kopia or restic - they store files as chunks, so no single uploaded object ever hits the per-file API limit.

You can look at AWS S3 (maximum object size 5 TB), but you will have the same challenges when uploading or downloading such big files. If you try to do it in one go, then any problem on the way means you have to start again.

Another approach, using rclone only, that I think would also work well with any cloud storage:

Upload:

  1. rclone chunker to a local directory. With e.g. a 1 GB chunk size, that means 1000 files per TB, which is fine for any filesystem (see the sketch after this list).
  2. rclone sync of all chunks to the cloud
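
A sketch of the upload side, with hypothetical names and paths ("local-chunks", /mnt/scratch/chunks, dropbox:big-files):

```
# One-off setup: a chunker overlay over a local scratch directory,
# splitting anything written through it into 1 GB pieces:
rclone config create local-chunks chunker \
    remote=/mnt/scratch/chunks chunk_size=1G

# Step 1 - write the big file through the overlay (chunks land locally):
rclone copy /data/BIGfile.xz local-chunks:

# Step 2 - push the raw chunk files to the cloud:
rclone sync /mnt/scratch/chunks dropbox:big-files
```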

Download:

  1. rclone sync of all required chunks back to local storage
  2. read local storage via the rclone chunker to get the original concatenated file (see the sketch after this list)
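
And the download side, with the same assumed names:

```
# Step 1 - pull the chunk files back to the local scratch directory:
rclone sync dropbox:big-files /mnt/scratch/chunks

# Step 2 - read through the chunker overlay to get the original file back:
rclone copy local-chunks:BIGfile.xz /data/restored/
```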

Both operations can be wrapped in a script, making it, from the user's perspective, as simple as "copy BIGfile to cloud" / "restore BIGfile from cloud". All that is needed is a few TB of free disk space, which is not a problem nowadays - even on SSD.
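
A minimal wrapper sketch along those lines (all names and paths are placeholders, using the same assumed remotes as above):

```
#!/usr/bin/env bash
# Usage: bigcopy put /data/BIGfile.xz
#        bigcopy get /data/restored/BIGfile.xz
set -euo pipefail

case "${1:-}" in
  put)
    rclone copy "$2" local-chunks:                       # split into 1 GB chunks locally
    rclone sync /mnt/scratch/chunks dropbox:big-files    # upload the chunks
    ;;
  get)
    rclone sync dropbox:big-files /mnt/scratch/chunks    # fetch the chunks
    rclone copy "local-chunks:$(basename "$2")" "$(dirname "$2")"  # reassemble
    ;;
  *)
    echo "usage: $0 {put|get} FILE" >&2; exit 1
    ;;
esac
```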

The advantage of this method is that any transfer problem only requires retrying relatively small chunks instead of the whole 1.5 TB file.
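
For what it's worth, rclone's standard retry flags then apply per chunk file rather than per 1.5 TB upload (flag values below are just examples):

```
rclone sync /mnt/scratch/chunks dropbox:big-files \
    --retries 5 --low-level-retries 20 --transfers 4 --progress
```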
