If we have a Google Drive remote and on top of it a Crypt one (and also considering the limitation of 400k files for Google Drive shared drives),
is it possible to make Google Drive think we have a single file when we actually have a folder, so we could work around this limitation?
Or maybe, if we use tar to create one big file (without any compression), could we use rclone to extract only one file from it, without needing to download the whole tar? Let's say we have a 1TB tar containing all the folders and files.
That's indeed very interesting! I'll do some tests about that.
From my research, it looks like both uncompressed tar and zip archives allow extracting just a single file.
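As a small illustration of why this works: Python's stdlib `tarfile` pulls one member out of an uncompressed tar by reading the headers and seeking past the other members' data. Over an rclone mount, those seeks become ranged reads instead of a full download. The file names here are invented for the demo:

```python
import io
import tarfile

# Build a small uncompressed tar with two members.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name, payload in [("a.txt", b"hello"), ("b.txt", b"world")]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))
buf.seek(0)

# Extract only b.txt: tarfile seeks past a.txt's data block
# rather than reading it into memory.
with tarfile.open(fileobj=buf, mode="r:") as tar:
    member = tar.getmember("b.txt")
    data = tar.extractfile(member).read()
print(data)  # b'world'
```

The same idea is why compression breaks this: with gzip on top, every byte before the target must be decompressed, so seeking no longer helps.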
How about adding a new file: have you already tried to add a new file or folder inside the big 7z archive directly in the cloud, without needing to download the entire file first?
That's unfortunate to hear. It's hard to find now, but I remember ncw commenting about a new remote option that would provide random access, though I'm not sure about writes. And as you said, maybe it isn't possible. It would be awesome if I could at least concatenate large txt files directly on the remote, to keep some database data.
rclone mount provides random access for reads.
optional support for random access writes, as long as the cached file is still local in the vfs file cache.
once the file is uploaded, random writes are not possible.
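That behaviour can be reproduced with a mount command along these lines (the remote name and mountpoint are placeholders, but `--vfs-cache-mode` and `--vfs-cache-max-age` are real rclone flags):

```shell
# Mount with a local VFS cache so writes, including random writes,
# land in the cache first and are uploaded when the file is closed.
rclone mount remote: /mnt/remote \
    --vfs-cache-mode writes \
    --vfs-cache-max-age 24h &

# Random-access read works against any file on the remote:
# only the requested ranges are fetched.
dd if=/mnt/remote/big.tar of=/dev/null bs=1M skip=500 count=1
```

Once the cached copy is evicted and the object lives only on the remote, writes mean re-uploading the whole file.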
Apart from Google Drive, have you already heard about other remotes that maybe could provide this?
how about rclone rcat
https://forum.rclone.org/t/uploading-to-remote-and-calculating-local-md5sum-on-the-fly/29783/15
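For reference, `rcat` streams stdin straight into a new object on the remote, so an uncompressed tar can be built and uploaded in one pass with no local copy (the paths here are placeholders):

```shell
# Stream a tar of ./data directly to the remote; nothing is written locally.
tar cf - ./data | rclone rcat remote:archive/data.tar
```

Note that `rcat` creates (or overwrites) an object each time; it cannot append to a file that already exists on the remote.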
Unfortunately I didn't understand what you meant. I read the rcat documentation and the thread there, but I wasn't able to figure out a way to concatenate a big file onto another, even bigger file that is already on the remote. Do you think this is possible somehow?
and there is a new compress remote
I've checked this option here, and it seems interesting.
But the problem, for now, is not storage space but the maximum number of files. Even though I could use a remote like that to create big files, I think it would reupload the entire large file again, say 1TB + 10GB, each time I add a 10GB chunk of data.
If possible, I'd prefer to keep the data uncompressed for quick access; a tar without compression would work for me, for example.
So if you have a 5TB 7z archive and you want to retrieve a file at the end of it, can you quickly do that with a local tool? I would assume you would still need to fetch the whole 5TB before accessing the content.
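On that point, the formats differ in where their index lives. A plain tar has no index, so a reader must at least walk the headers from the start; zip keeps a central directory at the end of the archive, so a reader can jump straight to any member's offset. A stdlib sketch (file names invented) showing zip's end-of-archive directory in action:

```python
import io
import zipfile

# Build a stored (uncompressed) zip with a large-ish first member
# and a small member at the end.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_STORED) as zf:
    zf.writestr("first.txt", "x" * 1000)
    zf.writestr("last.txt", "needle")

# zipfile reads the central directory at the END of the file,
# then seeks directly to last.txt -- no scan over first.txt's data.
with zipfile.ZipFile(io.BytesIO(buf.getvalue())) as zf:
    data = zf.read("last.txt")
print(data.decode())  # needle
```

Over a mount this means a zip reader fetches the tail of the archive plus the member's byte range, not the full file. I can't say the same for 7z with solid compression, where members are compressed together and earlier data may need decompressing first.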
I have found this very interesting comment from @ncw on this link:
If you are using rclone for backup then you could use restic with it which can backup to google drive via rclone. That packs small files together so you'll remain below the files limit.
For example my laptop has this many files
Total objects: 1264815
Total size: 203.586 GBytes (218598737207 Bytes)
which restic backs up to this many files. This includes incremental backups which is why the total is bigger.
$ rclone size remote:backup
Total objects: 72353
Total size: 324.240 GBytes (348150534919 Bytes)
But if you want tight integration with rclone, it seems restic is a good way to go.
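The restic-over-rclone workflow ncw describes can be set up roughly like this (the repository path and backup directory are placeholders):

```shell
# Initialise a restic repository on the Drive remote via rclone.
restic -r rclone:remote:backup init

# Back up a directory; restic packs small files into larger pack files,
# which is what keeps the object count on the remote low.
restic -r rclone:remote:backup backup ~/laptop-data

# Restore a single file without pulling the whole repository.
restic -r rclone:remote:backup restore latest \
    --target /tmp/restore --include /laptop-data/somefile
```

Since the packs deduplicate across snapshots, incremental backups add relatively few new objects.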
And bonus points if ncw still uses restic to back up his laptop today.