Do you have any suggestion to overcome the Google Drive shared drive 400k files limitation?

Hey everyone,

If we have a Google Drive remote with a Crypt remote on top of it (and keeping in mind the 400k-file limit for Google Drive shared drives),

is it possible to make Google Drive think we have a single file when we actually have a folder, so we could avoid this limitation?

Or alternatively, if we used tar to create one big file (without any compression), could we use rclone to extract just one file from it without downloading the whole tar (say a 1TB tar containing all our folders and files)?

Do you have any suggestions regarding this issue?

Thanks a lot in advance.

i do that with .7z files

could use something like rclone mount gdrive: /path/to/mountpoint
and use any local tool to access the files at /path/to/mountpoint
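for example, something along these lines (the remote name, paths and archive name here are just placeholders):

rclone mount gdrive: /path/to/mountpoint
# then, from another terminal, pull a single file out of an archive on the mount
7z x /path/to/mountpoint/backups/media.7z "photos/2021/img_0001.jpg" -o/tmp/restore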

That's indeed very interesting! I'll run some tests on that.

From my research, it looks like tar without compression, and also zip, can extract just one file as well.
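For reference, single-member extraction with the plain tools looks roughly like this (file names are made up):

# list and extract one member from an uncompressed tar
tar -tf big.tar
tar -xf big.tar docs/report.pdf

# same idea with zip
unzip -l big.zip
unzip big.zip docs/report.pdf -d /tmp/restore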

How about adding new content: have you ever tried adding a new file or folder inside the big 7z file
(without downloading the entire archive, but writing it directly to the file in the cloud)?

yes, when i first started to use rclone, i tried that.
that is not going to work even with rclone mount

cloud providers do not support random access read/writes
and for sure, rclone does not

Just upload to a new shared drive each time you run out of room, and use the rclone union remote when mounting to make that seamless.
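Roughly like this, assuming shared drives td1:, td2: and td3: are already configured as remotes (the names are placeholders):

rclone config create alldrives union upstreams "td1: td2: td3:"
rclone mount alldrives: /path/to/mountpoint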

That's unfortunate to hear. It's hard to find again now, but I remember ncw commented something about creating a new remote option that would provide random access, though I'm not sure about writes. And as you said, maybe it isn't possible. At the very least, being able to concatenate large txt files directly on the remote would be great, to keep some database data.

That's indeed a good idea, but the problem is that we only have access to a single shared drive that a third party provided to us =/

rclone mount provides random access for reads.
there is optional support for random access writes, as long as the file is still in the local vfs file cache.
once the file has been uploaded, random writes are not possible.
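in practice that looks something like this (a sketch, pick the cache mode that fits):

# with --vfs-cache-mode writes, files opened for writing are kept in the local
# vfs cache until upload, which is what allows random-access writes
rclone mount gdrive: /path/to/mountpoint --vfs-cache-mode writes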

how about rclone rcat
https://forum.rclone.org/t/uploading-to-remote-and-calculating-local-md5sum-on-the-fly/29783/15
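rough idea, names are made up:

# stream a tar straight to the remote, no local copy of the tar needed
tar -cf - /some/folder | rclone rcat gdrive:archives/folder.tar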

and there is a new compress remote
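setup sketch, remote names made up; see the compress backend docs for the compression options:

# wraps an existing remote and stores the files compressed
rclone config create gcompress compress remote gdrive:packed
rclone copy /local/data gcompress:data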

rclone mount provides random access for reads.
there is optional support for random access writes, as long as the file is still in the local vfs file cache.
once the file has been uploaded, random writes are not possible.

Apart from Google Drive, do you know of any other remotes that might provide this?

how about rclone rcat
https://forum.rclone.org/t/uploading-to-remote-and-calculating-local-md5sum-on-the-fly/29783/15

Unfortunately I didn't understand what you mean. I read the rcat documentation and the thread there, but I couldn't figure out a way to concatenate a big file with an even bigger file that is already on the remote. Do you think this is somehow possible?

and there is a new compress remote

I've checked this option and it seems interesting.
But the problem, for now, is not storage space but the maximum number of files. Even though I could use a remote like that to create big files, I think it would re-upload the large file every time, say 1TB + 10GB each time I add a 10GB chunk of data.
If possible, I'd prefer to keep the data uncompressed for quick access; a tar without compression would work for me, for example.

a file that is already in the cloud cannot be appended to.
this has been discussed in the forum
https://forum.rclone.org/t/does-rclone-works-with-rsync-to-upload-just-the-delta-changes-instead-of-complete-file-change/13738
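the closest workaround is to rewrite the whole object, for example by streaming it through rclone cat / rclone rcat (a sketch, paths made up):

# no in-place append: this re-reads and re-uploads the entire file
{ rclone cat gdrive:data/big.txt; cat new_chunk.txt; } | rclone rcat gdrive:data/big.new.txt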

might want to use a block-based backup.
rclone works with the restic backup program
https://rclone.org/commands/rclone_serve_restic/
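basic usage with restic's rclone backend, the repo path is just an example:

# restic talks to google drive through rclone under the hood
restic -r rclone:gdrive:backup init
restic -r rclone:gdrive:backup backup /home/user/Documents
restic -r rclone:gdrive:backup restore latest --target /tmp/restore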

That's interesting.

So if you have a 5TB 7z archive file and you want to retrieve a file at the end of the archive, you can quickly do that with a local tool? I would have assumed you'd still need to fetch the whole 5TB before accessing the content.

even on a local machine, 7z does not iterate over the entire 5TiB file.
rclone mount supports random reads, so it mimics local behavior
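for example, just listing the archive over the mount only has to read the archive's index (paths are placeholders):

# reads the 7z metadata, not the packed data
7z l /path/to/mountpoint/backups/media.7z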

Do you need another shared drive? PM me

I have found this very interesting comment from @ncw on this link:

If you are using rclone for backup then you could use restic with it which can backup to google drive via rclone. That packs small files together so you'll remain below the files limit.

For example my laptop has this many files

Total objects: 1264815
Total size: 203.586 GBytes (218598737207 Bytes)
which restic backs up to this many files. This includes incremental backups, which is why the total size is bigger.

$ rclone size remote:backup
Total objects: 72353
Total size: 324.240 GBytes (348150534919 Bytes)

maybe this could help 🙂

sure, most any backup program will do the same.

but if you want tight integration with rclone, restic seems a good way to go.
and if ncw currently still uses restic to back up his laptop, bonus points.
