Zips and rars in gdrive

What is the problem you are having with rclone?

Too much bandwidth is wasted on a VPS if I unrar or unzip with the local machine's commands, unpacking on the local machine and uploading it back. I want to do it on the cloud itself.

What is your rclone version (output from rclone version)

1.56.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

unzip, unrar, 7z x

not sure exactly what you are asking?

Oh, let me clarify: once the rar parts are uploaded to the drive, I don't want to unrar them from the local side anymore. I want there to be a way for me to unpack them directly on Google Drive, so there won't be any local write/upload involved.

you are going to have to run the unrar on some machine, local, vps or somewhere.

you can use a free/cheap virtual machine from google and run unrar on that.

I do have a VPS that I plan to mount rclone on. I suppose that is as good as I can get.

The free VPS from Google you mentioned is obtained with the trial credit, right?

if you run rclone mount on the vps, then you will be using its bandwidth.

the free vm is free forever.
there is no cost to ingress/egress data from the vm to gdrive and vice versa.

https://cloud.google.com/free/docs/gcp-free-tier

https://www.opsdash.com/blog/google-cloud-f1-micro.html
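as a rough sketch, assuming a remote already configured as gdrive: and made-up paths, everything runs on the vm, so no traffic touches your local connection:

# on the vm: mount the remote (remote name and paths are only examples)
rclone mount gdrive: /mnt/gdrive --daemon --vfs-cache-mode writes

# extract straight from the mount back into the mount,
# so the data only moves between the vm and gdrive
mkdir -p /mnt/gdrive/unpacked
unrar x /mnt/gdrive/archives/stuff.part1.rar /mnt/gdrive/unpacked/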

But I suppose there will be a cost if I were to send and receive data from the public internet instead of just within Google Drive.

And it is rather new, I see; last time I played with Google's VMs I did not know about this.

I just registered the free f1-micro machine. Weird setup, with the free tier defined by hours used. I don't really know if that means something like running 3 instances as long as the total power-on time does not exceed 720 hours.

Might call to ask about that.

as I understand it, you get 720 total hours that can be split between multiple VMs.
so for 3 VMs, split equally, that would be 240 hours per VM.

https://cloud.google.com/free/docs/gcp-free-tier/#compute
"Each month, eligible use of all of your f1-micro instances is free until you have used a number of hours equal to the total hours in the current month"

google cloud platform - Is f1-micro VM machine type forever free? - Stack Overflow

Yes, I just called, and the hours are calculated by the uptime of the machine.

You could use rar2fs on top of rclone. This will let you stream files directly without needing to unrar them first. It does add an additional layer, and you need to prewarm the cache so it works decently with rclone, but once it's set up it works pretty well.
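As a minimal sketch (remote name and mount points are just examples), you mount the remote with rclone first and then point rar2fs at that mount:

# 1) plain rclone mount of the remote
rclone mount gdrive: /mnt/gdrive --daemon

# 2) rar2fs layered on top: archives under /mnt/gdrive show up
#    as normal, already-unpacked files under /mnt/rar2fs
rar2fs /mnt/gdrive /mnt/rar2fs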

I only set my cache to something around a 2GB read-ahead. So even if I warm it up, every time I start from there again there would be some delay, which I don't mind for streaming.

That was something I was going to ask about, thanks for letting me know about it here!

hi,

I find that with a simple mount, without cache, such as rclone mount remote: b:\mount\rclone\remote,
I can instantly access the folder/file structure of a .7z and download an individual file, not the entire .7z, which in this case is 4GB.
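for example, something like this, with a made-up archive name:

# list the contents of the .7z on the mount without downloading the whole archive
7z l b:\mount\rclone\remote\backup.7z

# pull out just one file from it into a local folder
7z e b:\mount\rclone\remote\backup.7z somefile.txt -oc:\temp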

what advantage does rar2fs have?
what need is there for a vfs cache and to prewarm it?

The problem with that is I can't seem to write into the directory correctly with a simple mount; the file just gets copied to the local mounted storage directory. Which is weird, but the vfs cache solves it.

rar2fs is for things like split media files; you would know what it is if you happen to come across those types of media.

warming the cache is like warming the DRAM cache on your HDD: faster reads and writes.

yes, to write, the odds are you do need the vfs cache.

but my post was about reading from a .7z in the cloud.

about prewarming/priming/pre-caching the mount using vfs/refresh:
that is not for faster reads/writes, as no file data is downloaded.
the prewarm is only for the folder/file structure, for quicker navigation
or to speed up a re-scan from a media server, to find new or updated files on an rclone mount.
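for a concrete example, mount with the remote control enabled and then call vfs/refresh against it (remote and paths are just placeholders):

# mount with the rc server enabled
rclone mount remote: /mnt/remote --rc --daemon

# prime the folder/file listing cache for the whole mount
rclone rc vfs/refresh recursive=true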

I was talking about warming the rar2fs cache, as it will be slow the first time you try to open a folder with rar files, since it has to analyze the rar file to know the contents.

Pretty much, it will show the files inside the rar files without the need to open them. For example, if you have one big rar file (or a multi-part rar file), in general you would need to unrar it to access the files inside. rar2fs does it on the fly, so you do not need to unrar before reading the contents. It is smart enough to allow you to seek, etc., on the files inside the rar.
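Purely as an illustration, with made-up names, the same folder looks like this through each layer:

# through the plain rclone mount you see the archive parts:
ls /mnt/gdrive/SomeMovie
#   SomeMovie.part1.rar  SomeMovie.part2.rar  SomeMovie.part3.rar

# through the rar2fs layer you see the file inside, and you can seek into it:
ls /mnt/rar2fs/SomeMovie
#   SomeMovie.mkv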

I was talking about warming the rar2fs cache, not the rclone vfs. Basically, what it will do is scan all rar files in the mounted directory and read the contents in the background, therefore not adding I/O wait during directory seeking. This is huge when using it with rclone, as we already get enough delay when asking for a file; otherwise rar2fs would lock FUSE for a bit while it scans the rar file to see what content is inside.

interesting, thanks.

When you said to use vfs/refresh, does that mean you add a -- option when mounting the drives?

Currently I am using cron to achieve the mounting, so I can easily add to the commands.
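Something like this @reboot entry, with a placeholder remote name and paths:

# crontab entry: mount the remote once at boot
@reboot rclone mount gdrive: /mnt/gdrive --daemon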

Yeah, now it makes more sense, thanks for clarifying. Though so far my experience using rar2fs on Arch has been great, with hardly any delay even when streaming over Samba.

yeah, rar2fs is great and does not add too much overhead. Be sure to try the background cache warmup option -owarmup with rar2fs; it helps a lot when using rclone, as it will scan all files in the background when you first mount everything.
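For example, the mount command from before with the warmup option added (paths still just examples):

# rar2fs over the rclone mount, pre-scanning the rar files in the background
rar2fs -owarmup /mnt/gdrive /mnt/rar2fs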