To mount like an Azure file share

Issue.
I have created a VM in Azure, an Azure file share, and an rclone setup with WinFsp on Windows. The reason for this setup is to farm Chia. I mapped one drive using the Azure file share and mapped 4 drives using rclone with the different parameters listed below (rough example commands follow the list).

  1. Mounted a drive with --network-mode
  2. Mounted a drive with --vfs-cache-mode off
  3. Mounted a drive with --vfs-cache-mode full
  4. Mounted a drive with --vfs-cache-mode writes --file-perms 0555
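
For reference, the mount commands were roughly the following; the drive letters and the remote name odrive are just examples, not the exact commands I ran:

rclone mount odrive: X: --network-mode
rclone mount odrive: Y: --vfs-cache-mode off
rclone mount odrive: Z: --vfs-cache-mode full
rclone mount odrive: W: --vfs-cache-mode writes --file-perms 0555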

I tested with the hpool GUI and all the drives failed with scan times that were too high, except the drive mapped to the Azure file share using the PowerShell script provided by the Azure portal.
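
For comparison, the portal script that worked is essentially the following, where <storage-account>, <share> and the key are placeholders:

# store the storage account credentials, then map the share as a drive
cmdkey /add:<storage-account>.file.core.windows.net /user:localhost\<storage-account> /pass:<storage-account-key>
New-PSDrive -Name S -PSProvider FileSystem -Root "\\<storage-account>.file.core.windows.net\<share>" -Persist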

Below are my config info and system information.

System: Windows 10
rclone version: rclone v1.55.1

config info:
[odrive]
type = onedrive
region = global
token =
drive_id = b!Y2yAWplw-US2-UFdyl-CGR5HKzVToZlAqDT9qKMA3kasaXJVGYzkQ4XeGbOFMsZp
drive_type = business
client_id =
client_secret =

I have spent about 3 days trying to get this done using OneDrive for my hpool farming, with a lot of methods, but none of them succeeded. Any help or suggestions on this matter would be very much appreciated. Thanks.

hello and welcome to the forum,

what does that mean?

we need to see a rclone debug log of the errors.
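
for example, run the mount from a terminal with debug logging and post the log; the path is just an example:

rclone mount odrive: X: --log-level DEBUG --log-file C:\rclone.log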

if azure fileshare is working, then why use onedrive, which can be very slow and have latency issues?
pretty sure there is no way a rclone mount to onedrive will outperform an azure file share.

Dear @asdffdsa,

Thank you for your fast reply on this matter. The farming app, named hpool, scans each file and marks it as failed when the scan takes more than 10 seconds.

There is no error in rclone.

To me, OneDrive is much more cost-effective and easier to use.

I noticed that when scanning the Azure file share it doesn't use much bandwidth, but when scanning an rclone-mounted drive it fetches a noticeable amount of data.

My objective here is to decrease the scan time of the plot files, which are around 101.4 GB each.

Any suggestion on this matter is very much appreciated.

P/S: Can I use a cache to increase the performance of this scan?

well, onedrive and most any backend, is going to be slow.

hpool, does it have an option to increase the scan time beyond 10 seconds?

exactly what is scanned? the entire file, certain parts, or what?
after the initial scan, what does hpool do with the data files?
again, not knowing how the scan is done, i would test by increasing --onedrive-chunk-size
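
for example, something like this; note that --onedrive-chunk-size has to be a multiple of 320k, so a value such as 100M works:

rclone mount odrive: X: --onedrive-chunk-size 100M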

in my testing, and when comparing with other rcloners, wasabi, an s3 clone known for hot storage, is the fastest for random access.

To answer your questions:

  1. The scan time limit is set by Chia and hpool just follows it, so the scan time cannot go beyond 10 seconds.
  2. I believe it reads only part of the file, since the scan finishes in milliseconds on a local HDD.
  3. hpool only reads the files; there is no writing to them.
  4. I have tried increasing --onedrive-chunk-size a lot and it doesn't solve the problem.

I suspect the root cause of the long scan time is the random read access on OneDrive. I am not sure whether the cache remote will solve this problem or not. I do not mind an interim solution for this matter.

Thanks in advance.

that has been deprecated and has bugs that will never get fixed.
but it should be easy to test it and see what happens.
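
if you want to try it anyway, a minimal cache remote wrapped around your onedrive remote would look something like this; the section name, the plots path and the sizes are just examples:

[ocache]
type = cache
remote = odrive:plots
chunk_size = 10M
chunk_total_size = 10G
info_age = 1d

then you would mount ocache: instead of odrive:.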

what is the total size of all the files that need to be accessed by hpool?

where do these files get created and does anything write to them?

Thanks for your quick reply. Before I set this up, I would like to make clear the objective I am trying to achieve.

My objective is to run Chia farming through hpool on a low-spec VPS in Azure with little local data storage, connected to 50 TB of OneDrive as remote storage. I am trying to avoid storing these files locally due to cost concerns.

These files are called plots and they are created using the plotter from Chia. hpool scans each plot at an interval, and each plot is about 101.4 GB. I am planning to fill up my OneDrive with them and access them through the VPS.

If I understand correctly, the cache backend stores all the files locally first before uploading them to the server? Will I achieve better seek times for disk reads? I owe you an apology for my noob question.

i have never used the cache remote and there is this warning
https://rclone.org/cache/#windows-support-experimental

Hmm, can you advise me on what the best mount would be in this case, then? Your suggestion is very much appreciated.

there might not be a usable solution, as you have already tried four different rclone mounts and none of them have worked.

as a test, might try wasabi and/or gdrive.
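
for reference, a wasabi remote is just an s3 remote; the keys and region below are placeholders:

[wasabi]
type = s3
provider = Wasabi
access_key_id = <your-access-key>
secret_access_key = <your-secret-key>
region = us-east-1
endpoint = s3.wasabisys.com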

i would search the internet to see what others are doing
for example, this mentions rclone.
https://www.reddit.com/r/chia/comments/mko9hr/can_i_put_plots_on_google_drive_and_farm_them/
