Hi, please could you help me with the following situation:
I am trying to optimize the total access time for random reads of 64 KiB chunks from a huge file (100 GB) on Google Drive.
There are up to 64 random-access reads in sequence, which take up to 70 s in total.
It seems that each single read takes about 1 s.
What could be optimized in my command?
Windows 10 / 64-bit, newest rclone version, with my own client_id for Google Drive.
O:\Cache is on an SSD.
There isn't any magic that can subvert the latency of a cloud file system.
If the reads are truly random, there is nothing to tune: each read pays a fixed round-trip latency to grab its piece of data.
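A quick back-of-envelope check (a sketch, using the numbers from the question: 64 reads at roughly 1 s observed latency each) shows why per-read latency, not bandwidth, dominates here:

```shell
# Hypothetical arithmetic: per-read latency dominates small random reads.
reads=64          # number of random-access reads (from the question)
latency_ms=1000   # ~1 s observed per read
chunk_kb=64       # 64 KiB per read

total_s=$(( reads * latency_ms / 1000 ))
data_mb=$(( reads * chunk_kb / 1024 ))
echo "${total_s}s total for only ${data_mb} MiB of data"
```

That lines up with the ~70 s total the question reports: almost all of the time is spent waiting on round trips, not transferring data.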
When you first posted, there was a superbly well written template to collect a bunch of information and, most importantly, a debug log file. Sadly, you deleted it, so an angel won't get his or her wings now.
It might shed some light if you shared the info from the template and a debug log so we can see what's going on.
I'm pretty sure I borrowed this from someone, but I use this:
#!/bin/bash
# Install Go
# Download https://github.com/ncw/rclone/blob/master/cmd/mount/test/seek_speed.go
# Add files to the Files array below
# Change the path to the seek_speed.go script in the "go run" line
LogFile=/home/felix/logs/rctest.log
Files=(
'/data/100M.file'
'/home/felix/test/100M.file'
)
echo "$(date "+%d.%m.%Y %T") SPEEDSEEK TESTS STARTED" | tee -a "$LogFile"
echo " " | tee -a "$LogFile"
#echo "UNIONFS MOUNT" | tee -a "$LogFile"
#ps -e -o cmd | grep "[u]nionfs-fuse" | tee -a "$LogFile"
#echo " " | tee -a "$LogFile"
#echo "RCLONE MOUNT" | tee -a "$LogFile"
ps -e -o cmd | grep "[r]clone mount" | tee -a "$LogFile"
echo " " | tee -a "$LogFile"
for File in "${Files[@]}"
do
    echo "SEEKSPEED $File" | tee -a "$LogFile"
    start=$(date +'%s')
    go run /home/felix/scripts/seekspeed.go "$File" | tee -a "$LogFile"
    elapsed=$(( $(date +'%s') - start ))
    echo "Finished in $(printf '%dm:%ds' $(( elapsed % 3600 / 60 )) $(( elapsed % 60 )))" | tee -a "$LogFile"
    echo "FileSize $(du -h "$File" | awk '{print $1}')" | tee -a "$LogFile"
done
I just use a 100M test file.
2021/02/07 14:31:19 Reading 1025113 from 84507910 took 390.875162ms
2021/02/07 14:31:19 That took 14.301002733s for 25 iterations, 572.040109ms per iteration
Finished in 0m:16s
FileSize 100M
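If you don't want to install Go, a rough equivalent can be sketched with dd (a hypothetical variant, assuming GNU coreutils on Linux; the file path and read count are just examples — point FILE at a file on your mount to test it):

```shell
#!/bin/bash
# Hypothetical dd-based seek test: time N random 64 KiB reads from a file.
FILE="${1:-/tmp/seektest.bin}"   # example path; pass a file on your mount instead
N=25                             # number of random reads, like the Go test's iterations

# Create a local 16 MiB test file if none was given (for demonstration only).
[ -f "$FILE" ] || dd if=/dev/zero of="$FILE" bs=1M count=16 status=none

size=$(stat -c %s "$FILE")       # GNU stat: file size in bytes
blocks=$(( size / 65536 ))       # number of 64 KiB blocks in the file

start=$(date +%s%N)
for i in $(seq 1 "$N"); do
    off=$(( RANDOM % blocks ))   # random 64 KiB-aligned offset
    dd if="$FILE" of=/dev/null bs=64k skip="$off" count=1 status=none
done
elapsed_ms=$(( ($(date +%s%N) - start) / 1000000 ))

echo "$N reads took ${elapsed_ms}ms ($(( elapsed_ms / N ))ms per read)"
```

Against a local SSD the per-read time should be well under a millisecond; against a cloud mount you would expect numbers in the hundreds of milliseconds, which illustrates the latency point above.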