Sorry, I'm not that technical a guy; it's a shared host. I don't know what else to tell you.
OK, that may make it difficult to optimize.
The bottleneck is most likely high load on the shared host, since I can sync 70K files from GDrive to an ultra-small SFTP server in less than a minute when there is nothing to transfer.
You may be able to work around the slow target using top-up sync, but that requires a fairly good technical understanding.
Can I use these parameters for sync?
--max-age=24h --no-traverse --update --use-server-modtime
Another question: what is the default number of files rclone syncs at a time? I'm just worried about the shared host's resources.
Yes, but it will not speed up the check by itself. You need to understand and implement the full top-up approach linked above.
rclone uses 4 transfers and 8 checkers by default - each uses an SFTP connection.
https://rclone.org/docs/#transfers-n
https://rclone.org/docs/#checkers-n
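If the concern is load on the shared host, those two flags can be lowered. A minimal sketch (the values and remote names `GDrive:folder` / `SFTP:folder` are illustrative, taken from the commands later in this thread):

```shell
# Halve the default concurrency (--transfers 4, --checkers 8)
# to open fewer simultaneous SFTP connections to the shared host.
rclone sync --transfers 2 --checkers 4 "GDrive:folder" "SFTP:folder" -v
```

Fewer connections means a slower sync, so this is a trade-off between speed and server load.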
What I understand is that it will not speed up the sync, but it will save server resources and API calls.
This is the command line I'm using; if there is anything I should add, please suggest it. Thank you so much for your time, I really appreciate it.
rclone sync --fast-list --max-age=1h --no-traverse --update --use-server-modtime "remote:folder" "remote:folder" -vv
No, it will not. The filtering (--max-age=24h) is performed by rclone itself, not by the servers.
You would be able to see this if the stats included something like: "Filtered: 9,700 / 10,000, 97%"
This combination is invalid and will not work as you expect, because it mixes flags from different approaches.
This command is much better:
rclone sync "GDrive:folder" "SFTP:folder" -v
If you want to try daily top-ups then execute this copy command daily:
rclone copy --max-age=48h --no-traverse "GDrive:folder" "SFTP:folder" -v
and supplement it with the above sync command weekly or monthly.
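Put together, the top-up schedule could look like this as a crontab sketch (the times and the weekly cadence are assumptions; adjust them to your needs):

```shell
# Daily top-up at 02:00 - only copies files modified in the last 48h,
# so the cheap pass runs often and touches few files.
0 2 * * * rclone copy --max-age=48h --no-traverse "GDrive:folder" "SFTP:folder" -v

# Weekly full sync on Sunday at 03:00 - catches deletions and anything
# the daily top-ups missed.
0 3 * * 0 rclone sync "GDrive:folder" "SFTP:folder" -v
```

Note that copy never deletes files on the target; only the full sync pass keeps the two sides truly identical.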
Thanks!
This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.