I have around 3 TB of backup data that will be synced to Google Drive. This backup contains millions of files spread across a huge number of folders.
Is there any way to fork the rclone sync process so that multiple rclone sync processes run at the same time and the upload to Google Drive is faster?
Please note that I also use the --backup-dir option for incremental backups.
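For what it's worth, one common way people parallelize is to run a separate sync per top-level folder as background jobs. This is only a sketch — the paths and remote names below (/data/backup, gdrive:backup, gdrive:old) are placeholders, --backup-dir must point outside the sync destination, and all processes still share the same account-wide rate limits, so the gains can be modest:

```shell
# Hypothetical layout: one background rclone sync per top-level folder.
SRC=/data/backup                 # placeholder: your local source root
for dir in "$SRC"/*/; do
    [ -d "$dir" ] || continue    # skip cleanly if the glob matches nothing
    name=$(basename "$dir")
    rclone sync "$dir" "gdrive:backup/$name" \
        --backup-dir "gdrive:old/$name" \
        --transfers 4 &          # run this sync in the background
done
wait                             # block until every background sync finishes
```

Note that --backup-dir is given a distinct per-folder path here so the incremental copies from the parallel jobs don't collide.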
Animosity is right on the money here. There are significant limits on how fast you can create or modify files on Gdrive (about 2 per second). With many small files this will be the main limiting factor on throughput.
If you really have "millions" of files, that is a LOT of small files, even for 3 TB.
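Just to put numbers on it — a back-of-envelope estimate, assuming a hypothetical 2 million files at the ~2 creations/sec limit mentioned above:

```shell
# Rough estimate of per-file overhead at Gdrive's file-creation limit.
files=2000000                    # hypothetical file count
rate=2                           # approx. file creations per second
days=$(( files / rate / 86400 )) # 86400 seconds in a day
echo "~${days} days of pure per-file overhead"   # → ~11 days
```

That overhead is independent of your bandwidth, which is why bundling files helps so much.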
I would highly recommend archiving together some of the worst folders if it's data you probably don't need to access individually and mostly just want to store. Transferring a handful of larger archives will be orders of magnitude faster than copying hundreds of thousands of files individually.
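The archiving step itself is just a tarball per folder before upload. A minimal sketch — the helper name and folder are made up for illustration, and the rclone step is left commented since it depends on your remote:

```shell
# bundle DIR: pack a folder of many small files into one compressed archive,
# so the remote sees a single large file instead of thousands of tiny ones.
bundle() {
    dir=${1%/}                          # strip any trailing slash
    tar -czf "${dir}.tar.gz" "$dir"     # one .tar.gz per folder
    # rclone copy "${dir}.tar.gz" gdrive:backup/archives/   # then one upload
}
```

The trade-off is that you lose per-file incremental sync for the archived folders, which is why this only makes sense for data you rarely touch.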
Hopefully we will get features in the near future that can transparently bundle tiny files together, which would greatly improve performance and make the file-count limit largely irrelevant. (A compression remote is already in the works, although it does not currently bundle small files.)