It will list the directory structure, which takes roughly one operation per directory. If you have plenty of memory, use --fast-list, which does the minimum number of directory reads at the cost of using more memory. For 100,000 files I'd use --fast-list.
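As a minimal sketch, assuming a remote named "remote" and a bucket called "my-bucket" (both placeholders, not from the original post), a sync with --fast-list would look like:

```shell
# Placeholder remote/bucket names; --fast-list trades memory for
# fewer directory-listing transactions. --dry-run previews the sync
# without uploading anything.
rclone sync /path/to/local remote:my-bucket --fast-list --dry-run -v
```

Drop --dry-run once you're happy with what it plans to do.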
The only other transaction costs are incurred when a file needs uploading.
Unlike S3, no extra transactions are required to read the modification time.