How to Sync and omit duplicates between directories

Context:
I have a URL that contains 8 directories. Directory #1 has 80 files and Directory #2 has 90 files; 80 of the files in Directory #2 are exactly the same as the 80 files in Directory #1, and those same files also appear in Directories #4, #6, and #8. There are duplicates from the other directories in the URL as well. I only want one copy of each uniquely named file: after a file has been downloaded the first time from any particular directory, any file with the same name should not be downloaded again.

With that said, will this command work?
rclone sync source: dest:

Thank you!

Edit:
The sync is moving from data in the URL to an empty HDD on my computer.

A sync command makes dest look like source, so that would not work. There isn't a native way in rclone to identify a file as unique across Dir #1 and Dir #2.

I can't think of an easy way to do that. You'd have to keep a list of all the file names, do some scripting to keep them unique before copying with rclone, and then use a --files-from list.
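For the record, one way that scripting could go. This is only a rough sketch, not a tested solution: `$1` stands for your source remote/URL and `$2` for the destination on the HDD, and the file names (`all_files`, `unique_files`) are made up for illustration. The idea is to list everything recursively, keep only the first path seen for each unique file name, and feed that list to rclone with --files-from:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: $1 = source remote/URL, $2 = local destination.

# Recursive listing of every file path under the source
rclone lsf -R --files-only "$1" > all_files

# Keep only the first path seen for each unique basename
# (awk splits on "/"; $NF is the file name, seen[] remembers it)
awk -F/ '!seen[$NF]++' all_files > unique_files

# Copy just those files, preserving their directory paths
rclone copy "$1" "$2" --files-from unique_files

rm all_files unique_files
```

The awk one-liner is the deduplication step: the first directory that contains a given name wins, and later copies of that name are skipped, which matches "downloaded the first time in any particular directory".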

I can make a list of all the names. Regarding scripting, do you mean PowerShell? Or something else? Also, what type of script did you have in mind? I don't know how to script, mind you, but I can ask around.

This script serves a different purpose, but you can use some of the same logic and commands to achieve what you want to do. Also, jump up one level and look at the orig difflist with the rclone check command.

I have to be honest with you, zappo. I am really a beginner here and even though I tried to look up what you wrote, I don't understand much of it at all.

My guess would be I sub "$1" for the URL I have and then "$2" for the destination on my HDD..?

I'm not sure beyond that.

Edit:

Wait, looking at the script again, let me make another attempt:

#!/usr/bin/env bash

# $1 = source remote/URL, $2 = destination on the HDD
rclone tree -i --level 2 "$1" | sort > tmp1
rclone tree -i --level 2 "$2" | sort > tmp2

combine tmp1 not tmp2 > not_in_2
combine tmp2 not tmp1 > not_in_1

rm tmp1
rm tmp2

How about that?
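One caveat on the attempt above: `combine` comes from the moreutils package, so it may not be installed everywhere. Plain coreutils `comm` does the same job on sorted input (`-23` keeps lines only in the first file, `-13` keeps lines only in the second). A quick illustration with made-up listings, not real rclone output:

```shell
# Two sorted sample listings standing in for tmp1/tmp2
printf 'a.txt\nb.txt\nc.txt\n' > tmp1
printf 'b.txt\nc.txt\nd.txt\n' > tmp2

comm -23 tmp1 tmp2 > not_in_2   # lines only in tmp1: a.txt
comm -13 tmp1 tmp2 > not_in_1   # lines only in tmp2: d.txt

cat not_in_2 not_in_1
rm tmp1 tmp2 not_in_2 not_in_1
```

Both tools require the inputs to be sorted, which is why the script pipes each listing through `sort` first.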