What is the problem you are having with rclone?
I'm trying to sync files every 24h from a local computer to cloud storage. There are 148,000 files on the local computer, 223 GB in size.
After performing the initial rclone sync, all files were copied to cloud storage. I then changed just a couple of local files, but rclone sync keeps copying all files again! It copied all 148,000 files, 223 GB.
Is it possible to copy only new and updated files with sync?
Run the command 'rclone version' and share the full output of the command.
rclone v1.62.2
os/version: Microsoft Windows Server 2022 Datacenter 21H2 (64 bit)
os/kernel: 10.0.20348.1668 Build 20348.1668.1668 (x86_64)
os/type: windows
os/arch: amd64
go/version: go1.20.2
go/linking: static
go/tags: cmount
Which cloud storage system are you using?
S3 compatible Object Storage (Contabo Object Storage)
The command you were trying to run
rclone sync -P . bkpspace:bkpspace
The rclone config contents with secrets removed.
[bkpspace]
type = s3
provider = Ceph
access_key_id = xxxxx
secret_access_key = xxxxx
endpoint = https://xxxxcloudstorage/bkpspace
A log from the command with the -vv flag
An excerpt from the log:
2023/05/17 20:37:59 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "sync" "-P" "--log-level" "DEBUG" "--log-file=bkplog.txt" "." "bkpspace:bkpspace"]
2023/05/17 20:37:59 DEBUG : Creating backend with remote "."
2023/05/17 20:37:59 DEBUG : Using config file from "C:\Users\Administrator\AppData\Roaming\rclone\rclone.conf"
2023/05/17 20:37:59 DEBUG : fs cache: renaming cache item "." to be canonical "//?/C:/MyApp/Live"
2023/05/17 20:37:59 DEBUG : Creating backend with remote "bkpspace:bkpspace"
2023/05/17 20:37:59 DEBUG : bkpspace: detected overridden config - adding "{NBTUO}" suffix to name
2023/05/17 20:37:59 DEBUG : Resolving service "s3" region "us-east-1"
2023/05/17 20:37:59 DEBUG : fs cache: renaming cache item "bkpspace:bkpspace" to be canonical "bkpspace{NBTUO}:bkpspace"
2023/05/17 20:37:59 DEBUG : index.html: md5 = a31a5f48cbb181803880a9f1d3e01c1d OK
2023/05/17 20:37:59 INFO : index.html: Copied (new)
2023/05/17 20:38:01 DEBUG : System/Projects/Dev2566/o9page_k5cb4.html: md5 = 68476208a7dbb0e508e1366e575d2627 OK
2023/05/17 20:38:01 INFO : System/Projects/Dev2566/history/o9vp_page_k5cb4wg6177.html: Copied (new)
2023/05/17 20:38:02 DEBUG : System/Projects/Dev2566/page/ip64127.html: md5 = ab216fbd56ba256e7f16621efcd0d2ed OK
2023/05/17 20:38:02 INFO : System/Projects/Dev2566/page/ip64127.html: Copied (new)
2023/05/17 20:38:03 DEBUG : System/Projects/Dev2566/page/j781.html: md5 = 6d140c0527408dd1686b8f1605a90692 OK
2023/05/17 20:38:03 INFO : System/Projects/Dev2566/page/j7781.html: Copied (new)
asdffdsa
(jojothehumanmonkey)
May 18, 2023, 10:41am
hello and welcome to the forum,
rclone will not re-copy a file if it has not changed.
can you post a debug log that shows the issue?
for example, pick a single file, then:
1. run rclone check file.ext bkpspace:file.ext -vv to compare the local file and the cloud file
2. run rclone copy file.ext bkpspace:file.ext -vv
3. re-run the command from step 2.
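taken together, and using index.html as a stand-in for the test file (any small file that rclone keeps re-copying will do), the sequence looks like:

```
# 1. compare the local file against the cloud copy
rclone check index.html bkpspace:index.html -vv

# 2. copy the file once
rclone copy index.html bkpspace:index.html -vv

# 3. run the same copy again - an unchanged file should be skipped, not re-copied
rclone copy index.html bkpspace:index.html -vv
```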
Nonick:
Which cloud storage system are you using?
S3 compatible Object Storage (Contabo Object Storage)
The command you were trying to run
rclone sync -P . bkpspace:bkpspace
The rclone config contents with secrets removed.
[bkpspace]
type = s3
provider = Ceph
Any reason why you specify Ceph as the S3 provider when you use something else? If your provider is not listed I would go for Other, to make sure that generic S3 handling is used. Every provider might have some specific customization.
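as an illustration (keys elided as in the original post; the endpoint shown is Contabo's US region, which appears later in this thread), a generic S3 config would look like:

```
[bkpspace]
type = s3
provider = Other
access_key_id = xxxxx
secret_access_key = xxxxx
endpoint = https://usc1.contabostorage.com
```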
asdffdsa
(jojothehumanmonkey)
May 18, 2023, 10:58am
hi, https://rclone.org/s3/#ceph
the OP needs to post something to prove rclone is re-copying an unchanged file.
Nonick
May 18, 2023, 10:58am
rclone is configured exactly as required by the Contabo S3 Object Storage instructions: they say to choose option 2, Ceph Object Storage.
Ahh then perfectly OK. Sorry did not know this.
And why this /bkpspace in the endpoint?
The valid endpoints are:
https://eu2.contabostorage.com
https://sin1.contabostorage.com
https://usc1.contabostorage.com
Nonick
May 18, 2023, 11:17am
The endpoint you quoted from the Contabo product page is just an example.
Scroll down and see their example: rclone sync -P . eu2:bucketname/folder
So, when configuring the endpoint, it needs to include your bucket name.
Nonick
May 18, 2023, 11:20am
This is the output of the commands:
1. run rclone check file.ext bkpspace:file.ext -vv to compare the local file and the cloud file
2. run rclone copy file.ext bkpspace:file.ext -vv
3. re-run the command from step 2.
2023/05/18 13:08:48 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "check" "index.html" "bkpspace:index.html" "-vv" "--log-file=bkplog.txt"]
2023/05/18 13:08:48 DEBUG : Creating backend with remote "index.html"
2023/05/18 13:08:48 DEBUG : Using config file from "C:\Users\Administrator\AppData\Roaming\rclone\rclone.conf"
2023/05/18 13:08:48 DEBUG : fs cache: adding new entry for parent of "index.html", "//?/C:/MyApp/Live"
2023/05/18 13:08:48 DEBUG : Creating backend with remote "bkpspace:index.html"
2023/05/18 13:08:48 DEBUG : Resolving service "s3" region "us-east-1"
2023/05/18 13:08:48 INFO : Using md5 for hash comparisons
2023/05/18 13:08:48 DEBUG : S3 bucket index.html: Waiting for checks to finish
2023/05/18 13:08:48 ERROR : index.html: file not in S3 bucket index.html
2023/05/18 13:08:48 NOTICE: S3 bucket index.html: 1 files missing
2023/05/18 13:08:48 NOTICE: S3 bucket index.html: 1 differences found
2023/05/18 13:08:48 NOTICE: S3 bucket index.html: 1 errors while checking
2023/05/18 13:08:48 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 1 (retrying may help)
Elapsed time: 0.2s
2023/05/18 13:08:48 DEBUG : 5 go routines active
2023/05/18 13:08:48 Failed to check: 1 differences found
2023/05/18 13:09:43 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "copy" "index.html" "bkpspace:index.html" "-vv" "--log-file=bkplog.txt"]
2023/05/18 13:09:43 DEBUG : Creating backend with remote "index.html"
2023/05/18 13:09:43 DEBUG : Using config file from "C:\Users\Administrator\AppData\Roaming\rclone\rclone.conf"
2023/05/18 13:09:43 DEBUG : fs cache: adding new entry for parent of "index.html", "//?/C:/MyApp/Live"
2023/05/18 13:09:43 DEBUG : Creating backend with remote "bkpspace:index.html"
2023/05/18 13:09:43 DEBUG : Resolving service "s3" region "us-east-1"
2023/05/18 13:09:43 DEBUG : index.html: Need to transfer - File not found at Destination
2023/05/18 13:09:43 INFO : S3 bucket index.html: Bucket "index.html" created with ACL ""
2023/05/18 13:09:43 DEBUG : index.html: md5 = a31a5f48cbb181803880a9f1d3e01c1d OK
2023/05/18 13:09:43 INFO : index.html: Copied (new)
2023/05/18 13:09:43 INFO :
Transferred: 1.100 KiB / 1.100 KiB, 100%, 0 B/s, ETA -
Transferred: 1 / 1, 100%
Elapsed time: 0.4s
2023/05/18 13:09:43 DEBUG : 6 go routines active
2023/05/18 13:10:25 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "copy" "index.html" "bkpspace:index.html" "-vv" "--log-file=bkplog.txt"]
2023/05/18 13:10:25 DEBUG : Creating backend with remote "index.html"
2023/05/18 13:10:25 DEBUG : Using config file from "C:\Users\Administrator\AppData\Roaming\rclone\rclone.conf"
2023/05/18 13:10:25 DEBUG : fs cache: adding new entry for parent of "index.html", "//?/C:/MyApp/Live"
2023/05/18 13:10:25 DEBUG : Creating backend with remote "bkpspace:index.html"
2023/05/18 13:10:25 DEBUG : Resolving service "s3" region "us-east-1"
2023/05/18 13:10:25 DEBUG : index.html: Size and modification time the same (differ by 0s, within tolerance 100ns)
2023/05/18 13:10:25 DEBUG : index.html: Unchanged skipping
2023/05/18 13:10:25 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Elapsed time: 0.2s
2023/05/18 13:10:25 DEBUG : 4 go routines active
rclone sync -P . eu2:bucketname/folder
is telling rclone to use a remote called eu2, then the path bucketname/folder.
It is not related to the endpoint in rclone.conf.
When you run:
rclone lsd bkpspace:
does it list a bucket named bkpspace?
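in other words, the remote name comes from the section name in rclone.conf, and the path after the colon is the bucket (and optional folder) on the provider - for example:

```
rclone lsd bkpspace:            # list all buckets on the remote
rclone ls bkpspace:bkpspace     # list files inside the bucket named bkpspace
```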
Nonick
May 18, 2023, 11:58am
after
rclone lsd bkpspace:
it doesn't list anything. Zero output.
asdffdsa
(jojothehumanmonkey)
May 18, 2023, 12:03pm
rclone did not re-copy the file.
Nonick
May 18, 2023, 12:03pm
rclone ls bkpspace:
also lists nothing. Zero output.
asdffdsa
(jojothehumanmonkey)
May 18, 2023, 12:05pm
run the command with -vv
and post the full output
Nonick
May 18, 2023, 12:09pm
full output:
rclone ls bkpspace: -vv
2023/05/18 14:06:59 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "ls" "bkpspace:" "-vv"]
2023/05/18 14:06:59 DEBUG : Creating backend with remote "bkpspace:"
2023/05/18 14:06:59 DEBUG : Using config file from "C:\Users\Administrator\AppData\Roaming\rclone\rclone.conf"
2023/05/18 14:06:59 DEBUG : Resolving service "s3" region "us-east-1"
2023/05/18 14:06:59 DEBUG : 4 go routines active
rclone lsd bkpspace: -vv
2023/05/18 14:07:31 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "lsd" "bkpspace:" "-vv"]
2023/05/18 14:07:31 DEBUG : Creating backend with remote "bkpspace:"
2023/05/18 14:07:31 DEBUG : Using config file from "C:\Users\Administrator\AppData\Roaming\rclone\rclone.conf"
2023/05/18 14:07:31 DEBUG : Resolving service "s3" region "us-east-1"
2023/05/18 14:07:31 DEBUG : 4 go routines active
asdffdsa
(jojothehumanmonkey)
May 18, 2023, 12:13pm
not sure that is true
[quote="Nonick, post:1, topic:38244"]
endpoint = https://xxxxcloudstorage/bkpspace
that is not a valid value, please double check that.
This S3 provider definitely does not behave like AWS S3.
I would start from scratch:
In rclone.conf, do not use any bucket name in the endpoint:
endpoint = https://usc1.contabostorage.com
create bucket named bkpspace
rclone mkdir bkpspace:bkpspace
Check if it was created:
rclone lsd bkpspace:
now you can:
rclone sync -P . bkpspace:bkpspace
In addition, log in via their web UI and see where your previous tests ended up - you can probably delete them.
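putting the steps above together (keys elided as before):

```
# rclone.conf - endpoint with no bucket name in it
[bkpspace]
type = s3
provider = Ceph
access_key_id = xxxxx
secret_access_key = xxxxx
endpoint = https://usc1.contabostorage.com
```

```
rclone mkdir bkpspace:bkpspace   # create the bucket
rclone lsd bkpspace:             # verify it is listed
rclone sync -P . bkpspace:bkpspace
```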
asdffdsa
(jojothehumanmonkey)
May 18, 2023, 12:24pm
good point, but for testing, to prevent confusion, do not use the same name for the remote and the bucket.
choose a different bucket name.
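for example, with an illustrative bucket name that differs from the remote name:

```
rclone mkdir bkpspace:mybackup
rclone lsd bkpspace:
rclone sync -P . bkpspace:mybackup
```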