Getting Error "error reading destination directory"

Hi guys,

I am syncing to my Dropbox with:

rclone sync /mnt/backup2/data/ dropbox:Backups/data --tpslimit 1 --transfers 1 --checkers 1 --progress --exclude-from "/home/myuser/rclone/filter.txt"

and getting errors like "error reading destination directory". It's not always the same directory, it's random, and I don't see any problems with the destination directories.

What is causing this, and how can I prevent it?

Thank you

  1. The full command you’re attempting to use

rclone sync /mnt/backup2/data/ dropbox:Backups/data --tpslimit 1 --transfers 1 --checkers 1 --progress --exclude-from "/home/myuser/rclone/filter.txt"

  2. A logfile of rclone’s output with personal information removed. If it is large, you can use services like pastebin.com. It’s usually helpful to increase the logging with -v or -vv, depending on the issue.

2024/11/04 17:29:59 INFO : Starting transaction limiter: max 1 transactions/s with burst 1
2024/11/04 17:32:31 ERROR : Financies: error reading destination directory:

  3. The rclone config you’re using. If you don’t know where to find it, check here. Before posting ensure you’ve removed any confidential information like credentials.

[dropbox]
type = dropbox
token = {"access_token":"Here is my access token","token_type":"bearer","refresh_token":"Here is my refresh token","expiry":"2024-11-04T18:40:55.110518942Z"}

  4. What version of rclone you’re using. It’s also helpful to try rclone with the latest beta if you’re using a stable release to understand if your issue was recently fixed.

rclone v1.60.1-DEV

  • os/version: raspbian 12.7 (64 bit)
  • os/kernel: 6.6.51+rpt-rpi-v8 (aarch64)
  • os/type: linux
  • os/arch: arm64
  • go/version: go1.19.8
  • go/linking: dynamic
  • go/tags: none

Welcome to the forum,

When you posted there was a template of questions for you to answer. Please answer all of them so we can help you...

that is an old, custom-compiled version.

rclone selfupdate
or
uninstall the old version and install the latest version
https://rclone.org/install/#script-installation

and test again.
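
For reference, the script installation from that page comes down to something like this (assuming a typical Linux setup with curl and sudo available), followed by a version check:

curl https://rclone.org/install.sh | sudo bash
rclone version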


Thank you, I installed the latest version and that seemed to help. It has been running for a long while, but now I'm getting these errors:

2024/11/05 19:09:37 ERROR : Dropbox root 'Backups/MyData': sync batch commit: failed to commit batch length 1: batch had 1 errors: last error: upload failed: too_many_write_operations
2024/11/05 19:09:37 ERROR : xyz.xaml: Failed to copy: upload failed: batch upload failed: upload failed: too_many_write_operations
2024/11/05 19:09:50 ERROR : Dropbox root 'Backups/MyData': sync batch commit: failed to commit batch length 1: batch had 1 errors: last error: upload failed: too_many_write_operations
2024/11/05 19:09:50 ERROR : abc.xaml: Failed to copy: upload failed: batch upload failed: upload failed: too_many_write_operations

ok, please post the output of

  • rclone version
  • rclone config redacted dropbox:
  • top 20 lines of a debug log, not just snippets.
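
For reference, a debug log along those lines could be produced with something like the following (paths taken from the earlier posts; the log file name is just an example):

rclone sync /mnt/backup2/data/ dropbox:Backups/data -vv --log-file=/home/myuser/rclone/debug.log --exclude-from "/home/myuser/rclone/filter.txt"
head -n 20 /home/myuser/rclone/debug.log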

rclone version

  • rclone v1.68.1
  • os/version: raspbian 12.7 (64 bit)
  • os/kernel: 6.6.51+rpt-rpi-v8 (aarch64)
  • os/type: linux
  • os/arch: arm64 (ARMv8 compatible)
  • go/version: go1.23.1
  • go/linking: static
  • go/tags: none

rclone config redacted dropbox

[dropbox]
type = dropbox
token = XXX
Double check the config for sensitive info before posting publicly

top 20 lines of a debug log, not just snippets.

Well, I am getting many, many INFO lines from copying, so I hope this is enough:

2024/11/05 00:08:18 INFO : Starting transaction limiter: max 1 transactions/s with burst 1
2024/11/05 01:25:58 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 5 seconds.
2024/11/05 03:48:36 INFO : /4fe18ce34709afb657c07ff0e9b2wajj51fa57830: Copied (new)
2024/11/05 19:09:26 INFO : LAS.xaml.cs: Copied (new)
...
2024/11/05 19:09:37 ERROR : Dropbox root 'Backups/MyData': sync batch commit: failed to commit batch length 1: batch had 1 errors: last error: upload failed: too_many_write_operations
2024/11/05 19:09:37 ERROR : LAS_Request.xaml: Failed to copy: upload failed: batch upload failed: upload failed: too_many_write_operations
2024/11/05 19:09:50 ERROR : Dropbox root 'Backups/MyData': sync batch commit: failed to commit batch length 1: batch had 1 errors: last error: upload failed: too_many_write_operations
2024/11/05 19:09:50 ERROR : LAS_Request.xaml.cs: Failed to copy: upload failed: batch upload failed: upload failed: too_many_write_operations
...
2024/11/05 19:12:36 INFO : TouchKeyboardLayoutResources.de-DE.baml: Copied (new)

looks like you are using an app id that is shared with all rclone users.
that could be the reason for the errors.


When you use rclone with Dropbox in its default configuration you are using rclone's App ID.
This is shared between all the rclone users.

  1. create and use your own app id (see the config sketch below)
  2. test again.
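
Roughly, once your own app exists in the Dropbox App Console, the remote ends up with its own client_id and client_secret in the config, something like this (XXX are placeholders):

[dropbox]
type = dropbox
client_id = XXX
client_secret = XXX
token = XXX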

I did everything from your link as described (created the app, set permissions), created a new connection, and added the app key and secret.

After 14 minutes I got the error:

2024/11/05 23:41:05 INFO : Starting transaction limiter: max 1 transactions/s with burst 1
2024/11/05 23:55:25 ERROR : FolderA/FolderB/FolderC/FolderB: error reading destination directory:

rclone config redacted dropbox:

[dropbox]
type = dropbox
client_id = XXX
client_secret = XXX
token = XXX

That's the only error within 40 minutes though!

ok, making progress.

at this point, not sure what the issue is.
the only thing I can suggest is using --dump=headers, for a deeper look.
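
For example, something along these lines (same source, destination and filter as before; the log file name is just an example):

rclone sync /mnt/backup2/data/ dropbox:Backups/data -vv --dump=headers --log-file=/home/myuser/rclone/headers.log --exclude-from "/home/myuser/rclone/filter.txt"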

Hi,

Thank you very much for your support! :slight_smile:

the only thing I can suggest is using --dump=headers, for a deeper look.

I did this and the request/response is:

2024/11/06 01:17:48 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/11/06 01:17:48 DEBUG : HTTP REQUEST (req 0x5c7894000d)
2024/11/06 01:17:48 DEBUG : POST /2/files/list_folder HTTP/1.1
Host: api.dropboxapi.com
User-Agent: rclone/v1.68.1
Content-Length: 236
Authorization: XXXX
Content-Type: application/json
Accept-Encoding: gzip

2024/11/06 01:17:48 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/11/06 01:19:40 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/11/06 01:19:40 DEBUG : HTTP RESPONSE (req 0x5c7894000d)
2024/11/06 01:19:40 DEBUG : Error: read tcp <<IP & PORT>>-><<IP & PORT>>: read: connection reset by peer
2024/11/06 01:19:40 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/11/06 01:19:40 DEBUG : pacer: low level retry 1/10 (error Post "https://api.dropboxapi.com/9/files/list_folder": read tcp <<IP & PORT>>-><<IP & PORT>>: read: connection reset by peer)
2024/11/06 01:19:40 DEBUG : pacer: Rate limited, increasing sleep to 20ms

again, not sure exactly what is going on.
could be a networking issue on your end, or an issue on Dropbox's end.

again, not sure exactly what is going on.
could be a networking issue on your end, or an issue on Dropbox's end.

Ok.

I'm trying to back up my most important data from my NAS to Dropbox, so it would be good for this to be stable, but I'm not sure what to do now. Should I add "--ignore-errors"?

Would you recommend another cloud hosting service?

never depend on that flag.

sure,

  • what is the total size of all data to be backed-up?
  • outside of a one time disaster, do you plan to ever download the data?
  • currently, do you keep copies of the data in multiple locations?

sure

If you do recommend something, can you be sure that I most likely won't have these issues?

what is the total size of all data to be backed-up?

260 GB. It might grow a little; 1 TB should be fine.

outside of a one time disaster, do you plan to ever download the data?

No

currently, do you keep copies of the data in multiple locations?

Yes, I have them mirrored within the NAS and backed up to 2 external HDDs, but that is all in my house, with no other physical location.

there are so many ways to store data in the cloud.
rclone has great support for S3 providers.

I keep recent backups in Wasabi; in a disaster, they have great download speeds.
Older data is stored in AWS S3 Deep Glacier, approx. $1.00/TiB/month.

IDrive is a very good choice; they offer a free plan that you could test with.
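
As an illustration, an S3-compatible remote such as Wasabi would look roughly like this in the config (the keys are placeholders; the endpoint is Wasabi's standard one):

[wasabi]
type = s3
provider = Wasabi
access_key_id = XXX
secret_access_key = XXX
endpoint = s3.wasabisys.com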


and for backups, check the --backup-dir flag. Files that would be overwritten or deleted at the destination get moved into the dated archive directory instead:

rclone sync /path/to/files remote:current --backup-dir=remote:archive/`date +%Y%m%d.%I%M%S`

Thank you!

I decided on AWS S3 Deep Glacier.
Basically I did everything from here: https://ryansouthgate.com/rclone-cheap-backups/

I am getting these errors:

2024/11/07 00:36:51 ERROR : Screen Recording (21.09.2021 12-41-54).wmv: Failed to copy: multi-thread copy: failed to open chunk writer: create multipart upload failed: operation error S3: CreateMultipartUpload, failed to get rate limit token, retry quota exceeded, 4 available, 10 requested

2024/11/07 00:38:31 ERROR : BWA 2021.pdf: Failed to copy: operation error S3: PutObject, exceeded maximum number of attempts, 1, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Put "https://somebucket.s3.as-north-1.amazonaws.com/FolderABCpdf?x-id=PutObject": dial tcp: lookup somebucket.s3.as-north-1.amazonaws.com on 192.168.2.1:53: read udp 192.168.23.42:53564->192.168.2.23:53: i/o timeout

2024/11/07 01:58:40 ERROR : output.1.avi: Failed to copy: multi-thread copy: failed to open chunk writer: create multipart upload failed: operation error S3: CreateMultipartUpload, failed to get rate limit token, retry quota exceeded, 5 available, 10 requested

2024/11/07 01:58:59 ERROR : output.2.avi: Failed to copy: multi-thread copy: failed to open chunk writer: create multipart upload failed: operation error S3: CreateMultipartUpload, failed to get rate limit token, retry quota exceeded, 5 available, 10 requested

2024/11/07 02:08:41 ERROR : Manual_de3.wmv: Failed to copy: multi-thread copy: failed to open chunk writer: create multipart upload failed: operation error S3: CreateMultipartUpload, failed to get rate limit token, retry quota exceeded, 5 available, 10 requested

2024/11/07 06:14:20 ERROR : ADMINBEREICH.mp4: Failed to copy: multi-thread copy: failed to open chunk writer: create multipart upload failed: operation error S3: CreateMultipartUpload, failed to get rate limit token, retry quota exceeded, 5 available, 10 requested

I still use the same command:

rclone sync /mnt/backup2/data/ aws:mybucket/data --tpslimit 1 --transfers 1 --checkers 1 --progress --exclude-from "/home/myuser/rclone/filter.txt"

Maybe I should remove the limitations here?

and for backups, check the --backup-dir flag

I'm not sure yet if I want incremental backups; I will consider it in the future, though!

I have been using AWS Deep Glacier for 7+ years and have never had an issue.

looks like another network issue, this time DNS.
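
If you want to rule out DNS on your end first, a quick lookup of the bucket hostname from the error above (shown redacted in the log) would confirm whether name resolution is working, for example:

nslookup somebucket.s3.as-north-1.amazonaws.com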

post rclone config redacted aws:

what is the logic of using such values instead of the defaults?

post rclone config redacted aws:

[aws]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = eu-central-1
location_constraint = EU
acl = private
storage_class = DEEP_ARCHIVE

what is the logic of using such values instead of the defaults?

Well, I was just playing around while having issues with Dropbox and wanted to reduce everything to a minimum. I'll remove them for the next try...

So I removed the parameters and started a run with the command:

rclone sync /mnt/backup2/data/ aws:somebucket/databackup --verbose --progress --dump=headers --log-file=/home/myuser/rclone/log --exclude-from "/home/myuser/rclone/filter.txt"

I got 2 errors (still running, at 135 GB):

FIRST ERROR:

2024/11/07 17:43:57 DEBUG : HTTP RESPONSE (req 0x40009d2f00)
2024/11/07 17:43:57 DEBUG : HTTP/1.1 204 No Content
Date: Thu, 07 Nov 2024 17:43:58 GMT
Server: AmazonS3
X-Amz-Id-2: someKey
X-Amz-Request-Id: someKey

2024/11/07 17:43:57 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/11/07 17:43:57 DEBUG : To.wav: multipart upload "HTJOGX42GwF2Ia0RqifR0Kb461ygFwll5sZhoIa.bscWtLSMAXgkYddZcaWzdYpHXjQkMlUhRAskVvDLVuWbeCxoM6hjqSP1m1YTO73X91R.5.xnGw0bNKsBgdCBoNjE" aborted
2024/11/07 17:43:57 ERROR : HowTo.wav: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 3 with 5242880 bytes: operation error S3: UploadPart, failed to get rate limit token, retry quota exceeded, 0 available, 10 requested
2024/11/07 17:43:57 DEBUG : Device.S7.pdb: Need to transfer - File not found at Destination
2024/11/07 17:43:57 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/11/07 17:43:57 DEBUG : HTTP REQUEST (req 0x40007488c0)
2024/11/07 17:43:57 DEBUG : PUT Setup.3.1.4024.7.zip?partNumber=37&uploadId=tcsvasfEZNK8eeeyeDIQYqAsfsdfsfwefNgggrrdsdgsdgaweerwetfgwesfewefewfe_EQSiuVpVqJWaXawegwegswfr3cJa33hZ.FN&x-id=UploadPart HTTP/1.1

SECOND ERROR:

2024/11/07 16:45:05 DEBUG : HTTP RESPONSE (req 0x40005a4c80)
2024/11/07 16:45:05 DEBUG : Error: write tcp 192.168.24.31:35780->23.43.523.234:443: write: connection reset by peer
2024/11/07 16:45:05 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/11/07 16:45:05 DEBUG : pacer: low level retry 1/1 (error operation error S3: PutObject, exceeded maximum number of attempts, 1, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Put "someURL": write tcp 192.168.24.31:35780->52.95.169.44:443: write: connection reset by peer)
2024/11/07 16:45:05 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2024/11/07 16:45:05 DEBUG : UNIBLOC.avi: Received error: operation error S3: PutObject, exceeded maximum number of attempts, 1, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Put "someURL": write tcp 192.168.24.31:35780->42.85.149.44:443: write: connection reset by peer - low level retry 9/10
2024/11/07 16:45:05 ERROR : BLOC.avi: Failed to copy: operation error S3: PutObject, exceeded maximum number of attempts, 1, https response error StatusCode: 0, RequestID: , HostID: , request send failed, Put "someURL": write tcp 192.168.24.31:35780->42.85.149.44:443: write: connection reset by peer

Dear asdffdsa,

I have solved all the issues and everything is working perfectly now.
I had an IP address conflict, which I couldn't see. It took a while to figure it out. That was causing all the network issues.

So the issue was not rclone-related. Sorry for that.

Thank you for your help, I appreciate it :slight_smile: