Hi, I’m new to the group. I’ve been using rclone from my Raspberry Pi for a few weeks and it’s generally working well. My RPi is an rsync server for various computers in my house. They back up once an hour, then every night the RPi backs up the backup to a second mirror drive, then once a week I back up the mirror to Dropbox using rclone.
My command line is:
/usr/local/bin/rclone sync -q --exclude-from /etc/rclone_exclude.txt /Archive Dropbox:Archive >> /var/tmp/log/rclone.log 2>&1
A few questions:
It seems to take approx 3 days for rclone to finish, and I get a lot of lock and other errors, which I understand is Dropbox timing out. What seems odd is that each week it takes approximately the same time (3 days). I’d imagined that once the first sync had been done, subsequent syncs would be much shorter, as with rsync? Perhaps because it completes with errors each time, it’s re-syncing everything?
I’m using the sync command but I don’t see any obvious evidence of deletions. Would I expect to see them in the Dropbox deleted-files folder, or is it an absolute delete, i.e. removed and not recoverable?
Since the rclone run is very long, I would quite like to limit it to shorter chunks out of hours, i.e. at night, when there is no other use on my network and my datacentre (and disks) are coolest. Is there a smart way to break up the transfer? I guess I could simply kill it from cron and restart the next evening, but I wonder if there is a smarter way.
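For what it’s worth, the kill-and-restart idea can be done cleanly with GNU coreutils `timeout`. A crontab sketch (the 22:00 start and the 8-hour cap are made-up values to tune, not recommendations):

```shell
# Crontab sketch: start the sync at 22:00 each night and hard-stop it after
# 8 hours with GNU timeout. Files already transferred are skipped on the
# next night's run, so the sync makes progress in out-of-hours chunks.
0 22 * * * timeout 8h /usr/local/bin/rclone sync -q --exclude-from /etc/rclone_exclude.txt /Archive Dropbox:Archive >> /var/tmp/log/rclone.log 2>&1
```

`timeout` sends SIGTERM when the limit expires, so rclone exits; partially transferred files are simply retried on the next run.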
The errors I’m getting are typically lots and lots of:
- The server has either erred or is incapable of performing the requested operation
- Failed to grab locks for 20653169: lock held by connection 21018547. Please re-issue the request.
- Failed to copy: upload failed: Internal Server Error
Would throttling the transfer rate give a slower but more reliable session?
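rclone has knobs for exactly this. A sketch of my command line with throttling added (the numbers are guesses to experiment with, and `--tpslimit` assumes a recent rclone build):

```shell
# Sketch: limit bandwidth, API-call rate, and concurrency. Fewer parallel
# transfers (--transfers, default 4) should also mean fewer simultaneous
# Dropbox locks in play at once.
/usr/local/bin/rclone sync -q \
    --bwlimit 1M \
    --tpslimit 4 \
    --transfers 2 \
    --exclude-from /etc/rclone_exclude.txt \
    /Archive Dropbox:Archive >> /var/tmp/log/rclone.log 2>&1
```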
OK - I think the abundance of lock errors I’m getting is due to the “rat’s nest” of hard links that Apple created when it migrated from iPhoto to Photos. It seems Photos has access to all the iPhoto images via a system of hard links, and if I exclude these my error rate drops dramatically.
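In case it helps anyone else, this is how the hard-linked files can be found so they can go in the exclude file (a sketch; /Archive is my backup root):

```shell
# List regular files with a link count greater than one under the archive;
# each name printed is one entry of a hard-link group and is a candidate
# for /etc/rclone_exclude.txt.
find /Archive -type f -links +1
```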
Well, my weekly cron ran with the beta on Monday and finished around 80 hours later. I still had 16 errors, even with 3 goes (retries=3):
Failed to copy: upload failed:
Failed to copy: upload failed: json: cannot unmarshal object into Go struct field UploadAPIError.error of type string
Failed to copy: upload failed: unexpected end of JSON input
I ran it again and it seemed to still want to upload the same files again; it doesn’t seem to know what it has already uploaded?
Thanks, I ran with the -vv flag and yes, I could see it skipping a lot of files, but I could also see it recopying many files that had previously been copied and that I can see at both the source and the destination.
The recopies are far more numerous than the handful of errors (17) I got last time.
I’m trying to figure out what the problem is. I tried --size-only but it didn’t make any difference. I have a PDF file in my home directory which I don’t update, yet rclone seems to need to recopy it each time, despite the timestamp not changing?
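A sketch of the check I’m doing to see what rclone thinks differs (the file name here is a placeholder, not my real file). `rclone lsl` prints size, modification time and path, so if either side disagrees between the two listings the recopy would make sense:

```shell
# Compare what rclone sees for one always-recopied file on each side; a
# difference in size or modtime between the two lines explains the recopy.
rclone lsl /Archive/somefile.pdf
rclone lsl Dropbox:Archive/somefile.pdf
```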
I’m backing up from a Raspberry Pi / ext4 filesystem, but the files have been rsync’d from a Mac. I wonder if some file update is being performed by the rsyncd server which causes the files to appear to be new.
Thanks, I did a test run adding --local-no-unicode-normalization and I still see the same files being copied, apparently unnecessarily. A lot of files are skipped, so possibly that has improved things; I have added the flag to my weekly crontab entry and I’ll see. Thanks for your support. rclone is working, in that all files now seem to be backed up; it’s just that some should not need to be recopied and some should be deleted. I accept that the home directories of Mac users are, at an OS X filesystem level, a little odd compared to most Linux/Unix systems. Thanks again, it’s doing its job.
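For anyone following along, the Mac angle is that HFS+ stores file names in decomposed Unicode form (NFD) while most Linux tools produce precomposed names (NFC); the two forms render identically but are different byte strings, which is enough to defeat a name match. A minimal demo in plain POSIX shell, using octal UTF-8 escapes:

```shell
# "café" in precomposed (NFC) versus decomposed (NFD) form: identical on
# screen, different bytes on disk, so they are different names to ext4.
nfc=$(printf 'caf\303\251')    # U+00E9 LATIN SMALL LETTER E WITH ACUTE
nfd=$(printf 'cafe\314\201')   # 'e' followed by U+0301 COMBINING ACUTE
if [ "$nfc" = "$nfd" ]; then echo same; else echo different; fi
# prints "different"
```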
Could you paste the lines from the log (with -vv) which mention one of those files which get copied unnecessarily? Save the log to a file and use grep to find all the lines. I’d like to work out the reason why it is happening.
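Something like this (“somefile.pdf” is a placeholder for one of the files that recopies, and the log path is from your cron line):

```shell
# Pull out every -vv log line that mentions one suspect file; the
# "Copied" versus "Unchanged skipping" messages show what rclone decided.
grep -F 'somefile.pdf' /var/tmp/log/rclone.log
```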
A full rclone sync takes approx 80 hours. I have run one, and it has given me considerable insight:
I am using rclone to back up to the cloud a directory which contains three subdirectories, one for each computer I have. Looking through the full log with -vv set, I realised that only one of the three subfolders is having problems. Two of them behave correctly, skipping file copies as I’d expect; the third one NEVER skips, and copies EVERY file, no matter what I do.
As far as I can tell the three directories are the same. Their Unix ownerships are different, but I don’t think that should be an issue?
I’m starting to think that the offending directory has some problem at the destination? I think I pre-created its folder via the Dropbox UI when I first started. I have removed the entire directory from Dropbox, and I wanted to also remove it from the Deleted folder. This is proving hard, as some of the subfolders are huge (Apple iPhoto), so I may need to wait 30 days for Dropbox to purge all of the deleted stuff.
I wonder also if the strange semi-case-sensitivity that Dropbox has is an issue. The offending folder had originally been created all lower case, but locally the first character was upper case. A quick run after removing it let rclone recreate it with the upper-case name, but on a second pass it still seemed to recopy?
If I exclude the offending folder, rclone runs fine, completes on the second pass and even deletes some files (correctly) from the server.
I’m concentrating on getting clean runs on the two folders that seem to be OK, while I wait for Dropbox to clear the failed folder out of Deleted Items.
I noticed this abnormality, which might be of interest. It looks like Dropbox also doesn’t like folders with a trailing space character. I have one and it always fails this way:
2017/07/27 04:04:11 ERROR : Edward/Final drafting/March 1st/March 3rd/March 5th /chapter 2 March 5th.docx: Failed to copy: upload failed: json: cannot unmarshal object into Go struct field UploadAPIError.error of type string
If I rename ./March 5th / to ./March 5th/ then it copies fine. I think the folder was named like that by accident, but it should be legal?
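To find any other folders like this before they fail (a sketch; /Archive is my backup root):

```shell
# List directories whose names end in a space, which Dropbox appears to
# reject going by the error above; these need renaming before the sync.
# -depth prints children before parents, the safer order for renaming.
find /Archive -depth -type d -name '* '
```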