Amazon Drive upload fail

Hi guys,

I have a problem with my backup script. Sometimes I just get this error; it seems completely random.

2017/03/18 04:01:50 Attempt 1/3 failed with 1 errors and: failed to make directory: HTTP code 409: "409 Conflict": response body: "{\"code\":\"NAME_ALREADY_EXISTS\",\"logref\":\"3be6794c-0b87-11e7-9a5c-1750856b84ef\",\"message\":\"Node with the name march already exists under parentId oOS8TtpaQ3iIgZ22J2dKng conflicting NodeId: -h0qdizlRB-MGoApsaa28g\",\"info\":{\"nodeId\":\"-h0qdizlRB-MGoApsaa28g\"}}"
2017/03/18 04:01:51 Attempt 2/3 failed with 1 errors and: failed to make directory: HTTP code 409: "409 Conflict": response body: "{\"code\":\"NAME_ALREADY_EXISTS\",\"logref\":\"3c294ee1-0b87-11e7-89df-23329efba961\",\"message\":\"Node with the name 18-03-2017_04:01:19 already exists under parentId -h0qdizlRB-MGoApsaa28g conflicting NodeId: kmtUX6z6QVWA2gzY2Fg9ug\",\"info\":{\"nodeId\":\"kmtUX6z6QVWA2gzY2Fg9ug\"}}"
2017/03/18 04:01:51 Attempt 3/3 failed with 1 errors and: failed to make directory: HTTP code 409: "409 Conflict": response body: "{\"code\":\"NAME_ALREADY_EXISTS\",\"logref\":\"3c589c22-0b87-11e7-9baf-9b0acf7df38e\",\"message\":\"Node with the name 18-03-2017_04:01:19 already exists under parentId -h0qdizlRB-MGoApsaa28g conflicting NodeId: kmtUX6z6QVWA2gzY2Fg9ug\",\"info\":{\"nodeId\":\"kmtUX6z6QVWA2gzY2Fg9ug\"}}"
2017/03/18 04:01:51 Failed to copy: failed to make directory: HTTP code 409: "409 Conflict": response body: "{\"code\":\"NAME_ALREADY_EXISTS\",\"logref\":\"3c589c22-0b87-11e7-9baf-9b0acf7df38e\",\"message\":\"Node with the name 18-03-2017_04:01:19 already exists under parentId -h0qdizlRB-MGoApsaa28g conflicting NodeId: kmtUX6z6QVWA2gzY2Fg9ug\",\"info\":{\"nodeId\":\"kmtUX6z6QVWA2gzY2Fg9ug\"}}"

I am sure these files are not duplicates, because the names are generated from a time variable with seconds (hodina=$(date +_%H-%M-%S)), so duplicates should not be possible. Can you help me with this?
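For reference, here is how names like those in the log could be produced. Only the hodina line is quoted from the script; the rok/mesic/cas definitions are guesses reconstructed from the names in the error log ("march", "18-03-2017_04:01:19") and may differ from the real script.

```shell
hodina=$(date +_%H-%M-%S)                                # e.g. _04-01-19 (from the script)
rok=$(date +%Y)                                          # e.g. 2017 (assumed)
mesic=$(LC_ALL=C date +%B | tr '[:upper:]' '[:lower:]')  # e.g. march (assumed)
cas=$(date +%d-%m-%Y_%H:%M:%S)                           # e.g. 18-03-2017_04:01:19 (assumed)
```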

I see it wants to make a directory, but the directory already exists.

Here is a small piece of my code, the uploading part:

echo "Nahravam archivy webovych slozek na uloziste"   # "Uploading web folder archives to storage"
rclone copy /zaloha_amazon/hravakava-cz$hodina.7z CechacekAmazon:Simovi_webiky/cizi_zalohy/hravakava.cz/$rok/$mesic/$cas
rclone copy /zaloha_amazon/shop-hravakava-cz$hodina.7z CechacekAmazon:Simovi_webiky/cizi_zalohy/hravakava.cz/$rok/$mesic/$cas
echo "--- Nahravam archivy MySQL databazi na uloziste ---"   # "Uploading MySQL database archives to storage"
rclone copy /zaloha_amazon/admin_hravakava_cz$hodina.sql.gz CechacekAmazon:Simovi_webiky/cizi_zalohy/hravakava.cz/$rok/$mesic/$cas
rclone copy /zaloha_amazon/admin_shopik$hodina.sql.gz CechacekAmazon:Simovi_webiky/cizi_zalohy/hravakava.cz/$rok/$mesic/$cas

Just some basic information about my system:

What is your rclone version (eg output from rclone -V)
rclone v1.35

Which OS you are using and how many bits (eg Windows 7, 64 bit)
Debian 8 64-bit

Which cloud storage system are you using? (eg Google Drive)
Amazon Drive

Thanks for your time :slight_smile:
Simon

Amazon Drive caches state extensively and inconsistently, i.e. read and write calls can return different results. I'm guessing this happens when you fire a lot of rclone commands in sequence with small files, or even in parallel. Can you see if the problem persists if you add a sleep 60 between every rclone command?
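Something like this sketch, using the destination path from the script above; whether the pause actually avoids the 409 is unverified:

```shell
# Wrap each upload so a pause follows it, giving Amazon Drive's
# inconsistent cache time to settle before the next directory lookup.
slow_copy() {
    rclone copy "$1" "$2"
    sleep 60
}

# Usage (destination taken from the original script):
# slow_copy "/zaloha_amazon/hravakava-cz$hodina.7z" \
#           "CechacekAmazon:Simovi_webiky/cizi_zalohy/hravakava.cz/$rok/$mesic/$cas"
```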

It would certainly be possible to catch this error and pretend "create directory" worked by returning the already existing folder ID; luckily Amazon Drive includes the ID in the error response, so that would also work around the inconsistent caching. However, I wonder whether this makes things worse if the original directory was renamed or deleted and a new one should be created. Without high-level knowledge of what is supposed to happen, it seems tough to always make the correct decision.

Remember that Amazon Drive is case insensitive, so if you have two files potato and POTATO in a directory, you'll get this error.

That might be the cause?
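If that's the suspicion, a quick local check would list the names that differ only in case before uploading. This is just a sketch, not rclone functionality:

```shell
# List names in a directory that collide when compared case-insensitively;
# these are the names Amazon Drive would treat as duplicates.
find_case_collisions() {
    ls -1 "$1" | tr '[:upper:]' '[:lower:]' | sort | uniq -d
}
```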

If it is a case-insensitivity issue, is there a way to force uppercasing file names?
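I'm not aware of an rclone option for this, but a small rename pass before upload could normalize everything to one case. A sketch (the helper name is made up; note that two names already differing only in case would overwrite each other when renamed):

```shell
# Rename every file in a directory to uppercase so no two names
# differ only in case. Hypothetical helper, not part of rclone.
upcase_names() {
    for f in "$1"/*; do
        base=$(basename "$f")
        upper=$(printf '%s' "$base" | tr '[:lower:]' '[:upper:]')
        [ "$base" = "$upper" ] || mv "$f" "$1/$upper"
    done
}
```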