Detecting incomplete file download from Google Photos

Summary of Problems

  1. Rclone does not resume (or re-download) a previously incomplete file download from Google Photos.
  2. There seems to be no way to detect incomplete or corrupt files downloaded from Google Photos.

rclone version

rclone v1.54.0
- os/arch: linux/amd64
- go version: go1.15.7

OS

Linux 64 bit (Ubuntu 20.04)

Cloud storage system

Google Photos

Background

I have been running rclone sync over the last few weeks to back up ~500k photos locally (~1+ TB down so far). This has been very fun (note: sarcasm) due to the Google Photos daily API limits and Google starting to give 404s after ~1 hour of run time (even if the API limit is not reached).

I have been running rclone sync on sub-directories like by-month/2020/2020-01 so I don't run out of API quota. To address the 404 errors I have been using the Linux command-line tool timeout to kill rclone sync after ~1 hour (3602 seconds to be exact).

I settled on timeout after I had already tried the rclone flag --max-duration=1h. I found the Google API gives intermittent errors for different API calls, so rclone never exited (even though no new transfers were scheduled after 1 hour) and kept eating up my daily API quota.

I have rclone sync set up to run via cron a few times a day so I don't have to watch it constantly. The cron job runs are spaced out evenly throughout the day so that Google doesn't give me 404s for being too aggressive with the API.
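
For reference, the setup looks roughly like this (a sketch only - the wrapper name, schedule and local paths are placeholders rather than my exact script):

#!/bin/bash
# /usr/local/bin/gphotos-sync.sh (name illustrative)
set -u
MONTH_DIR="by-month/2020/2020-01"                  # sub-directory synced on this run
LOG="/var/log/rclone/gphotos-$(date +%F-%H%M).log"
# Kill rclone after ~1 hour to dodge the 404s; -k 1m gives it a minute to clean up.
# The remote prefix assumes the backend's media/by-month layout.
timeout -v --preserve-status -k 1m 3602s \
	rclone sync --transfers=10 --fast-list --gphotos-include-archived \
	--log-level INFO --log-file="$LOG" \
	"gphotos:media/$MONTH_DIR" "/data/photos/$MONTH_DIR"

The crontab entries are then just spaced-out invocations of that script, e.g.:

# runs spread evenly through the day
0 2,10,18 * * * /usr/local/bin/gphotos-sync.sh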

Problem

1. I discovered a corrupted image.

This was a jpg file in the feature/favorites folder that, when opened in an image viewer, was only partially viewable - most of the image was grayed out.

Here is the corrupted file info.

-rw-rw-r-- 1 rclone rclone    126976 Feb 13 04:59 IMG_20190713_073837-COLLAGE.jpg

There are at least two legitimate reasons why this photo could have been corrupted:

  • timeout abruptly killed rclone sync mid-transfer (even on a run where Google Photos hadn't started giving 404s);
  • a network interruption or outage.

There's a very good chance there are many other files that are also corrupted.

2. Running rclone sync did not detect or fix corruption

Running: timeout -v --preserve-status -k 1m 3602s rclone sync --transfers=10 --fast-list --gphotos-include-archived --log-level INFO --log-file=REDACTED.txt gphotos:feature feature/
2021/03/07 16:38:48 INFO  : There was nothing to transfer
2021/03/07 16:38:48 INFO  : 
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                 4 / 4, 100%
Elapsed time:         1.7s

3. Renaming the corrupted file and running rclone sync re-downloaded the image:

Running: timeout -v --preserve-status -k 1m 3602s rclone sync --transfers=10 --fast-list --gphotos-include-archived --log-level INFO --log-file=REDACTED.txt gphotos:feature /REDACTED/feature
2021/03/07 16:40:45 INFO  : favorites/IMG_20190713_073837-COLLAGE.jpg: Copied (Rcat, new)
2021/03/07 16:40:45 INFO  : favorites/IMG_20190713_073837-COLLAGE_corrupted.jpg: Deleted
2021/03/07 16:40:45 INFO  : 
Transferred:   	  693.541k / 693.541 kBytes, 100%, 1.400 MBytes/s, ETA 0s
Checks:                 4 / 4, 100%
Deleted:                1 (files), 0 (dirs)
Transferred:            2 / 2, 100%
Elapsed time:         1.4s

After this, the photo was correctly viewable and, most importantly, both the file size and the modtime had changed. The modtime now seemed accurate (based on the file name and on checking the Google Photos web UI):

-rw-rw-r-- 1 rclone rclone    710186 Jul 13  2019 IMG_20190713_073837-COLLAGE.jpg

4. Experimentation found no quick/easy way to detect corruption.

I restored the corrupt file from backup and ran a few rclone commands, but most of the commands I ran did not detect the corruption!

I knew from the rclone features page that hashes and modtimes are not supported by Google Photos, but I was hoping something special was being done to help detect these types of errors.

$ rclone check --log-level INFO "gphotos:feature" feature/
2021/03/07 16:53:01 NOTICE: Local file system at /REDACTED/feature: 0 differences found
2021/03/07 16:53:01 NOTICE: Local file system at /REDACTED/feature: 4 hashes could not be checked
2021/03/07 16:53:01 NOTICE: Local file system at /REDACTED/feature: 4 matching files
2021/03/07 16:53:01 INFO  : 
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                 4 / 4, 100%
Elapsed time:         1.1s
$ rclone check --size-only --log-level INFO "gphotos:feature" feature/
2021/03/07 16:55:00 NOTICE: Local file system at /REDACTED/feature: 0 differences found
2021/03/07 16:55:00 NOTICE: Local file system at /REDACTED/feature: 4 matching files
2021/03/07 16:55:00 INFO  : 
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                 4 / 4, 100%
Elapsed time:         1.2s
$ rclone hashsum MD5 --log-level INFO "gphotos:feature"
                     UNSUPPORTED  favorites/IMG_20191226_174523.jpg
                     UNSUPPORTED  favorites/VID_20200202_124028.mp4
2021/03/07 16:59:05 ERROR : favorites/VID_20200202_124028.mp4: Hash unsupported: hash type not supported
                     UNSUPPORTED  favorites/IMG_20190713_073837-COLLAGE.jpg
                     UNSUPPORTED  favorites/PXL_20210204_090657534.jpg
2021/03/07 16:59:05 ERROR : favorites/PXL_20210204_090657534.jpg: Hash unsupported: hash type not supported
2021/03/07 16:59:05 ERROR : favorites/IMG_20190713_073837-COLLAGE.jpg: Hash unsupported: hash type not supported
2021/03/07 16:59:05 ERROR : favorites/IMG_20191226_174523.jpg: Hash unsupported: hash type not supported
2021/03/07 16:59:05 Failed to hashsum with 8 errors: last error was: Hash unsupported: hash type not supported

5. Using the --download flag would detect corruption.

Given the size of my photo library, doing this for everything is infeasible (and very painful), so I'm looking for easier or faster ways. Hence this post.

$ rclone check --download --log-level INFO "gphotos:feature" feature/
2021/03/07 22:38:20 NOTICE: Local file system at /REDACTED/feature: 1 differences found
2021/03/07 22:38:20 NOTICE: Local file system at /REDACTED/feature: 1 errors while checking
2021/03/07 22:38:20 NOTICE: Local file system at /REDACTED/feature: 3 matching files
2021/03/07 22:38:20 INFO  : 
Transferred:   	  248.010M / 248.010 MBytes, 100%, 8.737 MBytes/s, ETA 0s
Errors:                 1 (retrying may help)
Checks:                 4 / 4, 100%
Transferred:            8 / 8, 100%
Elapsed time:        29.9s

2021/03/07 22:38:20 Failed to check: 1 differences found

6. I was surprised I could not use --size-only (or modtime).

I'm not familiar with the rclone code base, but I looked around and found that the ModTime and Size methods seem to be implemented for objects but not for the filesystem.

I looked through the Google Photos API documentation and there doesn't seem to be support for retrieving this info via any official API call.

However, the modtime is definitely retrieved by rclone from somewhere, since it is being correctly set for all my photos. Looking at the source code, it seems to be stored and retrieved from here.

I don't see the file size being available within the metadata, so I'm not sure how rclone handles this. Is the file streamed until Google Photos says EOF, and only then is the file size discovered?

I ran rclone check with DEBUG to see if Size or modtime was retrieved as part of the check but I found that it wasn't:

$ rclone check --size-only --log-level DEBUG --stats-log-level DEBUG "gphotos:feature" feature/
2021/03/07 17:05:43 DEBUG : rclone: Version "v1.54.0" starting with parameters ["rclone" "check" "--size-only" "--log-level" "DEBUG" "--stats-log-level" "DEBUG" "gphotos:feature" "feature/"]
2021/03/07 17:05:43 DEBUG : Using config file from "/home/rclone/.config/rclone/rclone.conf"
2021/03/07 17:05:43 DEBUG : Creating backend with remote "gphotos:feature"
2021/03/07 17:05:43 DEBUG : Creating backend with remote "feature/"
2021/03/07 17:05:43 DEBUG : fs cache: renaming cache item "feature/" to be canonical "/REDACTED/feature"
2021/03/07 17:05:43 DEBUG : Local file system at /REDACTED/feature: Waiting for checks to finish
2021/03/07 17:05:43 DEBUG : Google Photos path "feature": List: dir=""
2021/03/07 17:05:43 DEBUG : Google Photos path "feature": >List: err=<nil>
2021/03/07 17:05:43 DEBUG : Google Photos path "feature": List: dir="favorites"
2021/03/07 17:05:44 DEBUG : Google Photos path "feature": >List: err=<nil>
2021/03/07 17:05:44 DEBUG : favorites/PXL_20210204_090657534.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/PXL_20210204_090657534.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/VID_20200202_124028.mp4: Size: 
2021/03/07 17:05:44 DEBUG : favorites/VID_20200202_124028.mp4: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20191226_174523.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20191226_174523.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20190713_073837-COLLAGE.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20190713_073837-COLLAGE.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/PXL_20210204_090657534.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/PXL_20210204_090657534.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20191226_174523.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20191226_174523.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20191226_174523.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20191226_174523.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/PXL_20210204_090657534.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20190713_073837-COLLAGE.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/VID_20200202_124028.mp4: Size: 
2021/03/07 17:05:44 DEBUG : favorites/VID_20200202_124028.mp4: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20190713_073837-COLLAGE.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/PXL_20210204_090657534.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20191226_174523.jpg: OK
2021/03/07 17:05:44 DEBUG : favorites/VID_20200202_124028.mp4: Size: 
2021/03/07 17:05:44 DEBUG : favorites/PXL_20210204_090657534.jpg: OK
2021/03/07 17:05:44 DEBUG : favorites/VID_20200202_124028.mp4: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/VID_20200202_124028.mp4: OK
2021/03/07 17:05:44 DEBUG : favorites/IMG_20190713_073837-COLLAGE.jpg: Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20190713_073837-COLLAGE.jpg: >Size: 
2021/03/07 17:05:44 DEBUG : favorites/IMG_20190713_073837-COLLAGE.jpg: OK
2021/03/07 17:05:44 NOTICE: Local file system at /REDACTED/feature: 0 differences found
2021/03/07 17:05:44 NOTICE: Local file system at /REDACTED/feature: 4 matching files
2021/03/07 17:05:44 DEBUG : 
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks:                 4 / 4, 100%
Elapsed time:         1.1s

2021/03/07 17:05:44 DEBUG : 4 go routines active

What I'm looking for

Given the number of files, there's no way I can manually check whether all photos have been downloaded corruption-free, so I'm trying to figure out what I can do here without resorting to the --download flag on either rclone check or rclone checksum.

It seems to me that checking based on modtime (instead of a simple file-existence check) should be possible, so I'm looking to understand whether there is a reason this isn't already implemented (what am I missing?). If it is possible but simply not done yet, would it be a simple addition?

I'm also looking for advice or guidance on methods I could employ outside of rclone to help detect all corrupt files so I can force a re-download of them.

I came across a tool that can check media integrity: https://github.com/ftarlao/check-media-integrity. I am currently running it on my entire library, as a quick test showed it can detect the file in question above. It looks like it'll take 2+ days to complete. This should help me find programmatically detectable corrupted files, though I'm not sure it will be 100% fool-proof.

I feel checking against file size and modtime is likely much more reliable. Obviously the best would be a hashsum, but there is no support from Google Photos, and I really don't want to use --download until the API limits improve (I'd happily pay if there were an option) and the 404s go away. :slightly_smiling_face:

Any and all help would be greatly appreciated!

I'm afraid both of these problems are due to limitations in the Google Photos API.

By default rclone can't read either the size or the modification time of the image.

This means that if there is an incomplete file, rclone just won't notice.

What you can do is use the

  --gphotos-read-size   Set to read the size of media items.

And that will detect partial downloads. But note that it is really expensive in terms of API calls.

It might be worth giving it a try.

It might be possible to detect modtime changes also... When rclone has finished downloading a file, it will set the modtime to the time received from Google Photos. This is likely the EXIF datestamp of the photo, so in the past. Therefore, if you find files which are out of time sequence, they are likely corrupted. You could do this with rclone lsf --max-age 1h /path/to/files, say.
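
For example, something along these lines (the path is a placeholder):

# Files whose local modtime is within the last hour were probably written by the
# download itself rather than set from the Google Photos timestamp - so suspect.
rclone lsf --files-only -R --max-age 1h /path/to/files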

You could also do a report of timestamps like this

rclone lsf -Ftp --files-only -R --csv /path/to/files

and compare it with the one from gphotos

rclone lsf -Ftp --files-only -R --csv gphotos:path/to/files

You could use --gphotos-read-size and include the size too, but it will probably kill your API limit, as reading a size is equivalent to a download I think.

Failing that you could use a different tool like this one: https://github.com/perkeep/gphotos-cdp

This uses a headless Chrome browser to click through the pages of the Google Photos website. That's how good the Google Photos API is :wink:

Thanks very much for taking the time to respond and for the excellent suggestions. I've tried them all and here are my findings and thoughts so far.

1. --gphotos-read-size works!

At least for detecting image corruption this worked. I don't know how I missed trying this flag in my experimentation!

I haven't tried it with videos yet. I found a few incomplete video downloads as well, but they're in directories with thousands of other items, so I won't try it until I'm sure this is the method for me. Otherwise I'll run out of API quota too fast (I'm still running sync in the background :wink: )

rclone sync --gphotos-read-size
Running: timeout -v --preserve-status -k 1m 3602s rclone sync --gphotos-read-size --transfers=10 --fast-list --gphotos-include-archived --log-level INFO --log-file=REDACTED.txt gphotos:feature /REDACTED/feature
2021/03/08 16:02:57 INFO  : favorites/IMG_20190713_073837-COLLAGE.jpg: Copied (replaced existing)
2021/03/08 16:02:57 INFO  : 
Transferred:   	  693.541k / 693.541 kBytes, 100%, 1.902 MBytes/s, ETA 0s
Checks:                 4 / 4, 100%
Transferred:            1 / 1, 100%
Elapsed time:        11.5s

2. rclone lsf --max-age looks very promising.

Since I've been downloading for a few weeks already and need to detect corruption from all that time, I'll probably need to use a --max-age of 1 month. This way I get a list of likely corrupt files. Since almost all photo filenames from my rclone sync marathon contain the date, I can programmatically or manually remove legitimately recent files from this list of potentially corrupt ones.

Obviously this means that any corrupt files for photos taken in the last month won't be detected, but I can run rclone sync --gphotos-read-size for those, which is manageable from an API quota perspective. For albums and shared-albums that have thousands of items, I'll probably build a file list of photos from the last month and call rclone sync --gphotos-read-size $remote_file $local_file on each file, so I don't have to run rclone sync on the whole album or shared directory (which would eat my quota).
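
Roughly something like this (a sketch only - the list file names, the date pattern and the paths are placeholders, and I'm using --files-from rather than one rclone call per file):

# suspects.txt: files with a recent modtime, e.g. from
#   rclone lsf --files-only -R --max-age 1M /local/by-month > suspects.txt
# Photos genuinely taken this month will also appear there, so drop any filename
# that embeds the current year-month stamp.
grep -v "$(date +%Y%m)" suspects.txt > resync.txt

# Re-download only those paths; --gphotos-read-size makes size mismatches
# (i.e. partial files) trigger a re-transfer.
rclone sync --gphotos-read-size --files-from resync.txt \
	"gphotos:media/by-month" "/local/by-month"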

3. rclone lsf -Ftp --files-only -R --csv seems to be the best.

Ideally I would run this for my entire gphotos collection, but I know (from past experience) that just running rclone lsf on my collection eats up my API quota for the day, so this is likely something I'll do much later down the track once I have finished all my syncing.

But this would be easy once I have the data: save the CSV output for remote and local, sort, diff, delete the local files that don't match, and finally run rclone sync on those files only (rather than the parent directory).

$ rclone lsf -Ftp --files-only -R --csv --log-level INFO "gphotos:feature"
2019-07-13 19:07:14,favorites/IMG_20190713_073837-COLLAGE.jpg
2019-12-26 17:45:23,favorites/IMG_20191226_174523.jpg
2021-02-04 10:06:57,favorites/PXL_20210204_090657534.jpg
2020-02-02 12:40:28,favorites/VID_20200202_124028.mp4
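
In shell terms the save/sort/diff/re-sync workflow would look roughly like this (a sketch - the file names are placeholders and the comm/cut plumbing assumes paths contain no commas):

# Snapshot modtime,path for both sides, sorted so they can be compared.
rclone lsf -Ftp --files-only -R --csv "gphotos:feature" | sort > remote.csv
rclone lsf -Ftp --files-only -R --csv feature/ | sort > local.csv

# Local entries with no exact (timestamp,path) match on the remote.
comm -13 remote.csv local.csv | cut -d, -f2- > mismatched.txt

# Delete the stale local copies, then pull just those files again.
while IFS= read -r f; do rm -f -- "feature/$f"; done < mismatched.txt
rclone sync --files-from mismatched.txt "gphotos:feature" feature/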

Next steps

I realise I actually need to solve two different problems. The first is to fix all the files that are likely corrupt by re-downloading them. That's been my focus so far in this thread.

The second is to make sure future syncs don't cause this problem. The two will require different approaches.

Fixing existing corruptions

In the short term, I'm happy to hack things together and mix-and-match the different ways to detect corruption (as per my rantings above). It's totally fine if I get some false positives, since the worst case is a re-download. As long as the number of false positives is relatively small, it won't eat up my API quota.

Preventing future corruptions

By using --gphotos-read-size and limiting the download to /by-month/$CURRENT_YEAR/$CURRENT_YEAR-$CURRENT_MONTH, I can keep things sane without exhausting quota, provided I run this only once or twice a day.
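
i.e. something along these lines (a sketch; the remote prefix and local path are placeholders):

# Sync only the current month, reading sizes so any partial file left by an
# interrupted earlier run gets detected and replaced.
YM="$(date +%Y)/$(date +%Y-%m)"
timeout -v --preserve-status -k 1m 3602s \
	rclone sync --gphotos-read-size --transfers=10 --fast-list \
	--gphotos-include-archived --log-level INFO \
	"gphotos:media/by-month/$YM" "/local/by-month/$YM"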

However this is not enough since I need to detect corruption even when:

  1. I sync albums and shared-albums, because I can't hardcode their names in my script;
  2. older photos that are not from $CURRENT_YEAR are added/edited/removed.

In these two cases, using --gphotos-read-size would definitely exhaust my API quota, so I need a long-term solution for them.

I can also run rclone lsf -Ftp --files-only -R --csv once a month (sacrificing one day of API quota per month) and detect corruption that way. I don't think this is sustainable though, because I expect that at some point my gphotos collection will get big enough that I won't even be able to complete rclone lsf without running out of quota.

I think this likely needs better programmatic support within rclone. I'll create separate posts for what I think could be feature additions to make this easier (and will link back to this thread when done).

Open questions

  1. Does rclone only set the modtime after it has successfully finished downloading? Can I rely on this?

I just found a bug report on this: https://github.com/rclone/rclone/issues/4504. It seems using --retries 1 along with --max-duration=1h might fix the problem I was having, so I'll give this a try. It might stop most of my corruption problems.

I've created these feature requests based on my experiences here:

Unfortunately this didn't help, as existing transfers seem to just hang whenever Google decides you're sending too many requests to its Photos API.

Here's the command I ran:
timeout -v --preserve-status -k 1m 3690s rclone sync --transfers=10 --max-duration=1h --retries 1 --fast-list --gphotos-include-archived --log-level INFO --log-file=/REDACTED.txt gphotos:album /REDACTED/album

After 1 hour it just hung as expected and made no progress:

2021/03/08 20:31:32 INFO  :
Transferred:       25.301G / 25.301 GBytes, 100%, 7.342 MBytes/s, ETA 0s
Errors:                13 (retrying may help)
Transferred:        48424 / 58440, 83%
Elapsed time:     1h0m0.0s
Transferring:
 * 2015-03-03 to 2015-03-…ng Musuem/DSC_4839.JPG: transferring
 * 2015-03-07 - Californi…onal Park/DSC_5524.JPG: transferring
 * 2015-03-30 - TPG - FTT…MG_20150330_193323.jpg: transferring
 * 2015-03-08 - San Franc…oratorium/DSC_6174.JPG: transferring
 * 2015-03-01 to 2015-03-… Sciences/DSC_4129.JPG: transferring
 * 2015-02-27 to 2015-02-…te Bridge/DSC_3222.JPG: transferring
 * 2015-02-23 to 2015-02-…ry Musuem/DSC_2855.JPG: transferring
 * 2015-02-21 to 2015-02-…l Studios/DSC_2240.JPG: transferring
 * 2015-03-03 to 2015-03-…ng Musuem/DSC_4840.JPG: transferring
 * 2015-03-07 - Californi…onal Park/DSC_5525.JPG: transferring

2021/03/08 20:32:32 INFO  :
Transferred:       25.301G / 25.301 GBytes, 100%, 7.219 MBytes/s, ETA 0s
Errors:                13 (retrying may help)
Transferred:        48424 / 58440, 83%
Elapsed time:     1h1m0.0s
Transferring:
 * 2015-03-03 to 2015-03-…ng Musuem/DSC_4839.JPG: transferring
 * 2015-03-07 - Californi…onal Park/DSC_5524.JPG: transferring
 * 2015-03-30 - TPG - FTT…MG_20150330_193323.jpg: transferring
 * 2015-03-08 - San Franc…oratorium/DSC_6174.JPG: transferring
 * 2015-03-01 to 2015-03-… Sciences/DSC_4129.JPG: transferring
 * 2015-02-27 to 2015-02-…te Bridge/DSC_3222.JPG: transferring
 * 2015-02-23 to 2015-02-…ry Musuem/DSC_2855.JPG: transferring
 * 2015-02-21 to 2015-02-…l Studios/DSC_2240.JPG: transferring
 * 2015-03-03 to 2015-03-…ng Musuem/DSC_4840.JPG: transferring
 * 2015-03-07 - Californi…onal Park/DSC_5525.JPG: transferring

timeout: sending signal TERM to command ‘rclone’

I think a combination of the requested flags --exit-immediately-on-max-duration and --delete-incomplete-files-on-exit would fix this problem.

Another option could be rclone automatically detecting a hang and/or repeated 404s and bailing out, with the same effect as those flags.

I'll continue experimenting by extending the timeout by 30 minutes to see if it'll complete after 1 hour 30 minutes. That should be more than enough time to finish downloading 10 pics or videos.

Just finished running this experiment. It helped me learn that rclone does delete partial downloads after some deadline is reached. See my update here for the details.

Question: Is there a way to reduce the context deadline?

I'd like to control it somehow so I can make it shorter; it seems to be ~25 minutes right now. I had a look at the flags but couldn't see anything that lets me control this. Ideally I'd set it so that partial downloads get deleted ~5 minutes after the internet or the API disappears.

Next

I'm going to re-run with the timeout set to 2 hours (1 hour extra after --max-duration) to see if rclone also eventually gives up on files it never even starts transferring. In my experiment, rclone never gave up on one file even ~30 minutes after the Google Photos API stopped responding.

2021/03/09 01:57:01 INFO  :
Transferred:       15.177G / 15.177 GBytes, 100%, 3.117 MBytes/s, ETA 0s
Errors:                10 (retrying may help)
Checks:             81283 / 81283, 100%
Transferred:          572 / 1436, 40%
Elapsed time:    1h27m0.0s
Transferring:
 * REDACTED 1…ID_20190829_041405.mp4: transferring

timeout: sending signal TERM to command ‘rclone’

NOTE: This file was never created on the local disk by rclone, so no deletion is necessary. But if rclone gave up on the transfer, it could exit gracefully and quickly without being forcefully killed by timeout.

Great writeup - thanks!

Yes you can.

The --max-duration sets an end time for the transfer so it should end at that time.

It should stop the transfers dead at the time limit, provided you haven't changed the default --cutoff-mode.

If it isn't then that is probably a bug!

The media-integrity check (check_mi from check-media-integrity) seems to have worked fairly well after a small modification to get ffmpeg working: Number of bad/processed files: 281 / 797788, size of processed files: 1373946.9 MB.

For others who might want to do this, here is what I ran:

/usr/local/bin/check_mi \
	--csv corrupted_files.txt \
	--recurse \
	--enable-media \
	--err-detect warning \
	--threads 8 \
	--timeout 1200 \
	/directory_to_check/ &> check_integrity.log.txt &

Stats from corrupted_files.txt on file extensions of suspected corrupt files:

      8 JPG
      1 gif
     22 jpg
    250 mp4

The overrepresentation of mp4 makes sense, as they're larger files and so more likely to be interrupted.

I can use this list to cross-check against rclone lsf --max-age as well as the other options described earlier in this thread.
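
For example, intersecting the two lists would look roughly like this (a sketch; check_mi's CSV column layout and the absolute-vs-relative path handling are assumptions I'd still need to verify):

# Paths flagged by check_mi (assuming the path is the first CSV column).
cut -d, -f1 corrupted_files.txt | sort > corrupt_paths.txt

# Files that also have a suspiciously recent local modtime.
rclone lsf --files-only -R --max-age 1M /directory_to_check/ | sort > recent.txt

# Flagged by both methods - the strongest candidates for re-download.
comm -12 corrupt_paths.txt recent.txt > confirmed.txt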

