"interrupted system call" errors when sync - SOLVED!

Hi everybody,
First of all, congrats to the community and the developers who make rclone a reality - it's really an amazing tool!

I'm posting this here in case someone else has the same issue... it's resolved.

What is the problem you are having with rclone?

I use rclone on an Armbian SBC, an OrangePi (Raspberry Pi-like board), to sync my Google Photos to my NAS storage, which is accessible over Samba (CIFS).
Rclone mostly works, but from time to time I get "interrupted system call" error messages...

rclone v1.53.1 - os/arch: linux/arm - go version: go1.15

Source of the problem

It seems that there is something wrong in the Go runtime when accessing a CIFS (Samba) share:
see this link: [I cannot include links, so search "golang issues 39237"]

Solution (workaround)

To resolve the problem, I followed the advice from this link: [I cannot include links, so search "restic prune-fails-on-cifs-repo-using-go-1-14-build"]
and set an environment variable before launching rclone:
export GODEBUG=asyncpreemptoff=1
Presumably this disables the problematic Go runtime feature (asynchronous preemption, which interrupts system calls such as directory reads against the CIFS share).
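For illustration, a minimal launch script could look something like this (the remote name and destination path are placeholders, not my real config):

#!/bin/sh
# Disable Go's asynchronous preemption so system calls against the CIFS
# mount are not interrupted (workaround for the Go issue mentioned above).
export GODEBUG=asyncpreemptoff=1

# Sync Google Photos to the NAS share mounted over CIFS (example paths).
rclone sync gphotos:media/all /mnt/nas/google-photos --progress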

Then rclone works without any errors!

This will probably be fixed in a future version of Go, but for now, it does the trick!

Regards,
Nicolas.


Thanks for the writeup!

Yes, this is a known problem for which the fix hasn't made it into the current go1.15.x release yet.

Thanks for reporting this - I'm fighting a recent data corruption issue and wondered if this could be a possible cause, or if it is just spamming log entries. I'm making an SMB-mounted share available to rclone for use as a VFS cache drive.

2020/11/02 00:59:35 ERROR : Movies/D<snip>.mkv: vfs cache: failed to open item: vfs cache item: check object failed: vfs cache item: open truncate failed: truncate to current size: stat /mnt/Cache/vfs/gd/Media/Movies/D<snip>.mkv: interrupted system call
2020/11/02 00:59:35 ERROR : Movies/D<snip>.mkv: Non-out-of-space error encountered during open
2020/11/02 00:59:43 INFO  : Movies/D<snip>/poster.jpg: vfs cache: queuing for upload in 5s
2020/11/02 00:59:50 INFO  : Movies/D<snip>/poster.jpg: Copied (new)
2020/11/02 00:59:50 INFO  : Movies/D<snip>/poster.jpg: vfs cache: upload succeeded try #1
2020/11/02 01:00:20 INFO  : vfs cache: cleaned: objects 670 (was 670) in use 7, to upload 0, uploading 0, total size 6.936G (was 6.936G)
2020/11/02 01:01:20 INFO  : vfs cache: cleaned: objects 685 (was 685) in use 8, to upload 0, uploading 0, total size 7.148G (was 7.148G)
2020/11/02 01:01:34 ERROR : Movies/D<snip>.mkv: vfs cache: failed to set modification time of cached file: chtimes /mnt/Cache/vfs/gd/Media/Movies/D<snip>.mkv: interrupted system call
2020/11/02 01:02:20 INFO  : vfs cache: cleaned: objects 705 (was 705) in use 7, to upload 0, uploading 0, total size 7.453G (was 7.453G)
2020/11/02 01:03:06 ERROR : Movies/D<snip>.mkv: vfs cache: failed to set modification time of cached file: chtimes /mnt/Cache/vfs/gd/Media/Movies/D<snip>.mkv: interrupted system call

This looks like the same issue.

Try with

export GODEBUG=asyncpreemptoff=1
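
If you start rclone from a shell or a wrapper script, the variable can also be set inline for just that command - for example, with the flags trimmed down for illustration:

GODEBUG=asyncpreemptoff=1 rclone mount gd:Media /opt/gd-media --cache-dir=/mnt/Cache --vfs-cache-mode full

If the mount runs as a systemd service, an Environment=GODEBUG=asyncpreemptoff=1 line in the unit achieves the same thing.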

Hi Nick,
I'm seeing something that I think might not just be log spamming. I think my issue was caused by the cache mount not being available over SMB and a conflict existing between two data sets: one at the root of the SMB-mounted /mnt/Cache, and one at some previous incarnation of /mnt/Cache. I think something got mangled in my autofs > rclone > Emby chain, which I was in the process of making more robust through the BindsTo/After systemd directives, along the lines of the sketch below.
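
As an illustration of what I'm trying to achieve, a wrapper script along these lines would also work - a rough sketch with placeholder paths and trimmed-down flags, not my actual setup:

#!/bin/sh
# Wait up to 60 seconds for the SMB cache share to be mounted (example path).
CACHE_MNT=/mnt/Cache
for i in $(seq 1 60); do
    mountpoint -q "$CACHE_MNT" && break
    sleep 1
done
mountpoint -q "$CACHE_MNT" || { echo "$CACHE_MNT is not mounted, refusing to start" >&2; exit 1; }

# Apply the GODEBUG workaround suggested above.
export GODEBUG=asyncpreemptoff=1

# Start the rclone mount only once the cache location is available.
exec /usr/bin/rclone mount gd:Media /opt/gd-media \
    --cache-dir "$CACHE_MNT" \
    --vfs-cache-mode full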

The cache was populating correctly and Emby was its usual trouble-free self until I verified a full shutdown/restart cycle. The /mnt/Cache mount failed initially because the vfs & vfsdata folders (but no data, obviously) existed on the mount root and therefore needed --allow-non-empty as a workaround, but I'm still seeing the error below.

Can you confirm that it should be possible to store the VFS cache on an SMB mount and have it persist between reboots, or is the expected use case that this cache starts clean on each boot?

2020/11/02 16:53:52 DEBUG : rclone: Version "v1.53.2" starting with parameters ["/usr/bin/rclone" "mount" "gd:Media" "/opt/gd-media" "--dir-cache-time" "1000h" "--config" "/root/.config/rclone/rclone.conf" "--log-file=/var/log/rclone.log" "--log-level=DEBUG" "--allow-other" "--allow-non-empty" "--drive-chunk-size=32M" "--cache-dir=/mnt/Cache" "--vfs-cache-mode" "full" "--vfs-cache-max-size" "200G" "--vfs-cache-max-age" "336h"]
2020/11/02 16:53:52 DEBUG : Creating backend with remote "gd:Media"
2020/11/02 16:53:52 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/11/02 16:53:53 DEBUG : vfs cache: root is "/mnt/Cache/vfs/gd/Media"
2020/11/02 16:53:53 DEBUG : vfs cache: metadata root is "/mnt/Cache/vfs/gd/Media"
2020/11/02 16:53:53 DEBUG : Creating backend with remote "/mnt/Cache/vfs/gd/Media"
2020/11/02 16:53:53 ERROR : Failed to create vfs cache - disabling: failed to load cache: failed to walk cache "/mnt/Cache/vfs/gd/Media": readdirent: no such file or directory
2020/11/02 16:53:53 DEBUG : Google drive root 'Media': Mounting on "/opt/gd-media"
2020/11/02 16:53:53 DEBUG : : Root:
2020/11/02 16:53:53 DEBUG : : >Root: node=/, err=<nil>

Any thoughts appreciated - thanks for the stellar work on rclone.

You want the cache to persist between reboots if possible to save redownloading stuff.

Provided that the SMB mount behaves properly, I see no reason why it shouldn't work as a destination for the cache - rclone doesn't demand anything unusual from the file system (like locking, for instance).

I think it was the underlying autofs mount not playing nicely, resulting in corruption of the cache. This was working well before in a VM, but I've had some teething issues migrating to a container-based solution.
If I find anything that looks like a bug in rclone, I'll let you know. Thank you.

Great! I always like to fix bugs :slight_smile: If you find a problem and can make a reproducer, even better :slight_smile:
