Input/Output Error

Hello, I’m sorry if this has been posted before but I can’t find anything related to it.
Currently I’m in an awkward situation with rclone. I use an rclone mount as storage for Sonarr, Radarr and Plex. At first this setup ran smoothly, as I expected, but now from time to time it just blocks me from writing into the mount. I tried copying a file manually and it throws this error:
cp: error writing the file : Input/output error
cp: failed to close the file : Input/output error.
I’ve checked the hard disk and everything is fine; I’ve even tried on a second server and it hit the same error. Is it because Google Drive is restricting me?

What mount command and config are you using?

I set it up according to this guide here:

Do I need to upload the service file too?

It’s easier if you can just share your actual rclone.conf and mount command.

I think I have some more clues: I checked the Google API Console and found that most of the drive.files.create API calls are failing. About the config file:
the rclone mount command is in a systemd unit file.

You might want to delete that pastebin of your conf file, that’s got way too much info in it.

Oof. Yeah. You want to scrub out your passwords/secrets and such before posting.

So you have --cache-tmp-wait-time configured but no --cache-tmp-upload-path to put the files in. I’d also tune the buffer size down to 0M, since otherwise you cache twice (the cache backend plus the read buffer); with the cache configured, the buffer isn’t needed.
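For reference, a cache-backed Drive chain in rclone.conf usually looks something like this. The remote names, chunk sizes, and crypt layer here are illustrative placeholders (and secrets are redacted), not your actual config:

```ini
[gdrive]
type = drive
client_id = <your client id>
client_secret = <redacted>
token = <redacted>

[gcache]
type = cache
remote = gdrive:
chunk_size = 10M
info_age = 168h
chunk_total_size = 10G

[gmedia]
type = crypt
remote = gcache:media
password = <redacted>
```

The key point is that the mount target (gmedia: here) sits on top of the cache remote, so reads and writes go through the cache rather than hitting Drive directly.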

Mine is pretty straight forward as I use systemd for my service:

[felix@gemini ~]$ cat /etc/systemd/system/rclone.service
[Unit]
Description=RClone Service

[Service]
ExecStart=/usr/bin/rclone mount gmedia: /gmedia \
   --allow-other \
   --dir-cache-time=160h \
   --cache-chunk-size=10M \
   --cache-info-age=168h \
   --cache-workers=5 \
   --cache-tmp-upload-path /data/rclone_upload \
   --cache-tmp-wait-time 60m \
   --buffer-size 0M \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO
ExecStop=/usr/bin/sudo /usr/bin/fusermount -uz /gmedia


You can use any spot you want to temporarily store files before they upload to your GD by adjusting --cache-tmp-upload-path.

So, is that why rclone has recently been rejecting my copy command? No tmp folder?

I would set --cache-tmp-upload-path and give it a try.
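Roughly, that means creating the directory first and then pointing the flag at it. The path below is just an example; pick any local disk with enough free space:

```shell
# Example temp-upload location -- substitute your own path.
mkdir -p /tmp/rclone_upload

# Then add the flag to the mount command in the service file:
#   --cache-tmp-upload-path /tmp/rclone_upload
# and restart so it takes effect:
#   sudo systemctl daemon-reload && sudo systemctl restart rclone
ls -ld /tmp/rclone_upload
```

Files land in that directory first and get uploaded to Drive after the --cache-tmp-wait-time delay.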

You can also post the rclone logs if you are getting an error in the logs.

I tried and it didn’t work. Plex recognises there’s a movie in that folder, but only locally; on Google Drive it just doesn’t exist. And again, the Google API Console is reporting that all the API calls are erroring:

Can you share the rclone log file? That should show what’s going on.

I’m running rclone with the --log-file flag, but when I check the log file I pointed it to, there isn’t anything in it?

You can run:

[felix@gemini ~]$ ps -ef | grep rclone
felix     3550     1  2 May11 ?        00:54:31 /usr/bin/rclone mount gmedia: /gmedia --allow-other --dir-cache-time=160h --cache-chunk-size=10M --cache-info-age=168h --cache-workers=5 --cache-tmp-upload-path /data/rclone_upload --cache-tmp-wait-time 60m --buffer-size 0M --syslog --umask 002 --rc --log-level INFO

You can use either --log-level DEBUG to turn it up to debug logging, or equivalently -vv.
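If the log file stays empty, a quick sanity check is to confirm the file is actually writable by the user running rclone before restarting with debug logging. The paths and remote name below are examples, not your exact setup:

```shell
# Example log path -- substitute whatever you passed to --log-file.
LOG=/tmp/rclone.log

# Make sure the rclone user can actually write there.
touch "$LOG" && echo "log path is writable"

# Then restart the mount with debug logging (either form works):
#   rclone mount gmedia: /gmedia ... --log-file "$LOG" --log-level DEBUG
#   rclone mount gmedia: /gmedia ... --log-file "$LOG" -vv
# and watch it live:
#   tail -f "$LOG"
```

Note that --syslog and --log-file are alternatives: if the service uses --syslog (as in the unit file above), the output goes to the journal instead of a file.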

You should see something in the log if you started it up and have access to the logfile like:

Mar 27 13:12:28 gemini rclone[17494]: rclone: Version "v1.40" starting with parameters ["/usr/bin/rclone" "mount" "gmedia:" "/gmedia" "--allow-other" "--dir-cache-time=1m" "--cache-chunk-size=10M" "--cache-info-age=168h" "--cache-workers=5" "--buffer-size=500M" "--attr-timeout=1s" "--syslog" "--umask" "002" "--rc-addr" "" "--rc" "--cache-tmp-upload-path" "/data/rclone_upload" "--cache-tmp-wait-time" "1440m" "--cache-writes" "--log-level" "DEBUG"]
Mar 27 13:12:28 gemini rclone[17494]: Serving remote control on
Mar 27 13:12:28 gemini rclone[17494]: Using config file from "/home/felix/.rclone.conf"

Just an update before I got your message, this time it seems to spit out correctly what really happened with the transfer:
Failed to copy: googleapi: Error 403: User rate limit exceeded., userRateLimitExceeded.
Is there any workaround for this problem?

It seems like you aren’t using the cache feature, and that would be the fix. The cache docs are worth reading through if you want the details.

Yes, I do use cache. Here’s my log file.
I just don’t know how things went fine at first and now everything just… stopped.

Did you use anything prior without the cache? From the logs, it looks like you got a ban; once the process started up, it threw out all those constant 403s.

You probably want to wait it out (they usually say the ban lasts 24 hours, but I’m not sure when it starts).

I believe that’s maybe because I copied it directly into the mount? Is that what got me banned?


My normal config is GD using an encrypted cache mount. I use the config/startup that I shared and I treat it like a local file system. Sonarr/Radarr/etc all copy directly into the mount as if it’s a regular file system.

I’ve been running with the cache for months now without any bans; avoiding them is why the cache was created. If you had anything mounted prior without the cache, that most likely generated the ban.