Lots of "Unable to determine if file is sample" after moving files from seedbox to Google Drive

What is the problem you are having with rclone?

After uploading files from my new seedbox to Google Drive, I can't import them into Radarr/Sonarr etc.; the import fails with the error "Unable to determine if file is sample"

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.2
  • go/linking: static
  • go/tags: none


I've recently set up a seedbox, but I'm getting lots of "Unable to determine if file is sample" errors when I try to import files from Google Drive.

My current workflow is as follows (I've tried lots of variants):

  1. Radarr/Sonarr on local machine (unRaid docker)
  2. nzbget/rutorrent on rapidseedbox
  3. rutorrent: (i) hardlink automove to new seedbox directory, (ii) rclone move job to Google Drive (iii) radarr tries to import file from new Google Drive location
  4. nzbget: (i) file saved in seedbox directory (ii) rclone move job to Google Drive (iii) radarr tries to import file from new Google Drive location

# Upload nzbget files

/usr/local/bin/rclone move /home/user/local tdrive_vfs: --config=/home/user/.config/rclone/rclone.conf --user-agent="external" -vv --order-by modtime,ascending --min-age 5m --fast-list --check-first --drive-chunk-size=256M --transfers=4 --checkers=8 --exclude _unpack/** --exclude rutorrent/** --drive-stop-on-upload-limit --delete-empty-src-dirs --bwlimit=100M

# Move rutorrent files

/usr/local/bin/rclone move /home/user/local/seedbox/rutorrent tdrive_vfs:seedbox/rutorrent --config=/home/user/.config/rclone/rclone.conf --user-agent="external" -vv --order-by modtime,ascending --min-age 20m --fast-list --check-first --drive-chunk-size=256M --transfers=4 --checkers=8 --drive-stop-on-upload-limit --bwlimit=100M


Locally, I have a mergerfs mount that sees the new files added to Google Drive. This is my rclone mount command. Is the umask 000 the problem? I don't really understand permissions.

rclone mount \
	--allow-other \
	--umask 000 \
	--dir-cache-time 5000h \
	--attr-timeout 5000h \
	--log-level INFO \
	--poll-interval 10s \
	--cache-dir=/mnt/user/mount_rclone/cache/tdrive_vfs \
	--drive-pacer-min-sleep 10ms \
	--drive-pacer-burst 1000 \
	--vfs-cache-mode full \
	--vfs-cache-max-size 10G \
	--vfs-cache-max-age 96h \
	--vfs-read-ahead 1G \
	tdrive_vfs: /mount_location &
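On the umask question above: umask bits are removed from the base creation mode (0666 for files, 0777 for directories), so a quick local demo, not rclone-specific, shows what each setting exposes:

```shell
# umask 000 leaves files at 0666 (readable/writable by every user);
# umask 002 drops write for "other" but leaves them world-readable.
# Either way, another user such as "nobody" can still read the files,
# so the umask alone is unlikely to be the blocker.
rm -f /tmp/demo_000 /tmp/demo_002
(umask 000 && touch /tmp/demo_000)
stat -c '%a' /tmp/demo_000    # 666
(umask 002 && touch /tmp/demo_002)
stat -c '%a' /tmp/demo_002    # 664
```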

I have to wait for a few jobs to finish before I reboot, so I'm trying to get likely candidates identified first!

Thanks in advance for any help

Looks like changing the umask to 002 solved the problem.

ok, that didn't do the trick.

Has anyone else encountered this problem when trying to import files from Google Drive that have been moved to GD by rclone from a different server/seedbox?

Sorry, I'm not sure what the error is.

Can you share a rclone log with the error and the accompanying Sonarr/Radarr log?

Thanks for helping.

Rclone is transferring the file, but Radarr can't import it. Here's a log from trying to import The Kissing Booth 2 and Ant-Man. For the paste below I tried umask 002 for the mount.

My flow is:

Seedbox download --> rclone sync/move to cloud --> local *arr apps watching the local mount for new files to import.

It was working when I first set up my seedbox last week, but then it stopped.

What user is rclone running as on the box doing the importing? What's the Radarr user?

As the radarr user, what does ls -al on the file look like?

root@Highlander:/mnt/user/mount_mergerfs/tdrive_vfs/seedbox/rutorrent/movies_uhd/The.Kissing.Booth.2.2020.HDR.2160p.WEBRip.x265-iNTENSO# ls -al
total 13874288
drwxrwxrwx 1 root root           0 Feb 20 11:45 .
drwxrwxrwx 1 root root           0 Feb 20 11:45 ..
-rw-rw-rw- 1 root root 14207270188 Jul 26  2020 The.Kissing.Booth.2.2020.HDR.2160p.WEBRip.x265-iNTENSO.mkv

I think the Radarr docker is using user nobody on my unRAID box. How do I check the rclone user on the seedbox?

Edit: Yep, unRAID is user nobody and group users for permissions. This is set in the PUID = 99 PGID = 100 variables in the docker container.
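One way to check which user rclone runs as on the seedbox (assuming a standard procps-style ps is available):

```shell
# List running rclone processes with their owning user and uid.
ps -eo user,uid,pid,cmd | grep '[r]clone'

# On the unRAID side, confirm what PUID=99 / PGID=100 map to
# (they should resolve to user nobody, group users):
id nobody
```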

It seems like a permissions issue, so on whatever box has the mount you'd need to make sure the radarr user can see the files the rclone user owns.

If you have Dockers and whatnot, I'm not sure how to fix that, as I don't use them at all.

I don't run anything on my setup as root; I use a regular user (the same one for everything). For an rclone mount, umask can in theory fix it, but you'd have to ls -al the files as each user and test reading as the radarr user with mediainfo or something similar.

If you can get that working as the radarr user, that should fix it; umask 000 in theory should do the trick.
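A minimal sketch of that read test, assuming the uid 99 ("nobody") from this thread and a hypothetical file path you'd substitute with one that fails to import:

```shell
# Hypothetical path; point it at a real file on the mount.
FILE='/mnt/user/mount_mergerfs/tdrive_vfs/seedbox/rutorrent/movies_uhd/example.mkv'

# Can uid 99 see the file's metadata at all?
sudo -u '#99' stat "$FILE"

# Can it actually read data, not just list it? Pull the first 1 MiB.
sudo -u '#99' head -c 1048576 "$FILE" > /dev/null && echo 'read OK'
```

If the stat succeeds but the read fails, the problem is the mount's permissions rather than the remote.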

I'm really confused, as I only have this problem with files the seedbox added to the drive. It's not just a Radarr problem: I can't access any of the files uploaded to GD by the seedbox rclone script via the mount.

Is there something wrong with my sync command:

/usr/local/bin/rclone sync /home/user/local/seedbox/rutorrent tdrive_vfs:seedbox/rutorrent --config=/home/user/.config/rclone/rclone.conf --user-agent="external" -vv --order-by modtime,ascending --min-age 10m --exclude _unpack/** --check-first --stats-file-name-length 80 --drive-chunk-size=256M --transfers=4 --checkers=8 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude .Recycle.Bin/** --exclude *.backup~* --exclude *.partial~* --exclude imported/** --drive-stop-on-upload-limit --bwlimit=100M --log-file=/home/user/rclone/rutorrent_log.txt

I've also tried using tdrive_boost_upload_vfs: as the seedbox upload remote, in case using the same tdrive_vfs: remote locally was the problem, but that didn't help either.

type = drive
scope = drive
service_account_file = /home/user/rclone/service_accounts/sa_spare_upload1.json
team_drive = TD1

type = crypt
remote = tdrive_boost_upload:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxxxx
password2 = xxxx

type = drive
scope = drive
service_account_file = /home/user/rclone/service_accounts/sa_tdrive.json
team_drive = TD1
server_side_across_configs = true

type = crypt
remote = tdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxw
password2 = xxxx

type = union
upstreams = /home/user/local tdrive_boost_upload_vfs:
action_policy = all
create_policy = ff
search_policy = ff

Once it's on the cloud, any permissions are pretty much irrelevant as none of that stuff transfers.

It's all about the box trying to read it.

If that box can read the file with a normal rclone copy command as a test, the mount would work fine unless you have an issue with the rclone.conf on the destination.

I'd test rclone copy on the remote and make sure it can read it.

If read works, the issue lies with mount and permissions between the users.
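A sketch of that copy test, using the remote name from this thread and a hypothetical file path (substitute one that fails to import):

```shell
# Copy one problem file from the remote to local scratch. If this works,
# rclone and the rclone.conf on this box can read the data fine, and the
# issue lies between the mount and the users.
DEST=/tmp/rclone_readtest
mkdir -p "$DEST"
rclone copy 'tdrive_vfs:seedbox/rutorrent/movies_uhd/example.mkv' "$DEST" -vv \
    && echo 'remote read OK'
```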

OK, I was barking up the wrong tree. I've been doing a lot of file renaming across pretty much my whole library, which must have triggered a lot of API calls. Everything has been fine since I changed my service account, and I can import as normal.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.