File too big for remaining disk space


What is the problem you are having with rclone?

I am trying to copy from a camera system to an S3 bucket.

I am getting an error "Failed to copy: preallocate: file too big for remaining disk space".

Each file is 1G in size. The drive rclone is running from has 3G of free space.

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.0

  • os/version: debian 11.8 (64 bit)
  • os/kernel: 4.19.152-alpine-unvr (aarch64)
  • os/type: linux
  • os/arch: arm64 (ARMv8 compatible)
  • go/version: go1.23.4
  • go/linking: static
  • go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads

Not quite; I am one minor version behind.

Which cloud storage system are you using? (eg Google Drive)

Amazon S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

days=$1            # comprises <year>/<month>/<first digit of day>
dryrun=""          # set to "--dry-run" for a dry run; set to empty string to actually run
# we now use --log-level rather than verbose
# verbose="-v"      # set to empty string to suppress most output of rclone, set to "-v" or "-vv" for verbose

echo "starting run $(date) with days=$days"

if [ -z "$days" ] ; then # if no days are specified
  echo "clear date"
  rclone --config "${PWD}/rclone.conf" $dryrun \
    copy /volume1/.srv/unifi-protect/video/ \
    nvr-backup:/nvr-backup-340752817366/ \
    --log-file /home/nvr-backup/rclone.log --log-level INFO
  exit 0
fi

for secondDigit in {0..9} ; do
  directory="/volume1/.srv/unifi-protect/video/${days}${secondDigit}"
  echo "**** Backing up $directory ****"
  if [ ! -d "$directory" ] ; then
    echo "Directory does not exist"
    continue
  fi
  destination="nvr-backup/nvr-backup-340752817366/${days}${secondDigit}"
  echo "**** Destination is $destination ****"
  /usr/bin/rclone --config "${PWD}/rclone.conf" $dryrun $verbose \
    copy "$directory" "$destination" \
    --log-file /home/nvr-backup/rclone.log --log-level INFO
  echo "**** Done ****"
done
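As a side note on the $dryrun/$verbose variables above: leaving them unquoted relies on word splitting, which happens to work for single flags but breaks if a value ever contains spaces. A minimal bash sketch of collecting optional flags in an array instead (the variable names mirror the script; nothing here actually invokes rclone):

```shell
#!/bin/bash
# Collect optional rclone flags in an array so empty values
# simply disappear instead of becoming an empty-string argument.
dryrun="--dry-run"   # set to "" to actually run
verbose="-vv"        # set to "" to suppress verbose output

flags=()
[ -n "$dryrun" ]  && flags+=("$dryrun")
[ -n "$verbose" ] && flags+=("$verbose")

# The array expands to exactly the flags that are set:
echo "rclone ${flags[*]} copy src dst"
```

With both variables empty, `"${flags[@]}"` expands to nothing at all, so the rclone command line stays clean.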

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[nvr-backup]
type = s3
provider = AWS
env_auth = false
region = ap-southeast-2
location_constraint = ap-southeast-2
acl = private
access_key_id = XXX
secret_access_key = XXX

A log from the command that you were trying to run with the -vv flag

Tail of the log:

2025/03/03 23:47:16 INFO  : E438830EAEC7_2_rotating_1732934956974.ubv.a674dc2c.partial: Removing failed copy
2025/03/03 23:47:16 ERROR : E438830EAEC7_2_rotating_1732961821789.ubv: Failed to copy: preallocate: file too big for remaining disk space
2025/03/03 23:47:16 INFO  : E438830EAEC7_2_rotating_1732961821789.ubv.f0b6d00a.partial: Removing failed copy
2025/03/03 23:47:16 ERROR : Attempt 3/3 failed with 338 errors and: preallocate: file too big for remaining disk space
2025/03/03 23:47:16 INFO  :
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Errors:               338 (retrying may help)
Elapsed time:        14.0s

2025/03/03 23:47:16 NOTICE: Failed to copy with 338 errors: last error was: preallocate: file too big for remaining disk space

Hi,

I think there is a typo in your script that would explain the message
"Failed to copy: preallocate: file too big for remaining disk space": without
a colon after the remote name, rclone treats the destination as a local
directory, so it tries to preallocate each 1G file on the local drive, which
has only 3G free.

The correct syntax for an S3 remote is
name_of_remote:name_of_bucket/folder

Perhaps
destination="nvr-backup/nvr-backup-340752817366/${days}${secondDigit}"
should be
destination="nvr-backup:nvr-backup-340752817366/${days}${secondDigit}"
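A rough sketch of the distinction (rclone's real parser also special-cases things like Windows drive letters, so treat this only as an approximation): a "remote:" prefix selects a configured remote, and anything without a colon is a local filesystem path.

```shell
# Rough approximation of how rclone classifies a destination:
# a "remote:" prefix means a configured remote, anything else
# is a local filesystem path.
classify() {
  case "$1" in
    *:*) echo "remote" ;;  # e.g. nvr-backup:bucket/path
    *)   echo "local"  ;;  # e.g. nvr-backup/bucket/path
  esac
}

classify "nvr-backup/nvr-backup-340752817366/2024"   # -> local
classify "nvr-backup:nvr-backup-340752817366/2024"   # -> remote
```

So with the colon missing, your 1G files were being copied onto the local disk, and the preallocate step ran out of the 3G of free space.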


As a side issue, perhaps
verbose="-v"
should be
verbose="-vv"

or use --log-level DEBUG


If that does not fix the issue, then please post a full debug log.