macOS - rclone sync - no space left on device

What is the problem you are having with rclone?

I am a novice when it comes to this so any help would be appreciated. Thanks in advance.
I'm trying to copy a 5 TB directory from a Quantum Xsan mount to an Amazon S3 File Gateway mount. Both are mounted on a Mac Pro running macOS 12.6.8 Monterey. For some reason rclone has filled up the local drive and says "no space left on device". When I checked the drive, it shows the Mac System Data storage is 900 GB in size.

  1. How can I clear the rclone temp/cached data from the System data storage?
  2. How can I limit the cache size so this does not happen again?

I've used a similar rsync command and haven't encountered this. I've transferred 20+ TB of data using rsync so far, but the speeds are very slow. I've read that rclone uses multithreading, so I decided to test it.

Run the command 'rclone version' and share the full output of the command.

rclone version

rclone v1.64.0

  • os/version: darwin 12.6.8 (64 bit)
  • os/kernel: 21.6.0 (x86_64)
  • os/type: darwin
  • os/arch: amd64
  • go/version: go1.21.1
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)


The command you were trying to run (eg rclone copy /tmp remote:tmp)

sudo rclone sync -MvPL --progress-terminal-title "/volumes/source path" "/volumes/destination path"

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

Paste config here

A log from the command that you were trying to run with the -vv flag

Paste log here

welcome to the forum,

please understand, as volunteers, we cannot see into your machine, so you need to provide more details.

maybe you got the destination wrong?
maybe some other app has filled up the system data storage?
where are those files located? it should be easy to find them.

as far as i know, there is no cache for local to local.
can you please provide more detail?

can you post a rclone debug log, some evidence to confirm?

Thanks for replying! Is there a default location where rclone stores logs from past runs? I don't have much space left on this machine to run the command again, and I want to be able to fall back to rsync if I can't resolve this rclone issue.

Sorry, it's not a local to local copy/sync. The source and destination are both mounted network shares: the source is a Quantum Xsan mount via a Fibre Channel connection, and the destination is an Amazon S3 File Gateway Windows share within our network.

After the first attempt stopped because of the "ran out of space" error, I freed up some space to test again. While rclone was copying, I verified that files were being copied from the source directory to the destination. During that time I could also see the System Data storage filling up, so I canceled the sync/copy. I assume that because the source and destination are both remote storage, rclone has to cache the files locally before copying. I hope this makes sense :slight_smile:


no. you have to use something like
--log-level=DEBUG --log-file=/path/to/rclone.log
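combined with your original command, that might look like the sketch below. the log path is just an example, so point --log-file at a volume with free space. the snippet only prints the command so you can check it before running it:

```shell
# sketch: the original sync plus debug logging (log path is an example)
LOGFILE="$HOME/rclone-debug.log"
CMD="sudo rclone sync -MvPL --log-level=DEBUG --log-file=$LOGFILE"
CMD="$CMD \"/volumes/source path\" \"/volumes/destination path\""
echo "$CMD"   # printed for review; run it yourself once it looks right
```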

yes, i understand. from my perspective, that is local to local, in that no cloud provider is involved.
the source is local path "/volumes/source path"
the dest is local path "/volumes/destination path"

maybe. if true, then these are the paths that rclone might use.

rclone config paths
Config file: /root/.config/rclone/rclone.conf
Cache dir:   /root/.cache/rclone
Temp dir:    /tmp
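those are the root defaults; on macOS running as your own user the cache dir is usually ~/Library/Caches/rclone instead (that path is an assumption, so confirm with rclone config paths on your machine). a quick way to see how much space those directories actually use:

```shell
# check the usual rclone working dirs (default locations, an assumption;
# 'rclone config paths' shows the real ones) and print their sizes
for d in "$HOME/Library/Caches/rclone" "$HOME/.cache/rclone" /tmp; do
  [ -d "$d" ] || continue          # skip dirs that do not exist
  du -sh "$d" 2>/dev/null || true  # size and path of each one
done
```

if one of them really is hundreds of GB, that would be the evidence we are looking for.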

can you post some evidence that it is rclone doing that?

if you post a rclone debug log that shows the issue, then maybe we can make some progress?

Thank you very much for your assistance. Thankfully the issue seemed to be resolved after performing a recovery/reinstall of the OS. Something else may have caused the System Data storage to fill up, or I may have fudged something up to cause it. Right now data is being copied without any issues, and the System Data storage is not growing while copying like it did before.

1 Like

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.