No space left when copying millions of files locally

Hi

I think my problem isn't really related to rclone, but I wondered if anyone else has had this problem?

I'm trying to copy millions of files (96 million) locally into a single folder on a 2 TB drive.
It isn't a single copy run but several runs, all with the same target folder.

After a few million files, I got a "no space left" error message even though there was plenty of space. I searched on the internet, found that it might be related to the number of inodes, and increased it to 240M.
But after 30 million files, I still get those error messages and the copy stops or slows down.
Anyone had a similar experience?

My VM has 16 cores, 128 GB of RAM, and a 2 TB disk.

df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 14M 462 14M 1% /dev
tmpfs 14M 717 14M 1% /run
/dev/sda1 3.7M 98K 3.6M 3% /
tmpfs 14M 1 14M 1% /dev/shm
tmpfs 14M 3 14M 1% /run/lock
tmpfs 14M 18 14M 1% /sys/fs/cgroup
/dev/sda15 0 0 0 - /boot/efi
/dev/sdb1 14M 12 14M 1% /mnt
/dev/sdc1 239M 30M 210M 13% /datadrive
tmpfs 14M 10 14M 1% /run/user/1000

df -h
Filesystem Size Used Avail Use% Mounted on
udev 56G 0 56G 0% /dev
tmpfs 12G 776K 12G 1% /run
/dev/sda1 29G 12G 18G 41% /
tmpfs 56G 0 56G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 56G 0 56G 0% /sys/fs/cgroup
/dev/sda15 105M 3.6M 101M 4% /boot/efi
/dev/sdb1 220G 61M 209G 1% /mnt
/dev/sdc1 2.0T 171G 1.7T 10% /datadrive
tmpfs 12G 0 12G 0% /run/user/1000

What's the rclone version?
What's the command you are running?
Can you run the command with -vv and share the output from the error?

Basically all the stuff in the question template :slight_smile:

yes sorry :slight_smile:
version : v1.49.1
command : ./rclone copy -P -vv prod:migration /datadrive/migration --ignore-checksum --size-only --transfers 128 --checkers 128

What's the backend for that?

What's the file system that's actually filling up? / ?

Can you link the full log to something as well as that's just a snippet?

That does look very much like there is no space on device being reported from the destination.

I wonder if the destination directory is full?

Which file system are you using?

How many files do you have immediately in the directory /datadrive/migration?

The backend is an Azure VM (Standard DS14 v2) with an attached 2 TB disk, running Ubuntu 18.04 with ext4 as the file system.

When I attached the drive to the VM, I increased the number of inodes with this command:
sudo mkfs -t ext4 -N 250000000 /dev/sdc1

Currently, there are 30 521 956 files in the folder.

My original need: I have 96 million small files in blob storage, and I want
to append them into one single file, or into multiple files of a fixed size.
My plan was to transfer them locally with rclone first.
I copy them all into one folder so I can make sure that every file was copied.
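For reference, that append step could later be done with standard tools. A sketch, assuming the copied files end up under ./migration and that 1 GiB output chunks are acceptable (both are assumptions, not from the thread):

```shell
# Sketch: stream every file's contents into `split`, which writes
# fixed-size 1 GiB output files named merged.aa, merged.ab, ...
# Note: split cuts at byte boundaries, so a small file's contents may
# straddle two output files.
find ./migration -type f -print0 | xargs -0 cat | split -b 1G - merged.
```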

I think it's more a limitation of the file system (number of files per folder and/or inodes),
but I'd like confirmation from someone who has already done this.

I'll generate a log and upload it asap

According to my reading you can have an unlimited number of files in an ext4 directory.

It might be worth doing sudo dumpe2fs -h /dev/sdc1 and posting that to see if there is anything we can see in there (once the disk appears to be full).

Hi
Sorry, I was (and still am) working on the problem :slight_smile:
In my tests the problem appears at around 30M files in a folder. I tried splitting the whole set into smaller sets and copying them into multiple folders, and that worked.

So what I'm doing now is running one copy for files starting with the letter A into folder A, B files into folder B, and so on...

So the limitation seems to come from the number of files in a single folder.
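The per-letter runs can be sketched as a small loop. A sketch only: it echoes the commands so they can be reviewed first (drop the `echo` to actually run them), and the letter list would need to be extended to cover the full alphabet, digits, and lowercase as appropriate:

```shell
#!/bin/sh
# Sketch: one rclone copy per first-letter bucket, so no single destination
# directory accumulates tens of millions of entries.
for letter in A B C; do   # extend to the full character set as needed
  echo rclone copy prod:migration "/datadrive/migration/$letter" \
    --include "${letter}*" --ignore-checksum --size-only
done
```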

That seems consistent with your errors.

I'm confused though, because I thought ext4 could have unlimited files per directory. Maybe "unlimited" really means "very large, but limited by something else on your machine"...

A bit of searching dug up this blog post - it looks like the problem is ext4's dir_index:

https://blog.merovius.de/2013/10/20/ext4-mysterious-no-space-left-on.html#do-you-have-dir_index-enabled
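If that diagnosis fits, the usual checks are to inspect the feature flags with dumpe2fs and then either disable dir_index or, on newer systems, enable large_dir. A sketch, demonstrated on a small throwaway image file so it runs without root; on the real system the commands would target the (unmounted!) /dev/sdc1 instead:

```shell
# Sketch on a loopback image file; substitute the real, unmounted device.
truncate -s 16M test.img
mkfs.ext4 -q -F test.img                  # dir_index is enabled by default
dumpe2fs -h test.img | grep -i 'features' # look for dir_index in the list
tune2fs -O ^dir_index test.img            # drop dir_index (unmounted fs only)
```

On kernels >= 4.13 with e2fsprogs >= 1.43, `tune2fs -O large_dir` is the gentler option: it raises the directory index limits instead of removing the index (and its lookup speed) entirely.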

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.