I am mounting my entire Amazon Drive using /usr/local/bin/rclone --config /root/.rclone.conf mount --max-read-ahead 1024k --transfers 20 --checkers 40 --read-only --allow-other --no-modtime remote:/ /mnt/amazon_ori &
Works flawlessly.
Now I wanted to add a layer above using aufs to be able to have a transparent cache for frequently used files. But when I try to mount this: mount -t aufs -o br=/mnt/cache/amazon=rw:/mnt/amazon_ori -o udba=reval none /mnt/amazon
aufs complains:
[ 279.750882] aufs au_xino_read:702:ls[697]: I/O Error, too large hi11613993917959617111
After reading up, it seems aufs has a limit on how large inode numbers can be.
So, (finally) my question: Why are the inode numbers so big and any chance we can get them smaller?
Here is an ls -ilsa of /mnt/amazon_ori:
root@odroid64:/mnt/amazon_ori# ls -ilsa /mnt/amazon_ori/
total 0
 6361105148754046731 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 Backups
 3424564663998667886 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 Books
10506996466445958951 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 Games
12334951539984183433 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 Movies
 7351975847773898335 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 Music
 5922203585319387058 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 TV
11613993917959617111 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 Transfer
10509979743363668140 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 enc
  738198852751780662 0 drwxr-xr-x 1 root root 0 Nov  7 15:12 iTunes
I recreated it. It actually works under crypt, but it doesn’t work on a regular non-crypt remote. Interestingly enough, the inode numbers are high on both. I doubt that is actually the problem, but it sure doesn’t like non-crypt.
-># rclone --config /home/robert/.rclone.conf mount --max-read-ahead 1024k --transfers 20 --checkers 40 --read-only --allow-other --no-modtime robacd:/cams /root/t &
root@HS: ~
-># mount -t aufs -o br=/data/Media1=rw:/root/t -o udba=reval none /root/u
root@HS: ~
-># ls /root/u
ls: reading directory /root/u: File too large
root@HS: ~
I’m using the default inode generator in the FUSE library, which generates what is more or less a random 64-bit number.
It seems from a bit of research that AUFS expects to build a table with all these inode numbers in it. If the max inode number is 2^64, then that table will never fit in memory, hence the error.
This seems like a bad/strange design decision from AUFS - surely it should be using a hashmap to store the inodes…
Anyway, rclone could implement its own sequential inode generator - it would have to keep a map of all files and generate sequential inode numbers for them, which would fix this problem completely at the expense of using more memory.
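To illustrate the idea (this is not rclone code - rclone is written in Go, and the class and names here are hypothetical): keep a map from path to inode number and hand out small sequential numbers the first time each path is seen, instead of the random 64-bit numbers the FUSE library generates by default.

```python
import itertools

class SequentialInodes:
    """Assign small, stable, sequential inode numbers to paths.

    Trades memory (one map entry per path ever seen) for inode
    numbers that stay far below aufs's limits.
    """

    def __init__(self):
        self._counter = itertools.count(start=1)  # 0 is reserved
        self._inodes = {}  # path -> inode number

    def inode(self, path):
        # First sighting of a path gets the next sequential number;
        # every later call returns the same number.
        if path not in self._inodes:
            self._inodes[path] = next(self._counter)
        return self._inodes[path]

gen = SequentialInodes()
print(gen.inode("/Movies"))  # 1
print(gen.inode("/Music"))   # 2
print(gen.inode("/Movies"))  # 1 - stable across repeated lookups
```

The map is the memory cost mentioned above: it has to live for the lifetime of the mount so that a path always resolves to the same inode number.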
If you would like to see this feature then make an issue on github.
Well, I found mhddfs now which seems to do the job.
There are also other union filesystems. I think your time is better spent on other tasks than this one.
Well, mhddfs is working for me just fine to get a layer above the rclone FUSE mount for quick access to frequently used files. I could not use unionfs since my kernel (3.14 on an Odroid C2) has no support for it.
I haven’t tested unionfs-fuse but it looks interesting.
I’ll give it a try, since I have an issue with mhddfs getting write access when mounting it via netatalk on my Macs.
But I guess that’s a very special problem.