It’s quite rare to have problems with inode exhaustion on XFS, mostly because XFS doesn’t have a fixed inode limit the way other filesystems do: instead it caps the space inodes may consume at a percentage of the whole filesystem, and in most distributions the default is 25%. That is a really huge number of inodes. But some tools and distributions lower the limit, e.g. to 5% or 10%, and there you can run into problems more often.
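
If you suspect you are hitting the limit, df -i shows current inode usage against whatever ceiling the filesystem reports (on XFS the reported total is derived from imaxpct; the numbers below are made up purely for illustration):

root@zombi:~# df -i /srv/backup/
Filesystem                 Inodes IUsed    IFree IUse% Mounted on
/dev/mapper/slow-backup  22282240 31337 22250903    1% /srv/backup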

You can check what your limit is by running xfs_info against the mount point and looking for the imaxpct value:

root@zombi:~# xfs_info /srv/backup/
meta-data=/dev/mapper/slow-backup isize=256    agcount=17, agsize=2621440 blks
         =                        sectsz=512   attr=2
data     =                        bsize=4096   blocks=44564480, imaxpct=25
         =                        sunit=0      swidth=0 blks
naming   =version 2               bsize=4096   ascii-ci=0
log      =internal                bsize=4096   blocks=20480, version=2
         =                        sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                    extsz=4096   blocks=0, rtextents=0

In this case I have 25%, and it can be changed dynamically with xfs_growfs -m XX, where XX is the new percentage of the volume capacity.
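
For example, raising the limit of the filesystem above to 50% (an arbitrary value picked for illustration) would look like this:

root@zombi:~# xfs_growfs -m 50 /srv/backup/

The command prints the filesystem geometry again, and a subsequent xfs_info should show imaxpct=50.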

It’s also possible to set imaxpct at creation time by passing the option -i maxpct=XX to mkfs.xfs.
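
For instance, a sketch of creating a filesystem with a 10% cap (the device here is the one from above and would be wiped by this command, so treat it purely as an illustration):

root@zombi:~# mkfs.xfs -i maxpct=10 /dev/mapper/slow-backup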

