[Lilug] A brief touch on XFS, Ext4, and Ext3

Chris Knadle Chris.Knadle at coredump.us
Sun Feb 1 10:06:23 PST 2009


On Saturday 31 January 2009, Josef 'Jeff' Sipek wrote:
> On Sat, Jan 31, 2009 at 12:43:46AM -0500, Chris Knadle wrote:
> > There's no earth-shattering conclusion -- the only thing I was
> > out to prove was that the graph that I had seen showing Ext4 was
> > twice as fast as everything else wasn't necessarily the case. 
> > There are a bunch of considerations when choosing a filesystem,
> > and for average-sized disks on desktops and servers "speed"
> > probably isn't the main issue -- depending on what your
> > definition of "speed" is, of course.  [Speed of ... fsck, read,
> > write, file creation, file deletion, check/repair, dump,
> > restore... etc]
>
> That's a worth-while goal. I'm going to reply with some rant spew.

;-)  I didn't consider it a rant (even if you did) -- thanks for the 
info!

> (4)  read [1] about benchmarking filesystems/storage & reporting
> the results properly

Sounds like I'll need to read quite a bit more than this one paper if 
I really want to do this better.

> (5)  I suspect that your bonnie++ runs are too small to give
> meaningful results for the system you're running things on; I can't
> of course know for sure because you didn't tell us what the test
> system was (aside from the note about it being SMP PIII). Was the
> disk equally old? How much RAM does it have? etc., etc. What
> mkfs/mount options did you use? Why did you choose them? Are you
> comparing apples-to-apples? (Did you mkfs/mount the different
> filesystems with equivalent options?) The partition you used, was
> it located on the inside or the outside tracks of the HDD? (Yes,
> this matters!)

Just to try to answer some of these questions:

The motherboard is an old, buggy pre-1999 Gateway server motherboard 
which was then rebranded and sold as a Tyan S18330, and it has some 
annoying power management issues -- it never properly powers off on 
its own.  There's 684 MB of RAM on it.

The disk is far newer than the motherboard.  The CDROM is connected 
to the primary IDE interface, while the disk is connected to the 
secondary IDE interface [yes, that matters too].  For all three 
filesystems I formatted and used exactly the same partition in the 
same location, and used exactly the same mount option, "relatime" 
[only because it was the default], which still updates inode access 
times.  I normally mount with 'noatime', but I left the default the 
first time and kept it throughout for consistency.  In all cases:

   Size       Start    End   filesystem                 mount point
   200 MiB        1     25   ext2                       /boot
   1 GiB         26    148   swap                       -
   74.4 GiB     149   9729   <filesystem under test>    /
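
For what it's worth, a quick way to double-check which options are 
actually in effect on the root filesystem (rather than trusting the 
defaults) is something like:

   # the kernel's own view of the root mount and its options
   grep ' / ' /proc/mounts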

Note that the filesystem is actually 74.4 GiB, not 78.8 GB as I had 
first reported; the discrepancy is because the front of the disk is 
labeled "80 GB", but in this case GB = 1x10^9 bytes, and I had 
mentally expected "80 GB" to mean 80 GiB, where GiB = 2^30 bytes.  
Ugh.  The "filesystem under test" spans most of the disk; however, 
about 2.3 GiB of it is taken up by the Ubuntu installation itself.  
A better test would run the OS on one disk and put a second, entire 
disk under test.
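
The arithmetic, roughly (nothing fancy, just bc):

   $ echo 'scale=1; 80 * 10^9 / 2^30' | bc
   74.5

so an "80 GB" disk is only about 74.5 GiB before any partitions are 
carved out of it.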

Unfortunately I don't actually know which mkfs options the Ubuntu 
text installer used when making the filesystems -- though I might be 
able to find those out.
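
If I do go looking, it'll probably be with something like the 
following (just a sketch -- the device name is a placeholder) rather 
than by digging through installer logs:

   # ext2/ext3: show the settings mkfs baked into the superblock
   dumpe2fs -h /dev/sda3
   # XFS: show the geometry of the (mounted) filesystem
   xfs_info /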

> (6)  To your credit, you did say that the testing was very basic.
>
> (7)  Generally, the performance of a filesystem (or really any
> complex system!) is workload dependent --- again, to your credit,
> you did say that. Your typical desktop/laptop workload won't really
> make any difference.

I agree that most people don't care and wouldn't see much difference; 
however, on my desktop under "normal use" I do notice some 
differences between filesystems.  With XFS compared to Ext3, CPU load 
is noticeably lower, deletion of large files or of many small files 
takes noticeably longer, and there seems to be noticeably less disk 
I/O contention.  [Please note that these are completely unscientific 
observations.]

> (8)  Your choice of benchmarks:
>
>      (a) mkfs time: unless you intend to run mkfs over and over,
> it's a really useless benchmark. Many years ago, when I was
> deciding which fs to use for my desktop's /home, I benchmarked mkfs
> time as well; it really didn't tell me anything I didn't already
> know (ext* take ages, XFS&others don't). How many times have I run
> mkfs on that partition after I finished benchmarking? Once, and
> that was 4-5 years ago.

I only included it because I could; i.e., I had a stopwatch and 
wasn't afraid to use it.  :-P  I agree that format time is 
meaningless.

>      (b) distro install time: this is kind of similar to the mkfs
> time benchmark. How frequently do you intend to reinstall? The only
> time I had to reinstall my desktop (after switching to Debian ages
> ago) was when I didn't upgrade Sid for 3 years. It would have been
> too painful, so I just reinstalled it from scratch - leaving my
> /home undisturbed.

I just found it interesting that the installation time did vary by a 
few minutes based solely on which filesystem I installed onto.  And 
like you, I haven't reinstalled my desktop in quite a while.

>      (c) bonnie++: yes, it's one of the more standard benchmarks,
> but do you really know what it does?

Vaguely, yeah.  By default it times, among other things, the 
creation/deletion of 0-byte files, which on first thought seems like 
another useless test.  Sadly, the
Ubuntu bonnie++ package doesn't come with a man page, even though the 
Debian version does.  :-/  What the hell.

> Do you know what the results mean?

Roughly...  I read the readme concerning the results.

> Are you sure that you ran it in a way that actually benchmarks
> the filesystem?

Absolutely not.
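
For a rerun I'd probably start with something closer to this 
(untested sketch -- the test directory is a placeholder, and -s is 
set to roughly 3x the 684 MB of RAM so the page cache can't soak up 
the whole run):

   # run as root; bonnie++ drops privileges to 'nobody' for the test
   bonnie++ -d /mnt/test -s 2048 -u nobody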

> Again, to your credit, you did mention  
> that you didn't try to find the "right" config.

That's about all I'm sure of.

> (10) mkfs and mount options: defaults are good, right? No! They
> might be perfect for one workload, but abysmal for others. What's
> worse, if you are trying to compare two filesystems, you really
> should give both of them equivalent mkfs/mount options - otherwise
> you're comparing apples to oranges.

Makes sense.  Another part of this is that there are two different 
kernels -- and even two different environments -- in use: one during 
the install, and another after the first reboot.

>      (c) ext3 mount: last I heard (about 2-3 months ago), ext3 does
> NOT use barriers by default. You can mount with "barrier=1" to
> enable them.
>
>          What's the big deal? Well, first read (10b), and then,
> consider this:
>
>          If you use default mount options and run benchmarks for
> ext3 and XFS, are you giving the two filesystems a fair chance to
> compete?

Wow -- nope.  Thanks for pointing this out -- I hadn't known it.  
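
Next time around I'll try mounting ext3 with barriers explicitly 
turned on so the comparison is fairer -- something like (the device 
name is a placeholder):

   # ext3 with write barriers enabled, per the advice above
   mount -t ext3 -o barrier=1 /dev/sda3 /mnt/test
   # or, for an already-mounted root filesystem:
   mount -o remount,barrier=1 /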

   -- Chris

-- 

Chris Knadle
Chris.Knadle at coredump.us


