[Lilug] query: handling exponentially increasing storage requirements on a rack
Chris Knadle
Chris.Knadle at coredump.us
Wed Jul 1 14:50:38 PDT 2015
On 07/01/2015 09:35 AM, Cholewa, John wrote:
> So I ordered a 36-drive, >250 TB nfs storage server for my lab. I would
> have preferred avoiding needing to deal with drives that I'd normally
> consider too new for use in this environment (they're 8 TB each), but
> I've been limited by our heavy requirements. The vendor is having
> difficulty in getting this large amount of space to behave as a single
> xfs mount point. I may have to accept hacks along the line of splitting
> up the array to multiple volumes.
I've been investigating hardware for a small home storage server, and the
8 TB drives seem worrisome to me: they're all either SMR (Shingled
Magnetic Recording) drives, which have some odd write-timing
characteristics, or helium-filled drives. Helium atoms are so small
that the gas slowly leaks out even through metal, so I have some
concerns about the long-term reliability of these drives. Because of
these concerns I've decided to stick with 6 TB drives of a more
conventional design.
> Some of the people on this list are at much larger labs, wherein pithily
> small storage amounts like a mere quarter petabyte are easily laughed
> off as chumpspace. While I can't change what I'm getting here this time
> around, I wouldn't mind hearing what kind of solutions are used to get
> large amounts of storage to look like single mounts for end-user
> researchers and the like. Do you cluster your storage among multiple
> computers and transparently make them look like a single server? If so,
> what are some good practice routes towards achieving this? I know that
> there are alternate means of doing storage over a network, so right now
> I'm definitely reaching the limits of my old-school way of thinking and
> wouldn't mind some pokes in the right direction. :)
This does sound like you've hit the limits of a single-box solution.
It probably points to needing a clustered filesystem such as GFS2,
GlusterFS, OCFS2, or GPFS to aggregate storage across several machines
while presenting it as a single mount.
[Or at least that's what I'd be researching if I were dealing with this
issue myself.]
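To make the idea concrete, here's a rough sketch of what the GlusterFS
route might look like. This is only an illustration, not a tested
recipe: the hostnames (server1..server3), devices, paths, and the
volume name "labvol" are all made up, and a real deployment would need
thought about replication, networking, and brick sizing.

```shell
# Hypothetical sketch: combining local bricks on three servers into one
# GlusterFS distributed volume that clients see as a single mount.
# All hostnames, devices, and paths below are placeholders.

# On each server, format a local disk and mount it as a brick:
mkfs.xfs /dev/sdb1
mkdir -p /data/brick1
mount /dev/sdb1 /data/brick1

# From one node, join the servers into a trusted pool and create
# a distributed volume spanning all three bricks:
gluster peer probe server2
gluster peer probe server3
gluster volume create labvol \
    server1:/data/brick1 server2:/data/brick1 server3:/data/brick1
gluster volume start labvol

# Clients then mount the combined capacity as one filesystem:
mount -t glusterfs server1:/labvol /mnt/labvol
```

With a plain distributed volume like this, files are spread across the
bricks but not replicated, so losing one server loses that server's
files; "replica" or dispersed (erasure-coded) volume types trade raw
capacity for redundancy.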
-- Chris
--
Chris Knadle
Chris.Knadle at coredump.us