[Lilug] query: handling exponentially increasing storage requirements on a rack

David Dickman david at softbear.net
Wed Jul 1 19:27:55 PDT 2015


Sounds like a recipe for disaster: multiple single points of failure, all
in one box. How about a second one for backup?

Found some interesting information on Backblaze
<https://www.backblaze.com/blog/storage-pod-4-5-tweaking-a-proven-design/>.
They appear to have a higher confidence level in these boxes than I do.
They also replace the boxes outright every three years, apparently. Their
in-house configuration is 45 drives split into 3 RAID6 volumes with JFS on
top. They are only using 4 TB drives right now, yielding 180 TB of raw
capacity per pod.
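
A rough sketch of what that kind of layout might look like with Linux md
RAID, purely as an illustration (the device names, mount point, and exact
drive grouping below are my assumptions, not details from the Backblaze
post):

  # Three 15-drive RAID6 sets; each set loses two drives to parity,
  # so (15 - 2) x 4 TB = 52 TB usable per set, roughly 156 TB usable
  # out of 180 TB raw across the pod.
  mdadm --create /dev/md0 --level=6 --raid-devices=15 /dev/sd[b-p]
  # ...and likewise for /dev/md1 and /dev/md2 with the next two
  # groups of fifteen drives, then a filesystem on each:
  mkfs.jfs /dev/md0
  mount /dev/md0 /srv/pod/vol0

Splitting into three smaller RAID6 sets also keeps any rebuild confined to
fifteen drives rather than all forty-five.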

The end of the post has some interesting discussion as well.

On Wed, Jul 1, 2015 at 5:50 PM, Chris Knadle <Chris.Knadle at coredump.us>
wrote:

> On 07/01/2015 09:35 AM, Cholewa, John wrote:
> > So I ordered a 36-drive, >250 TB NFS storage server for my lab. I would
> > have preferred to avoid drives that I'd normally consider too new for
> > use in this environment (they're 8 TB each), but I've been limited by
> > our heavy requirements.  The vendor is having difficulty getting this
> > large amount of space to behave as a single XFS mount point.  I may
> > have to accept hacks along the lines of splitting the array into
> > multiple volumes.
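
For what it's worth, one common way to present several smaller RAID
volumes as a single XFS mount is to concatenate them with LVM. A minimal
sketch, assuming the box exposes three arrays as /dev/md0 through /dev/md2
and the export lives at /export/data (all of these names are hypothetical):

  # Pool the underlying RAID volumes into one volume group, then
  # carve out a single logical volume and put XFS on it.
  pvcreate /dev/md0 /dev/md1 /dev/md2
  vgcreate labdata /dev/md0 /dev/md1 /dev/md2
  lvcreate -l 100%FREE -n export labdata
  mkfs.xfs /dev/labdata/export
  mount /dev/labdata/export /export/data

XFS itself is comfortable with a few hundred terabytes on a 64-bit kernel;
the catch is that the single filesystem now spans every array, so a failure
in any one of them takes the whole mount down.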
>
> I've been investigating hardware for a small home storage server and the
> 8 TB drives seem worrisome to me; they're all either SMR (Shingled
> Magnetic Recording) drives that have some odd write timing
> characteristics or helium-filled drives.  Helium is such a small
> molecule that it slowly leaks through metal, so I have some concerns
> about the long-term reliability of these drives.  Because of these
> concerns I've decided to stick with 6 TB drives that use a more
> conventional design.
>
> > Some of the people on this list are at much larger labs, wherein
> > pitifully small storage amounts like a mere quarter petabyte are easily
> > laughed off as chumpspace.  While I can't change what I'm getting here
> > this time around, I wouldn't mind hearing what kind of solutions are
> > used to get large amounts of storage to look like single mounts for
> > end-user researchers and the like.  Do you cluster your storage among
> > multiple computers and transparently make them look like a single
> > server?  If so, what are some good-practice routes toward achieving
> > this?  I know there are alternative means of doing storage over a
> > network, so right now I'm definitely reaching the limits of my
> > old-school way of thinking and wouldn't mind some pokes in the right
> > direction.  :)
>
> This does sound like you've hit the limits of a single-box solution.
> It probably points to needing some kind of clustered filesystem, such
> as GFS2, GlusterFS, OCFS2, or GPFS, to pool several machines together.
> [Or at least that's what I'd be researching if I were dealing with
> this issue myself.]
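
As one concrete illustration of that approach, GlusterFS aggregates a
directory (a "brick") from each server into a volume that clients mount
as a single filesystem. A minimal sketch, with the server names, brick
path, and volume name all made up for the example:

  # From any one node, after installing glusterfs-server everywhere:
  gluster peer probe storage2
  gluster peer probe storage3
  gluster volume create labvol \
      storage1:/bricks/brick1 storage2:/bricks/brick1 storage3:/bricks/brick1
  gluster volume start labvol

  # On each client (uses the GlusterFS FUSE client):
  mount -t glusterfs storage1:/labvol /mnt/labvol

That creates a plain distributed volume, so the bricks' capacities simply
add up but a lost brick means lost files; adding "replica 2" (or 3) to the
create command trades capacity for redundancy.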
>
>    -- Chris
>
> --
> Chris Knadle
> Chris.Knadle at coredump.us
>