[clug-talk] WD Green HDD's with ZFS
osgnuru at gmail.com
Tue Mar 27 08:12:23 PDT 2012
Thanks for the link. I was going to post the question of how to calculate
the memory requirements myself.
It reminds me of the old days of Novell NetWare, when you had to
calculate how much RAM you needed for the size of the drives you had; if
you did not have enough memory, it would crash during install and not tell
you why.
On Tue, Mar 27, 2012 at 9:04 AM, Andrew J. Kopciuch <akopciuch at bddf.ca> wrote:
> On March 27, 2012, Royce Souther wrote:
> > Okay, I am looking at XFS. Two drawbacks need to be worked around. XFS
> > can grow but cannot shrink; not a big deal as long as I leave one SATA
> > port unused for a future drive to swap out an old one. The other drawback is
> > that to run xfs_check (the XFS version of fsck) you need A SHIT LOAD OF RAM
> > or a very, very big swap file/partition. It looks like for every 2TB of
> > storage you need 20GB of swap and a day or two to run xfs_check.
> > I am going to try XFS. It still sounds like a good thing.
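For what it's worth, that 20GB-per-2TB rule works out like this (a quick
sketch; the 6 TB filesystem size is just an example I picked, and the real
requirement depends on what's on the filesystem, not just raw capacity):

```shell
# Rough swap sizing for xfs_check, using the 20 GB of swap per 2 TB of
# storage rule of thumb quoted above. Illustrative numbers only.
fs_tb=6                            # example: a 6 TB filesystem
swap_gb=$(( fs_tb * 20 / 2 ))      # 20 GB of swap per 2 TB of storage
echo "suggested swap for ${fs_tb} TB: ~${swap_gb} GB"
```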
> xfsprogs does require a lot of memory for some utilities, but they have
> been getting better and better with more recent versions.
> Their examples show 16TB with 50 million files requiring just over 2GB of
> RAM.
> It seems that xfs_repair -n would do the same as xfs_check, and you can
> calculate the memory needed for xfs_repair to run:
> Also, this older article (2008) has some useful information on some of the
> utilities, xfs_db & xfs_fsr in particular ... it prompted me to check some
> servers I manage again.
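Scaling from that 16TB / 50-million-file example, a back-of-the-envelope
estimate would look something like this (the bytes-per-inode figure below is
derived from that single data point, not from any XFS documentation, so
treat it as a ballpark only):

```shell
# Ballpark xfs_repair memory estimate, scaled from the quoted example
# (16 TB, 50 million files -> just over 2 GB of RAM). The ~40 bytes per
# inode is 2 GB / 50M inodes, a rough guide only.
inodes=50000000                    # file (inode) count on the filesystem
bytes_per_inode=40                 # ~2 GB / 50M inodes from the example
est_mb=$(( inodes * bytes_per_inode / 1024 / 1024 ))
echo "estimated xfs_repair memory: ~${est_mb} MB"
```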