Data disk archive parity
6/16/2023

I'm pretty sure I get the big picture with SnapRAID. I read your tutorial, and I also see there is now a PPA for SnapRAID 6.3. A couple of concerns I have right off the bat: I suspect my usage patterns on the data would match well with SnapRAID - I read the data many more times than I write it, and most of it moves only once after landing on the array. However, I have not been able to find anything on the web about best practices for pooling. The clients for this data must see it via NFS as one large volume. The data is currently spread over many directory structures, and those directory structures are an integral part of the client functions.

Let's say, for example, I was able to create a SnapRAID setup. What happens when I fill a disk with content for a specific category and need that category to span to another disk? Your tutorial has what appears to be a rather complicated set of workarounds, with a suggestion that appears to add another layer on top of the SnapRAID setup. Has that now been overcome with the SnapRAID pool feature? I had looked at your tutorial the last time I had this issue, while running Ubuntu 12.04 LTS. It was more complicated than I felt I could deal with in my spare time at that point. The addition of a PPA changes that a bit, but the management and NFS aspects still concern me somewhat.

In my opinion, ZFS is the best option at this point for large disk arrays on file servers. It has many advantages over Linux software RAID and hardware RAID systems. The maximum volume size is 256 ZB, which is more data than there are physical drives in the world (ZB > EB > PB > TB > GB).
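On the pooling question above: SnapRAID's `pool` directive tells `snapraid pool` to build a read-only symlink tree spanning all data disks, which is the feature the post asks about. A minimal configuration sketch follows; the mount points and disk names (`/mnt/disk1`, `d1`, etc.) are hypothetical examples, not paths from the original post:

```
# /etc/snapraid.conf -- hypothetical example layout
# Parity file on a dedicated disk at least as large as the biggest data disk
parity /mnt/parity1/snapraid.parity

# Content files (array state); keep copies on multiple disks
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content

# Data disks protected by the parity
data d1 /mnt/disk1
data d2 /mnt/disk2

# Directory where `snapraid pool` writes the combined symlink view
pool /mnt/pool
```

After a `snapraid sync`, running `snapraid pool` populates `/mnt/pool` with symlinks presenting every file on every data disk as one tree. One caveat relevant to the NFS requirement: the symlinks point at the per-disk mount points, so clients generally need those underlying paths reachable as well, which is why the tutorials mentioned add another layer (a union filesystem) on top.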