6 of 6 people found the following review helpful
on November 14, 2013
I have used the SPM-394 as an array controller only; supposedly it can also work as a port multiplier, though I do not know whether it can play both roles at the same time. It handles up to 5 drives at SATA2 (3Gb/s) speeds.
I have more than one of them, and use them in conjunction with the Norco SS-500 modules to build array boxes with multiples of 5 drive slots, which I run in RAID5 mode. In this mode, both read and write performance are good. I set one up in basic striping-only mode for testing, and it easily saturated the SATA2 link with five older server-class 7200RPM drives; but I do not feel comfortable without at least some redundancy, so once I was satisfied there were no link issues, I switched them to RAID5. The drives in question would only saturate the SATA2 link with all five carrying data, but the finished array comes very close even in RAID5 mode; with faster drives it should manage it.
This is one of the few 'inexpensive' array controllers I have used that handle drive errors with some grace. Most controllers I have seen will mark a drive as failed (or an array as lost) over even a link CRC error, and will quietly abort a rebuild for similar reasons. The SPM-394 handles glitchy connections reasonably well, either dealing with transient errors itself or passing them correctly to the host so the command can be retried. I have not had an SPM-394 abort a rebuild except in extreme cases (power loss during rebuild, drive removal during rebuild, &c), even with a less-than-perfect drive mounted or a marginal link.
In array mode, it requires nothing special of the host and appears as a single large drive. This means you do not need the software array features offered by many (most?) SATA hosts these days, and you do not have to do rebuilds whenever your system crashes (filesystem checks are another matter, but at least they go faster when they are not competing with the array controller's rebuild process for drive time). I do not see anything preventing you from layering a host controller's array features on top of these devices to build a multiple-layer array (an array built by striping a number of RAID5 arrays, for example), but I prefer to avoid software-based arrays (even those with 'hardware assists').
The LCD panel and buttons provide access to configuration, status, and related features. You never strictly need the special configuration utilities that come with them, but those utilities do offer some advantages over working directly (certain features are easier to set up, for example). Since my setup is simple (one RAID5 occupying most of the capacity of 5 drives), I just use the LCD and buttons to set them up and monitor things. More interesting configurations may be achievable with the setup utilities provided with the controllers.
A generous collection of LEDs is provided: link/activity (green) per drive, error (red) per drive, power (green), link/activity (green) to host, fault (red). These LEDs allow you to see at a glance when there is a problem with a drive or when things are running well. I love being able to see what is going on, but if you prefer something visually 'quieter', you might consider one of the SPM-393 variants instead.
It monitors chassis temperature as well (it has a remote temperature probe). Supposedly the underlying chip offers fan monitoring, but DATOptic's version does not include that particular feature. Maybe if enough customers request the option, they will consider it. It would be nice if fan monitoring were something you could control from the LCD and buttons.
It does not provide SMART information for the array. It does provide a basic PASS/FAIL indication but I have never seen it show FAIL. It would be nice to be able to get more information about the array through SMART commands, or to have a way to fetch the data so scripted hardware checks can include the arrays and (ideally) their component drives.
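For anyone who wants to fold that PASS/FAIL indication into scripted checks anyway, here is a minimal sketch. It assumes the standard `smartctl -H` report format from smartmontools; the sample text and the helper itself are my illustration, not anything DATOptic documents:

```python
import re

def parse_smart_health(smartctl_output: str) -> bool:
    """Return True if a `smartctl -H` report shows a passing health check.

    Looks for the standard ATA result line; raises if no result is found.
    """
    m = re.search(
        r"overall-health self-assessment test result:\s*(\w+)",
        smartctl_output,
    )
    if m is None:
        raise ValueError("no SMART health result found in output")
    return m.group(1).upper() == "PASSED"

# Example report text as produced by `smartctl -H` for an ATA device; with
# the SPM-394 in array mode, the target device would be the whole array.
sample = """=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
"""
print(parse_smart_health(sample))  # True -> the array reports healthy
```

A cron job could run this against each array and alert on anything that is not a clean PASS.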
One future worry I have with it is that it uses stripes of 512 bytes, or at least this is what I see on drives with 512 byte sectors. I have not found any way to change this stripe size. I suspect this will impact performance if these are used with 'advanced format' drives (recently, there has been a trend toward 4096 byte sectors with the drive emulating 512 byte behaviour, in particular on larger drives), but I do not have a bunch of these drives available for testing at this time. I may update the review to reflect my observations if I get a chance to try such a configuration.
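To make that 'advanced format' worry concrete: a drive with 4096-byte physical sectors must do a read-modify-write cycle whenever a write does not cover whole physical sectors. A quick sketch of the alignment arithmetic (my own illustration, not anything from the SPM-394 docs):

```python
PHYSICAL_SECTOR = 4096  # bytes, typical 'advanced format' drive
LOGICAL_SECTOR = 512    # bytes, what a 512e drive emulates

def rmw_needed(offset_bytes: int, length_bytes: int) -> bool:
    """True if a write at this offset/length is misaligned to the physical
    sector size, forcing the drive into a read-modify-write cycle."""
    return (offset_bytes % PHYSICAL_SECTOR != 0
            or length_bytes % PHYSICAL_SECTOR != 0)

# A 512-byte stripe puts consecutive logical sectors on different member
# drives, so even a 4 KiB host write becomes eight separate 512-byte
# writes, each one misaligned on an advanced-format member drive:
print(rmw_needed(0, 512))      # True  -> sub-sector write, RMW penalty
print(rmw_needed(0, 4096))     # False -> aligned full-sector write
print(rmw_needed(512, 4096))   # True  -> misaligned start, RMW penalty
```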
1 of 1 people found the following review helpful
on February 6, 2015
A couple of years ago, I built a new high-performance PC for myself which I still use today. I had envisioned putting Windows 8 and Linux on it in a dual-boot configuration, but for some strange reason, none of the Linux distributions could understand my motherboard's RAID 1 volume. After a lot of research, and after some of the best and brightest minds on Linux forums couldn't figure it out, I concluded that my motherboard's X79-chipset Intel Rapid Storage Technology Enterprise (IRSTe) RAID wasn't understood by any Linux OS. Either that, or something got screwed up somewhere on my Windows side of things.
I had considered using the IRST drivers instead of the IRSTe drivers in Windows; I think Linux understands those. But according to the docs, the IRST drivers don't send the TRIM command to SSDs, and I have SSDs in this machine too, so that idea was unappealing. For those of you who don't know: if the TRIM command is never sent to an SSD, it slows down over time, to less than 50% of its original speed.
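As an aside, on the Linux side you can at least verify whether a drive advertises TRIM/discard support at all. This best-effort sketch reads the standard Linux sysfs attribute for it; the helper itself is my own illustration:

```python
from pathlib import Path

def supports_trim(device: str, sysfs_root: str = "/sys/block") -> bool:
    """Best-effort TRIM/discard check on Linux: a block device that supports
    discard exposes a nonzero discard_max_bytes under its queue/ directory."""
    path = Path(sysfs_root) / device / "queue" / "discard_max_bytes"
    try:
        return int(path.read_text().strip()) > 0
    except (OSError, ValueError):
        return False

# Usage: supports_trim("sda") on a real Linux system; returns False for
# devices that are missing or that report a discard limit of zero.
```

Note this only tells you the drive and kernel agree discard is possible, not that the RAID driver in the middle actually forwards the command.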
So it seemed I was check-mated. There would be no way to dual-boot this thing with both Windows and Linux unless I either went with un-raided disks or decided to put up with an SSD drive that degraded over time. I figured there must be a better way.
I looked into other RAID solutions. It seems most people think motherboard (software) RAID is a lousy idea, for one reason or another, and many suggest going with hardware RAID instead. So I looked into hardware RAID cards. Almost all of them plug into a PCIe slot and require drivers. Most of the entry-level to mid-level hardware RAID cards cost upwards of $300-$500. Two years into my dilemma, the money didn't matter to me; I was willing to pay that. But I had just one question: how long does it take to boot? It turns out most hardware RAID cards take over a minute to boot! I guess that's fine for the server market, where the PCs stay up for months at a time without powering down. But for a desktop? Unacceptable!
The reason I have SSD drives is primarily to make booting happen in under 8 seconds. My previous Windows XP PC was terrible about booting. It took over 3 minutes to fully boot. And I just did not want to put up with that again. I wanted fast boots! I like booting in 8 seconds. Who doesn't?
So it seemed I was out of luck. Until I saw this DATOptic SPM394 device. First of all, it doesn't require any drivers, since it just plugs into the SATA port of your motherboard and looks like a hard drive. All OS's should, theoretically, understand it without any drivers at all. And secondly, it boots instantly. It adds absolutely no extra time to your boot. Once I realized that, I bought it. And I'm glad I did.
I now have it up and running on my desktop PC with 4 hard drives, each a 3TB HGST Deskstar NAS drive. I have one pair of drives in RAID 1 and the other pair in a second RAID 1 configuration, so I have two RAID 1 volumes: one for Windows, the other for Linux. Works perfectly!
Configuring this thing was really easy with just the LCD and external push buttons; it requires absolutely no instructions. Once I got used to navigating the LCD menu system, I quickly set up both RAID 1 volumes. It lets you do RAID 0, 1, 3, 5, and 10, plus "large" and "clone". I presume "large" means JBOD (Just a Bunch Of Disks), but I'm not sure. And I assume "clone" is a way of easily creating backups of entire drives, maybe similar to RAID 1 but meant to copy the whole drive so the clone can then be removed? I wasn't too sure.
On the Windows side (I used Windows 8.1), you can use the included CD-ROM, which has a GUI app for the device. The GUI lets you do everything the LCD menu system lets you do. Plus, you can give it an email address, and it will send you emails about any events you want to be alerted to. So if one of the disks in a RAID 1 volume goes down and is put into repair mode, it will email you. Or if one of the disks is overheating, it will email you. There's a log view you can look at, and you can also set up password protection.
There was one gotcha I ran into. The Intel X79 chipset of my motherboard does not understand port multipliers! Curse you, Intel! So even though I had two RAID 1 volumes, my motherboard only saw one of them. The other did not exist, according to my BIOS. And I don't think there's any way to get Linux or Windows to understand that I have two of them. They only see one if the BIOS itself only sees one. Uh-oh!
I solved that problem by switching to my Marvell chipset SATA ports instead of my Intel X79 SATA ports. The Marvell chipset understands port multipliers just fine. The Marvell BIOS immediately saw both RAID 1 volumes. No problem!
But for those of you who only have Intel chipsets on your motherboard, you might be in for some pain if you intend to set up more than one volume on the same device. For example, I configured two RAID 1 volumes, but I believe you can configure up to 5 volumes in a JBOD (Just a Bunch Of Disks) configuration.
If your motherboard only has an Intel chipset that doesn't support port multipliers, you can still use this device! But you can only set up a single volume with it. In my case with 4 drives, I would have done all 4 drives in a single RAID 10 volume instead. That is, if I didn't also have my Marvell chipset SATA controller on my motherboard, which does understand port multipliers.
On a side note... For those of you wanting to configure 5 drives in a RAID 5 configuration (4 drives data, 1 drive parity), please reconsider. All reports online suggest that this is a bad idea. The issue has nothing to do with the SPM394 controller; it has to do with how RAID 5 itself works. If one of your drives dies, you will replace it with a brand-new drive. When you do that, the controller will run all of the other 4 drives continuously for up to 24 hours (depending on how big your drives are), rebuilding that 5th drive. If at any point one of those drives is detected as having a bad block, or does not respond in time, what happens? I don't know what the SPM394 will do, but most RAID controllers will simply shut everything down and abort. In other words, you might lose all of your data. It all depends on how well the SPM394 controller deals with it. How likely is this? Very. I suggest sticking with RAID 1 and RAID 10 only when using the SPM394. That means 2 drives or 4 drives, but not 5. Too bad this device doesn't have 6 SATA ports, because then you could set it up with RAID 6, which would be fine. But it doesn't.
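The "how likely is this?" question has a standard back-of-envelope answer. Assuming the commonly quoted consumer-drive spec of one unrecoverable read error (URE) per 10^14 bits read (an assumption; check your drive's datasheet), and the 3TB drives used elsewhere in this review, the odds of a clean 5-drive RAID 5 rebuild are worse than a coin flip:

```python
import math

URE_RATE = 1e-14      # unrecoverable read errors per bit (typical consumer spec)
DRIVE_BYTES = 3e12    # 3 TB per member drive, as in this review
SURVIVING_DRIVES = 4  # drives that must be read in full to rebuild the 5th

bits_read = SURVIVING_DRIVES * DRIVE_BYTES * 8
# P(at least one URE) = 1 - (1 - rate)^bits, computed stably via log1p/expm1
p_failure = -math.expm1(bits_read * math.log1p(-URE_RATE))
print(f"{p_failure:.0%}")  # roughly a 62% chance of hitting a URE mid-rebuild
```

With only 2 surviving drives to read (the RAID 10 case for one broken mirror), the same arithmetic gives a much smaller risk, which is part of why RAID 1/10 rebuilds are considered safer.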
My Marvell SATA 3 (6 Gbps) ports seemed about as fast as my Intel X79 SATA 2 (3 Gbps) ports. And since my Intel X79 SATA 3 (6 Gbps) ports were both occupied by my two SSD drives, I didn't care all that much about any speed differences. But I did have a chance to run Crystal Disk Mark. Here are my results:
First of all, all 4 drives were new 3TB HGST Deskstar 7200 RPM SATA 3 drives. All benchmark figures below are in MB/s:
Benchmark #1: One drive by itself, no RAID, directly hooked up to X79 SATA 2 port (not using SPM394) :
- Sequential Read: 162.7, Write: 160.3
- 512K Read: 57.11, Write: 77.61
- 4K Read: 0.738, Write: 1.710
- 4KQD32 Read: 1.830, Write: 1.743
Benchmark #2: Using SPM394, two RAID 1 volumes each with 2 drives, hooked up to X79 SATA 2 port:
- Sequential Read: 136.1, Write: 129.9
- 512K Read: 51.43, Write: 67.56
- 4K Read: 0.751, Write: 1.506
- 4KQD32 Read: 0.739, Write: 1.503
Benchmark #3: Same configuration as #2, but hooked up to Marvell SATA 3 port:
- Sequential Read: 132.1, Write: 126.7
- 512K Read: 50.81, Write: 67.4
- 4K Read: 0.738, Write: 1.520
- 4KQD32 Read: 0.737, Write: 1.521
Benchmark #4: Using SPM394, 4 drives in RAID 10 configuration, hooked up to X79 SATA 2 port:
- Sequential Read: 253.9, Write: 241.5
- 512K Read: 57.07, Write: 109.2
- 4K Read: 0.625, Write: 0.398
- 4KQD32 Read: 0.631, Write: 0.399
Benchmark #5: Same as #4, but hooked up to Marvell SATA 3 port:
- Sequential Read: 239.7, Write: 158.3
- 512K Read: 56.58, Write: 104.6
- 4K Read: 0.615, Write: 0.393
- 4KQD32 Read: 0.631, Write: 0.394
Comments about the benchmarks:
You get a very large improvement in read and write speed by switching from RAID 1 to RAID 10 (compare benchmark #3 with #4). I think you would see a similar improvement if you used RAID 0, too.
If there's one minor deficiency I think they could improve upon, it is the read performance of their RAID 1. With a firmware redesign, it could be about the same as that of RAID 0 or RAID 10. Instead, the RAID 1 read performance (benchmarks #2 and #3) is about half the RAID 10 read performance (benchmarks #4 and #5), which indicates that the SPM394 does not read from both mirror drives in parallel to speed up RAID 1 reads. Write performance cannot be improved in RAID 1, though, because the same data needs to be written to both drives at the same time.
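Using the sequential read figures from the benchmarks above, the gap is easy to quantify:

```python
# Sequential read throughput (MB/s) from the benchmarks above
raid1_read = 136.1   # benchmark #2: RAID 1 on the X79 SATA 2 port
raid10_read = 253.9  # benchmark #4: RAID 10 on the same port

scaling = raid10_read / raid1_read
print(f"RAID 10 reads {scaling:.2f}x faster")  # prints "RAID 10 reads 1.87x faster"
# ...close to the 2x that a RAID 1 implementation which read from both
# mirror drives in parallel could approach
```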
But I'm not going to take a star off for that. It's something I think most RAID devices don't do. I didn't buy this device thinking it was high performance, either.
Even though RAID 1 read performance wasn't spectacular, I think it's adequate. Comparing benchmark #1 to #2, you can see the penalty for using the SPM394 at all: it slowed sequential reads from 162 MB/s to 136 MB/s. That's not terrible; it's about the same speed as you get from a single drive. And you can see the random 4K reads and writes are pretty much in the same ballpark for those two benchmarks. (Random, small reads and writes are what real-life use consists of most of the time, by the way.) In other words, you're not really going to notice much of a slowdown caused by the SPM394 itself compared with a single drive's performance. Though you could argue that its RAID 1 read performance could be double what it is right now, with the right firmware.
By the way, on a side note: One time by accident, I installed Windows 8 to my RAID 1 hard disks instead of my SSD drive. The boot times were about 40-50 seconds in this RAID 1, compared with 8 seconds by my (non-RAID) SSD. Of course I would expect an increase in boot time going from SSD to hard disk. But it wasn't terrible. Also, the "feel" of using Windows 8 on this RAID 1 wasn't bad. It felt similar to SSD. Pretty snappy. This surprised me. I thought this would have felt slower and less responsive. But it wasn't very noticeable.
This is a niche-market product. Most people will find that their motherboards can set up RAID configurations pretty easily, so they don't need to buy this product. Actually, most people don't even know that they should set up RAID disks. I only needed this product because I wanted to dual-boot both Windows and Linux, and my Linux side didn't understand my motherboard's software RAID for some reason I'm still not sure of. Rather than ripping out all of my hair trying to solve the problem on the Linux side (which I had already tried for a few months), I chose this as a reasonable solution. Others might choose to buy this product because they trust hardware RAID more than software RAID, and there are plenty of people who make convincing arguments for that on the web.
Overall, I'm very happy with this purchase. I can now use Linux and Windows just fine in dual-boot configuration on my desktop PC. I don't ever have to worry about data corruption due to having the wrong drivers or the wrong versions of the drivers, either. Because, there are no drivers to mess with! When I eventually have a hard drive that dies, it's nice to know that I'll get an email or see the little red LED on the front of the PC light up. It's certainly not as good as some of the $500 hardware RAID cards out there in terms of reliability (since there's no battery back-up) and in terms of speed performance, but it seems good enough for me. And when I eventually upgrade to a new PC, I feel confident that I can just take this controller with me and use my current RAID drives without any worry about the hard drives not being understood by the new system.
Its number one selling point is its ease of use. For that, I gave it 5 stars.
I'll update my review when/if I ever have any RAID events such as a disk failing or a block on a disk failing. It will be interesting to see how well the SPM394 deals with these events, and how easy or hard it is to recover. That's the real test.
Hope that helps!
on June 28, 2015
Although I've long made it a habit to keep all of my _data_ on a RAID array, for at least as long as PC motherboards have supported RAID, until recently I'd always avoided trying to do the same thing with the actual system drive, simply because Windows makes it so unnecessarily difficult to install to anything that isn't a bog-standard IDE or SATA drive...
But, after the SSD drive in my latest build decided to just die for no apparent reason (and with no warning whatsoever), and faced with the prospect of reinstalling everything from scratch, I couldn't help thinking there _had_ to be a better way to do this. Surely, someone made some kind of hardware-based RAID controller that Windows would understand and cooperate with -- or better yet, one that would be _invisible_ to the OS, which would see it as just another normal drive?
Searching for "hardware SATA RAID controller" brought me to this device here, and after seeing all the four- and five-star reviews, I decided to take a chance on it. I have to say, I'm quite impressed so far -- setup was a breeze; it took longer to physically install everything into the PC's case than it did to set up the array from the SPM394's front panel, and neither Windows nor the motherboard had any idea that the device plugged into SATA port #1 on the motherboard is anything other than a normal hard drive.
Honestly, the only thing I can find so far to criticize (aside from the lack of a printed manual or instruction sheet in the box, but that seems to be par for the course these days :( ) is that the LCD display and the unit's secondary functions (temperature monitor, voltage monitor, etc.) could be a bit more useful than they currently are. Until you actually press one of the front-panel buttons, the display just sits there at "DAT Optic Inc. SMP394"; it would be nice if there were a way to make the LCD continuously display some useful information, such as temperature and fan speed, or to have it cycle through the various measurements so you can see your system health at a glance. (It also doesn't seem to have any way to program alarms for those measurements, so I'm not entirely sure what conditions -- other than, presumably, a drive failure in the array -- will make the buzzer sound off.) Granted, all of those things are secondary to the SPM394's main purpose -- but since they're there, it'd be nice to get some more use out of them.
But, user-interface quirkiness aside, I'm quite pleased so far, and wouldn't hesitate to recommend it based on the very easy experience I had getting it set up, and _finally_ getting my OS and applications installed onto a RAID1 mirror without having to fight Windows' balky setup process to install the right drivers. :) So far, performance seems about on par with the single drive that it replaced. Even though it's "only" running at SATA-II speeds, instead of the SATA-III speed the motherboard and drives were/are capable of, I can't say I've noticed any major slowdown or sluggishness; given that this is just a general-purpose PC, rather than a high-end gaming rig, a database server, or an HD video-editing studio, I suspect 95% of what I do on this machine doesn't move enough data on and off the drive for SATA-II vs. -III to make all that much difference.