We reported on Intel's SSD 330 weeks ago, but today is the official announcement of the drive and its availability in the channel. Unlike previous 300 series drives, the 330 isn't based on Intel's own controller - it's the second SandForce SF-2281 drive in Intel's lineup.

Intel SSD Comparison

|                                      | Intel SSD 520                | Intel SSD 510 | Intel SSD 330     | Intel SSD 320                     |
| Capacity                             | 60 / 120 / 180 / 240 / 480GB | 120 / 250GB   | 60 / 120 / 180GB  | 40 / 80 / 120 / 160 / 300 / 600GB |
| Controller                           | SandForce SF-2281            | Marvell 9174  | SandForce SF-2281 | Intel X25-M G3                    |
| Interface                            | 6Gbps SATA                   | 6Gbps SATA    | 6Gbps SATA        | 3Gbps SATA                        |
| Random Read Performance (Up to)      | 50K IOPS                     | 20K IOPS      | 42K IOPS          | 39.5K IOPS                        |
| Random Write Performance (Up to)     | 60K IOPS                     | 8K IOPS       | 52K IOPS          | 23K IOPS                          |
| Sequential Read Performance (Up to)  | 550 MB/s                     | 500 MB/s      | 500 MB/s          | 270 MB/s                          |
| Sequential Write Performance (Up to) | 520 MB/s                     | 315 MB/s      | 450 MB/s          | 220 MB/s                          |
| Warranty                             | 5 years                      | 3 years       | 3 years           | 5 years                           |

The 330 differs from the SandForce based Intel SSD 520 in a couple of key areas. The 330 uses NAND rated for fewer p/e cycles (still Intel 25nm MLC), which reduces Intel's rated lifespan from the 520's 5 years of 20GB of writes per day to 3 years under the same workload. Intel's ratings have historically been conservative, so I'd expect the 330 to last much longer than 3 years under the vast majority of workloads, particularly based on previous calculations on the topic. With a 3 year usable lifespan rating, however, Intel dropped the limited warranty on the 330 to 3 years as well.
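
As a rough sanity check on that rating, consider the NAND itself. The back-of-the-envelope math below assumes a 3,000 p/e cycle rating for Intel's 25nm MLC and a write amplification of 1; Intel publishes neither figure for the 330, so treat both as illustrative assumptions.

```python
# Back-of-the-envelope endurance check for the 120GB Intel SSD 330.
# PE_CYCLES and WRITE_AMP are assumptions -- Intel doesn't publish
# either figure for this drive.
PE_CYCLES = 3000    # assumed p/e cycle rating for 25nm MLC NAND
WRITE_AMP = 1.0     # assumed; SandForce compression often pushes this below 1
CAPACITY_GB = 120

rated_host_writes_gb = 20 * 365 * 3   # Intel's rating: 20GB/day for 3 years
implied_endurance_gb = CAPACITY_GB * PE_CYCLES / WRITE_AMP

print(f"Rated host writes over 3 years: {rated_host_writes_gb / 1000:.1f} TB")
print(f"Implied NAND endurance:         {implied_endurance_gb / 1000:.0f} TB")
```

Even with those assumptions the NAND could absorb roughly 360TB of writes, more than an order of magnitude beyond the ~22TB the 3 year rating actually covers.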

All of this results in a significant reduction in price compared to the 520:

SSD Pricing Comparison

|                    | 60/64GB | 120/128GB | 180GB   |
| Crucial m4         | $79.99  | $154.99   |         |
| Intel SSD 520      | $109.99 | $184.99   | $289.99 |
| Intel SSD 330      | $89     | $149      | $234    |
| Kingston HyperX 3K |         | $169.99   |         |
| Samsung SSD 830    | $94.99  | $174.99   |         |
| OCZ Vertex 3       | $98.99  | $159.99   |         |
| OCZ Vertex 4       |         | $179.99   |         |

In fact, the Intel SSD 330 appears to be the cheapest SF-2281 drive on the market today. Interestingly, it's only available in three capacities (60, 120 & 180GB), while the Intel SSD 320 came in far more models, going all the way up to 600GB.

Performance ratings are down compared to the Intel SSD 520, although it's not clear whether these are artificial limits:

Intel SSD 520 vs. 330

|                                      | Intel SSD 520 120GB | Intel SSD 330 120GB |
| Capacity                             | 120GB               | 120GB               |
| Controller                           | SF-2281             | SF-2281             |
| Random Read Performance (Up to)      | 25K IOPS            | 22.5K IOPS          |
| Random Write Performance (Up to)     | 40K IOPS            | 33K IOPS            |
| Sequential Read Performance (Up to)  | 550 MB/s            | 500 MB/s            |
| Sequential Write Performance (Up to) | 500 MB/s            | 450 MB/s            |
| Warranty                             | 5 years             | 3 years             |
| Price                                | $184                | $149                |

Presumably Intel's SSD 330 uses a custom firmware solution similar to the SSD 520's, which should keep it safe from the SF-2281 related BSOD and other instability issues.

Intel SSD 330

| Capacity                             | 60GB              | 120GB             | 180GB             |
| Controller                           | SandForce SF-2281 | SandForce SF-2281 | SandForce SF-2281 |
| Interface                            | 6Gbps SATA        | 6Gbps SATA        | 6Gbps SATA        |
| Random Read Performance (Up to)      | 12K IOPS          | 22.5K IOPS        | 42K IOPS          |
| Random Write Performance (Up to)     | 20.5K IOPS        | 33K IOPS          | 52K IOPS          |
| Sequential Read Performance (Up to)  | 500 MB/s          | 500 MB/s          | 500 MB/s          |
| Sequential Write Performance (Up to) | 400 MB/s          | 450 MB/s          | 450 MB/s          |
| Warranty                             | 3 years           | 3 years           | 3 years           |
| MSRP                                 | $89               | $149              | $234              |

Comments

  • bji - Tuesday, April 17, 2012

    Sources, please. Nobody in this discussion has been able to find any reliable indication that there are BSODs on the 520s. Unless you have reliable sources, you're just adding more hearsay to the noise.
  • Holly - Tuesday, April 17, 2012

    I haven't had a bsod/freeze/restart since summer with my OCZ Vertex 3. Not sure about the firmware version atm, but half a year running without a glitch sounds rock stable to me (the machine runs 24/7).
  • mustardman29 - Wednesday, May 9, 2012

    My OCZ Vertex 3 still blue screens once in a while. I just updated to the latest firmware, v2.22, so we'll see what happens. v2.15 definitely didn't fix the problem for me, although it happened a lot less than it used to.

    I won't ever buy an OCZ SSD again. Not because I blame them for the problems, but because of the way they dealt with them. My next one will probably be one of these Intel 330s with the more reliable firmware.
  • bji - Monday, April 16, 2012

    AFAIK, Sandforce performance only noticeably suffers when the drive is full or very near full. Not to say that the issue isn't important, but the likelihood that it will affect you seems pretty small unless you keep your drives at 99% full, which nobody does.
  • pc_void - Monday, April 16, 2012

    " which nobody does"

    I take it that's a joke... at these sizes lots of people would be going over 100% if it were actually possible.

    Don't even try to imagine how many lazy people there are. It'll BLOW YOUR MIND.
  • nexox - Monday, April 16, 2012

    """unless you keep your drives at 99% full, which nobody does."""

    Since drives don't understand file systems, TRIM is the only way for a disk to figure out that it's not full. If you were to temporarily use 95% of your disk, then delete files to free up space, without TRIM, the drive won't know that it's safe to erase the flash blocks which contained the deleted files - the disk will stay 95% allocated.

    If, after deleting files, you write more, and they land on the previous 5% free space, the disk will be 100% allocated, and it will stay there. I don't believe that any operating systems preferentially over-write deleted data when writing new data, so if you don't use TRIM, the question is not whether you will fill the flash blocks, but when.

    In summary: TRIM is pretty much essential for SSD use in a desktop, though I don't think that the Sandforce post-TRIM performance loss will be at all noticeable, and so there's no problem there.

    The real issue is whether these have any power loss protection... which, at this price point, I suspect they do not.
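
nexox's no-TRIM scenario is straightforward to model. The sketch below is purely illustrative (the class and its methods are invented for the example, not any real controller's logic); it just shows that without TRIM, the set of LBAs the drive considers live only ever grows.

```python
# Toy model: a drive that never receives TRIM. File deletions happen
# purely at the filesystem level, so the drive's "live LBA" set never
# shrinks. Illustrative only.
class NoTrimDrive:
    def __init__(self, total_lbas):
        self.total_lbas = total_lbas
        self.live = set()        # LBAs the drive believes hold valid data

    def write(self, lba):
        self.live.add(lba)       # every host write marks its LBA live

    def fullness(self):
        return len(self.live) / self.total_lbas

drive = NoTrimDrive(total_lbas=1000)
for lba in range(950):           # fill the disk to 95%
    drive.write(lba)
# ...the user now deletes a pile of files, but without TRIM the drive
# is never told, so nothing changes on the device side...
for lba in range(950, 1000):     # new writes land on the formerly-free 5%
    drive.write(lba)
print(f"Drive-side fullness: {drive.fullness():.0%}")  # 100% -- and it stays there
```
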
  • bji - Monday, April 16, 2012

    Drives keep spare area. TRIM just allows the drive to use area that otherwise would be considered user space blocks as spare area, until written to. All that TRIM does is to maximize spare area.

    Sandforce controllers (and all other SSD controllers that I know of) reserve enough spare area to guarantee a minimum level of write performance.

    Therefore, TRIM is not essential to maintain good performance; that's what spare area is for. TRIM does help with retaining better-than-minimum performance though. Let's face it; the sandforce controller even when in its minimized write performance state is still 'fast enough' for most uses and the difference in write speed probably won't even be detectable to most people.

    The GP pointed out that many people keep their drives full. I doubt this is really true. Who keeps their drive in a state where every file you want to store requires that you remove a previously-written file? Only people rarely doing writes I would imagine (i.e. keeping a drive full of stolen movies and then only deleting a movie when room is needed to write a newer movie, and that probably only happens on the order of daily or weekly).

    If you're doing lots of 'work' using your drive (i.e. manipulating media, compiling software, etc), you're probably maintaining enough free space on your drive so as not to constantly be hassled with out of space issues while you do your work. In this case, you'll be keeping enough spare area to mitigate performance problems.

    If you're not doing lots of 'work' using your drive (i.e just plopping stolen media on it and then reading that media on occasion), then you won't be worried about sandforce write performance on full drives anyway.
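
For scale, the minimum spare area bji describes mostly comes from the gap between binary-sized raw NAND and decimal-sized user capacity. The figures below assume 128GiB of raw flash behind the 120GB model, which is typical for SF-2281 drives but not something Intel publishes:

```python
# Over-provisioning arithmetic for a 120GB SF-2281 drive. The 128GiB
# raw NAND figure is an assumption, not a published spec.
raw_bytes  = 128 * 2**30    # 128 GiB of physical flash
user_bytes = 120 * 10**9    # 120 GB exposed to the host

spare_bytes = raw_bytes - user_bytes
print(f"Spare area: {spare_bytes / 10**9:.1f} GB "
      f"({spare_bytes / raw_bytes:.1%} of raw NAND)")
```

That works out to about 17GB, roughly 13% of the flash, that the controller can always use for garbage collection, TRIM or no TRIM.
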
  • Romberry - Monday, April 16, 2012

    Your understanding of spare space and what it does differs from my own, as does your claim that spare space is all that is needed to maintain drive performance. ("...TRIM is not essential to maintain good performance; that's what spare area is for.")

    Drives (and/or operating systems) that do not incorporate TRIM suffer serious performance degradation over time. The TRIM function serves to reverse (or at least ameliorate) that degradation. Spare space, at least as I understand it, is for wear leveling. (As cells approach wear limits in the main space and begin to become unreliable, those addresses are mapped to cells in the reserved spare space.)

    I'm sure that my plain language attempt at a technical explanation is lacking or off in some way or another. Didn't mean it to be a rigorous exposition anyway. Just saying that my understanding of the subjects of TRIM and spare space seems to be directly at odds with yours.
  • kyuu - Monday, April 16, 2012

    All TRIM does is mark cells holding data that is no longer in use (because it was "deleted") so that those cells can be properly reset. Without TRIM, the data is still there and, when a write comes in, that data has to be overwritten instead of the write going to cells that are already ready for new data.

    Overwriting the data requires waiting for the cells to be "emptied", and *then* performing the write operation. That's much more time-consuming than just writing to cells that are already "empty", and that extra time is what's responsible for write speed degradation without TRIM or some form of garbage collection.

    Nowadays most SSDs have good enough garbage collection that, given enough idle time (which most drives in non-enterprise settings have plenty of), they can accomplish much the same thing without TRIM anyway. TRIM is just a nicety, really.
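
The latency gap kyuu describes is easy to put numbers on. The timings below are typical published figures for 25nm-class MLC NAND, not measurements of this drive, and the worst case ignores the additional cost of relocating any live pages in the block being erased:

```python
# Illustrative write latency with and without a pre-erased block.
# Timings are assumed, typical-of-class figures for 25nm MLC NAND.
PAGE_PROGRAM_US = 1300   # program one page (~1.3ms, assumed)
BLOCK_ERASE_US  = 3000   # erase one block (~3ms, assumed)

clean_write_us = PAGE_PROGRAM_US                   # write to a pre-erased page
dirty_write_us = BLOCK_ERASE_US + PAGE_PROGRAM_US  # must erase inline first

print(f"Pre-erased write: {clean_write_us} us")
print(f"Erase-then-write: {dirty_write_us} us "
      f"({dirty_write_us / clean_write_us:.1f}x slower)")
```
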
  • bji - Monday, April 16, 2012

    It all depends on the controller. I have some SSDs that never implemented TRIM and it is not a problem. Their garbage collection and block management algorithms are good enough to maintain good performance even without TRIM. Are they as fast as solutions that make better use of spare area and TRIMmed space? No. But they don't suffer from some kind of catastrophic write performance as I think some people are claiming is inevitable.

    When you write data the SSD wants to write to already empty cells, because then there is no erase cycle required before the write. If the entire user space of the drive was already filled, then the next write would likely, rather than overwriting the existing block of data, write to a spare block and then mark the block that used to hold the data at that LBA as spare. Then in the background that no-longer-used block would be erased, making it available for a future write.

    If the drive gets so far "ahead" of its ability to erase added-to-spare-area-but-not-yet-cleared blocks that it runs out of erased blocks, it would have to resort to erasing blocks before writing the next block, which would be a serious performance degradation.

    Presumably, it would take a long time of sustained writes to 'use up' the already-zeroed spare area and go into a mode where every write requires an erase. Once the sustained stream of writes stopped, the drive would have an opportunity to 'catch up' on the erases that it deferred.

    I suspect that real controllers actually balance erases against writes and 'slow down' some writes during periods of high activity in order to slow down the rate at which already-cleared blocks are used up.

    If you are using TRIM and have given blocks back to the SSD, then it has even more space to use to keep ahead of sustained writes.

    I suspect that there are various performance levels that an SSD would achieve based on how much already-erased spare area it has, how much 'virtual' spare area in the form of TRIMmed blocks that the O/S has said that it doesn't need, and what the current write load is.

    I think that in the worst case, intelligent controllers are very resilient to running out of zeroed blocks, even if the drive is entirely 'full' and the spare area is at its minimum.
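
bji's description maps onto a simple mental model: a pool of pre-erased blocks that host writes consume and that background garbage collection refills, with writes paying an inline erase only when the pool runs dry. The toy flash translation layer below is a minimal sketch of that idea (all names invented for the example; no real controller is this simple):

```python
# Toy FTL: writes consume pre-erased blocks; superseded blocks queue
# up for background erasure; a write only pays the erase cost inline
# (the degraded case) when no erased block is available.
from collections import deque

class ToyFTL:
    def __init__(self, user_blocks, spare_blocks):
        self.mapping = {}                 # LBA -> physical block
        self.erased = deque(range(user_blocks + spare_blocks))
        self.stale = deque()              # superseded blocks awaiting erase
        self.inline_erases = 0

    def write(self, lba):
        if not self.erased:               # out of erased blocks: slow path
            self.inline_erases += 1
            self.erased.append(self.stale.popleft())
        block = self.erased.popleft()
        old = self.mapping.get(lba)
        if old is not None:
            self.stale.append(old)        # the old copy becomes garbage
        self.mapping[lba] = block

    def background_gc(self, budget=1):
        # During idle time, erase up to `budget` stale blocks.
        for _ in range(min(budget, len(self.stale))):
            self.erased.append(self.stale.popleft())

ftl = ToyFTL(user_blocks=100, spare_blocks=10)
for lba in range(100):
    ftl.write(lba)                        # fill the user space
for i in range(50):
    ftl.write(i % 100)                    # sustained overwrites, no idle time
print(f"Writes that paid an inline erase: {ftl.inline_erases}")  # 40
```

Interleaving calls to background_gc() between the overwrites keeps the erased pool topped up and drops the inline-erase count to zero, which is the "catch up during idle time" behavior described above.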
