Conclusion

There are probably two big takeaways from these meetings, along with a third one that may be more inferred than outright stated.

The first is that USB 3.0 is making a splash whether Intel and AMD get on board early or not. It's just going to happen. Kingston's USB-enabled SSD uses USB 3.0, as does A-Data's; A-Data is actually moving its entire flash drive lineup over to USB 3.0, and Patriot had that remarkably fast flash drive using it. Thermaltake has already moved its BlacX line over, and Cooler Master even had a notebook cooler with a built-in USB 3.0 hub. Bottom line: USB 3.0 is here to stay.

The second is that after a briefly stagnant period dominated largely by the SF-1200/SF-1500, the SSD market is picking up again. That much is evident from Anand's time with the SF-2000 controller along with the proliferation of Marvell's SATA 6Gbps controller. Every SSD manufacturer I visited had a SATA 6Gbps drive on hand and was posting remarkable performance numbers with it.

Finally, the third and more speculative point: SATA 6Gbps may have a remarkably short shelf life compared to other standards. While the industry is moving to USB 3.0 with or without Intel and AMD, SATA 6Gbps is seeing slower uptake and isn't promising to pay the kinds of dividends USB 3.0 is. Where it gets really alarming is how close SSD manufacturers already are to saturating SATA 6Gbps, a standard that came into being largely for them. The Patriot rep I spoke to was of a similar opinion and suggested that products like OCZ's RevoDrive may actually wind up being the wave of the future, since PCI Express is capable of substantially more throughput than SATA 6Gbps. We may be seeing SSDs that need that kind of throughput sooner rather than later.
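To put that headroom gap in perspective, here's a rough back-of-the-envelope comparison (my own figures for illustration, not numbers quoted at the show): both SATA 6Gbps and PCIe 2.0 use 8b/10b encoding, so usable throughput is roughly the line rate divided by 10, and a four-lane PCIe card like the RevoDrive multiplies that per-lane figure.

```python
def effective_mb_per_s(line_rate_gbps, lanes=1):
    """Usable throughput in MB/s after 8b/10b encoding overhead (10 bits per byte)."""
    return line_rate_gbps * 1e9 / 10 * lanes / 1e6

# SATA 6Gbps: one 6Gbps link -> ~600 MB/s usable
sata3 = effective_mb_per_s(6)

# PCIe 2.0 x4 (a RevoDrive-class card): four 5Gbps lanes -> ~2000 MB/s usable
pcie2_x4 = effective_mb_per_s(5, lanes=4)

print(f"SATA 6Gbps:  {sata3:.0f} MB/s")
print(f"PCIe 2.0 x4: {pcie2_x4:.0f} MB/s")
```

With current SATA 6Gbps drives already posting sequential reads north of 400 MB/s, that ~600 MB/s ceiling doesn't leave much room to grow, which is exactly the concern above.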


  • flurazepam - Friday, January 7, 2011 - link

    Would be nice to see a windowed panel option (i.e. post purchase) for the existing Corsair 600T cases.
  • Corsair Tech Marketing - Friday, January 14, 2011 - link

    Those will be available soon...
  • Meaker10 - Friday, January 7, 2011 - link

    I think that SSDs should have a new style of interface based on PCI Express lanes.

    They should be able to plug straight into the motherboard with a single powered connector (possibly like mini-PCIe where it lies flat I suppose but have cable support for larger drives).

    Maybe have 8-16 lanes fed to a controller chip with configurable lanes between each port, say up to a maximum of 4 (for now).
  • Hrel - Friday, January 7, 2011 - link

    So... Why isn't anyone using fiber optics in computers yet? Intel has talked about it, and ISPs are using it. Everyone knows it's faster than anything out now, and in a computer the cables don't need to be very long.

    Get on it industry.
  • DanNeely - Friday, January 7, 2011 - link

    Optical networking is still too expensive for the consumer market; e.g. the cheapest gigabit PCIe fiber card on Newegg is $200. Intel has described research that should make optical connections on silicon significantly cheaper. Hopefully that will play out in the next few years, but it's not here yet.
  • Penti - Friday, January 7, 2011 - link

    Why do you need it, though? 100GbE over copper is in the making, and 10GbE over copper is already standard. Converting that to fiber in the switch or a media converter is no problem. You can of course terminate fiber directly into a computer if you like, but unless you're running it as a router it's pointless. You use fiber optics for high-speed links, uplinks/trunks in your networks, and long distances, and besides, it's already used in data centers with Fibre Channel.

    Does the end user need to terminate anything more than 1Gb Ethernet over copper, or multiple 1GbE, or 10GbE over copper (possibly bonded/trunked)? Nope; not even servers need that, unless maybe you need more than 4x10GbE in one box. And for access to the SAN there's a separate Fibre Channel card connected to a Fibre Channel switch over fiber optics, so there's no reason to replace the copper Ethernet connection in the computer/server. Unless you need kilometers of cable, that is, and then fiber optics has always been the choice.

    Since long distances are rarely needed, it doesn't make sense to have fiber optics everywhere. The OP actually nails it: the cables don't need to be long. If you can do 100GbE over 10 m of cable, that's enough, and 10GbE handles the same 100-meter distance as 1GbE. Do you really need fiber optics for the last few meters? No. And for home use I think you'd prefer being able to use a cheap Ethernet switch or router even if you have FTTH. It's not like it would go any faster.
  • softdrinkviking - Saturday, January 8, 2011 - link

    Intel is surely making progress on Light Peak, and it will eventually be cheap enough to make it into PCs. That's way, way in the future, though, and by the time they figure it out it may actually be necessary for something.
    Also, I think it's really interesting to see what comes of running fiber in PCs, because it's something new and you never know what advantages may turn up; like how TOSLINK cables have the pleasant side effect of relieving horrible ground hiss in the audio output of some PCs.
    I'm not saying we NEED fiber in our computers, but most gadget or tech hobbyists aren't really concerned with what we need. It's more of a "wow, look at that, that's cool" kind of thing.
  • Penti - Monday, January 10, 2011 - link

    Well, this kind of thing is driven by the professional side, and audio has gone back to copper with HDMI and DisplayPort anyway. S/PDIF over RCA isn't bad either, and you still get timing errors over TOSLINK too.

    We don't really use fiber because it's faster; we use it because it's more practical, for connecting cities and countries, that is. Intel has demonstrated that you could even use it inside computers, but they also know they don't need to.

    Light Peak might be practical, but that's just one use of fiber optics. It doesn't mean we will rip out our CAT6 cables and use it for Ethernet; it just means it will be used for some high-performance consumer devices. How it advances in the data centers is something else entirely, though; there will be other standards there.
  • semo - Friday, January 7, 2011 - link

    Any word on current memory fabrication? I remember Samsung was talking about 2xnm node RAM that was supposed to work at 1.35V.
  • BathroomFeeling - Friday, January 7, 2011 - link

    Any word as to what that super secret Coolermaster thing "worth waiting for" is?
