G.Skill Phoenix Blade (480GB) PCIe SSD Review
by Kristian Vättö on December 12, 2014 9:02 AM EST

G.Skill hasn't been a very visible SSD OEM lately. Like many DRAM module companies, G.Skill entered the market early, around 2009, when the market was still immature and profits were high, but lately the company has more or less been on hiatus. Even though G.Skill has had the SF-2281 based Phoenix III in its lineup for quite some time, it never really did anything to push the product, and a Google search yields zero reviews for that drive (at least from any major tech review outlet).
However, back at Computex this year G.Skill showcased a prototype of its next generation SSD: the Phoenix Blade. Unlike most SSDs on the market, the Phoenix Blade utilizes a PCIe 2.0 x8 interface, but unfortunately it is not the native PCIe drive many of you have been waiting for; it is driven by four SandForce SF-2282 controllers in a RAID 0 configuration. It makes sense for G.Skill to pursue the ultra high-end niche because the SATA SSD market is extremely crowded. It is close to impossible for smaller OEMs to compete profitably against giants like Samsung, SanDisk and Crucial/Micron, since ultimately the vertically integrated NAND manufacturers will always have a cost advantage.
G.Skill Phoenix Blade Specifications
Capacity | 480GB |
Form Factor | Half-Height, Half-Length (HHHL) |
Interface | PCI Express 2.0 x8 |
RAID Controller | SBC 208-2 |
NAND Controller | 4x SandForce SF-2282 |
NAND | Toshiba 64Gbit 19nm MLC |
Sequential Read | Up to 2000MB/s |
Sequential Write | Up to 2000MB/s |
4KB Random Read | Up to 90K IOPS |
4KB Random Write | Up to 245K IOPS |
Power Consumption | 8W (idle) / 18W (max) |
Encryption | AES-128 |
Endurance | 1536TB (~1.4TB per day) |
Warranty | Three years |
At the moment the Phoenix Blade is only available in a 480GB capacity, although G.Skill plans to add a 960GB model later. 480GB is a logical choice: for the target group, 240GB would be too small in many cases and wouldn't provide the same level of performance, whereas the cost of a 960GB model would push most shoppers away. Given that G.Skill hasn't been actively involved in SSDs recently, doing a 'soft' launch and observing the market's reaction is a safe strategy.
The interesting spec is the endurance, and it's not a typo. G.Skill is indeed rating the Phoenix Blade at 1,536TB, which translates to roughly 1.4TB of writes per day over the three-year warranty. I asked G.Skill about the rating method and I was told it's simply raw NAND capacity multiplied by the number of P/E cycles, which is then divided by the average write amplification. G.Skill assumes an average write amplification of 1x due to SandForce's real-time data compression, so 512GB*3,000/1 yields 1,536TB. As G.Skill's SSD venture is solely consumer focused, it has no reason to artificially limit the endurance to boost enterprise SSD sales like many vendors do, although I am concerned whether the Phoenix Blade has been fully validated for workloads that write over a terabyte of data per day.
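That method is simple enough to sanity-check with a few lines of arithmetic. Here's a minimal sketch in Python using the figures quoted above (the 3,000 P/E cycle rating and the 1x write amplification are G.Skill's assumptions, not measured values):

```python
# Back-of-the-envelope check of G.Skill's endurance rating. The P/E cycle
# count and the 1x average write amplification are G.Skill's stated
# assumptions, not measurements.
raw_nand_gb = 512            # raw NAND on board, in GB
pe_cycles = 3000             # rated P/E cycles for Toshiba 19nm MLC
write_amplification = 1.0    # SandForce compression assumed to average 1x

endurance_tb = raw_nand_gb * pe_cycles / write_amplification / 1000
writes_per_day_gb = endurance_tb * 1000 / (3 * 365)  # over the 3-year warranty

print(f"Rated endurance: {endurance_tb:.0f}TB")            # -> 1536TB
print(f"Writes per day:  ~{writes_per_day_gb:.0f}GB/day")  # -> ~1403GB (~1.4TB)
```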
Delving into the Phoenix Blade reveals a massive metal heatsink that covers nearly the whole PCB (or both PCBs, actually). There's plenty to cool: each SF-2282 can draw up to ~4W under load, plus at least another couple of watts for the RAID controller, which adds up to the 18W maximum power rating in G.Skill's data sheet.
Taking off the heatsinks reveals the main PCB as well as the daughterboard. Both are home to two SF-2282 controllers, with each controller connected to eight dual-die 16GiB NAND packages (i.e. 32*16GiB=512GiB of NAND in total).
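For reference, here's a quick sketch of how the package count adds up and how much of the raw NAND ends up reserved as spare area. The package counts come from the teardown; the byte-level conversion and the resulting overprovisioning percentage are my own arithmetic, not G.Skill figures:

```python
# Sketch of the NAND layout and the implied spare area.
controllers = 4
packages_per_controller = 8
gib_per_package = 16                    # dual-die, 8GiB (64Gbit) per die

raw_gib = controllers * packages_per_controller * gib_per_package
raw_bytes = raw_gib * 2**30             # 512GiB of raw NAND
user_bytes = 480 * 10**9                # advertised capacity (decimal GB)

spare_pct = (raw_bytes - user_bytes) / raw_bytes * 100
print(f"Raw NAND: {raw_gib}GiB ({raw_bytes / 10**9:.1f}GB)")   # 512GiB (549.8GB)
print(f"Spare area (OP, RAISE, etc.): ~{spare_pct:.1f}%")      # ~12.7%
```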
The RAID controller in the Phoenix Blade is a complete mystery. Googling the part number doesn't shed any light on the situation, and due to confidentiality agreements G.Skill is tight-lipped about any details regarding the controller. My best guess is that the controller features firmware from SBC Designs with the actual silicon coming from another vendor.
Update 12/13: It turns out that the controller has nothing to do with SBC Designs after all: it's from Comay, a brand used by CoreRise, a Chinese SSD manufacturer. In fact, the Phoenix Blade looks a lot like CoreRise's Comay BladeDrive G24, so I wouldn't be surprised if G.Skill was sourcing the drives from CoreRise and rebranding them (nearly all memory vendors do this -- very few do the manufacturing in-house). I'm still inclined to believe that the silicon is from a third party, as CoreRise's product lineup suggests that the company doesn't have the expertise needed for semiconductor design and development, but the firmware is likely unique to CoreRise.
The Phoenix Blade is bootable in any system. The drive loads its legacy drivers during POST, hence it can be selected as a boot device just like any other drive. Loading the drivers adds a few seconds to the boot time, but other than that the Phoenix Blade behaves like a normal SATA drive (albeit with much higher rated peak performance). TRIM and SCSI unmap are also supported.
While not easily visible in the photo due to residue from the thermal pads, the Phoenix Blade uses the new B2 stepping of the SF-2282 controller. The fundamental design of the controller has remained unchanged, but the new stepping introduces DevSleep support for improved power efficiency, although DevSleep brings no real added value in a desktop PCIe card.
Test System
CPU | Intel Core i5-2500K running at 3.3GHz (Turbo & EIST enabled) |
Motherboard | ASRock Z68 Pro3 |
Chipset | Intel Z68 |
Chipset Drivers | Intel 9.1.1.1015 + Intel RST 10.2 |
Memory | G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24) |
Video Card | Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective) |
Video Drivers | NVIDIA GeForce 332.21 WHQL |
Desktop Resolution | 1920 x 1080 |
OS | Windows 7 x64 |
Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit
Comments
Havor - Sunday, December 14, 2014
What really sucks is that Intel continues attaching the PCH to the host processor through a four-lane DMI 2.0 connection, even on X99. You only get 2GB/s of bi-directional throughput, so a 3-disk RAID 0 or 4-disk RAID 5 is all it takes to saturate the DMI connection between the chipset and the CPU, even though you've got 10x SATA3 connectors.
At the moment, M.2 and PCIe drives are the only options for a faster storage solution.
And for the desktop, only M.2 with native PCIe 3.x x4 will be able to deliver cost-effective solutions, once good SSD controllers are finally developed.
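[For context, here's a rough sketch of the DMI arithmetic in the comment above. The ~550MB/s per-drive sequential figure and the ~10% protocol overhead are typical assumptions, not measurements:]

```python
# DMI 2.0 is electrically a four-lane PCIe 2.0 link: 5GT/s per lane with
# 8b/10b encoding gives 500MB/s per lane, each direction.
dmi_lanes = 4
mb_per_lane = 500
dmi_raw = dmi_lanes * mb_per_lane    # 2000MB/s each direction
dmi_effective = dmi_raw * 0.9        # assumed ~10% protocol overhead

ssd_seq = 550                        # MB/s, typical SATA 6Gbps SSD sequential
for drives in range(1, 5):
    total = drives * ssd_seq
    print(f"{drives} drive(s) in RAID 0: {total}MB/s "
          f"= {total / dmi_effective:.0%} of effective DMI")
# -> 3 drives already consume ~92% of the link; a 4th pushes past it.
```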
alacard - Sunday, December 14, 2014
You're preaching to the choir on that one. 2GB per second (actually only 1800MB/s after overhead) divided between 10 SATA ports, 14 USB ports (6 of them 3.0), Gigabit LAN, and 8 PCI Express lanes is an absolute joke.

TheWrongChristian - Monday, December 15, 2014
What you're missing is that while an SSD at peak speed can saturate a SATA 3 link, and 3 such drives can saturate a 2GB/s DMI connection, even the best SSDs can rarely reach such speeds with normal workloads. Random (especially low queue depth 4K random) workloads tend to be limited to much lower speeds, and random IO is much more representative of typical workloads. Sequential workloads are usually bulk file copy operations, and how often do you do that?
So, given your 10x SATA 3 connectors, what workload do you possibly envisage that would require that combined bandwidth? And benchmark dick swinging doesn't count.
personne - Sunday, December 14, 2014
My tasks are varied but they often involve opening large data sets and importing them into an inverted index store, while at the same time running many process agents on the incoming data as well as visualizing it. This host is also used for virtualization. Programs loading faster is the least of my concerns.

AllanMoore - Saturday, December 13, 2014
Well, you could see the blistering speed of the 480GB compared to the 240GB version, see the table: http://picoolio.net/image/e4OE

zioAs - Saturday, December 13, 2014
I know RAID 0 (especially with 4 drives) would theoretically give high performance, but is it really worth the data risks? I do question laptop manufacturers and PC OEMs that actually build RAID 0 arrays with SSDs for customers; it's just not a good practice imo.

personne - Monday, December 15, 2014
RAM is much more volatile than flash or spinning storage, yet it has its place. SSDs are in a sense always a RAID array since many chips are used. And it's been posted that the failure rate of a good SSD is much lower than an HDD's, so multiple SSDs are still less likely to fail than a single HDD. And one should always have good backups regardless. So if the speed is worth it, it's not at all unreasonable.

Symbolik - Sunday, December 14, 2014
I have 3x Kingston HyperX 240GB in RAID 0 (I have 4 of them, but 3 maxes out my AMD RAID gains). The gain over 2 drives is significant, at around 1000 x 1100 r/w (ATTO disk bench). I have tried 4, but the gain was minimal; to get further gains with the 4th, I'd probably need to put in an actual RAID card. I know it's not Intel, but it is SandForce.

Dug - Friday, December 12, 2014
You say: "As a result the XP941 will remain as my recommendation for users that have compatible setups (PCIe M.2 and boot support for the XP941) because I'd say it's slightly better performance-wise, and at $200 less there is just no reason to choose the Phoenix Blade over the XP941, except for compatibility."

I'm curious, what are you using to determine that the XP941 has slightly better performance? It just seems to me most of the benchmarks favor the Phoenix Blade.
Kristian Vättö - Friday, December 12, 2014
It's the 2011 Heavy Workload in particular where the XP941 performs considerably better than the Phoenix Blade, whereas in the 2013 and 2011 Light suites the difference between the two is quite small. The XP941 also has better low QD random performance, which is typically important for desktop workloads.