dagamer34 - Wednesday, October 24, 2012 - link
Of course, the idea with algorithms like this is to break them so they get better, right? :D
spacebarbarian - Wednesday, October 24, 2012 - link
I don't really see how they would improve upon this; it's really just solving the inconvenience of keeping the system and frequently used apps on the SSD and data on the HDD. I'm sure Apple already has a good algorithm for deciding what gets to be on the SSD based on your usage, though I hope there is an option to put new apps directly on the SSD at install time (as I would want when installing games and such on my Windows PC). I hope Windows 8 packs similar functionality, because to the non-power user it's not as trivial as it may seem to me.
stevedemena - Thursday, October 25, 2012 - link
If the drive is presented as a single volume, I don't see how you could request that a new app be installed on the SSD, and why should you? Shouldn't the automatic tiering move it there if warranted (i.e., used a lot)?
lyeoh - Thursday, October 25, 2012 - link
What I wonder is what happens when one of the drives fails. Say the SSD fails. If it were a cache and they did things right, it wouldn't matter if the SSD failed (as long as the system realized the SSD had failed).
Whereas in Apple's case it's not a cache. From the description, it seems to me that you will lose data if either of the drives fails.
From what I see, SSD failure rates aren't low enough to say that 99% of the time it'll be the spinning disk that fails.
Zink - Friday, October 26, 2012 - link
It's the same as having an SSD boot drive and a data HDD and managing files yourself. More disks increase the likelihood of failure.
Site7000 - Monday, October 29, 2012 - link
It's a primary drive; there's no backup component to it. You back it up like you would any drive. To put it another way, the SSD is for speed, not safety.
Bownce - Tuesday, October 30, 2012 - link
Any spanned volume works the same way. Double the capacity; halve the reliability. Whereas mirroring halves the usable capacity while doubling the reliability.
Guspaz - Wednesday, October 24, 2012 - link
I would have expected a larger write buffer than 4GB... In fact, I would have expected writes to work much like reads, with all writes going entirely to the SSD and files being moved to the slower HDD later if they weren't frequently accessed.
Paulman - Wednesday, October 24, 2012 - link
Well, once it's been written to the write buffer, I assume the OS will decide whether to keep it there or move it to the rotational HD later.
And I don't see why greater than 4GB would be necessary. It truly is intended to just be a scratchpad/clipboard of sorts. As long as it can easily handle typical workloads, the user shouldn't see any improvement from a larger buffer.
Freakie - Wednesday, October 24, 2012 - link
I would think that media editors would benefit from a larger write cache. So would anyone transferring a folder larger than 4GB, which tends to happen not-too-infrequently in my experience xP I think a 12GB write cache would have been pretty decent and would cover a lot of file-move scenarios.
orthorim - Wednesday, October 24, 2012 - link
I am not sure why file moving is so important. Moving from where to where? Even with a Thunderbolt connector, you'd need an external SSD to see a difference; otherwise either the connector (e.g., USB 3) or the external storage would be the bottleneck.
The 4GB write cache is for one edge case, which is that programs like to write lots of little files: file locks, temporary small stuff, whatever. The most-recently-used algorithm would miss those because they are brand-new files with no usage data available. So they all go into that 4GB cache first. It's a great solution, IMO.
For all other cases, the SSD and HDD dynamically rearrange their contents. So if you're editing a movie, that movie will be on the SSD, for example. (I was about to bring up the case of a 300MB Photoshop file, but then realized that easily fits in the 4GB buffer too... whoops... 4GB is quite a bit; only movie editing will really exceed that.)
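A minimal sketch of the policy described in the comment above, with invented names and thresholds (an illustration of the general idea, not Apple's actual algorithm): brand-new small writes land in a fixed SSD buffer because they have no usage history yet, while everything else is promoted or demoted by access count.

```python
from collections import defaultdict

WRITE_BUFFER_BYTES = 4 * 1024**3   # the 4GB SSD landing zone
buffer_used = 0
access_counts = defaultdict(int)   # usage history, accumulated over time

def place_new_write(path, size):
    """New files have no usage data, so small ones go to the SSD buffer."""
    global buffer_used
    if buffer_used + size <= WRITE_BUFFER_BYTES:
        buffer_used += size
        return "ssd-buffer"
    return "hdd"                    # buffer full: spill straight to disk

def record_access(path):
    access_counts[path] += 1        # feeds later promote/demote decisions

def rebalance(files_on_hdd, files_on_ssd, threshold=10):
    """Periodically move hot files up to the SSD and cold files down."""
    promote = [f for f in files_on_hdd if access_counts[f] >= threshold]
    demote = [f for f in files_on_ssd if access_counts[f] < threshold]
    return promote, demote
```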
ijmmany - Thursday, November 1, 2012 - link
From what I saw at the launch, and from playing with one in the Apple Store, you can adjust the cache accordingly; I think up to a 12GB write cache.
tipoo - Wednesday, October 24, 2012 - link
The more they use for the write buffer, the less is available for files and programs, I guess. 4GB just for writes should be enough for most people in most cases, though; only when you transfer something larger than that would you take a hit to performance.
bsd228 - Thursday, October 25, 2012 - link
4GB is more than enough - you only need to store the last X seconds of writes and then flush them out. ZFS's ZIL partition for a home filer can be fine even at 2GB.
As for writing everything to the SSD and then migrating the slow stuff out to the HDD, that requires a lot more SSD to work. Veritas's file system (VxFS 5) includes Dynamic Storage Tiering (DST) that behaves this way, but its target mix of fast/slow storage is 30/70, not 1/8 or 1/24 like these Apple offerings. There is also considerable overhead in tracking and refreshing file locations with their product.
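For concreteness, the 1/8 and 1/24 ratios above come straight from the shipping capacities (assuming binary-GB figures):

```python
ssd_gb = 128
for hdd_gb in (1024, 3072):                    # 1TB and 3TB Fusion configs
    print(f"SSD:HDD = 1/{hdd_gb // ssd_gb}")   # -> 1/8 and 1/24
```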
LeftSide - Wednesday, October 24, 2012 - link
I would think that a block-based cache drive would offer more fine-grained performance. The concept is interesting; I can't wait to see some performance reviews.
Freakie - Wednesday, October 24, 2012 - link
It still could be block-based, could it not? I mean, SRT is block-based yet it caches whole files, not just a few of the blocks that the file takes up :P If a file is on blocks 2, 3, 4, 5, 7, and 11, then all of those blocks should be accessed an equal number of times, ensuring the entire file and all of its blocks are transferred.
orthorim - Wednesday, October 24, 2012 - link
Absolutely not. The opposite is the case. An algorithm that works on the OS level can make much better decisions on what to keep on the SSD and what to move to the HDD.
It could move all my media files onto the HDD, for example - I might watch movies, listen to music and look at my pictures, but the case that I'd edit these *and* the editing would incur a performance penalty is very small. It could keep all system files on the SSD. And so on. The OS has much more information about how your files might get used, so it can make much better decisions.
Zink - Friday, October 26, 2012 - link
It could even keep track of which applications benefit heavily from the SSD and which don't, to help make sure a Photoshop install that gets used a couple of times a month doesn't get pushed off the SSD to make room for big 10GB games that get played many times a week.
CharonPDX - Wednesday, October 24, 2012 - link
SSD caching, Intel's failed Turbo Memory, and the like have all failed because they tried to be too tricky or required too much manual effort.
This seems to hit the sweet spot: automatic immediate caching for small amounts (a la Turbo Memory), with automatic repositioning of larger amounts between drives based on usage.
No manual tracking, but MUCH more benefit than the existing solutions. In all honesty, this is what I thought both Turbo Memory and SSD caching *WERE* until I read more into them. This makes a lot more sense. Use the spinning drive as "volume" storage, as it should be, then, once you figure out which smaller amounts of data should be on the higher-speed drive, move them.
Make new writes of smaller amounts of data go to the SSD, then write to the spinning drive when the workload allows. No risk of losing data as with a "regular" write-back cache.
MarkLuvsCS - Wednesday, October 24, 2012 - link
"SSD caching, Intel's failed Turbo Memory, and the like have all failed, for the reason that they tried to be too tricky, or required too much manual effort."I don't understand this at all. Have you ever tried to use Intel's SSD caching debuted on the z68 platform? I'm guessing you probably are just spewing someone's thoughts on the platform without ever trying it.
I have used SSD caching on my z68 platform since I've started putting it together. After about 5-10 mins and a few reboots, SSD caching was up and running and incredibly noticeable. Aside from the brief setup, I've never spent another thought on the matter. It's the easiest solution out there. If the SSD dies randomly, guess what? nothing happens other than me replacing the SSD. My single drive stores everything and maintains cache for anything I load often.
MrSpadge - Wednesday, October 24, 2012 - link
Same here. Except that NVelo's solution is just as simple (I've personally used that one as well).
lowlymarine - Wednesday, October 24, 2012 - link
Don't you think it's a tad bit early to declare Smart Response to have failed, seeing as it's only been available on notebooks and most desktops since the IVB launch earlier this year? In fact, I've seen a lot of cheaper Ultrabooks using SRT.
Klimax - Thursday, October 25, 2012 - link
I don't think he is aware of that. It looks like he's talking about old technology (Core 2 era or older).
Paulman - Wednesday, October 24, 2012 - link
Wow, thanks for posting this, Anand! I guess you're right - it's not SSD caching per se (other than the 4GB write cache). With 128GB of SSD and the OS directing which files to move/exchange from the rotational HDD onto the SSD portion, this looks like it could be an elegant solution.
Given the option, would we regular desktop users want to move to an intelligently managed hybrid hard-drive solution? I run Windows Vista on a 128GB OCZ Vertex 3, with a 1TB drive as my other storage drive for applications and media.
However, all of my games were originally installed on my rotational HD by default. If I want StarCraft II, for example, to be loaded off my SSD, I have to copy over the main game directory and use a directory junction (Windows' version of a UNIX-style symbolic link to a directory) to point to the new SSD directory in order to "fool" Windows into thinking it's still on my rotational drive. And strangely enough, this process BREAKS when it's time to patch or update my game.
If I had a smart algorithm that decided which files were commonly accessed and pulled those files onto my SSD for me, that would be very effective, I feel.
Can Intel's "Smart Response Technology" already do this? Is Apple's Fusion Drive significantly different? Are there any significant downsides to these methods versus manually maintaining an SSD volume and a rotational HD volume, given that you throw at least a 64GB or 128GB SSD into the setup?
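For reference, the junction workaround described above is a single command; here is a hedged sketch via Python's subprocess (both paths are hypothetical - adjust to your own install):

```python
import subprocess

# mklink is a cmd.exe builtin, so it has to be invoked through cmd.
# /J creates an NTFS directory junction: link path first, then target.
target = r"E:\SSD Games\StarCraft II"              # new home on the SSD
link = r"C:\Program Files (x86)\StarCraft II"      # original install path
subprocess.run(["cmd", "/c", "mklink", "/J", link, target], check=True)
```

An updater that resolves the junction back to the real path is a plausible reason the patching process breaks, as described above.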
Impulses - Wednesday, October 24, 2012 - link
Meh, I don't see what benefit it has for enthusiasts... Junction points aren't much of a hassle, I like being in control, and SSD prices are plummeting anyway. I was gonna buy a second 128GB drive soon, to store more games on flash, but I might just jump on a 256GB given the deals I've seen (and with BF looming).
For mainstream users it's definitely an optimal solution tho.
dcollins - Wednesday, October 24, 2012 - link
I am an enthusiast (I work in IT and have built computers for a decade), and I'm with Paulman. Sure, I can keep it mostly organized, but I have better things to do, and I'm willing to sacrifice a little bit of performance for convenience. I'd rather be getting some work done than messing around with my files.
Freakie - Wednesday, October 24, 2012 - link
Why don't you just reinstall SC2 on your SSD? o_O Seems kind of silly to have to go through that process again and again when you could do just one thing and have it work.
And Intel's SRT is the exact same thing, except it leaves a copy on the drive for parity by default (I believe you can turn that off). Where Intel's SRT differs is its cache size. Instead of 4GB, you have a maximum of 64GB, which is used not only to cache writes but to cache frequently used files as well. So instead of having 95% storage and 5% cache, the cache is flexible and configurable up to 64GB. If you have an SSD bigger than 64GB, then you just use the rest for storage as you see fit.
So in your case you could, say, use 10GB for Intel's SRT write caching and your remaining usable space for installing Windows/SC2/web browser/etc. Or if you want something automated, make your cache the full 64GB and it will work just like Apple's Fusion, and the remaining 60GB you can use to install Windows and any other programs/files you don't want getting bumped off the cache due to inactivity.
Bownce - Tuesday, October 30, 2012 - link
Because his DRM hacks would break?
JTravers - Wednesday, October 24, 2012 - link
What would be great is if you could use Fusion Drive on an SSD/HD combo you installed yourself. I have an SSD as my main drive and an HDD as the secondary in the optibay of my MacBook Pro. I would love to have Mountain Lion combine them into one logical unit and manage it for me.
I'm hoping there is an easy way to enable the feature through terminal commands, or that someone makes a third-party app to do it (see: Trim Enabler).
solipsism - Wednesday, October 24, 2012 - link
Yeah, I'm hoping someone finds out what OS X software is handling this and ports it to other Macs, as I also have such a setup.
I have made my entire HDD my user folder using the advanced options in System Preferences so my SSD can be my boot and app drive. I don't even use half of my 80GB X25.
Andhaka - Thursday, November 1, 2012 - link
http://jollyjinx.tumblr.com/post/34638496292/fusio...
Already done. The technology to create a Fusion Drive is built inside OS X itself. ;)
Cheers
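The jollyjinx experiment linked above boils down to two diskutil calls. A rough sketch, wrapped in Python for scripting (the device identifiers are assumptions - check `diskutil list` first, and note this destroys existing data on both disks):

```python
import subprocess

# 1. Fuse the SSD and HDD into one Core Storage logical volume group.
subprocess.run(["diskutil", "coreStorage", "create", "Fusion",
                "disk0", "disk1"], check=True)

# 2. Carve a single JHFS+ volume out of the group, using the logical
#    volume group UUID printed by the first command.
subprocess.run(["diskutil", "coreStorage", "createVolume",
                "LVG-UUID-HERE", "jhfs+", "Macintosh HD", "100%"], check=True)
```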
MadMan007 - Wednesday, October 24, 2012 - link
That would be terrible; then you would actually be able to have control over your hardware instead of being pushed into buying a whole new Mac!!
Pessimism - Wednesday, October 24, 2012 - link
+5 to above. You will never see that kind of freedom or flexibility in an Apple product.
MySchizoBuddy - Wednesday, October 24, 2012 - link
Yes, HP and Dell will totally allow you to do this on their platforms.
euler007 - Wednesday, October 24, 2012 - link
The eight caddies I just bought from Dell to install RE4 drives I bought from Newegg beg to differ.
doobydoo - Sunday, October 28, 2012 - link
'just bought' being the operative words.
Peanutsrevenge - Wednesday, October 24, 2012 - link
The 4GB is probably about right; thankfully not a lot of writes occur during general use (else our SSDs wouldn't last 5+ years), although I would much prefer to be able to set that write buffer myself.
Then again, if it's frequently used data, it should just get written to the SSD and stay there, skipping the buffer, or so I assume.
Looking forward to it getting torn up and analysed. Though I imagine that'll take a while; you guys have A LOT of work to do over the next month or so.
Now STFU and give us our podcast ;)
Impulses - Wednesday, October 24, 2012 - link
I think you underestimate how many writes current SSDs can handle in their lifetime.
rscsrAT - Wednesday, October 24, 2012 - link
And what happens if you change the SSD or the HDD?
MonkeyPaw - Wednesday, October 24, 2012 - link
I was wondering what would happen if one volume failed. Are you dead in the water? Ideally, the OS would always be on both drives, so that a drive failure wouldn't sideline you completely.
Freakie - Wednesday, October 24, 2012 - link
Yep, if your SSD cops out then you're SOL. Which is why Intel's SRT copies the file instead of swapping it. Otherwise they're pretty much the exact same thing.
ThreeDee912 - Wednesday, October 24, 2012 - link
Since it looks like Apple designed "Fusion Drive" to appear as a single drive to work seamlessly with existing software, I'm guessing things like Time Machine backup and Lion Recovery will still work fine. Even if the OS drive failed, a Time Machine drive has a built-in recovery partition, so you can still restore your files if either drive fails.
PaulRod - Wednesday, October 24, 2012 - link
"That 4GB write buffer is the only cache-like component to Apple's Fusion Drive. Everything else works as an OS directed pinning algorithm instead of an SSD cache. In other words, Mountain Lion will physically move frequently used files, data and entire applications to the 128GB of NAND Flash storage and move less frequently used items to the hard disk"That first bit is essentially a hybrid drive like the Seagate Momentus, the rest is exactly like Intel's ssd caching... infact that's all it is with maybe a tweak algorithm.
mavere - Wednesday, October 24, 2012 - link
Intel's SRT is still a caching technology, which is more like the first part of this "Fusion" thing.
I can't think of any consumer products that automate the transfer of the physical files themselves.
Freakie - Wednesday, October 24, 2012 - link
Nope, the first part is write caching, while SRT is mostly file caching (you can write-cache as well, if you like).
epobirs - Wednesday, October 24, 2012 - link
I find myself wondering if operating at the file level is any real improvement. The end effect is the same, and it shouldn't matter if all or part of an application is cached. If something isn't cached, it's because it doesn't get used much. Why waste SSD space on it?
If I use Word a lot but never do a mail merge, it isn't going to affect Word's performance for me if the files for the mail merge function live on the platter drive instead of the SSD.
'OS directed pinning operation': isn't that what most modern desktop OSes do with frequently launched apps, caching them to RAM for quicker access? Why is that caching but Fusion isn't?
EnzoFX - Wednesday, October 24, 2012 - link
I thought the concept wasn't worth it as per the last podcast =P. I want caching to be great; until we have 2TB SSDs that are affordable, there will be a benefit in not having to deal with managing two drives. When you install a bunch of stuff, it can be tedious to manage.
Oh, and remember to check if the new 27" can still be used as a monitor as well, please!
epobirs - Wednesday, October 24, 2012 - link
To say that this isn't SSD caching is purely a semantic argument. The only difference is that this works at the file level rather than the sector level. Other than that, it is still a continuing popularity contest to determine which items are worthy of a place on the SSD and which are just as well off on the platter drive.
The big difference between this and building a PC with SRT enabled is that the Fusion drive comes pre-imaged with the Mac OS and bundled apps on the SSD, so you get the performance boost immediately instead of having to teach it over the course of usage which files need to be cached. I expect PC OEMs to do the same as big-brand machines start to offer SRT as a factory option.
slashbinslashbash - Wednesday, October 24, 2012 - link
No, the real difference that makes this not a "cache" is that the files do not live on both the SSD and the HDD. They only live on one or the other. A true "cache" would have all files residing on the HDD, with selected ones also cached to the SSD. (Similar to how a processor's L1/L2/L3 caches simply store the same stuff that's already in RAM... it doesn't disappear from the RAM when it gets put into the cache.)
epobirs - Wednesday, October 24, 2012 - link
Unless you look under the hood with System Report, the user will only be aware of a single drive volume. This to me screams "SSD CACHE!"
Seriously, this is just building on SRT. Is there any doubt Intel shared their code with Apple to get started? End users get SSD benefits without having to think about what should go where. Where have I heard that pitch before?
I hope it's smart enough not to cache big video files regardless of how often they're played. That would be a horrible waste of SSD space for a type of data that gets little benefit from the medium. Though in a mobile setting it would be advantageous for battery life. Perhaps a control panel for such settings is needed.
spda242 - Wednesday, October 24, 2012 - link
When I heard of this yesterday, I assumed it was Apple's implementation of Intel's SRT in OS X. Am I wrong in assuming that? From this article and the comments, it looks like it's somewhat different?
epobirs - Wednesday, October 24, 2012 - link
It's an extension of SRT that takes it a bit further.
For example, Apple says this isn't caching because the files on the SSD aren't duplicates of the platter drive; they are the sole copies. This means you get more combined space from the two drives. Rather than a 1TB drive with an invisible 128GB cache, you have what appears to be a single 1.2TB volume with two performance levels depending on where data is located. This is the biggest benefit I can see of operating at the file level rather than the sector level as SRT does.
It remains to be seen how much of this is Intel and how much Apple. If it is mostly Intel, we should see a version of this for Windows sometime next year, as six months is a frequent exclusivity term between Intel and Apple. Intel may make it exclusive to Haswell chipsets to promote those.
It's kind of like the variable throughput of some optical drives. Makers of CD-ROM games for consoles once had to pay a lot of attention to where a file was on the disc. You wanted code on the fast parts of the disc and video on the slow parts, assuming they weren't slower than the video playback required. Since the video is linear, it doesn't matter how fast it loads so long as the minimum is maintained. But code needed to be on the faster tracks to avoid long pauses between parts of the game.
In general, this is a good thing for users who are easily confused about dealing with multiple drives of differing performance. I really liked the idea of SRT but found it extremely flaky on the two H67 systems where I tried to use it. I gave up and settled for manually managing the placement of different file types on my systems. (It is very easy to make the Windows 7/8 User data directories live anywhere you want so the SSD doesn't get filled with big data files.) Perhaps Microsoft should be like Apple and make Windows more natively aware of the concept rather than leaving it entirely to Intel.
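A toy contrast of the two models discussed in this thread, SRT-style copying versus Fusion-style moving (dictionaries stand in for the two drives; purely illustrative, not either vendor's implementation):

```python
hdd = {"report.doc": b"...", "movie.mp4": b"..."}
ssd = {}

def cache(name):
    ssd[name] = hdd[name]      # caching: the file now lives in BOTH places

def promote(name):
    ssd[name] = hdd.pop(name)  # tiering: the file MOVES, freeing HDD space

cache("report.doc")            # SRT-style: the HDD copy survives SSD failure
promote("movie.mp4")           # Fusion-style: combined capacity is additive
```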
Freakie - Wednesday, October 24, 2012 - link
Intel just needs to add the option to disable the data-parity part in their drivers, and then SRT can be exactly the same as "Fusion". No need for new CPUs/chipsets.
My opinion is that this is almost purely Intel's tech that Apple decided they liked and then implemented themselves xP
spda242 - Wednesday, October 24, 2012 - link
Thanks for the explanation!
I have done the same thing as you with my OS X setup: I moved VMs, Steam, my iTunes library, and other non-SSD-friendly stuff to my HDD with a combination of soft links (and by relocating the iTunes library), but for some of my less technical friends this technology will be easier.
jaydee - Wednesday, October 24, 2012 - link
In the Apple Store, the Fusion Drive is a $250 upgrade option (over the standard 1TB drive) on the Mac Mini (not available on the base model, mind you, just the $800+ models).
So essentially they are charging an additional $250 for a 128GB SSD (probably Toshiba) and enabling some software bit that will decide for you what data gets put on which drive... at a time when consumers can purchase a 128GB Samsung 830 for $80, or a 256GB for $160.
I guess we shouldn't be surprised, considering they'll sell you a ($25) external DVD drive for $79, and an upgrade from ($20) 4GB to ($40) 8GB of RAM for $100.
PeteH - Wednesday, October 24, 2012 - link
Apple's doing what they always do: charging extra for convenience. Yes, you could seek out and buy cheaper parts, or manage multiple partitions on your own, but if you don't have the time, technical skill, or desire, Apple will handle it for you (for a price).
I see it as similar to the auto industry. Tasks like changing oil or replacing headlights are simple and cheap to do yourself, but an awful lot of people without the time or the skill (or just because messing around with their car scares them) pay someone else to do it.
spacebarbarian - Wednesday, October 24, 2012 - link
First I want to confess that I am not very up to date with NAND hardware, but wouldn't having a dedicated area for caching on the SSD be an issue with write wear? Unless the SSD controller / OS handles wear leveling these days.
Freakie - Wednesday, October 24, 2012 - link
The controller usually handles wear leveling :)
epobirs - Wednesday, October 24, 2012 - link
Yes, but the time scale for losing a serious portion of the drive is measured in years.
Also, most drives have a reserved area for this purpose. The reserved cells are never available to the user and are switched in as cells wear out. This is why you see drives listed as 120 GB instead of 128 GB, or 240 GB instead of 256 GB.
It's a trade-off that means the drive should last and function well long into the life span of the typical system. If you had a machine in continuous use for five or more years you might notice the drive losing capacity. Not an issue for most people.
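Back-of-envelope math for those figures (a simplification; real drives also bank the decimal-GB vs. binary-GiB gap as extra spare area):

```python
raw_nand_gb = 128     # flash physically on the board
advertised_gb = 120   # capacity exposed to the user
spare_fraction = 1 - advertised_gb / raw_nand_gb
print(f"{spare_fraction:.1%} of the NAND held in reserve")   # 6.2%
```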
tipoo - Wednesday, October 24, 2012 - link
If I remember correctly, most modern drives would last years even in database use, let alone what a single user could do. The hard drive would probably fail before the wear limits were reached.
derPat - Wednesday, October 24, 2012 - link
The Fusion Drive reminds me of the Hierarchical Storage Management of IRIX that I was using in the good old days... HSM is still in use, but it's usually relegated to big iron.
It would be nice if Apple would support multiple tiers of storage.
iwod - Wednesday, October 24, 2012 - link
For random writes, the 4GB buffer has most of that sorted. The problem is random reads: where do we do random reads the most, since those need to be on the SSD? For everything else, sequential reads and writes, the HDD isn't that much slower at all.
Again, I think the experience will be hard to measure with any benchmark tools. I am looking forward to Anand's review.
halbhh2 - Wednesday, October 24, 2012 - link
I just suspect this is going to "just work" (as in, just work well).
And a mystery to me is why Seagate hasn't gotten it together, after years, to get really great performance from their XT series. I mean, how long does it take to get it right? A few months is enough, I bet, if you actually try.
epobirs - Wednesday, October 24, 2012 - link
The problem for Seagate and other HD makers is that getting the full value out of the SSD volume meant moving out of their comfort zone and into higher levels of the OS than their usual product. It also meant much higher price points if they were going to incorporate really large amounts of flash memory. This makes them nervous, since they have very price-sensitive customers in the big PC OEMs.
I'm sure they'll keep at it, but they've missed their window and that bird has flown. What is happening now is that new computer models are coming with mSATA ports/slots to allow a notebook to have an SRT drive along with a high-capacity HD. Dell, for example, is now including a 32 GB mSATA cache as standard on several models.
This isn't as sophisticated as what Apple is doing. It remains to be seen if Intel and/or Microsoft will offer the functionality to enable PC OEMs to match the Fusion feature set. Much of it should be trivial, such as pre-loading the SRT cache volume with the OS files and any other favored items, although what you'd really need to do is mirror the appropriate sectors on the hard drive. At least until a file system level version of SRT is offered.
MGSsancho - Wednesday, October 24, 2012 - link
Sounds like ZFS to me.
Bob-o - Wednesday, October 24, 2012 - link
:-) Apple certainly could have delivered the same benefits much earlier, had they not rejected ZFS...
ThreeDee912 - Thursday, October 25, 2012 - link
Although Apple ended up ditching ZFS after running into licensing issues, they started working on the basic underpinnings for a possible new filesystem, and included it in 10.7 Lion, calling it Core Storage.
While Apple is still using the ancient HFS+ filesystem on top of Core Storage, the new FileVault 2 disk encryption system uses Core Storage to present the encrypted drive to the system as a normal virtual one.
I'm guessing Fusion Drive is also using Core Storage.
orthorim - Wednesday, October 24, 2012 - link
Thanks for providing some actual information on how this works. I've been googling around for a while; apparently none of the other so-called "tech" publications are interested enough to actually ask... AnandTech stands out, once again. Thanks!!!
piroroadkill - Thursday, October 25, 2012 - link
Just like all the other tiered/caching SSD and small NAND drive systems that have been around for ages.
Still, it is of course a nice idea. But it is in no way a "breakthrough concept".
inplainview - Thursday, October 25, 2012 - link
"Just like all the other tiered/caching SSD and small NAND drive systems that have been around for ages."Exactly and where are these systems? While they may not be new, they weren't mainstream. They lived in the domains of the mommies cellar dwellers… While not new Apple at least made it mainstream… This is what they do...
space2001 - Thursday, October 25, 2012 - link
I read recently that the 1TB Fusion Drive was supported with Boot Camp, but the 3TB was not... That's a shame; since the internal drive is not upgradeable, I would have gone with the 3TB Fusion Drive. I also want the flexibility of installing Windows via Boot Camp, and see it's not supported on the 3TB drive. The Apple note also indicated that Apple's Disk Utility should not be used for the Boot Camp partition, and to use the Boot Camp utility instead.
Would using Parallels or VMware Fusion (with a Fusion Drive? That's sure to be an opportunity for marketing!) get around the Boot Camp and 3TB restriction?
Another question: What about the Emergency Restore partition that started with Mountain Lion? Is that still supported with Fusion >drive< ?
Hmm... lots of questions
KPOM - Thursday, October 25, 2012 - link
Apple has confirmed that the Boot Camp partition would reside entirely on the HDD. Also, for some reason the 3TB drive is not currently compatible with Boot Camp Assistant.
http://support.apple.com/kb/HT5446?viewlocale=en_U...
conitor - Thursday, October 25, 2012 - link
Everyone seems to be overlooking the HDD portion of the Fusion Drive. How fast does it spin? Is it 5400 or 7200 RPM? At some point this part of the drive will be critical to large-file transfer performance.
Filiprino - Friday, October 26, 2012 - link
Better get an SSD and a hybrid HDD, just saying.
The hard disk is one more level in the memory hierarchy. It can be managed in the same way system RAM and processor caches are managed.
Oujmik - Friday, October 26, 2012 - link
Anand, do you have any insight into how this drive would handle a 200GB Aperture library? The library appears in the Finder as a single file, but will the Fusion Drive software be clever enough to know that it can subdivide it to put the bit you're working with on the SSD? This is really the make-or-break question for the Fusion Drive for me: if it can make it feel like Aperture is on an SSD, without having to fork out for a massive external SSD, it could be a massive boost for photo work.
liuping - Sunday, October 28, 2012 - link
The Aperture library is really a package, not a single file (right-click on it and select "Show Package Contents" if you want to see the files inside the package).
It should work very well with a Fusion Drive.
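An easy way to confirm this: walking the .aplibrary bundle shows the many separate files a file-level tiering policy can place independently (the path below is an assumption; adjust it to your own library):

```python
import os

lib = os.path.expanduser("~/Pictures/Aperture Library.aplibrary")
count = total = 0
for root, _dirs, files in os.walk(lib):
    for name in files:
        count += 1
        total += os.path.getsize(os.path.join(root, name))
print(f"{count} files, {total / 1e9:.1f} GB inside the package")
```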
dmbartender123 - Friday, October 26, 2012 - link
I've got a Sandy Bridge Mac Mini; if I put a 128GB SSD in the extra HD slot, can I get Mountain Lion to treat it like a Fusion Drive? Would I even want to?
tangles - Saturday, October 27, 2012 - link
Whoever gets one of these Macs with a Fusion Drive and is willing to "play"... can the SSD and spindle drive be separated using Disk Utility, so that you have two physical disk icons appear on the desktop?
If so, Apple has sure made my life easier with ZFS and Macs that have single drives!
SSD as HFS for booting and Apps
Spindle for data using ZFS filesystems
I could even create disk0s4 and disk0s5 on the SSD and play with cache/ZIL.
Fingers crossed.
akamateau - Saturday, October 27, 2012 - link
Is that an AMD product? Why doesn't AMD sue Apple for stealing their brand?
cmdrdredd - Sunday, October 28, 2012 - link
I want to decide what stuff to put on the HDD and what I want on my SSD. I don't trust any software to decide for me what deserves to be loaded on the fast drive. Gotta see this benchmarked.
epobirs - Monday, October 29, 2012 - link
And you would decide what to put where by what method? Logically, you'd figure out what stuff you access the most, which is therefore the stuff that would benefit the most from faster loading.
Surprise! The system does that automatically and saves you the effort. It isn't a deep decision. It's just a matter of the system being better equipped to track file usage than you are and executing on your behalf.
This is the whole point of having automation.
AstonCheng - Tuesday, October 30, 2012 - link
I really wonder whether Fusion Drive is achieved in Mac OS X or by a chip.
Andhaka - Thursday, November 1, 2012 - link
OS X, via the Core Storage API.
http://jollyjinx.tumblr.com/post/34638496292/fusio...
And it can be enabled on normal Macs as long as you have ML and a second drive. Even an external USB drive. ;)
Cheers
ijmmany - Thursday, November 1, 2012 - link
Funnily enough, this is something I (and a few people I copied from the internet) did about a year ago: installing a 128GB SSD in place of the SuperDrive in a MacBook Pro and combining both volumes. I also did the same using a 4GB write buffer, and I've done the same to my gaming computer (Windows). This isn't something Apple just thought of, only maybe bettered. Slightly annoying.
Kev.LoveMac - Thursday, November 1, 2012 - link
I am really looking forward to seeing the new iMac review. Please also spend some words on the 21-inch iMac. Last year most of the words went to the 27-inch; hopefully this review will cover its companion a little more.
thetrystero - Sunday, January 6, 2013 - link
Can you only fuse ONE SSD and ONE HDD? I currently have ML running on a 128GB SSD where I also keep my apps, and two 1TB HDDs in RAID 0 for storage. Can I fuse the SSD and the RAID 0 pair? I imagine if this were possible, I'd get the benefits of both Fusion *and* RAID 0. True?
9comp - Thursday, May 30, 2013 - link
Great post... :)
Yesterday I installed Adobe software for a client who bought a new 27" iMac, and I was really surprised by the read/write speed...
I went into Disk Utility and saw one physical drive of 3TB. I thought it was a plain SATA drive, but how was it working so fast??
Now I understand that it's a Fusion Drive. Apple did it well.
Regards,
Hagy