16 Comments
jabber - Tuesday, September 13, 2011 - link
OK, let's see: how much does a USB controller cost to put on a MB? How much does a USB 3.0 controller cost to put on a MB?
Then how much does a Thunderbolt controller cost? Not to mention the cables.
It's really got to be within pennies of USB 3.0 to get any serious adoption. I doubt that's the case.
Synaesthesia - Tuesday, September 13, 2011 - link
It doesn't have to cost pennies; motherboards cost hundreds of dollars. If the controller costs a couple of bucks, the PC will hardly cost any more.
knedle - Wednesday, September 14, 2011 - link
Unfortunately, that's not how it works. The controller costs a couple of bucks for the company that makes the mobos; they add something around 20% and sell those mobos to companies that import them into other countries; those importers add another 20% and sell them to the companies that sell the mobos online; those retailers add another 20% and sell it to you.
So even if a mobo costs only $100 at first, after all those +20% markups it will cost about $173. That's a lot of money, and if you add another couple of bucks that also picks up roughly 73% along the way, it gets even bigger.
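For anyone who wants the arithmetic spelled out, here is a minimal sketch of that compounded-markup chain. The 20% per step comes from the comment above; the $100 board and the $5 controller price are just illustrative figures, not real channel numbers.

```python
def compound_markup(base_price, markup=0.20, steps=3):
    """Apply one percentage markup per step in the distribution chain."""
    price = base_price
    for _ in range(steps):
        price *= 1 + markup
    return price

# A $100 mobo passing through manufacturer -> importer -> retailer
print(round(compound_markup(100.0), 2))  # 172.8, i.e. roughly $173

# A hypothetical $5 controller picks up the same ~73% along the way
print(round(compound_markup(5.0), 2))    # 8.64
```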
Hector2 - Tuesday, September 13, 2011 - link
Anything new costs more than what's out there today. It happened for USB and every new DDRx DIMM that's come out. Today USB is cheap (USB 3.0 a little more) and once DIMMs get into full production, you can get them for $50. I've been watching this for 35 years in the business.
Intel's Light Peak technology is amazing. Yes, the price right now is more, but it'll come down as the volumes go up. Eventually, everything will have optical interfaces.
FaaR - Tuesday, September 13, 2011 - link
It's of course not just DDRx memories that were more expensive than mainstream memory when they were new; it also happened with EDO RAM, and fast page mode before that, and so on. DRAM itself was just CRAZY expensive back when it was new, and then Intel came along with a revolutionary IC that was like half the price of the competition, and so on...
I'm one of those who believes in Thunderbolt; it's an amazing interface. We've never had anything like it: 80Gbit/s and 20 watts of power in a tiny little connector for a highly affordable price even now in its infancy, with up to 3m cable length (I believe). It's totally unheard of in the entire history of computing.
I totally don't understand the people who refuse to be excited by the possibilities and instead prefer to poo-poo Thunderbolt. I myself believe it's just good ol' prejudiced Apple hate at work, because they pioneered the tech.
Admittedly, right now we don't quite know what to do with all of this, but that'll change soon. There's already a PCIe chassis on its way, and even though this particular product won't handle any high-powered graphics cards, it's still going to be useful for a lot of people. We can stick 10Gbit networking cards in that box, or a monster PCIe SSD along with a video capture board, or whatever else tickles our fancy.
MobiusStrip - Sunday, July 22, 2012 - link
Except it's not Light Peak anymore. It's just Copper Plain.
Still better than USB, but a cop-out on the original plan nonetheless.
TypeS - Tuesday, September 13, 2011 - link
Why do people like you continue to be blind and ignorant about Intel's Light Peak/Thunderbolt technology? USB is a separate serial bus (Universal Serial Bus), while Intel's Light Peak is an extension of PCI Express. It's not directed at people looking for cheap external storage or peripherals. Think of all the PCI/PCIe add-on cards out there (not just USB devices) and you begin to see how useful Light Peak/Thunderbolt can be.
Its use will most likely be targeted at workstations, not consumer desktops: external graphics, external storage arrays, etc., for mobile workstations.
darwinosx - Tuesday, September 13, 2011 - link
Because they are looking for a reason to not want it.
MobiusStrip - Sunday, July 22, 2012 - link
"It's use will most likely be targeted at workstations"It shouldn't be. It stands to provide the greatest benefit to laptops, which can't accept add-in boards, and that often suffer from limited internal storage, storage speed, and available I/O ports.
darwinosx - Tuesday, September 13, 2011 - link
Wrong. Thunderbolt's speed and ability to daisy-chain monitors, storage, and many other devices is highly desirable to a lot of people. It smokes USB 3.0 in many ways.
Not everyone buys $500 Best Buy laptops.
iwodo - Tuesday, September 13, 2011 - link
It would be very nice if you could tell me what 12mm x 12mm means. Is it smaller than Eagle Ridge or the same size?
FaaR - Tuesday, September 13, 2011 - link
12mm means "twelve millimeters". You know, the metric system? Look it up on Wikipedia if you've never heard of it before; almost the entire world is using metric measurements, except for the United States and a few other backwards hillbilly idiot countries.
FaaR - Tuesday, September 13, 2011 - link
Apologies for my poor reply. I should have made a more constructive post.
The image at the top of the page depicts the TB controller in the MacBook Air. Judging by its proportions in relation to other components, it would seem probable it's roughly 12mm-ish square; certainly not bigger. Intel should have datasheets on the component in question if it's important to know its exact dimensions.
repoman27 - Monday, September 19, 2011 - link
Light Ridge is 15mm x 15mm. The actual chip is much smaller, but the flip-chip package ends up being about 15mm per side.
The Eagle Ridge controller used in the MacBook Air is about 8.5mm x 8.5mm, or maybe even a tad less than that.
Unfortunately it doesn't seem that Intel is providing any technical documents to the general public at this time.
repoman27 - Wednesday, September 14, 2011 - link
While it may be accurate to say that Thunderbolt has seen limited implementation this year, to put it in perspective, with just two OEMs producing PCs with TB host controllers, TB will have a higher attach rate in 2011 than USB 3.0 had in 2010.
Your description of Thunderbolt having one or two DP "outputs" is intriguing to me. The current chips support 2 or 4 x 10 Gbps channels (1 or 2 ports), each of which is capable of transporting a DP 1.1a stream (among other things). So theoretically, with daisy-chainable displays such as the ATD, you could attach 2 displays to each port. The limitations would seem to be on the back side of the TB controller, in that they only support 1 or 2 DP connections from either the iGPU or dGPU.
One thing I've never been quite clear on is whether these inputs are DP 1.2 or 1.1a. While a single 10 Gbps TB channel can only carry an 8.64 Gbps maximum DP 1.1a stream, if the inputs on the host controller were DP 1.2, then each input could be used to drive multiple displays using MST, and driving 4 displays would be possible with a 2 port controller.
Are the 1 or 2 DP inputs for Cactus Ridge DP 1.2? And if the connections for PCIe remained the same at 4 lanes, are they PCIe 2.0 or 3.0?
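For readers following the numbers here, a minimal back-of-the-envelope sketch of where the 8.64 Gbps figure comes from, and why a full 4-lane DP 1.2 (HBR2) stream would not fit in a single 10 Gbps channel. The lane counts and lane rates are the standard DisplayPort figures; the 10 Gbps per-channel rate is the one cited in the comment above.

```python
# DisplayPort payload rates vs. a 10 Gbps Thunderbolt channel.
TB_CHANNEL_GBPS = 10.0

def dp_payload_gbps(lanes, lane_rate_gbps):
    """Effective DP main-link payload after 8b/10b encoding (80% of the raw rate)."""
    return lanes * lane_rate_gbps * 8 / 10

dp_11a = dp_payload_gbps(4, 2.7)  # HBR:  4 lanes x 2.7 Gbps -> 8.64 Gbps
dp_12 = dp_payload_gbps(4, 5.4)   # HBR2: 4 lanes x 5.4 Gbps -> 17.28 Gbps

print(f"DP 1.1a: {dp_11a:.2f} Gbps, fits in one TB channel: {dp_11a <= TB_CHANNEL_GBPS}")
print(f"DP 1.2 (HBR2): {dp_12:.2f} Gbps, fits in one TB channel: {dp_12 <= TB_CHANNEL_GBPS}")
```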
AbRASiON - Friday, September 16, 2011 - link
Bad, bad implementation; it's never going to get anywhere.