Agreed... just release Zen already. I hope at least motherboards will be available by Xmas. Get the build ready, buy a Zen CPU in Jan/Feb, drop it in and boom, new PC build.
How is releasing mobos early any good? What happens if, for whatever reason, there are problems with the old mobos when the high-end CPUs arrive (in a way similar to early DDR3 being higher voltage)?
Nope, wrong: all of the 10-series NVIDIA graphics cards accelerate both encoding and decoding of 10-bit HEVC at 4K60p. "Pascal can now encode Main10 Profile (10bit) video, and total encode throughput is rated by NVIDIA for 2 4Kp60 streams at once". Look it up in the AnandTech GTX 1080 review.
Personally, I've been crossing my fingers for HEVC to just be skipped over for AV1 (which is supposed to have a bitstream freeze by this March). HEVC's been fairly disappointing, IMHO.
I really hope the Zen enthusiast chipset is a completely different design. While it's perfectly acceptable for its intended market, if the B350 is just a slightly cut-down version of the big one, they're going to be painfully behind in the high-end mobo feature-checkboxing race compared to Intel's much larger high-end offerings.
The cost of fabbing a device at a different node is conservatively fifty million dollars; you're talking about a total reimplementation and revalidation, followed by a complete new mask set. There aren't half a million people willing to pay $100 more for an indifferent AMD chipset fabricated on a slightly shinier process.
That's assuming that the high-end chipset was designed for the 55nm process from the start and would need to be redesigned for 28nm. If they designed the high-end stuff for 28nm right from the start, it would just be a fabrication cost difference.
You're probably right though. They probably didn't see a big enough benefit in doing the C380 (or whatever it ends up being called) at a modern node.
Intel must be able to do it for far less, because there's no way Coffee Lake will sell that well, and it's based on the Cannonlake architecture which was originally built with 10nm in mind.
The desktop enthusiast market is unlikely to care about the couple of watts of power that could be saved.
For the laptop market where power matters a lot more, I expect the Zen APU to integrate more connectivity onto the die and do away with the south-bridge entirely.
Really looking forward to some actual benchmarks. I really am itching to build a new office computer, and right now, the i3-6100 is the only realistic chip, since I won't be doing much gaming on the system. If the new A12 and A10 can even come close to matching the i3 in CPU tasks, I'd be more than happy to snag one up, as the graphics will be nice, and there's the upgradability to Zen later if that processor pans out.
I'm writing an article on that at the moment (different site built around the economics of modern tech). Conclusion is unless you need specific Skylake tech on a desktop (m.2, DDR4, ECC, IGP, SGX, MPX or AVX2), get an Ivy Bridge i5/i7. Ex-lease Ivy machines are pretty much being sold for pennies these days (less than a new Pentium machine) and a quad Ivy i5/i7 will almost always demolish a Skylake i3 on the CPU side.
And so comes the end of the desktop computer, as people refurbish old computers and use them for years instead of buying new ones every year. That Ivy i5 system paired with lots of cheap RAM and a cheap SATA SSD would be more than fast enough for office tasks for years to come.
Could be good for AMD though. They could make good-enough APUs for mainstream usage at a price point Intel can't touch. It's a race to the bottom and AMD could conceivably win.
Skylake wasn't a good enough performance bump for my use cases, considering platform price. Kaby Lake has no IPC boost, and who knows when Cannonlake will arrive.
Waiting on Zen to arrive so I can take a look at benchmarks. Even if it's "slower" than Skylake, if the platform cost is right it would be a quite viable option.
Precisely, I just overclocked my 2500K to 4.5GHz the other day and it will definitely last me until Intel gets its act together and puts a focus on performance improvements again.
If AMD were competitive, Intel would probably be pushing a lot more performance on successive generations. Instead they're cashing in by shrinking dies and moving more components on die, while only incrementally improving performance.
So really it's a good thing; suddenly having to compete with the secondhand market will hopefully drive a large performance boost in future generations.
@Danvelopment Please take a look at Agner Fog's x86 instruction latency tables. Intel can't squeeze blood from a rock and make instructions take less than 1 cycle. No one can.
So you're saying we've hit the limit for processor performance and there's nothing new anyone can do?
I find that hard to believe, especially as innovation is not required to increase performance. Hell, if they were desperate they could bring the i3 to a quad-core minimum and drop four more cores into the i5 and i7 and call it a day. It's not innovation, but it's a significant increase in performance potential.
Instead we've gone from a 216mm2 die to a 122mm2 die.
If it were neck and neck, Intel wouldn't look at AMD and go, "whelp, nothing we can do".
Where can I find these "pennies" for Ivy i5? Best I'm finding is on fleaBay for about $200 shipped. With 4GB RAM and too small of a hard drive. After I upgrade the hard drive and RAM to where I need, I have a used computer that costs only about $40 less than if I build a new i3 system. And demolish? I'm not so sure about that:
Your processor alone is almost $200. You can buy a motherboard, chassis, 80+ PSU (what Dell uses in their OptiPlexes), (we'll exclude the aftermarket cooler, extra fans and optical drive for fun's sake), and 4GB of RAM for $83 including shipping ($200 - $157 + $40)? I'm impressed. Care to spec that up?
Plus the price of the parts you were going to add to the $200 machine.
And your choice in Bench appears to be severely lacking in benchmarks, but I see there aren't many Ivy i5s; you can get ex-lease Ivy i7s for about $30 more.
I had hoped for more numbers in Bench, but I guess the i3's don't get the same attention here. I didn't really want to link to "rival" review sites here in the comments. Main point was that the Skylake i3's are not that dramatically slower than Ivy i5's.
i3-6100 is pricing around $120 USD, $110 on sale. Asus B150M-A/M.2 is about $80, but I live close to a MicroCenter, so their combo deal knocks $30 off that price. Crucial MX300 M.2 for $70, 8GB of DDR4 for $35, 1TB hard drive for raw storage at $45, Case/PS for $65. Use my own license for OS. That comes up to ~$415-$445 for a brand new computer.
Main point, I can get a new computer for not much more than a used computer, once I bring the used computer up to my specification. Having a new computer over a used computer for me is more important than the performance difference of the i5.
It's not really building a new machine if you're reusing old parts. If you're talking about the general populace rather than you personally (my target), they won't have the option of moving their Windows license, and there's a clock drop on the i3-6100 relative to the benchmarks earlier.
Also, a chassis/PSU for $65 doesn't sound like a very good option. Going back to Dell (my old company was Dell-heavy so I have a lot of experience with their enterprise lines; HP, Lenovo, etc. will probably be the same): the OptiPlex chassis were almost entirely toolless, well cooled, and from the 790 onwards looked decent, albeit not incredible (but a $65 chassis/PSU wouldn't either). On top of that they used 80+ PSUs (the 3000/7000/9000 series used Gold; I can't remember if the older ones were the same), proven, quality units. i5/i7 builds also used Q-series motherboards with Intel components (such as the NIC).
If you're matching quality like for like then you'd be looking to spend more on the new machine. I'd much rather personally run a secondhand Ivy i5/i7 using quality components. Their consumer lines are garbage from my experience but ex-lease machines are all enterprise devices.
Being able to do something doesn't make it a better option, especially if you drop the quality to do so. The Ivy i5, even according to the benchmarks above, is more powerful; ex-lease machines use higher-quality components; and even with the above it's still cheaper. It just makes sense.
I don't work there anymore but I liked the Dell enterprise machines so much that I actually bought an (ex-lease) E7240 after I left. i5-4200U, 4GB RAM (I added another 8 that I had lying around), 256GB OEM SSD for $200. I can flip the back off with two screws and access almost everything. And the screen front bezel just pulls off with fingernails, although you wouldn't know it til you tried. You don't have to unbolt the hinges like most laptops.
Before I started there they bought Vostros (laptop and desktop) for some reason, rather than the enterprise machines and fark those things. They were the hardest farking things to work on, they literally went out of their way to make it hard. I phased the final ones out just before I left. It was the Vostro 3450 that was my most reviled computer ever. The hard drive was screwed onto the motherboard and you literally had to pull the whole thing apart, lift the motherboard then unscrew the HDD from it. If you took the back panel off, you could have done it from there but they put a small band of plastic on the bottom chassis to prevent it. It literally had no other purpose. If there was no warranty you could take a knife, cut that plastic off and do it directly.
If DIY motherboards, which presumably would allow either Bristol Ridge or Summit Ridge, follow the pattern of this OEM board, aren't we consigning Zen to just eight lanes of PCIe 3.0 for discrete graphics? Not necessarily an extinction-level event, but neither is it something gaming enthusiasts will be happy with. Hopefully, motherboard manufacturers won't drop the ball with this.
I take it you are thinking that the AM4 socket has more than 12 PCIe lanes, but that Bristol Ridge doesn't connect them all (sort of like the Intel i7-6800K has 28 PCIe lanes even though it uses a socket that has 40 lanes). That makes sense.
My guess is that motherboard manufacturers expect AM4 motherboard sales to be driven primarily by Zen. In the DIY market, even the people who do buy a Bristol Ridge processor may be doing it with the intention of upgrading to a more powerful processor later. So I would expect most motherboard manufacturers would try to do a good job of supporting the Zen-based processors.
It really looks like the connectivity onboard the APU is targeted at what a normal laptop would need. This should be a major design advantage for AMD compared to their previous mobile platforms in terms of power, design & material cost, and platform footprint.
- This class of CPU doesn't warrant an x16 PEG link
- Due to space constraints most non-DTR laptops will have fewer than 4x USB ports - maybe 3 + 1 USB-based card reader. They can probably use an onboard hub for more if they really need them.
- x4 PCI-E 3.0 M.2 is an option
In fact, other than USB ports, this is probably enough connectivity for most non-enthusiast desktop users as well. This could help BOM and board design costs here as well. The optimistic part of me would love to see that reinvested elsewhere in the system. Realistically I see that split between a lower sticker price and an increase in margins for the system builder.
Almost certainly AMD is just reusing the Carrizo design as a cost-cutting measure. There isn't an AMD CPU on the market that an x8 link would bottleneck first.
Yeah, no. Even the 5775C has thoroughly beaten everything AMD has so far offered, even if narrowly. Skylake GT3e increased that gap, and GT4e dug a canyon between Intel and AMD. This is why I sincerely doubt AMD's HPC APUs will get much traction.
Skylake's GT4e really is in a whole new league of iGPU performance (at a significant cost) - but to nitpick GT3e on Skylake very likely underperforms GT3e on Broadwell.
Broadwell GT3e (specifically Iris Pro 6200) is 48EUs at up to 1150MHz backed by 128MB eDRAM on a 65W TDP quad core. Skylake GT3e (specifically Iris Graphics 550) is 48EUs at up to 1100MHz backed by 64MB eDRAM on a 28W dual core. (I'm not counting the listing for the "Server" GT3e which is listed as P555 with 128MB)
Skylake has microarchitecture improvements to both the EUs and to the x86 cores. It also supports DDR4 and Gen9 includes delta compression - so the iGPU undoubtedly has more bandwidth available.
That said, I really think that the smaller eDRAM, lower TDP, and lower max clock speed will make SKL GT3e slower than BDW GT3e.
Due to the rearrangement of SKUs and iGPUs in SKL I think this is all natural. Intel is trying to *significantly* increase the iGPU perf available to their 15/28W dual core CPUs with its iteration of GT3e, however this allows for the potential that this architecture's GT3e underperforms the previous.
I noticed that too. I've seen tables and slides from various sites and they all list those specs. The only thing I can conclude is that since the A6 has half the CPU resources, maybe AMD just assumes that for most tasks you'll end up CPU-bound to a large enough degree that even though the iGPU is technically faster, performance will be lower.
Or... the alternative is that there's a typo of some sort in the original materials published by AMD.
Oh, really nice article. Thanks for this. Well, the first Zen processors will probably cost more than the A12. Then, when Zen-based APUs are ready to come to market, we will see smaller Zen models as well.
Thanks for the writeup, Ian. I even learned some new things which I haven't yet read elsewhere. I do hope that Anandtech can get its hands on a chip for reviews.
Marginal improvement IMO over last-gen APUs. Looks interesting, but an i3 will still "feel" faster as its single-core power is still greater, although I almost always build AMD APUs for small offices because Microcenter has an instant $40 rebate when you buy in store. BUT.. a ZEN SoC with HBM might be a game changer (to quote 2014), and rumor is Apple is eyeballing it for the next MacBook Pro. And AMD has a good track record on Unix-based OSes, which might be why the Mac Pro uses AMD... I'm going down a rabbit hole. SOO! A12... ehh.. no significant change, but I'm waiting for ZEN.
Ian Cutress, on the first page you say: "Using the updated 28nm process from TSMC, AMD was able to tweak the microarchitecture and allow full on APUs for desktops using a similar design." Problem is that Carrizo and Bristol Ridge are actually made at Global Foundries on a 28nm process.
"Back at Computex, AMD announced the thing to fill that hole: Bristol Ridge (and Stoney Ridge by extension). This is what AMD will call and market it's '7th Generation' of APUs, and will be built on TSMC's 28nm Bulk CMOS process."
From AnandTech's look at Carrizo in July. Everything else I've found doesn't say GF or TSMC, just 28nm.
'Diffused in Germany' means the chip itself was made at GF in Dresden. Then shipped to Malaysia for packaging. Unless the photo is fake. Which is unlikely.
Wondering if I understood correctly -- is the video card slot limited to 8 lanes? If so, how will this affect an APU with a video card as primary, an APU with a video card in dual graphics, and an Athlon X4 CPU with a video card?
Mr. Cutress, we are one month later; where is the Bristol Ridge hardware? It would seem easier to reach out to Costco than AMD and have the platform tested. Where are you guys with that?
Interesting article, but I'm disappointed in the overclocked benchmarks. AMD can say what they want about Excavator v2 cores, but if these benches at 4.8GHz are legit, it's only going to put it slightly over the recent Athlon 860K. I'm doing this comparison on the assumption that the future Athlon will be based off the A12 9800. My Athlon 860K gets a Cinebench R15 score of 368 at 4.5GHz. IF it overclocked to 4.8GHz, I'd bet it'd score right around the same 380 the A12 9800 did. On the plus side, IF 4.8GHz is possible on the new Excavator v2 based chips and the power consumption is down, that's a big plus. I was just hoping the new Athlons would be scoring a bit better, say near 400 in Cinebench R15 at stock clocks and for $100 or less. But hear this, INTEL... I will never pay $180 for a dual core even if it does overclock well. The new i3 7350K is a joke.
AMD did something similar with the A8-7600 midrange APU. Announced it, launched it, allowed testing by tech sites, and gave OEMs full access, but it took a full 8 months before it was available to the general public.
gregounech - Friday, September 23, 2016 - link
I want to see an AMD A10 vs Apple A10 article.
goatfajitas - Friday, September 23, 2016 - link
Why? AMD is an x86 processor that can run Windows and complete systems. Apple is an ARM processor that runs nice toys. They are not in the same ballpark.
KPOM - Friday, September 23, 2016 - link
Universal Windows apps can run on ARM.
wumpus - Friday, September 23, 2016 - link
All three of them, but don't expect even them to work until tested.
Michael Bay - Saturday, September 24, 2016 - link
Update your mantra, it smells.
goatfajitas - Friday, September 23, 2016 - link
OK, but that isn't really the point. Comparing ARM to x86 is like comparing a sports car to an 18-wheel truck. Both are good, both have their purpose, but they aren't really in the same market segments. ARM is great at specific things, where x86 is the workhorse of the industry.
ddriver - Friday, September 23, 2016 - link
Generalizing like that is quite foolish; Atom is x86, but it is closer to a wheelchair than an 18-wheeler.
From what I can tell, Apple's A10 is about the same performance as premium mobile-device x86 chips from Intel. The fact that it has no serious programs for it and doesn't allow you to run even your own programs without jailbreaking ... is a whole different topic.
Performance is one metric, usability - another metric. And sure, performance without usability is a shame, but that's what the corporations dictate - those aren't devices for you to use as much as they are devices for you to be used.
bji - Friday, September 23, 2016 - link
Do you think anyone actually cares about your sense of superiority over mobile app users?
ddriver - Friday, September 23, 2016 - link
I don't care what anyone cares about, their biz, not mine.
Samus - Saturday, September 24, 2016 - link
Using that ideology, let's just say many people don't like your strong opinions projected upon them, either.
ddriver - Saturday, September 24, 2016 - link
So you are a fascist who thinks some people are not entitled to opinions? I am not projecting anything, I am expressing my opinion. If you have a problem with it, that's your problem entirely. If my opinions threaten to shatter someone's fragile artificial worldview, that's their problem entirely. Or maybe you are implying that people should only be allowed to express opinions that are coherent with the herd's baaing? Did you see me denying the right to an opinion to dummies or anyone? Nah, because it is their right, even if it is mind-numbingly stupid and even if I don't like it.
goatfajitas - Friday, September 23, 2016 - link
Different ballparks entirely, dood. One runs a lot of fun mobile toys, the other runs the entire business world, including every computer and every system in every factory that makes all of those toys.
ddriver - Friday, September 23, 2016 - link
That's not the problem; most people can't, and quite frankly don't need to, be power users - engineers, designers and such are a very small fraction of the population. The problem is the lack of choice - you don't even have the option to use an Android or Apple tablet for professional work. And the even bigger problem is that this is becoming a trend.
Year after year the entire industry is moving towards the "walled garden spyware milking station" model. I wouldn't be surprised if in the near future owning an open general-purpose computer becomes criminalized - because you know... only pirates, hackers, terrorists and pedophiles need it.
And last but not least, what is perhaps most worrying is that people are getting increasingly dumb as technology gets more and more affordable. One would say "big whoop, I don't care about the dummies"; however, in the long run that's a huge problem: it affects the entire species, and thus every individual member of it. Call me old-fashioned, but I think people should, if not get smarter, then at least not get dumber as technology becomes more available and widespread. Today software helps you finish your sentences; before you know it, software will be dictating your sentences. It is OK for machines to do the "legwork" for you - numerous small and repetitive tasks - but it is a very dangerous thing when machines begin doing the thinking for you and you forget how to do it... if you ever knew to begin with...
goatfajitas - Friday, September 23, 2016 - link
I don't disagree with most of that... but I don't know if people are getting increasingly dumb... It used to be that the dumber people didn't use computers much, and now everyone does. As computer/tablet/mobile usage went up, the average computer user's IQ went down, but those dummies always existed; it's just that now they are online and have a voice.... Yadda yadda yadda, now we have Trump. LOL
ddriver - Friday, September 23, 2016 - link
They are being made dumber, and it makes perfect sense. Long-term economic strategy 101 - with ample resources you breed a lot of workers to produce wealth; as resources begin to run out you automate menial labor and focus on services to take that wealth from the workers; and finally, using the wealth, you produce a military complex to protect you from the workers as they become obsolete and infeasible both as workers and consumers. Being dumb will make it easy to march the general population straight into the void, unable to resist or survive without the system which no longer has need of them. Dumb is convenient - easy to exploit, easy to dispose of. More technology has allowed for more ways to dummify - more knowledge of the dummies, more control over the dummies, more distraction for the dummies from the issues at hand. Everything people do online passes through data mining algorithms, gets profiled, analyzed, and combed for anything anyone might find useful in any way. Including this very comment space.
goatfajitas - Friday, September 23, 2016 - link
I agree, there are a heck of a lot of dumb people out there... Just not sure they are getting even dumber. Lazier maybe, but the average person has always been mind-bogglingly stupid.
"Just picture in your mind how stupid the average person is... Now think, half of everyone is stupider than that guy!" - George Carlin (said long before the vast majority of people had any PC at all)
msroadkill612 - Wednesday, April 26, 2017 - link
It's sad to think almost 50% of people are below average.
msroadkill612 - Wednesday, April 26, 2017 - link
Sadly, I concur. It's depressing.
A way of looking at it, via the "property is theft" philosophy, is to ask why, with all these new marvels, even the fortunately employed (those serfs useful to the regime) are feeling more than ever like they are on a treadmill over a shark tank.
Our alleged protective regimes have focused more on how to meter mankind's recent gifts (and infrastructure, long paid for by our compatriot forbears) than on dispersing them free or at cost.
Most of the great "inventions" and their resultant fortunes were assigned to the best schemer in the group, not the true inventors. Some were unrelated predatory lawyers, and often they were major hindrances to wider adoption.
The last thing capitalism wants is to remove the necessity of serving the regime.
To all intents, it is illegal to live simply. You may not, for example, build your own shelter. You must take a mortgage for a sale price bureaucrats very actively inflate.
Free wi-fi to the poor in some areas would cost a pittance to our allegedly caring commonwealths. Instead they hand out monopolistic licences at dodgy auctions, if that.
Michael Bay - Saturday, September 24, 2016 - link
You really have to be braindead or a committed ethnic masochist to support anyone but Trump in this cycle.
Valantar - Saturday, September 24, 2016 - link
"Committed ethnic masochist". Did you mean to say "not astoundingly racist"? Because those words you just strung together mean nothing at all.
Also, braindead? Really? So anyone with a functioning brain would be unable to see the lying, gross inconsistency, thin-skinnedness and general stupidity that spews from the Trump campaign (and his own mouth)? Not to mention the racism, sexism, homophobia, transphobia, elitism, disdain for poor/middle-class people, blatant disregard for the rule of law, due process and the courts, promotion of violence, and willingness to lie about anything and everything to win?
ddriver - Saturday, September 24, 2016 - link
Hey, at least Trump is only preposterous and stupid. Hillary is all that PLUS crazy and evil. She is just as racist as Trump, if not more so, but she is not in the habit of being honest; she'd prefer to claim the votes of minorities.
Politics is a joke, and the current situation is a very good example of it. People deserve all the shit that's coming their way if they still put faith in the political process after this.
ClockHound - Friday, September 23, 2016 - link
+101
Particularly enjoyed the term: "walled garden spyware milking station" model
Ok, not really enjoyed, cringed at the accuracy, however. ;-)
msroadkill612 - Wednesday, April 26, 2017 - link
An adage I liked "If its free, YOU are the product."hoohoo - Friday, September 23, 2016 - link
I see what you did there! Nicely done.patrickjp93 - Saturday, September 24, 2016 - link
No they aren't. If Geekbench optimized for x86 the way it does for ARM, the difference in performance per clock is nearly 5xddriver - Saturday, September 24, 2016 - link
You have no idea what you are talking about. Geekbench is very much optimized; there are basically three types of optimization:
optimization done by the compiler - it eliminates redundant code, vectorizes loops and all that good stuff; that happens automatically
optimization by using intrinsics - doing manually what the compiler does automatically; sometimes you can do better, but in general, compiler optimizations are very mature and very good at doing what they do (a small illustrative sketch of these first two follows below)
"optimization" of the type "if (CPUID != INTEL) doWorse()" - harmful "optimization" that doesn't really optimize anything in the true sense of the word, but deliberately chooses a less efficient code path to purposely harm the performance of a competitor - such optimizations are ALWAYS in favor of the TOP DOG, be that Intel or Nvidia, companies who have an excess of money to spend on such idiotic things. Smaller and less profitable companies like AMD or ARM don't do that kind of shit.
Finally, performance is not magic; you can't "optimize" and suddenly get 5X the performance. Process and TDP are a limiting factor; there is only so much performance you can get out of a chip produced at a given process for a given thermal budget. And that's if it is some perfectly efficient design. A 5W 20nm x86 chip could not possibly be any faster than a 5W 20nm ARM chip. Intel has always had a slight edge in process, but if you manufacture an ARM and an x86 chip on an identical process (not just the claimed node size) with the same thermal budget, the ARM chip will be a tad faster, because the architecture is less bloated and more efficient.
It is a part of a dummy's belief system that ARM chips are somehow fundamentally incapable of running professional software - on the contrary, hardware-wise they are perfectly capable, only nobody bothers to write professional software for them.
patrickjp93 - Saturday, September 24, 2016 - link
I have a Bachelor's in computer science and specialized in high-performance parallel, vectorized, and heterogeneous computing. I've disassembled Geekbench on x86 platforms, and it doesn't even use anything SSE or higher, and that's ancient Pentium III instructions.
It does not happen automatically if you don't use the right compiler flags and don't have your data aligned to allow the instructions to work.
You need intrinsics for a lot of things. Clang and GCC both have huge compiler bug forums filled with examples of where people beat the compilers significantly.
Yes, you can get 5x the performance by optimizing. Geekbench only handles 1 datum at a time on Intel hardware vs. the 8 you can do with AVX and AVX2. Assuming you don't choke on bandwidth, you can get an 8x speedup.
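For context, a minimal hypothetical sketch of the "1 datum vs. 8" contrast being described (illustrative only, not Geekbench code): a 256-bit AVX register holds 8 single-precision floats, so each vector multiply/add covers 8 elements, with real-world gains depending on memory bandwidth and the rest of the loop.

#include <immintrin.h>
#include <cstddef>

// Scalar: one float ("datum") per iteration.
float dot_scalar(const float* a, const float* b, std::size_t n) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < n; ++i) sum += a[i] * b[i];
    return sum;
}

// AVX: 8 floats per 256-bit register, so each multiply/add covers 8 elements.
// Build with e.g. -mavx or -march=native.
float dot_avx(const float* a, const float* b, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8)
        acc = _mm256_add_ps(acc, _mm256_mul_ps(_mm256_loadu_ps(a + i),
                                               _mm256_loadu_ps(b + i)));
    // Horizontal sum of the 8 partial sums, then the scalar tail.
    alignas(32) float tmp[8];
    _mm256_store_ps(tmp, acc);
    float sum = tmp[0] + tmp[1] + tmp[2] + tmp[3] + tmp[4] + tmp[5] + tmp[6] + tmp[7];
    for (; i < n; ++i) sum += a[i] * b[i];
    return sum;
}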
ARM is not more efficient on merit, and x86 is not bloated by any stretch. Both use microcode now. ARM is no longer RISC by any strict definition.
Cavium has. Oracle has. Google has. Amazon has. In all cases ARM could not keep up with Avoton and Xeon D in performance/watt/$ and thus the industry stuck with Intel instead of Qualcomm or Cavium.
Toss3 - Sunday, September 25, 2016 - link
This is a great post, and I just wanted to post an article by PC World where they discussed these things in simpler terms: http://www.pcworld.com/article/3006268/tablets/tes...
As you can see, the performance gains aren't really that great when it comes to real-world usage, and as such we should probably start to use other benchmarks as well, and not just Geekbench or browser JavaScript performance as indicators of the actual performance of these SoCs, especially when comparing one platform to another.
amagriva - Sunday, September 25, 2016 - link
Good post. For anyone interested, a good paper on the subject: http://etn.se/images/expert/FD-SOI-eQuad-white-pap...
ddriver - Sunday, September 25, 2016 - link
I've been using GCC mostly, and in most of the cases after doing explicit vectorization I found no perf benefits; analyzing assembly afterwards revealed that the compiler has done a very good job at vectorizing wherever possible.
However, I am highly skeptical of your claims; I'll believe it when I see it. I can't find the link now, but last year I read a detailed analysis showing that A9X core performance per watt was better than Skylake's over most of the A9X's clock range. And not in Geekbench, but in SPEC.
As for Geekbench, you make it sound as if they actually disabled vectorization explicitly, which would be an odd thing. It's not entirely clear what you mean by "1 datum at a time", but if you mean they are using scalar rather than vector instructions, that would be quite odd too. Luckily, I have better things to do than rummage about in Geekbench machine code, so I will take your word that it is not properly optimized.
And sure, 256-bit-wide SIMD will have higher throughput than 128-bit SIMD, but nowhere near 8 or even 5 times. And that doesn't make ARM chips any less capable of running devices which are more than useless toys. Those chips are more powerful than workstations were some 10 years ago, but their usability is nowhere near that. As the benchmarks from the link Toss3 posted indicate, the A9X is only some ~40% slower than the i5-4300U in the "true/real world benchmarks", and that's a 15-watt chip, versus the A9X at what, 5-ish watts or something like that? And ARM is definitely more efficient once you account for Intel's process advantage. This will become obvious if Intel ever dares to manufacture ARM cores on the same process as its own products. And it is not because of ISA bloat but because of design bloat.
Naturally, ARM chips are a low-margin product; one cannot expect a $50 chip to outperform a $300 chip, but the gap appears to be closing, especially keeping in mind the brick wall process scaling is going to hit in the next decade. A $50 chip running equal to a $300 (and much wider design) chip from 2 years ago opens up a lot of possibilities, but I am not seeing any of them being realized by the industry.
Alexvrb - Sunday, September 25, 2016 - link
Geekbench is trash at comparing across different architectures. It makes steaming piles look good. Only using SSE (first-gen, ancient) on x86 processors would certainly be a part of the puzzle regarding Geekbench results. Thanks, Patrick.
Not to take anything away from Apple's cores. I wouldn't be surprised if they have better performance per WATT than Skylake. Perf/watt is kind of a big deal for mobile, and Apple (though I don't care for them as a company) builds very efficient processor cores. With the A10 using a big.LITTLE implementation of some variety, they stand to gain even more efficiency. But in terms of raw performance? Never rely on Geekbench unless maybe you're comparing an A9 Apple chip to an A10 or something. MAYBE.
ddriver - Monday, September 26, 2016 - link
Hey, it is not me who uses crap like Geekbench and SunSpider to measure performance, it is sites like AT ;)
BurntMyBacon - Monday, September 26, 2016 - link
@ddriver: "Hey, it is not me who uses crap like Geekbench and SunSpider to measure performance, it is sites like AT ;)"
LOL. My gut reaction was to call you out on blame shifting until I realized ... You are correct. There hasn't exactly been a lot of benchmark comparison between ARM and x86. Of course, there isn't much out there with which to compare either so ...
patrickjp93 - Monday, September 26, 2016 - link
Linpack and SAP. Both are massive benchmark suites that will give you the honest to God truth, and the truth is ARM is still 10 years behind.
patrickjp93 - Monday, September 26, 2016 - link
They use it in context and admit the benchmarks are not equally optimized across architectures.
patrickjp93 - Monday, September 26, 2016 - link
It doesn't even use SSE. It uses x86_64 and x87 scalar float instructions. It doesn't even give you MMX or SSE. That's how biased it is.
patrickjp93 - Monday, September 26, 2016 - link
Just because you write code simply enough using good modern form and properly align your data and make functions and loops small enough to be easily optimized does not mean GCC doesn't choke. Mike Acton gave a great lecture at CppCon 2014 showing various examples where GCC, Clang, and MSVC choke.
Define "very good".
Define detailed analysis. Under what workloads? Is it more efficient for throughput or latency (because I guarantee it can't be both)?
Yes, Geekbench uses purely scalar code on x86 platforms. It's ludicrously pathetic.
It's 8x over scalar, and that's where it matters, and it can even be better than that because of loop µop decreases, which allow the loops to fit into the loop stream detector buffers, which can erase the prefetch and WB stages until the end of the loop.
No, they're not more powerful. A Pentium IV is still more powerful than the Helio X35 or Exynos 8890.
No, those are select benchmarks that are more network bound than CPU bound and are meaningless for the claims people are trying to make based on them.
BurntMyBacon - Monday, September 26, 2016 - link
@ddriver: "I've been using GCC mostly, and in most of the cases after doing explicit vectorization I found no perf benefits, analyzing assembly afterwards revealed that the compiled has done a very good job at vectorizing wherever possible."It's not just about vectorizing. I haven't taken a look at Geekbench code, but it is pretty easy to under-utilize processor resources. Designing workloads to fit within a processors cache for repetitive operations is a common way to optimize. It does, however, leave a processor with a larger cache underutilized for the purposes of the workload. Similar examples can be found for wide vs narrow architectures and memory architectures feeding the processor. Even practical workloads can be done various ways that are much more or less suitable to a given platform. Compression / Encoding methods are some examples here.
BurntMyBacon - Monday, September 26, 2016 - link
@patrickjp93: "Yes you can get 5x the performance by optimizing. Geekbench only handles 1 datem at a time on Intel hardware vs. the 8 you can do with AVX and AVX2. Assuming you don't choke on bandwidth, you can get an 8x speedup."If you have processor with a large enough cache to keep a workload almost entirely in cache and another with far less cache that has to access main memory repetitively to do the job, the difference can be an order of magnitude or more. Admittedly, the type of workload that is small enough to fit in any processor cache isn't common, but I've seen cases of it in benchmarks and (less commonly in) scientific applications.
patrickjp93 - Tuesday, September 27, 2016 - link
Heh, they're usually based on Monte Carlo simulations if they can.
msroadkill612 - Wednesday, April 26, 2017 - link
Good post. Ta.
Yep, for well over a decade we have heard from RISC fans how they are the future, yet I seem to live in a world where further miniaturisation is the key to progress, and what better way than CISC on a single wafer, using commonly 14nm nodes, soon to be 7nm from GF.
Intuitively, spread-out, discrete chips can't compete with "warts and all" CISCs.
As it looks now, the new Zen/Vega AMD APU seems a new plateau of SoC, and may even be favoured in server GPU/CPU processes.
We know AMD can make Ryzen, which is 2x4 CPU core units on one AM4 socket plug.
It's a safe bet Vega will be huge.
We know AMD can glue one of the above 4-core units to a Vega GPU core on one AM4 socket (from the Raven Ridge APU specs) - i.e. they can mix and match CPU/GPU on one AM4 socket.
We know the biggest barrier to GPUs, in the form of memory bandwidth, has been removed by Vega's HBM2 memory, placed practically on the chip.
We know it doesn't stop there. Naples will offer 2x Ryzen on one socket soonish, and there is talk of 64 cores, or 8 Ryzens, on one socket.
So why not 8x APUs, or a mix of Ryzen CPUs and APUs, for GPU/CPU compute apps?
pattycake0147 - Friday, September 23, 2016 - link
Pretty sure it was mainly a joke playing on the names...
Ratman6161 - Tuesday, October 4, 2016 - link
I'm coming in late and trying to understand what appears to me to be a ridiculous argument. Apple A10 vs. AMD A10??? What??? Totally unrelated. Might as well add an Air Force A-10 to the list, since we seem to want to compare everything with A10 in the name.
paffinity - Friday, September 23, 2016 - link
Lol, Apple A10 would actually win.
Shadowmaster625 - Friday, September 23, 2016 - link
Apple A10 is actually faster than any AMD chip at JetStream, Kraken, Octane, and pretty much every other benchmark that measures real-world web browsing performance. Such is the sad state of AMD.
ddriver - Friday, September 23, 2016 - link
JS benchmarking is a sad joke. You're comparing apples to oranges, as the engine implementations are fundamentally different. No respectable source would even consider such benchmarks a measure of actual chip performance.
xype - Saturday, September 24, 2016 - link
I'm as "happily locked in" to Apple's platforms as anyone, but the whole "lol A10 kicks x86 ass" thing is getting retarded. It's a fine CPU, sure, but how people can't comprehend that it's designed for a whole different set of usage scenarios is beyond me.
Now, that's not to say Apple isn't working on a desktop-class ARM CPU/GPU combo, but _that_ would be a real surprise.
Meteor2 - Saturday, September 24, 2016 - link
It's a measure of end-user experience, however.
Alexvrb - Sunday, September 25, 2016 - link
Not necessarily. Those benches Shadow mentioned are more a measure of a particular browser's optimizations for those benches than anything else.
silverblue - Saturday, September 24, 2016 - link
Yet HSA would yield far bigger performance gains. The only issue is that, unlike the iOS-specific optimisations you run into all the time, HSA won't be helping anybody unless you're using specifically optimised software. If HSA were some intelligent force that automatically optimised workloads, I don't think anybody would dare suggest an Apple mobile CPU could beat a desktop one.
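To give a flavour of what "specifically optimised software" means here: heterogeneous offload generally has to be spelled out in the code. The sketch below uses OpenMP 4.x target offload rather than the HSA runtime itself, purely as an illustration (the function name is mine); the point is that the hot loop and its data movement are marked explicitly, and nothing happens automatically.

#include <cstddef>

// Offloads a simple element-wise multiply to an accelerator when the compiler
// and runtime support it (e.g. GCC/Clang built with OpenMP offloading),
// falling back to the host otherwise.
void scale_offload(float* data, std::size_t n, float factor) {
    #pragma omp target teams distribute parallel for map(tofrom: data[0:n])
    for (std::size_t i = 0; i < n; ++i)
        data[i] *= factor;
}

Unannotated software just runs on the CPU cores, which is why HSA-class hardware does nothing for it.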
Meteor2 - Saturday, September 24, 2016 - link
This. If HSA were *actually used*, AMD performance would be fantastic.
Shadowmaster625 - Sunday, September 25, 2016 - link
HSA is a pipe dream. Too much latency.
patrickjp93 - Monday, September 26, 2016 - link
No it wouldn't. HSA requires code bloat, which damages cache friendliness in all the cases where code cannot be offloaded to the iGPU. This is why Oracle, Fujitsu, IBM, Intel, and Nvidia all turned their noses up at it. HPX, OpenMP, and OpenACC kick the snot out of HSA as both a programming model and a hardware interface.
Vayra - Friday, September 23, 2016 - link
I don't. Who cares about Apple in any comparison? They walled their garden. You go Apple or you go for something sensible.
AbbieHoffman - Friday, November 11, 2016 - link
Why? Apple is just old, outdated crap propped up by dumb Hollywood trendies that don't know any better. Also, Apple basically has slave labor and suicide nets around the buildings that house their employees. They won't even let them escape in death.
romrunning - Friday, September 23, 2016 - link
Forget these older designs - let's get Zen released already, and work on a Zen mobile/ULV platform.
euskalzabe - Friday, September 23, 2016 - link
Agreed... just release Zen already. I hope at least motherboards will be available by Xmas. Get the build ready, buy a Zen CPU in Jan/Feb, drop it in and boom, new PC build.
kn00tcn - Friday, September 23, 2016 - link
How is releasing mobos early any good? What happens if, for whatever reason, there are problems with the old mobos when the high-end CPUs come (similar to early DDR3 being higher voltage)?
Murloc - Saturday, September 24, 2016 - link
Yeah, it's just stupid to buy stuff and have it become old in your house before you can even use it.
Krysto - Friday, September 23, 2016 - link
Yeah, when can we expect Zen mobile APUs already?
nathanddrews - Friday, September 23, 2016 - link
"Luckily we are told that all AM4 systems should be dual channel"
Hopefully OEMs actually use two sticks of RAM.
Sad to see that the rumors of HEVC 10-bit acceleration were false. I was kinda hoping this would be an upgrade path for Zen, but I guess I'll skip it.
barn25 - Friday, September 23, 2016 - link
Nothing can fully accelerate 10-bit HEVC on desktop. Not even Nvidia.
zmeul - Friday, September 23, 2016 - link
Isn't Kaby Lake going to?
hahmed330 - Friday, September 23, 2016 - link
Nope, wrong. All the 10-series Nvidia graphics cards accelerate both encoding and decoding of 10-bit HEVC at 4K60p. "Pascal can now encode Main10 Profile (10bit) video, and total encode throughput is rated by NVIDIA for 2 4Kp60 streams at once". Look it up in the AnandTech GTX 1080 review.
patrickjp93 - Tuesday, September 27, 2016 - link
Encoding only on Quadros. Kaby Lake will do it with any processor that has its iGPU.
nathanddrews - Friday, September 23, 2016 - link
HEVC Main 10:
Intel Kaby Lake can
AMD Polaris can
NVIDIA Pascal can
NVIDIA Maxwell (GM206) can
tuxRoller - Friday, September 23, 2016 - link
Personally, I've been crossing my fingers for HEVC to just be skipped over for AV1 (which is supposed to have a bitstream freeze by this March). HEVC's been fairly disappointing, imho.
Murloc - Saturday, September 24, 2016 - link
HEVC has been good for me; stuff takes so much less space. Good that another innovation is coming fast.
DanNeely - Friday, September 23, 2016 - link
I really hope the Xen enthusiast chipset is a completely different design. While perfectly acceptable for its intended market, if the B350 is just a slightly cut-down version of the big one, they're going to be painfully behind in the high-end mobo feature-checkboxing race compared to Intel's much larger high-end offerings.
kn00tcn - Friday, September 23, 2016 - link
zen* you've been dreaming of Half-Life too much
DanNeely - Friday, September 23, 2016 - link
X = 1
Y = 2
Z = 3
HL3 confirmed for 2017 launch.
anatol1981 - Friday, September 23, 2016 - link
Please correct the L1D info - it's wrong.
anatol1981 - Friday, September 23, 2016 - link
As I can see, L1-I is swapped with L1-D; please reverse them ^)
Mr Perfect - Friday, September 23, 2016 - link
Any chance the enthusiast-grade chipset will be fabbed at the more expensive 28nm or 16nm node? High-end systems could justify the cost.
Tom Womack - Friday, September 23, 2016 - link
The cost of fabbing a device at a different node is conservatively fifty million dollars; you're talking about a total reimplementation and revalidation, followed by a completely new mask set. There aren't half a million people willing to pay $100 more for an indifferent AMD chipset fabricated on a slightly shinier process.
Mr Perfect - Friday, September 23, 2016 - link
That's assuming the high-end chipset was designed for the 55nm process from the start and would need to be redesigned for 28nm. If they designed the high-end stuff for 28nm right from the start, it would just be a fabrication cost difference.
You're probably right though. They probably didn't see a big enough benefit in doing the C380 (or whatever it ends up being called) at a modern node.
patrickjp93 - Tuesday, September 27, 2016 - link
Intel must be able to do it for far less, because there's no way Coffee Lake will sell that well, and it's based on the Cannonlake architecture, which was originally built with 10nm in mind.
Kalelovil - Sunday, September 25, 2016 - link
The desktop enthusiast market is unlikely to care about the couple of watts of power that it could save.
For the laptop market, where power matters a lot more, I expect the Zen APU to integrate more connectivity onto the die and do away with the south bridge entirely.
patrickjp93 - Tuesday, September 27, 2016 - link
Those are cheaper nodes due to maturity, jsyk.
starlightmica - Friday, September 23, 2016 - link
HP Canada's 510-p127c spec page: http://support.hp.com/ca-en/document/c05281246
AMD A12-9000
16GB DDR4
R7 450 2GB GDDR5, 4K over DP1.2, HDMI 1.4
180W uATX PSU :(
jardows2 - Friday, September 23, 2016 - link
Really looking forward to some actual benchmarks. I really am itching to build a new office computer, and right now the i3-6100 is the only realistic chip, since I won't be doing much gaming on the system. If the new A12 and A10 can even come close to matching the i3 in CPU tasks, I'd be more than happy to snag one up, as the graphics will be nice, along with the upgradability to Zen later if that processor pans out.
Danvelopment - Saturday, September 24, 2016 - link
I'm writing an article on that at the moment (different site, built around the economics of modern tech). The conclusion is that unless you need specific Skylake tech on a desktop (M.2, DDR4, ECC, IGP, SGX, MPX or AVX2), get an Ivy Bridge i5/i7. Ex-lease Ivy machines are pretty much being sold for pennies these days (less than a new Pentium machine), and a quad Ivy i5/i7 will almost always demolish a Skylake i3 on the CPU side.
Use the leftover cash for a couple of SSDs and beer.
serendip - Saturday, September 24, 2016 - link
And so comes the end of the desktop computer, as people refurbish old computers and use them for years instead of buying new ones every year. That Ivy i5 system paired with lots of cheap RAM and a cheap SATA SSD would be more than fast enough for office tasks for years to come.
Could be good for AMD though. They could make good-enough APUs for mainstream usage at a price point Intel can't touch. It's a race to the bottom and AMD could conceivably win.
LordanSS - Saturday, September 24, 2016 - link
Still rocking my 3770K.
Skylake wasn't a good enough performance bump for my use cases, considering platform price. Kaby Lake has no IPC boost, and who knows when Cannonlake will arrive.
Waiting on Zen to arrive so I can take a look at benchmarks. Even if it's "slower" than Skylake, if the platform cost is right it would be a quite viable option.
Danvelopment - Sunday, September 25, 2016 - link
Precisely. I just overclocked my 2500K to 4.5GHz the other day and it will definitely last me until Intel gets its act together and puts a focus on performance improvements again.
If AMD were competitive, Intel would probably be pushing a lot more performance with successive generations. Instead they're cashing in by shrinking dies and moving more components on-die, while only incrementally improving performance.
So really it's a good thing: suddenly having to compete with the secondhand market will hopefully drive a large performance boost in future generations.
patrickjp93 - Tuesday, September 27, 2016 - link
@Danvelopment Please take a look at Agner Fog's x86 instruction latency tables. Intel can't squeeze blood from a rock and make instructions take less than 1 cycle. No one can.
patrickjp93 - Tuesday, September 27, 2016 - link
Continuing from the above, that's why SIMD and MIMD instructions were created.
Danvelopment - Tuesday, September 27, 2016 - link
So you're saying we've hit the limit for processor performance and there's nothing new anyone can do?
I find that hard to believe, especially as innovation is not required to increase performance; hell, if they were desperate they could make the i3 a quad core at minimum and drop four more cores into the i5 and i7 and call it a day. It's not innovation, but it's a significant increase in performance potential.
Instead we've gone from a 216mm2 die to a 122mm2 die.
If it were neck and neck, Intel wouldn't look at AMD and go, "whelp, nothing we can do".
Danvelopment - Tuesday, September 27, 2016 - link
216 is Sandy Bridge and 122 is Skylake.
jardows2 - Monday, September 26, 2016 - link
Where can I find these "pennies" for an Ivy i5? The best I'm finding is on fleaBay for about $200 shipped, with 4GB RAM and too small a hard drive. After I upgrade the hard drive and RAM to what I need, I have a used computer that costs only about $40 less than if I build a new i3 system. And demolish? I'm not so sure about that: http://www.anandtech.com/bench/product/1783?vs=702
Beat? Yes, but I wouldn't say demolish.
Danvelopment - Tuesday, September 27, 2016 - link
Your processor alone is almost $200. You can buy a motherboard, chassis, 80+ PSU (what Dell uses in their OptiPlexes), (we'll exclude the aftermarket cooler, extra fans and optical drive for fun's sake), and 4GB RAM for $83 including shipping (200-157+40)? I'm impressed. Care to spec that up?
Plus the price of the parts you were going to add to the $200 machine.
And your choice in Bench appears to be severely lacking in benchmarks, but I see there aren't many Ivy i5s; you can get ex-lease Ivy i7s for about $30 more.
jardows2 - Tuesday, September 27, 2016 - link
I had hoped for more numbers in Bench, but I guess the i3s don't get the same attention here. I didn't really want to link to "rival" review sites here in the comments. The main point was that the Skylake i3s are not that dramatically slower than the Ivy i5s.
The i3-6100 is priced around $120 USD, $110 on sale. The Asus B150M-A/M.2 is about $80, but I live close to a MicroCenter, so their combo deal knocks $30 off that price. Crucial MX300 M.2 for $70, 8GB of DDR4 for $35, 1TB hard drive for raw storage at $45, case/PSU for $65. Use my own license for the OS. That comes up to ~$415-$445 for a brand new computer.
Main point, I can get a new computer for not much more than a used computer, once I bring the used computer up to my specification. Having a new computer over a used computer for me is more important than the performance difference of the i5.
Danvelopment - Tuesday, September 27, 2016 - link
It's not really building a new machine if you're reusing old parts. If you're talking about the general populace rather than you personally (my target), they won't have the option of moving their Windows license, and there's a clock drop on the i3-6100 relative to the benchmarks earlier.
Also, a chassis/PSU for $65 doesn't sound like a very good option. Going back to Dell (my old company was Dell-heavy, so I have a lot of experience with their enterprise lines; HP, Lenovo etc. will probably be the same), the OptiPlex chassis were almost entirely toolless, well cooled, and from the 790 onwards looked decent, albeit not incredible (but a $65 chassis/PSU wouldn't either). On top of that, they used 80+ PSUs (the 3000/7000/9000 series used Gold; I can't remember if the older ones were the same), proven, quality units. i5/i7 builds also used Q-series motherboards with Intel components (such as the NIC).
If you're matching quality like for like then you'd be looking to spend more on the new machine. I'd much rather personally run a secondhand Ivy i5/i7 using quality components. Their consumer lines are garbage from my experience but ex-lease machines are all enterprise devices.
Being able to do something doesn't make it a better option, especially if you drop the quality to do so. An Ivy i5, even in the benchmarks above, is more powerful; ex-lease machines use higher-quality components; and even with all of the above it's still cheaper. It just makes sense.
Danvelopment - Tuesday, September 27, 2016 - link
I don't work there anymore, but I liked the Dell enterprise machines so much that I actually bought an (ex-lease) E7240 after I left: i5-4200U, 4GB RAM (I added another 8GB that I had lying around), 256GB OEM SSD for $200. I can flip the back off with two screws and access almost everything. And the screen's front bezel just pulls off with fingernails, although you wouldn't know it 'til you tried. You don't have to unbolt the hinges like on most laptops.
Before I started there they bought Vostros (laptop and desktop) for some reason, rather than the enterprise machines, and fark those things. They were the hardest farking things to work on; they literally went out of their way to make it hard. I phased the final ones out just before I left. The Vostro 3450 was my most reviled computer ever. The hard drive was screwed onto the motherboard, and you literally had to pull the whole thing apart, lift the motherboard, then unscrew the HDD from it. If you took the back panel off you could have done it from there, but they put a small band of plastic on the bottom chassis to prevent it. It literally had no other purpose. If there was no warranty you could take a knife, cut that plastic off and do it directly.
Look at this joke of a thing:
http://www.laptopultra.com/guide/wp-content/upload...
https://i.ytimg.com/vi/6QwZ71iAdLA/maxresdefault.j...
4fifties - Friday, September 23, 2016 - link
If DIY motherboards, which presumably would allow either Bristol Ridge or Summit Ridge, follow the pattern of this OEM board, aren't we consigning Zen to just eight lanes of PCIe 3.0 for discrete graphics? Not necessarily an extinction-level event, but neither is it something gaming enthusiasts will be happy with. Hopefully, motherboard manufacturers won't drop the ball with this.
prtskg - Friday, September 23, 2016 - link
I think both Summit Ridge and Raven Ridge will have a better (enthusiast-level) chipset.
KAlmquist - Saturday, September 24, 2016 - link
I take it you are thinking that the AM4 socket has more than 12 PCIe lanes, but that Bristol Ridge doesn't connect them all (sort of like the Intel i7-6800K has 28 PCIe lanes even though it uses a socket that has 40 lanes). That makes sense.
My guess is that motherboard manufacturers expect AM4 motherboard sales to be driven primarily by Zen. In the DIY market, even the people who do buy a Bristol Ridge processor may be doing it with the intention of upgrading to a more powerful processor later. So I would expect most motherboard manufacturers to try to do a good job of supporting the Zen-based processors.
MrCommunistGen - Friday, September 23, 2016 - link
It really looks like the connectivity onboard the APU is targeted at what a normal laptop would need. This should be a major design advantage for AMD compared to their previous mobile platforms in terms of power, design & material cost, and platform footprint.
- This class of CPU doesn't warrant a x16 PEG link
- Due to space constraints most non-DTR laptops will have fewer than 4x USB ports - maybe 3+1 USB-based card reader. They can probably use an onboard hub for more if they really need them.
- x4 PCI-E 3.0 M.2 is an option
In fact, other than USB ports, this is probably enough connectivity for most non-enthusiast desktop users as well. This could help BOM and board design costs here as well. The optimistic part of me would love to see that reinvested elsewhere in the system. Realistically I see that split between a lower sticker price and an increase in margins for the system builder.
stardude82 - Friday, September 23, 2016 - link
Almost certainly AMD is just reusing the Carrizo design as a cost-cutting measure. There isn't an AMD CPU on the market that an x8 link would bottleneck first.
Samus - Friday, September 23, 2016 - link
Nice to see AMD trumping Intel's Crystalwell GPU for half the cost...
patrickjp93 - Tuesday, September 27, 2016 - link
Yeah, no. Even the 5775C has thoroughly beaten everything AMD has so far offered, even if narrowly. Skylake GT3e increased that gap, and GT4e dug a canyon between Intel and AMD. This is why I sincerely doubt AMD's HPC APUs will get much traction.
MrCommunistGen - Tuesday, September 27, 2016 - link
Skylake's GT4e really is in a whole new league of iGPU performance (at a significant cost) - but to nitpick, GT3e on Skylake very likely underperforms GT3e on Broadwell.
Broadwell GT3e (specifically Iris Pro 6200) is 48 EUs at up to 1150MHz backed by 128MB eDRAM on a 65W TDP quad core.
Skylake GT3e (specifically Iris Graphics 550) is 48EUs at up to 1100MHz backed by 64MB eDRAM on a 28W dual core. (I'm not counting the listing for the "Server" GT3e which is listed as P555 with 128MB)
Skylake has microarchitecture improvements to both the EUs and to the x86 cores. It also supports DDR4 and Gen9 includes delta compression - so the iGPU undoubtedly has more bandwidth available.
That said, I really think that the smaller eDRAM, lower TDP, and lower max clock speed will make SKL GT3e slower than BDW GT3e.
Due to the rearrangement of SKUs and iGPUs in SKL, I think this is all natural. Intel is trying to *significantly* increase the iGPU perf available to their 15/28W dual-core CPUs with this iteration of GT3e; however, this allows for the potential that this architecture's GT3e underperforms the previous one.
Danvelopment - Saturday, September 24, 2016 - link
Something seems a bit funny about the IGPs of the A8 and A6. The A8 states R7 with 384 SPs and a lower clock than the R5 in the A6, also with 384 SPs.
Should the A6 be R7, or 256 SPs? And if it's correct, how does the IGP compare between the two? And what makes the A8 faster?
Danvelopment - Saturday, September 24, 2016 - link
Also, any chance of getting a couple of R5 performance indicators?
MrCommunistGen - Tuesday, September 27, 2016 - link
I noticed that too. I've seen tables and slides from various sites and they all list those specs. The only thing I can conclude is that since the A6 has half the CPU resources, maybe AMD just assumes that for most tasks you'll end up CPU-bound to a large enough degree that even though the iGPU is technically faster, performance will be lower.
Or... the alternative is that there's a typo of some sort in the original materials published by AMD.
yannigr2 - Saturday, September 24, 2016 - link
Oh, really nice article. Thanks for this.
Well, the first Zen processors will probably cost more than the A12. Then, when Zen-based APUs are ready to come to market, we will also see smaller Zen models.
ET - Saturday, September 24, 2016 - link
Thanks for the writeup, Ian. I even learned some new things which I haven't yet read elsewhere. I do hope that AnandTech can get its hands on a chip for review.
NeatOman - Sunday, September 25, 2016 - link
Marginal improvement IMO over last-gen APUs. Looks interesting, but an i3 will still "feel" faster as its single-core power is still greater, although I almost always build AMD APUs for small offices because Microcenter has an instant $40 rebate when you buy in store. BUT.. a ZEN SoC with HBM might be a game changer (to quote 2014), and rumor is Apple is eyeballing it for the next MacBook Pro. And AMD has a good track record on Unix-based OSes, which might be why the Mac Pro uses AMD... I'm going down a rabbit hole. SOO! A12... ehh.. no significant change, but I'm waiting for ZEN.
utroz - Sunday, September 25, 2016 - link
Ian Cutress, on the first page you say: "Using the updated 28nm process from TSMC, AMD was able to tweak the microarchitecture and allow full on APUs for desktops using a similar design."
The problem is that Carrizo and Bristol Ridge are actually made at GlobalFoundries on a 28nm process.
TheinsanegamerN - Monday, September 26, 2016 - link
"Back at Computex, AMD announced the thing to fill that hole: Bristol Ridge (and Stoney Ridge by extension). This is what AMD will call and market it's '7th Generation' of APUs, and will be built on TSMC's 28nm Bulk CMOS process."From anandtech's look at carrizo in july. Everything else i've found doesnt say GF or TSMC, just 28nm.
so far, evidence is pointing at TSMC, not GF.
Haawser - Thursday, September 29, 2016 - link
Actual evidence points to GF. Look at the photo of the chip: http://wccftech.com/amd-bristol-ridge-a12-9800-am4...
'Diffused in Germany' means the chip itself was made at GF in Dresden, then shipped to Malaysia for packaging. Unless the photo is fake, which is unlikely.
DOS5 - Tuesday, October 4, 2016 - link
Wondering if I understood correctly -- is the video card slot limited to 8 lanes? If so, how will this affect an APU with a video card as primary, an APU with a video card in dual graphics, and an Athlon X4 CPU with a video card?
Marstg - Saturday, October 22, 2016 - link
Mr Cutress, we are one month later; where is the Bristol Ridge hardware? It would seem easier to reach out to Costco than AMD and have the platform tested. Where are you guys with that?
Camdex - Tuesday, February 7, 2017 - link
Interesting article, but I'm disappointed in the overclocked benchmarks. AMD can say what they want about Excavator v2 cores, but if these benches at 4.8GHz are legit, it's only going to put it slightly over the recent Athlon 860K. I'm making this comparison on the assumption that the future Athlon will be based off the A12-9800. My Athlon 860K gets a Cinebench R15 score of 368 at 4.5GHz. IF it overclocked to 4.8GHz, I'd bet it'd score right around the same 380 the A12-9800 did. On the plus side, IF 4.8GHz is possible on the new Excavator v2 based chips and the power consumption is down, that's a big plus. I was just hoping the new Athlons would score a bit better, say near 400 in Cinebench R15 at stock clocks, and for $100 or less. But hear this, INTEL... I will never pay $180 for a dual core even if it does overclock well. The new i3-7350K is a joke.
Gadgety - Sunday, February 26, 2017 - link
AMD did something similar with the A8-7600 midrange APU. Announced it, launched it, allowed testing by tech sites, and gave OEMs full access, but it took a full 8 months before it was available to the general public.
msroadkill612 - Wednesday, April 26, 2017 - link
test
Glock24 - Sunday, May 14, 2017 - link
We already have Zen, and Bristol Ridge is nowhere to be found apart from OEM systems. What a shame; it would've made a decent office computer.
Shayne03 - Saturday, December 2, 2017 - link
Hi, I bought an A10-9700 with the stock heatsink and an MSI A320M VH Plus motherboard. My CPU temperature is always at 50 to 60 degrees; is that normal?