At this year’s Tech Summit in Hawaii, it’s once again time for Qualcomm to unveil and detail the company’s most important launch of the year, showcasing the newest Snapdragon flagship SoC that will be powering upcoming 2022 devices. Today, as the first of a few announcements at the event, Qualcomm is announcing the new Snapdragon 8 Gen 1, the direct follow-up to last year’s Snapdragon 888.

The Snapdragon 8 Gen 1 follows its predecessors with a very obvious change in marketing and product naming, as the company attempts to simplify its naming and line-up. Still part of the “8 series”, meaning the highest-end segment for devices, the 8 Gen 1 retires the previous three-digit naming scheme in favor of just a segment and generation number. For Qualcomm's flagship part this is pretty straightforward, but it remains to be seen what it means for the 7 and 6 series, both of which span several parts per generation.

As for the Snapdragon 8 Gen 1, the new chip comes with a lot of new IP: We’re seeing the new trio of Armv9 Cortex CPU cores from Arm, a whole new next-generation Adreno GPU, a massively improved imaging pipeline with lots of new features, an upgraded Hexagon NPU/DSP, integrated X65 5G modem, and all manufactured on a newer Samsung 4nm process node.

The new chip promises large increases in performance and efficiency across many of its processing elements, as well as new features enabling new user experiences. Let’s go over the basic specifications and drill down into the details that we have on the chip:

Qualcomm Snapdragon Flagship SoCs 2020-2021

| SoC | Snapdragon 8 Gen 1 | Snapdragon 888 |
|-----|--------------------|----------------|
| CPU | 1x Cortex-X2 @ 3.0GHz (1x1024KB pL2), 3x Cortex-A710 @ 2.5GHz (3x512KB pL2), 4x Cortex-A510 @ 1.80GHz (2x??KB sL2), 6MB sL3 | 1x Cortex-X1 @ 2.84GHz (1x1024KB pL2), 3x Cortex-A78 @ 2.42GHz (3x512KB pL2), 4x Cortex-A55 @ 1.80GHz (4x128KB pL2), 4MB sL3 |
| GPU | Adreno next-gen | Adreno 660 @ 840MHz |
| DSP / NPU | Hexagon | Hexagon 780, 26 TOPS AI (Total CPU+GPU+HVX+Tensor) |
| Memory Controller | 4x 16-bit CH @ 3200MHz LPDDR5 / 51.2GB/s, 4MB system level cache | 4x 16-bit CH @ 3200MHz LPDDR5 / 51.2GB/s, 4MB system level cache |
| ISP/Camera | Triple 18-bit Spectra ISP; 1x 200MP or 108MP with ZSL, or 64+36MP with ZSL, or 3x 36MP with ZSL; 8K HDR video & 64MP burst capture | Triple 14-bit Spectra 580 ISP; 1x 200MP or 84MP with ZSL, or 64+25MP with ZSL, or 3x 28MP with ZSL; 4K video & 64MP burst capture |
| Encode/Decode | 8K30 / 4K120 10-bit H.265; Dolby Vision, HDR10+, HDR10, HLG; 720p960 infinite recording | 8K30 / 4K120 10-bit H.265; Dolby Vision, HDR10+, HDR10, HLG; 720p960 infinite recording |
| Integrated Modem | X65 integrated (5G NR Sub-6 + mmWave), DL = 10000 Mbps, UL = 3000 Mbps | X60 integrated (5G NR Sub-6 + mmWave), DL = 7500 Mbps, UL = 3000 Mbps |
| Mfc. Process | Samsung 4nm (unspecified) | Samsung 5nm (5LPE) |

CPUs: Cortex-X2 and Armv9 siblings

Starting off with the CPUs of the new Snapdragon 8 Gen 1 (I’ll shorthand it as S8g1 here and there): this is Qualcomm’s first chip featuring the new Armv9 generation of CPU IP from Arm, comprising the Cortex-X2, Cortex-A710, and Cortex-A510 in a big, middle, and little setup. Qualcomm continues to use a 1+3+4 core count, a configuration that has served the company well over the past few generations, ever since the Snapdragon 855.
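
For the curious, the 1+3+4 layout is straightforward to verify on a live device. Below is a minimal sketch, assuming a standard Linux/Android sysfs layout and shell access (e.g. via adb); the exact core numbering is device-dependent:

```python
# Minimal sketch: group CPU cores by their maximum frequency to reveal the
# cluster layout on a Linux/Android device. Assumes the standard sysfs
# cpufreq layout is exposed; paths and output are illustrative.
from collections import defaultdict
from pathlib import Path

clusters = defaultdict(list)
for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    freq_file = cpu_dir / "cpufreq" / "cpuinfo_max_freq"
    if freq_file.exists():
        clusters[int(freq_file.read_text())].append(cpu_dir.name)

for max_khz, cpus in sorted(clusters.items(), reverse=True):
    print(f"{max_khz / 1e6:.2f} GHz: {', '.join(cpus)}")
# On an S8g1 device one would expect three groups, e.g.:
#   3.00 GHz: cpu7 / 2.50 GHz: cpu4, cpu5, cpu6 / 1.80 GHz: cpu0..cpu3
```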

The Cortex-X2 core of the new chip clocks in at 3.0GHz, a tad higher than the 2.84GHz of the X1 core on the Snapdragon 888. This actually surprised me a bit, as I hadn’t expected much in the way of clock increases this generation, but it’s nice to see Arm designs now routinely reaching this mark. For context, MediaTek’s recently announced Dimensity 9000 reaches 3.05GHz on its X2 core, though that’s on a TSMC N4 node. In contrast, Qualcomm manufactures the Snapdragon 8 Gen 1 on a Samsung 4nm node. The company wouldn’t confirm whether it’s a 4LPE variant or something more custom, which is why we’re leaving it as a generic “4nm” description in the specification table.

What is most surprising about the X2 core is that Qualcomm is claiming either 20% faster performance or 30% power savings, the latter figure being especially intriguing. Samsung Foundry describes only a 16% power reduction in going from its 5nm to its 4nm node, and 30% is obviously significantly better than what the process node alone promises. We asked Qualcomm what kind of improvements led to such a large power decrease; however, the company wouldn’t specify any details. I specifically asked whether the new X2 core has its own voltage domain (previous Snapdragon 1+3 big+middle implementations shared the same voltage rail), but the company wouldn’t even confirm whether this was the case. Arm has noted that the X2 can run at considerably lower power at the same peak performance point as the X1; if Qualcomm’s marketing materials refer to such a comparison, then the numbers might make sense.
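
One way such an iso-performance comparison could pencil out, purely as illustrative arithmetic on my part: dynamic CPU power scales roughly as P ≈ C·V²·f, so matching the X1’s peak performance at modestly lower frequency and voltage compounds into a large saving on top of the node’s contribution. The scaling factors below are assumptions for illustration, not Qualcomm figures:

```python
# Illustrative arithmetic only: dynamic power scales roughly as C * V^2 * f.
# Suppose (hypothetically) the X2 matches the X1's peak performance at 5%
# lower frequency and 10% lower voltage:
f_scale = 0.95                    # assumed relative frequency
v_scale = 0.90                    # assumed relative voltage
p_scale = f_scale * v_scale ** 2  # relative dynamic power
print(f"relative dynamic power: {p_scale:.2f}")  # ~0.77, a ~23% saving
# Compounded with a node-level power reduction in the ~16% ballpark, an
# iso-performance comparison like this can quickly reach or exceed the
# quoted 30% figure.
```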

The X2 core is configured with 1MB of L2 cache, while the three Cortex-A710 cores have 512KB each. The middle cores are clocked slightly higher this year at 2.5GHz, a modest 80MHz jump over the previous generation. The middle cores are usually run with a closer eye on the power budget, so this slight increase may more accurately reflect the process node’s improvements.

Lastly, the new chip also makes use of four Cortex-A510 cores at 1.8GHz. Unlike the Dimensity 9000 from a couple of weeks back, Qualcomm does make use of Arm’s new “merged-core” approach for the new microarchitecture, meaning the chip actually has two Cortex-A510 complexes with two cores each, sharing a common NEON/SIMD pipeline and L2 cache. The merged-core approach is meant to achieve better area efficiency. Qualcomm rationalized the choice by saying that in everyday use cases with fewer active threads and overall low activity, a single core able to access the larger L2 shared between two cores can result in better performance and efficiency. Unfortunately, even while making this argument, the company wouldn’t actually disclose the L2 size, whether 512KB or 256KB – if it’s the latter, the configuration definitely isn’t as aggressive as the Dimensity 9000’s.
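
If a device exposes the standard Linux cacheinfo nodes, the merged-core arrangement should be directly visible: pairs of A510s reporting a shared L2, versus a single core per L2 on the A710/X2. A minimal sketch, with paths per the generic sysfs layout (whether a given phone populates these nodes is not guaranteed):

```python
# Sketch: list which cores share each L2 cache via the Linux cacheinfo
# sysfs interface. Output format is illustrative.
from pathlib import Path

for cpu_dir in sorted(Path("/sys/devices/system/cpu").glob("cpu[0-9]*")):
    for index in sorted((cpu_dir / "cache").glob("index*")):
        if (index / "level").read_text().strip() == "2":
            shared = (index / "shared_cpu_list").read_text().strip()
            print(f"{cpu_dir.name}: L2 shared with CPUs {shared}")
# On a merged-core design one would expect pairs such as "0-1" and "2-3"
# for the A510 complexes, versus a single CPU id for the A710/X2 cores.
```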

The new Armv9 CPU IPs from Arm also came with a new generation DSU (DynamiQ Shared Unit, the cluster IP) which the new Snapdragon makes use of. Qualcomm here opted for a 6MB L3 cache size, noting that this was a decision in balancing out system performance across target workloads.

As for system caches, Qualcomm mentioned that the chip remains unchanged with a 4MB system level cache, and the memory controllers still run LPDDR5 at 3200MHz over 4x 16-bit channels. It’s to be noted that, as with last year’s Snapdragon 888, the CPUs no longer have access to the system cache, a trade-off made to improve DRAM latency. We can’t help but make comparisons to MediaTek’s Dimensity 9000, which will likely have worse DRAM latency, but also offers up to 14MB of shared caches to its CPUs versus just 6MB on the Snapdragon 8 Gen 1. How the two chips compare to each other remains to be seen in actual commercial devices.
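
As a sanity check on the 51.2GB/s figure in the spec table, the arithmetic (mine, not Qualcomm’s) is straightforward: LPDDR5 at a 3200MHz clock transfers data on both clock edges, i.e. 6400MT/s, across a 4x 16-bit = 64-bit bus:

```python
# Worked bandwidth arithmetic for the memory subsystem in the spec table.
channels, width_bits = 4, 16
transfers_per_s = 3200e6 * 2            # DDR: two transfers per clock
bus_bytes = channels * width_bits / 8   # 64-bit bus = 8 bytes per transfer
bandwidth = transfers_per_s * bus_bytes
print(f"{bandwidth / 1e9:.1f} GB/s")    # 51.2 GB/s, matching the table
```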

GPU: New Adreno architecture with no name

Back in the day, Qualcomm’s Adreno GPU architectures were easy to identify in terms of both their family and their performance levels. On the architecture side in particular, the Adreno 600 series started off with the Adreno 630 in the Snapdragon 845 a few years ago, but unlike the previous 400 and 500 series iterations, we remained with that high-level designation all the way up to the Snapdragon 888.

The Snapdragon 8 Gen 1 changes things here, and frankly, Qualcomm did a quite horrible job of marketing what they have this time around. The new GPU name drops any model number entirely, and as such doesn’t immediately divulge that this is part of a larger microarchitectural shift that in the past would have been marketed as a new Adreno series.

Qualcomm notes that from an extremely high-level perspective the new GPU might look similar to previous generations, but that it includes large architectural changes meant to improve performance and efficiency. The company gave examples such as concurrent-processing optimizations intended to deliver large boosts in real-world workloads that might not directly show up in benchmarks. Another example was the GPU’s “GMEM”, which saw large changes this generation: a 33% increase in capacity (to 4MB), and it now operates as both a read & write cache, rather than just a write target, to optimize DRAM traffic.
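
To see why making GMEM readable matters for DRAM traffic, consider a toy model (my illustration, not the actual Adreno design): a tile that is rendered and then consumed again by a later pass no longer needs to round-trip through DRAM:

```python
# Toy model of on-chip tile memory traffic; numbers are illustrative only.
def dram_traffic(tile_pixels: int, passes: int, gmem_readable: bool) -> int:
    """Pixel transfers to/from DRAM for one tile across several render passes."""
    if gmem_readable:
        return tile_pixels                  # only the final resolve leaves chip
    # Write-only GMEM: every intermediate result is flushed to DRAM and
    # read back by the next pass.
    flushes = tile_pixels * passes          # each pass resolves out to DRAM
    readbacks = tile_pixels * (passes - 1)  # later passes re-fetch their input
    return flushes + readbacks

print(dram_traffic(64 * 64, passes=3, gmem_readable=False))  # 20480 transfers
print(dram_traffic(64 * 64, passes=3, gmem_readable=True))   #  4096 transfers
```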

The high-level performance claims are 30% faster peak performance, or a 25% power reduction at the same performance as the Snapdragon 888. Qualcomm also uncharacteristically commented on the situation around peak power figures and the current state of the market. Last year, Qualcomm rationalized the Snapdragon 888’s high peak GPU power figures by noting that this is what vendors had demanded in response to what we saw from other players, notably Apple, and that vendors would be able to manage the thermal envelopes in their devices. Arguably, that strategy ended up being quite disastrous for Qualcomm in terms of perception, and in this year’s briefing I felt we saw Qualcomm attempt to distance themselves from the situation, largely by outright saying that the only point of such peak performance and power figures is for vendors to achieve higher first-run benchmark numbers.

Unfortunately, unlike Apple, which actually uses its GPU’s peak performance in transient compute workloads such as camera processing, the Android ecosystem currently makes no advanced use of GPU compute. This admission was actually a breath of fresh air and insight into the situation, as it’s something I’ve especially noted in criticizing the new chips in our Kirin 9000, Snapdragon 888, Exynos 2100, and Tensor deep-dives. It’s an incredibly stupid situation that won’t be resolved any time soon as long as the media continues to put weight on peak performance figures, since the chip vendors will have a hard time saying no to their customers’ requests to operate the silicon in this way.

Qualcomm states that one way to try to alleviate this focus on peak performance is to change the way the GPU’s performance and power curve behaves. The team stated that they’ve gone in and changed the architecture to flatten the curve: not only achieving those arguably senseless peak figures, but actually focusing on larger improvements in the 3-5W power range, a range where the Snapdragon 888 last year didn’t significantly improve upon the Snapdragon 865.
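
To illustrate what “flattening the curve” buys in practice, with entirely made-up numbers on my part: what determines sustained gaming performance is where the curve sits at a phone-realistic 3-5W, not the unconstrained peak:

```python
# Hypothetical perf-vs-watts points purely for illustration; these are not
# measured or claimed figures for any chip.
old_curve = {3: 60, 4: 72, 5: 82, 8: 100}  # steeper: gains concentrated at peak
new_curve = {3: 75, 4: 88, 5: 97, 8: 115}  # flatter: big gains at low power
for watts in (3, 4, 5):
    gain = new_curve[watts] / old_curve[watts] - 1
    print(f"{watts}W sustained: +{gain:.0%}")  # +25%, +22%, +18%
# A design tuned this way shows a modest +15% at its 8W peak, yet much
# larger gains once a device settles at its thermal limit.
```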

That being said, even with a 25% power reduction at Snapdragon 888 performance levels, the new Snapdragon 8 Gen 1 likely still won’t be able to compete against Apple’s A14 or A15 chips. MediaTek’s Dimensity 9000 should also be notably more efficient than the new Snapdragon at equal performance levels given the claimed efficiency figures, so it still looks like Qualcomm’s choice of a Samsung process node, even this new 4nm one, won’t close the gap to the TSMC-based competition.

Comments

  • vlad42 - Friday, December 3, 2021 - link

    vladx was complaining about the lack of ANY hardware encoder support, which is blatantly false. He did not qualify it as consumer encoders.

    It is also important to note that consumer hardware encoders are of little value until the major players (Twitch, YouTube, etc.) are ready to start supporting AV1 in a manner in which client-side encoding is actually needed. For YouTube, Facebook, etc., those companies already re-encode videos uploaded to them, so they can just change the encode target on their end to AV1 – they can even use the hardware encoders I listed above! AV1 hardware decode support has been around for a few years already, and software decode performs well on most laptops and desktops.

    There are already software encoders that are reasonably fast on desktops & laptops, so hardware encoders are really needed for things like cellphones and live streaming/real-time encoding. Cellphone videos typically end up stored in the cloud, where space does not matter to the consumer, or posted to social media, where, as I mentioned above, the company will re-encode the video anyway. For live streaming, Twitch announced years ago that it would start to offer AV1 in 2022 (and Twitch is the most aggressive timeline I have seen). So, as long as hardware encoders show up next year, or software encoder performance/CPUs improve enough, then everything is on track.

    As for adoption of HEVC, Apple was very early with support but only for FaceTime (there is no indication if it was hardware encoded on your link but let us assume it is for the sake of argument). Nvidia was also early. If there were others, then I missed them as the link is filled with announcements on codec support and software encoders/decoders. However, considering MacBooks still have 720p webcams, I doubt iPhones and iPads are encoding at a resolution higher than 720p. At these resolutions AV1 and VVC would bring minimal if any bitrate efficiency improvements at reasonable image quality. This same problem of low resolution video conferencing exists for Zoom, Teams, Skype, etc. on other platforms. As for Nvidia, they probably realized that HEVC encoding on Maxwell went largely unused for a long time due to the lack of adoption by the live streaming/video calling services (and anyone who wanted a high quality encode used software not hardware).

    The point is, there has been little motivation to rush the adoption of either AV1 or VVC encoding support in cellphone chips or GPUs due to the lack of a non-niche use case. I think vendors have simply prioritized the die area that would have gone to hardware encoding support for other parts of the SoC, such as the X1/X2/P-cores, NPUs, image processors, and GPUs, as those provide a more tangible benefit to end users.
  • vladx - Friday, December 3, 2021 - link

    > vladx was complaining about the lack of ANY hardware encoder support, which is blatantly false. He did not qualify it as consumer encoders.

    Umm, stop putting words in my mouth. Let me quote myself:

    "At this rate, consumer hardware supporting encode will be released for VVC before AV1 let alone AV2."

    I specifically mentioned "consumer hardware" on which I believe VVC encoding will be supported before AV1, not "ANY hardware encoder support" as you claimed.
  • BlueSwordM - Friday, December 3, 2021 - link

    Nah, you're the one who's wrong here.

    There's already a ton of HW that has HW AV1 decode support: Samsung SOCs, Mediatek SOCs, Amlogic, etc.

    Almost all 2020+ TVs now include AV1 HW decode as well.
  • vladx - Friday, December 3, 2021 - link

    Compared to HEVC hardware decode, which has been supported by all consumer hardware from 2016 onwards, AV1 support doesn't even come close right now.
  • Zoolook - Wednesday, December 8, 2021 - link

    Netflix started streaming in AV1 two weeks ago.
  • vladx - Wednesday, December 8, 2021 - link

    Sure, but that doesn't they dropped HEVC as well.
  • eastcoast_pete - Tuesday, November 30, 2021 - link

    Thanks for the information! Any idea why QC doesn't like AV1? It's free to use/implement AFAIK, so license fees can't be the reason.
  • tuxRoller - Tuesday, November 30, 2021 - link

    They want to reap the licensing fees from VVC, and ignoring AV1 means people will rely less on AV1, they might say.
    The decode issue isn't much of one given the speed of modern cores and dav1d's efficiency, but we're well past the point where the market is begging for more efficient codecs to satisfy our ultra-HD HDR habits. That's not even mentioning the continued JPEG dominance.
  • Adonisds - Tuesday, November 30, 2021 - link

    Why would they get money from VVC? Shouldn't they have to pay to use it instead?
  • ikjadoon - Tuesday, November 30, 2021 - link

    Qualcomm's IP makes it the #1 single contributor to VVC by a sizeable margin (~12% of VVC's contributions are Qualcomm's, more than any other vendor).

    https://www.iam-media.com/market-developments/qual... (paywall)

    As a reminder, Qualcomm's earnings pre-tax as of fiscal Q3 2021:

    $1.795 billion from all its hardware (28% margin)
    $1.053 billion from its IP patents / licensing (71% margin)

    Qualcomm always seems to forget to mention their lopsided incentives during the yearly Summits, but it's frequently lurking behind most of their "unexplainable" decisions: reduce SoC hardware costs, increase licensing opportunities.
