Massive ISP Upgrades

I’ve been hearing for some time now that 2022 flagships will have massive camera upgrades, and the new features of the next-gen SoCs being described by MediaTek and now also Qualcomm explain why that is.

The new ISP of the Snapdragon 8 Gen 1 falls under a new marketing name – “Snapdragon Sight” – and includes large improvements to the capabilities of the IP blocks within the image processing chain.

The big flagship feature being advertised is that the new ISP is now capable of 18 bits of color depth per channel, up from the previous generation’s 14-bit ISP. While mobile image sensors today are still barely 12-bit native in terms of their ADCs, new HDR techniques such as staggered HDR capture, where exposures immediately follow one another in the sensor’s readout, mean that new phones can capture the component frames much faster and recombine them into higher bit-depth results. In particular, the new 18-bit ISP pipeline now allows for three-exposure HDR stacking off these new sensors.
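To illustrate the principle (this is not Qualcomm’s actual pipeline, just a minimal sketch under assumed exposure ratios and a simple merge rule), three staggered 12-bit exposures could be combined into one higher bit-depth frame roughly like this:

```python
import numpy as np

def merge_staggered_hdr(short, mid, long, ratios=(16, 4, 1)):
    """Merge three 12-bit frames (values 0..4095) taken at different
    exposure times into one higher bit-depth HDR frame.
    `ratios` are the relative exposure gains (hypothetical values)."""
    frames = [short, mid, long]
    acc = np.zeros(short.shape, dtype=np.float64)
    weight = np.zeros(short.shape, dtype=np.float64)
    for frame, ratio in zip(frames, ratios):
        # Skip clipped (saturated) pixels in the longer exposures.
        valid = frame < 4095
        acc += np.where(valid, frame.astype(np.float64) * ratio, 0.0)
        weight += valid
    # Normalize by how many exposures contributed to each pixel.
    return acc / np.maximum(weight, 1)
    # Output range now spans roughly 4095 * 16, i.e. ~16 bits plus headroom.
```

The key point is that the merged frame’s values span a far wider range than any single 12-bit capture, which is what the wider 18-bit pipeline exists to carry.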

The increased bit-depth should allow for an increase of 4 stops in dynamic range (2^4 = 16x the range), which greatly helps with high-contrast scenes and challenging lighting situations. This is quite beyond any other mobile camera solution right now, and being able to implement this in hardware blurs the line between traditional image capture techniques and the more software-defined computational photography methods of the last few years.
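The arithmetic behind that claim is straightforward: each extra bit of pipeline depth is one stop (one doubling) of representable range.

```python
# Each extra bit of pipeline depth is one stop (one doubling) of
# representable range, so the jump from 14-bit to 18-bit works out to:
old_bits, new_bits = 14, 18
extra_stops = new_bits - old_bits        # 4 stops
range_multiplier = 2 ** extra_stops      # 16x the representable range
```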

Indeed, the new ISP architecture seems to be very much a way to implement many existing computational photography techniques in fixed-function blocks: there’s a new neural-net-controlled 3A (auto-exposure, auto-focus, auto-white-balance) and face detection block, which sounds eerily similar to Google’s HDRnet implementations.

Night mode is also said to be vastly improved through a new multi-frame noise reduction and image stacking block, which can now align and stack up to 30 images, and also achieve much finer detail this generation. Qualcomm claims up to 5x better night mode shots.
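The benefit of stacking that many frames follows from basic statistics: averaging N aligned frames reduces random sensor noise by roughly the square root of N. A small sketch (the alignment/warping step is omitted, and all numbers are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)
scene = np.full((64, 64), 50.0)          # dim, noiseless "ground truth"
noise_sigma = 10.0

def capture():
    # Simulate one noisy low-light frame of the same scene.
    return scene + rng.normal(0.0, noise_sigma, scene.shape)

# Average a stack of 30 frames (alignment assumed perfect here).
stack = np.mean([capture() for _ in range(30)], axis=0)

single_err = np.std(capture() - scene)   # noise of one frame, ~10
stacked_err = np.std(stack - scene)      # ~10 / sqrt(30), i.e. ~1.8
```

With 30 frames the residual noise drops to roughly a fifth of a single capture’s, which is where the finer low-light detail comes from.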

Further improvements include a new distortion correction block that can now also correct for chromatic aberrations, and a hardware video bokeh engine able to operate at up to 4K recording. Think of it as analogous to the new Cinematic mode on the A15 iPhones, but not limited to 1080p.

Qualcomm notes that all the AI/ML/neural network features on the ISP are actually run and accelerated on the ISP itself, meaning that it is not offloaded onto the Hexagon dedicated ML processing blocks or the GPU.

As a side note, Qualcomm’s 3.2 Gigapixel/s throughput figure seems low compared to the Dimensity 9000’s 9 Gpixel/s. It’s possible that the two companies are advertising very different metrics, with MediaTek quoting the throughput of lower bit-depth pixels coming off the image sensors, while Qualcomm quotes full bit-depth pixel processing within the ISP itself.
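That mismatch can be sanity-checked by comparing the two headline figures as raw bit throughput rather than pixel throughput. The per-pixel bit depths below are assumptions for illustration (18-bit ISP pixels versus, say, 10-bit sensor-side pixels):

```python
def gbits_per_second(gpixels_per_s, bits_per_pixel):
    # Convert a pixel-rate figure into raw bit throughput (Gbit/s).
    return gpixels_per_s * bits_per_pixel

qualcomm_gbps = gbits_per_second(3.2, 18)   # 57.6 Gbit/s at full ISP depth
mediatek_gbps = gbits_per_second(9.0, 10)   # 90.0 Gbit/s at sensor depth
# The gap narrows considerably on a bits-per-second basis, though it
# doesn't close entirely, so the "different metrics" explanation
# remains a hypothesis rather than a full answer.
```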

In terms of video encoders and decoders, the new chip now allows for 8K HDR recording, but otherwise is seemingly on par with the Snapdragon 888 media blocks. Unfortunately, this also means no AV1 decoding this year yet again. Qualcomm isn’t part of the Alliance for Open Media consortium and is instead backing VVC/H.266 and EVC; however, with AV1 being actively pushed by Google and YouTube, and seeing large-scale adoption by services such as Netflix, it’s becoming questionable for Qualcomm to still not support the format in 2022 devices.

AI Performance - Iterative, but solid

Last year’s Hexagon IP block was a very large change for the Snapdragon 888. At the time, Qualcomm moved on from a more segregated DSP/AI architecture to a single more fused-together block being able to operate on scalar, vector, and tensor operations at the same time. This year’s iteration is an improvement of that larger change. Qualcomm notes that amongst many changes, they’ve doubled up on the shared memory of the block, allowing for greater performance for larger ML models (which are growing at a very fast pace).

Qualcomm didn’t note any TOPS figures this time around, instead stating we’re seeing 2x the tensor throughput performance, and smaller increases for scalar and vector processing. They do quote a day-1 to day-1 performance increase of 4x when compared to the Snapdragon 888, via a combination of both hardware and software improvements, but of course that figure is smaller when comparing both platforms on an equal software footing.

Power efficiency for AI workloads is said to be 70% better this generation, which is actually more significant, and should help with more demanding sustained ML workloads.

X65 Modem Integrated

In terms of connectivity, the Snapdragon 8 Gen 1 is quite straightforward, as it integrates the X65 modem IP that Qualcomm had already announced as a discrete modem earlier this year.

The key improvement is that this is a 3GPP Release 16 modem, bringing new features such as uplink carrier aggregation. Other improvements are 300MHz of Sub-6 bandwidth across three 100MHz carriers, and an increase of the mmWave bandwidth from 800MHz to 1000MHz, allowing new peak theoretical downlink speeds of 10Gbps.
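The quick arithmetic behind those connectivity figures:

```python
# Sub-6: three aggregated 100 MHz carriers.
sub6_total_mhz = 3 * 100                 # 300 MHz total Sub-6 bandwidth

# mmWave: bandwidth grows from 800 MHz to 1000 MHz.
mmwave_old_mhz, mmwave_new_mhz = 800, 1000
mmwave_gain = (mmwave_new_mhz - mmwave_old_mhz) / mmwave_old_mhz   # +25%
```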


  • vlad42 - Friday, December 3, 2021 - link

    vladx was complaining about the lack of ANY hardware encoder support, which is blatantly false. He did not quantify it as consumer encoders.

    It is also important to note that consumer hardware encoders are of little value until the major players (Twitch, YouTube, etc.) are ready to start supporting AV1 in a manner in which client-side encoding is actually needed. YouTube, Facebook, etc. already re-encode videos uploaded to them, so they can just change the encode target on their end to AV1 – they can even use the hardware encoders I listed above! AV1 hardware decode support has been around for a few years already, and the software decode performs well on most laptops and desktops.

    There are already software encoders that are reasonably fast on desktops and laptops, so hardware encoders are really needed for things like cellphones and live streaming/real-time encoding. Cellphone videos typically end up stored in the cloud, where space doesn't matter to the consumer, or posted to social media, where, as I mentioned above, the company will re-encode the video anyway. For live streaming, Twitch announced years ago that it would start to offer AV1 in 2022 (and Twitch is the most aggressive that I have seen). So, as long as hardware encoders show up next year, or software encoder performance improves enough/CPUs improve enough, then everything is on track.

    As for adoption of HEVC, Apple was very early with support but only for FaceTime (there is no indication if it was hardware encoded on your link but let us assume it is for the sake of argument). Nvidia was also early. If there were others, then I missed them as the link is filled with announcements on codec support and software encoders/decoders. However, considering MacBooks still have 720p webcams, I doubt iPhones and iPads are encoding at a resolution higher than 720p. At these resolutions AV1 and VVC would bring minimal if any bitrate efficiency improvements at reasonable image quality. This same problem of low resolution video conferencing exists for Zoom, Teams, Skype, etc. on other platforms. As for Nvidia, they probably realized that HEVC encoding on Maxwell went largely unused for a long time due to the lack of adoption by the live streaming/video calling services (and anyone who wanted a high quality encode used software not hardware).

    The point is, there has been little motivation to rush the adoption of either AV1 or VVC encoding support on cellphone chips or GPUs due to the lack of a non-niche use case. I think vendors have simply prioritized the die area that would have gone to hardware encoding support for other parts of the SoC, such as the X1/X2/P-Cores, NPUs, image processors, and GPUs, as those provide a more tangible benefit to end users.
  • vladx - Friday, December 3, 2021 - link

    > vladx was complaining about the lack of ANY hardware encoder support, which is blatantly false. He did not quantify it as consumer encoders.

    Umm, stop putting words in my mouth. Let me quote myself:

    "At this rate, consumer hardware supporting encode will be released for VVC before AV1 let alone AV2."

    I specifically mentioned "consumer hardware" on which I believe VVC encoding will be supported before AV1, not "ANY hardware encoder support" as you claimed.
  • BlueSwordM - Friday, December 3, 2021 - link

    Nah, you're the one who's wrong here.

    There's already a ton of hardware out there with AV1 HW decode support: Samsung SoCs, MediaTek SoCs, Amlogic, etc.

    Almost all 2020+ TVs now include AV1 HW decode as well.
  • vladx - Friday, December 3, 2021 - link

    Compared to HEVC hardware decode, which has been supported by all consumer hardware from 2016 onwards, AV1 support doesn't even come close right now.
  • Zoolook - Wednesday, December 8, 2021 - link

    Netflix started streaming in AV1 two weeks ago.
  • vladx - Wednesday, December 8, 2021 - link

    Sure, but that doesn't mean they dropped HEVC.
  • eastcoast_pete - Tuesday, November 30, 2021 - link

    Thanks for the information! Any idea why QC doesn't like AV1? It's free to use/implement AFAIK, so license fees can't be the reason.
  • tuxRoller - Tuesday, November 30, 2021 - link

    They want to reap the licensing fees from VVC, and ignoring AV1 means people will rely less on it, they might reason.
    The decode issue isn't much of one given the speed of modern cores and dav1d's efficiency, but we're well past the point where the market is begging for more efficient codecs to satisfy our UltraHD HDR habits. That's not even mentioning the continued JPEG dominance.
    Reply
  • Adonisds - Tuesday, November 30, 2021 - link

    Why would they get money from VVC? Shouldn't they have to pay to use it instead?
  • ikjadoon - Tuesday, November 30, 2021 - link

    Qualcomm is the #1 contributor of IP to VVC by a sizeable margin (~12% of VVC's contributions are Qualcomm's, more than any other single vendor).

    https://www.iam-media.com/market-developments/qual... (paywall)

    As a reminder, Qualcomm's earnings pre-tax as of fiscal Q3 2021:

    $1.795 billion from all its hardware (28% margin)
    $1.053 billion from its IP patents / licensing (71% margin)

    Qualcomm always seems to forget to mention their lopsided incentives during the yearly Summits, but it's frequently lurking behind most of their "unexplainable" decisions: reduce SoC hardware costs, increase licensing opportunities.
