Back in 2006, technology newcomer Bigfoot Networks announced the Killer NIC, an unusual network interface card targeted at the high-end gaming market. Coming shortly after solid consumer-grade NICs became universally integrated into motherboards, the timing wasn’t the best. At $280, neither was the price.

The Killer NIC marked a return to the idea of a complex NIC, forgoing a simple controller/PHY combination in favor of a dedicated device capable of acting more independently. By endowing a NIC with a full-fledged microcomputer running Linux, not only could the card offload virtually every part of the processing required for network operations, but it could even bypass Windows’ notorious networking stack and handle packets in a manner better suited to low-latency use (i.e. games). In turn the Killer NIC would be able to offer lower and more stable pings than standard NICs, and with lower CPU overhead it would increase game/application performance too.


The original Killer NIC in 2006

The reality of the hardware didn’t quite live up to the initial hype. In our review of the original Killer NIC the results were a mixed bag: offloading did improve game performance slightly in CPU-limited situations, but we couldn’t find any significant improvements to ping/latency in games. Ultimately any benefits the card could offer couldn’t justify the $280 price tag even among the high-spending gaming crowd. In 2009 they released their second-generation card, the Killer Xeno Pro, but even with a lower price the results were much the same.

Flash forward to today, and Bigfoot Networks has continued their product development unabated, bringing us to their latest product: the Killer 2100. The 2100 is Bigfoot’s third-generation Killer NIC and will be replacing the Killer Xeno Pro. We will be reviewing the card next week when our sample arrives, but ahead of that we wanted to do a quick rundown of what we know thus far after talking to Bigfoot.


The Killer 2100

Let’s start with the hardware:

  • 10/100/1000 Ethernet
  • PCIe x1 interface
  • 400MHz dedicated network processor (PowerPC)
  • 128MB DDR2 RAM

Bigfoot tells us that the Killer 2100 hardware isn’t significantly different from the current Killer Xeno Pro. The biggest change is that Bigfoot has ditched all non-Ethernet functionality, dropping the USB port and the 3.5mm audio jacks and leaving only the RJ-45 port. Internally the hardware is largely the same. The network processor continues to be an embedded PowerPC CPU, likely chosen for PowerPC’s native big-endian support.
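For those unfamiliar with the endianness point: network byte order – the order in which multi-byte fields are transmitted on the wire – is big-endian, so a big-endian CPU like the PowerPC can parse packet headers without any byte swapping. A quick illustrative sketch in Python (the port number here is arbitrary, not anything Bigfoot-specific):

```python
import struct

# Network byte order is big-endian: multi-byte header fields are
# transmitted most-significant byte first.
port = 47624                        # an arbitrary 16-bit port number
wire = struct.pack('!H', port)      # '!' = network (big-endian) order
print(wire.hex())                   # -> 'ba08' (0xBA08 == 47624)

# A big-endian CPU can use these bytes as-is; a little-endian CPU
# (e.g. x86) must byte-swap, as the misread below demonstrates:
assert struct.unpack('!H', wire)[0] == port    # big-endian read: correct
assert struct.unpack('<H', wire)[0] != port    # little-endian read: wrong
```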

As the hardware is being left relatively unchanged, the significant introduction for the Killer 2100 is the software stack, which Bigfoot tells us has received a major overhaul. As you may recall, the entire Killer NIC line runs an embedded Linux distribution on the NIC, so in this case a software overhaul means upgrading both the embedded Linux distribution and the end-user Windows software.

Bigfoot hasn’t told us much about what they’ve done on the Linux side of things, although they are calling it a whole new architecture. In any case the fundamentals of the Killer NIC haven’t changed – it’s still based around bypassing the Windows networking stack for low-latency applications – with features such as application prioritization, offloading, and shaping/throttling rounding out the package. The biggest low-level change we do know about is that Bigfoot has addressed the Killer Xeno Pro’s previously poor throughput, meaning the Killer 2100 should now be competitive at GigE speeds when operating on a LAN.

Beyond the usual optimizations, the bulk of the other changes are to the end-user software. Late last year Bigfoot introduced a new utility called Dashboard that augmented the Killer Xeno’s basic control panel with a more advanced graphical readout of the card’s status. The Killer 2100 takes this all the way, giving the Dashboard (now the Killer Network Manager) greater control over the card. This culminates in a new feature called Visual Bandwidth Control: in a nutshell, a graphical readout of each application’s bandwidth usage, letting the user see what each application is consuming and set bandwidth throttles accordingly. Throttling itself isn’t new, but visualizing per-application usage from the NIC is, and it forms the cornerstone of what Bigfoot is calling their key breakthrough – allowing the user to better monitor the card.
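Bigfoot hasn’t detailed how their throttling is implemented, but per-application rate limiting of this sort is classically built on a token bucket. As a purely illustrative sketch of the concept (our own, not Bigfoot’s code):

```python
import time

class TokenBucket:
    """Classic token-bucket rate limiter: sustains `rate` bytes/sec,
    allowing bursts of up to `capacity` bytes."""
    def __init__(self, rate, capacity):
        self.rate = rate                # refill rate, bytes per second
        self.capacity = capacity        # maximum burst size, bytes
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Return True if a packet of `nbytes` may be sent now."""
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= nbytes:
            self.tokens -= nbytes
            return True
        return False

# Hypothetical per-application limits: cap a background downloader at
# 100 KB/s so it can't crowd out a game's traffic.
limits = {'downloader.exe': TokenBucket(rate=100_000, capacity=50_000)}
```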


The Killer Network Manager

Along with this, Bigfoot is also rolling out a new NIC benchmark they are calling Gaming Network Efficiency (GaNE). This is intended to solve one of Bigfoot’s biggest issues in selling the card – being able to showcase the difference in a reliable manner. Although we’re always leery of vendor-provided benchmarks, Bigfoot has offered us the source code to GaNE and we’ll be looking at it once our sample arrives to see if there’s anything to it.

GaNE also marks a bit of a shift in how Bigfoot will be marketing the Killer lineup. In their meeting with us, less of an emphasis was placed on total latency in favor of network jitter (which GaNE can conveniently benchmark). Bigfoot claims that the Killer 2100 exhibits much less jitter than standard NICs, which would serve to keep latency more stable and avoid triggering undesired responses from games that believe latency is spiking. Bigfoot still believes the Killer 2100 will offer better latency outright – through a combination of lower jitter and their own latency optimizations, such as reduced buffering where applicable – but it’s certainly a more realistic approach than focusing solely on latency, something that’s largely out of the NIC’s hands in the first place. We will of course be testing all of these claims to see how well they hold up and what impact they actually have.
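We haven’t seen GaNE’s source yet, but jitter measurement itself is conceptually straightforward: sample a series of round-trip times and look at how much they vary rather than at their average. A rough sketch of the idea (the echo server here is hypothetical, and we use the standard deviation of the RTTs as the jitter figure):

```python
import socket
import statistics
import time

def measure_jitter(host, port, samples=100):
    """Probe a UDP echo service and return (mean RTT, jitter) in ms."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(1.0)
    rtts = []
    for i in range(samples):
        start = time.perf_counter()
        sock.sendto(str(i).encode(), (host, port))
        sock.recvfrom(64)                          # wait for the echo
        rtts.append((time.perf_counter() - start) * 1000.0)
        time.sleep(0.05)                           # pace the probes
    return statistics.mean(rtts), statistics.stdev(rtts)

# mean_ms, jitter_ms = measure_jitter('echo.example.com', 7)  # hypothetical host
```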

Finally, we’ll wrap up this announcement with some good news for existing Killer Xeno Pro owners. Since the hardware is largely unchanged, Bigfoot will be releasing a version of their new software stack for the Xeno Pro, giving it most of the functionality of the Killer 2100. Future improvements to the software will ultimately be reserved for the Killer 2100, but this is an appropriate gesture given that less than a year separates the launches of the two cards.

The Killer 2100 will be selling at the same $129 price as the existing Killer Xeno Pro, cementing its spot as the Xeno Pro’s replacement. Bigfoot’s official press release leaves the actual availability date open with a simple “coming weeks”, although for North American distribution it sounds like it could show up as early as this week.

We’ll have more on the Killer 2100 in our full review next week, so stay tuned.

Comments

  • bunnyfubbles - Tuesday, May 11, 2010 - link

    I love how they're moving away from the cheesy bling look, but more than $100 for a NIC is just too much. I could see myself maybe spending $50-75 just for the features as performance has never been impressive enough for me.
  • Arrakiv - Tuesday, May 11, 2010 - link

    Yeah, we're much, much happier with the current generation's look. The Xeno wasn't too bad either, I don't think, but we wanted the 2100 to look more modern and attractive and less, well, like you said. Basically it's a reflection of where we're going as a company, too.
  • SanLouBlues - Wednesday, May 12, 2010 - link

    I'd consider paying in that range, but only if it had multiple ethernet ports so I could repurpose it as a router interface once it's obsolete.
  • shmuck - Wednesday, May 12, 2010 - link

    Repurpose nothing, you could daisy chain those suckers. Picture a LAN party with no hub required, no processor overhead from the daisy chaining, and only a trivial amount of additional latency. As long as the party wasn't huge it'd work really well.

    Mostly what's got me disappointed by this is that all the improvements seem to be in the proprietary Windows software. The appeal of having a little Linux box that fits inside my Linux box is dampened somewhat by lack of Linux compatibility.
  • HangFire - Thursday, May 13, 2010 - link

    You wish to repurpose Ethernet into a token ring protocol, or go back to broadcast only?
  • shmuck - Thursday, May 13, 2010 - link

    Well it is a ring topology, but no token needed. And it's not repurposing, it's expanding capability.
  • kmmatney - Tuesday, May 11, 2010 - link

    I'd be more interested to know if a dedicated sound card can improve performance in CPU-limited situations. I've heard it could help, but maybe that's just a myth - you can have a Mythbusters article about that...
  • shangshang - Tuesday, May 11, 2010 - link

    A dedicated sound card will improve gaming performance, not to mention sound quality. It's been discussed over the years. Google it.
  • Guspaz - Tuesday, May 11, 2010 - link

    This is less true today than it used to be for several reasons:

    1) Sound quality. This is entirely dependent on the quality of the DACs used in the onboard audio versus the dedicated sound card (i.e. the SNR). There are good and bad examples of both varieties, and some onboard solutions have better noise margins than discrete cards. If you're buying a discrete card, verify that its noise margin is better than your existing onboard solution's, or you're throwing good money after bad.

    2) Performance. Performance differences are pretty much a thing of the past, especially on modern multicore computers. Virtually no soundcards, discrete or otherwise, have hardware-accelerated 3D sound support anymore, and the only two APIs that support that (DirectSound3D and OpenAL) are either deprecated (DS3D doesn't exist in anything newer than WinXP), or third party (OpenAL requires support from individual games).

    The majority of sound processing is done in software these days, be it by Windows or by the game itself. This was necessitated by the varying capabilities of sound hardware; the only way for a game to be sure that sound mixing and effects would be consistent between different computers was to do it themselves. It was around the introduction of Id Tech 4 and Source that this became the norm.
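    (The mixing step itself is about as cheap as it sounds - at its core a software mixer just sums samples and clamps them to the valid range. A rough sketch for 16-bit PCM, just to illustrate:)

```python
def mix_16bit(a, b):
    # Mix two signed 16-bit PCM streams: sum each pair of samples and
    # clamp to the 16-bit range to avoid wraparound on overflow.
    return [max(-32768, min(32767, sa + sb)) for sa, sb in zip(a, b)]

# mix_16bit([1000, -20000], [500, -20000]) -> [1500, -32768]
```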

    Furthermore, many people these days have quad-core processors, and not many games can effectively utilize all four cores due to the difficulty of splitting the load evenly. That often means there are spare CPU resources available for sound processing. Even if there aren't, Moore's Law has ensured that the CPU time required for audio processing has dropped to an extremely tiny amount relative to the overall performance of the processor.

    Gone are the days of 40% performance drops when using unaccelerated 3D sound on tbird Athlons. You might see a few FPS difference by spending $200 on a soundcard today, but you'll see a far bigger difference if you put that $200 into a faster CPU or GPU.

    So, what conclusion can we draw from this? That the only reason to buy a discrete sound card for the vast majority of users is purely audio quality; it's impossible in this day and age to get a big enough speed boost from a soundcard that the money is better spent there than on the CPU/GPU. And even then, you need to buy fairly high-end to get a noise margin that's better than onboard.
  • Zorlac - Tuesday, May 11, 2010 - link

    Exactly!!! And what most "hardcore" PC folks don't know is that your sound card funds are much better off going to an external dedicated DAC if you are looking for "quality". Great post by the way. :)
