So what is an “AI PC” anyway?

Apple, nVidia, Microsoft, Qualcomm, AMD and Intel have different opinions on the matter. Here’s what consumers need to know.


It used to be that AI-accelerating hardware was designed for enterprise-level servers or scientific workstations, but the consumer “AI PC” will also be a thing going forward. What is an “AI PC”, though? (Image: Igor Omilaev, Unsplash)


It was just a matter of time, really: with machine learning applications taking the world by storm over the past two years in the form of art generators, chatbots and the like, several tech companies now feel that it’s time to ride the AI wave on a consumer hardware level. Why just charge small amounts of money for access to services when people can also be persuaded that they need AI in the form of new computers? Microsoft, Intel, Qualcomm, AMD, nVidia and Apple are planning on riding that wave in 2024 as all six will be promoting different devices equipped with processing blocks specially designed with machine learning and AI in mind.

The problem, at least from a consumer perspective, is that when these companies speak of “AI devices” or “AI PCs”, they do not necessarily mean the same thing. In fact, they often do not mean the same thing at all. It’s confusing, it’s alarming and it’s sure to cause a period of uncertainty in a global market that is already trying to decide what AI currently is, can or should be. That’s a much broader discussion but, when it comes to devices, here’s what consumers need to know.

Intel, Qualcomm, AMD, Microsoft: the hybrid approach

When these companies talk about “AI”, what they basically refer to is the machine learning models that services like Dall-E or ChatGPT are based on, as well as how these models process input data in order to produce usable results. AI data processing is different from, say, general computational or graphics processing, so specially designed hardware and/or software is needed in order to get fast results. That’s where things get… complicated.

Up until now most consumers had only known one type of AI data processing: the server-side one. Web services like Midjourney or ChatGPT accept the data people submit – in the form of e.g. picture descriptions or prompts – and forward them to complex server farms, equipped with powerful hardware, where machine learning models run 24/7. That’s where the actual processing takes place: the devices themselves most people currently use for AI work or experimentation just receive and display the results of data requests they have sent over the Internet.

Intel has already started including NPU processing blocks within its latest consumer desktop or laptop CPU models and fully expects this to become the norm going forward. AMD, Qualcomm and Apple have similar plans. (Image: Intel)


This – if Intel, AMD and Qualcomm have their way – might be changing soon. All three point out that their best forthcoming processors for 2024 sport quite capable NPUs (Neural Processing Units): processing blocks specifically designed to handle machine learning computational tasks. These processing blocks are obviously nowhere near as powerful as the hardware found in those specialized server farms, but they do help a lot with AI work. As a result, an NPU-equipped PC will take minutes, not hours, to complete e.g. Stable Diffusion image generation runs all on its own (without access to third-party servers that is). This is why these companies claim that personal computers based on these processors should be called “AI PCs” now, even though those tasks can still be carried out by “non-AI” PCs (just way more slowly).

Microsoft’s role in all of this is somewhat… strange. On one hand, the company has been promoting Copilot – its virtual assistant currently based on GPT-4 – for more than a year now on all Windows 11 PCs, regardless of their hardware. That’s because Copilot only works as a cloud-based service (so it absolutely needs Internet access in order to operate as intended) and it may very well continue to do so in the future. On the other hand, Microsoft is also a strong advocate for the “AI PC” concept, which describes personal computers specifically equipped with an NPU – of at least 45 TOPS of processing power – and even a Copilot keyboard key.

Microsoft has already added AI functionality to Windows 11 in the form of the ChatGPT-based Copilot, but it also plans on supporting on-device AI hardware natively at some point in 2024. (Image: Microsoft)


To that end, the company is expected to add native Windows 11 support for these new NPU-equipped processors through the 24H2 OS update it will be releasing at some point over the next few months. In other words, Microsoft – whose operating systems are the cornerstone of the majority of personal computers out there – is going for a hybrid approach: it will continue to offer cloud-based AI services to all Windows 11 PCs on a system level, but it will also offer hardware acceleration options to “AI PCs” for local, “on-device” processing of various AI tasks in 2024 and beyond.

Apple, nVidia: the on-device approach

Driven by different (but not necessarily wrong) motives, nVidia and Apple seem to lean towards local, on-device AI processing instead. Apple has not yet made an official announcement about the exact way it plans to offer new, machine learning-based functionality in future versions of macOS/iOS/iPadOS and future Mac, iPhone or iPad models, but company executives have stated in the past that AI-related tasks should be handled on-device for security and privacy reasons. Since the company is a strong advocate for user privacy and data protection, everyone expects it to implement machine learning in a way that respects those – so the necessary AI work will probably be done either exclusively on-device or on-device with the help of Apple servers, depending on the kind of functionality we are talking about.

It’s worth pointing out that Apple has included a progressively more powerful NPU block in its processors since 2017 (called the Neural Engine), but this was not extensively used by the company’s operating systems – certainly not for consumer-facing functionality beyond e.g. computational photography or Face ID. Apple plans to go all out with its new processors, though – the M4 for the Mac/iPad Pro and the A18 for the iPhone later in the year, along with macOS 15 and iOS/iPadOS 18 – so it will be interesting to see how far the company intends to push the envelope in terms of AI-related tasks without relying on third-party Internet services… for now.

Apple has been using machine learning for a few specific functions on the iPhone already; it will go all out this year, though, by offering extensive AI functionality on the Mac and the iPad too. (Image: Apple)


nVidia is even more bullish when it comes to AI processing done locally. Not only is the company of the opinion that modern graphics cards – like most of its own RTX 30/40 series models – are much more suitable for machine learning work than current NPUs, but it goes so far as to classify Microsoft’s 45 TOPS minimum requirement of NPU processing power as “Basic AI” while labeling GPUs as “Premium AI” subsystems (putting them in the 100-1300 TOPS range). According to nVidia, there are already millions of “AI PCs” out there: any personal computer sporting a GeForce RTX 30xx or 40xx graphics card equipped with 8GB of memory is capable of running advanced AI models that no “NPU inside a CPU” will be able to handle. Not anytime soon, that is.
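nVidia’s tiering is easier to put in perspective with a rough throughput calculation. The sketch below uses purely illustrative figures (the workload size is hypothetical, not a measured benchmark) and simply divides a fixed amount of work by each tier’s TOPS rating – an idealized model, since real-world performance also depends on memory bandwidth, numerical precision and software support:

```python
# Back-of-envelope comparison (illustrative numbers only): how long a fixed
# machine-learning workload would take at different TOPS ratings, assuming
# the hardware is fully utilized the whole time.

def seconds_for_workload(total_tera_ops: float, tops: float) -> float:
    """Idealized runtime: total work (tera-operations) / throughput (TOPS)."""
    return total_tera_ops / tops

# Hypothetical workload of 90,000 tera-operations (not a measured figure):
workload = 90_000

npu_basic = seconds_for_workload(workload, 45)    # "Basic AI" NPU tier
gpu_low   = seconds_for_workload(workload, 100)   # low end of the GPU range
gpu_high  = seconds_for_workload(workload, 1300)  # high end of the GPU range

print(f"45 TOPS NPU:   {npu_basic:.0f} s")  # 2000 s
print(f"100 TOPS GPU:  {gpu_low:.0f} s")    # 900 s
print(f"1300 TOPS GPU: {gpu_high:.1f} s")   # 69.2 s
```

Even in this oversimplified model, the gap between the tiers – roughly 2x at the low end of the GPU range and nearly 30x at the top – illustrates why nVidia considers a 45 TOPS NPU merely “Basic AI”.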

It’s certainly true that nVidia has done a lot of work over the past few years on the software side of things, enabling the use of the Tensor cores inside its RTX graphics cards in a variety of different machine learning applications. Some of these applications are impressive, proving that modern graphics cards can work very well as consumer-level AI accelerators – almost as well as nVidia’s immensely successful server-class AI hardware already does in the enterprise market.

nVidia points out that even an affordable RTX 4060/4060 Ti graphics card is way more powerful for AI work than a basic NPU processing block – assuming, that is, that it’s properly supported by the specific application a consumer is interested in. (Image: nVidia)


It’s important to understand, though, that all this is done – for the most part – on a proprietary API/driver level and on a per-application basis: this is not the same thing as the operating system-level support Microsoft seems intent on offering to Intel, AMD and Qualcomm NPUs (or the system-level support Apple will presumably be offering to developers on its own platforms). That may or may not change in the future, but it’s definitely something to consider going forward.

AI devices: it’s early days yet, so just ignore the hype – for now

Taking all of the above into account, it’s easy to see why the consumer electronics industry is keen on adopting AI as soon as possible: this is the kind of marketing tool that could potentially give a considerable boost in sales of personal computers or mobile devices over the next few years, assuming that what all these companies promise arrives in the form of intriguing, genuinely useful new functionality.

What is also clear, though, is that all these companies are still trying to figure out how to go about all this: how to offer the kind of new functionality that would excite consumers without breaking what is already working for them and, at the same time, without investing huge amounts of money in server infrastructure early on. Both sides of this coin are equally important to any of these companies, as rushing out sets of disappointing AI features would not help their cause, while scaling up too soon – before they have a good idea of just how interested consumers actually are in AI functionality – would definitely not help their bottom line.

The fact that these are early days when it comes to AI functionality in consumer electronic products is also evident in the way these companies currently struggle to even describe it. They have a hard time defining what an “AI computer” or “AI device” actually is, how it’s supposed to work, what can or cannot be expected of such a product and what level of machine learning power should be considered the absolute minimum (since that would also determine what said products should be expected to offer by default). Will that fact keep any of these companies from actively promoting a number of products as “AI PCs” or “AI devices” in 2024? Of course not. Marketing doesn’t have to make sense in tech, does it?

Many laptops will be promoted as “AI PCs” in 2024 and beyond – because of the NPU processing blocks they’ll be sporting – but most consumers should probably not purchase one until the software they are interested in using catches up. (Image: Intel)


In light of all that, most consumers should probably not buy into the hype of these products… for the time being. There will certainly be a number of early adopters who will pay good money in order to get a taste of what the AI future of mainstream tech products looks like – even one that’s only vaguely defined now. That’s perfectly fine (even necessary at times). But most people would do well to just wait and see how all of this plays out before deciding whether it’s worth investing in a new computer, tablet or smartphone “for AI”. Consumers will get new products of this kind for other reasons anyway but, this time around, maybe we should not take big tech at its word about “the next big thing” for consumer products. Can’t hurt to let them prove it for once, no?

ABOUT THE AUTHOR


Kostas Farkonas

Veteran reporter and business consultant with over 30 years of industry experience in various media and roles, focusing on consumer tech, modern entertainment and digital culture.
