AIM Media House

While Big Tech Builds Bigger AI Clusters, Qualcomm Goes Small

The company’s new wearable chip advances its effort to move meaningful inference onto personal devices

On day one of Mobile World Congress in Barcelona, Qualcomm introduced a new chip for AI wearables. The product, called Snapdragon Wear Elite, is built on a 3-nanometer process.

The chip integrates two neural processors: a Hexagon NPU for heavier tasks and a smaller eNPU for low-power AI. Qualcomm says it can run models with up to two billion parameters directly on the device. It also supports Wi-Fi 8, 5G RedCap, satellite connectivity (NB-NTN), Bluetooth 6.0 and ultra-wideband, according to Qualcomm’s press materials.

Qualcomm described the platform as designed for “Personal AI” devices: pins, pendants and other wearables that can operate independently. Alex Katouzian, executive vice president and group general manager of mobile, compute and XR, said Snapdragon is enabling “a new category of Personal AI devices” that function as part of “a distributed AI network across mobile, compute, XR, wearables, and more.”

The emphasis was on where AI processing occurs.

That message stands against the current structure of the AI market. NVIDIA has reported record data center revenue driven by demand for AI accelerators, with its data center business generating tens of billions of dollars in recent quarters.

Cloud providers continue expanding infrastructure for large models. Amazon Web Services has deployed large Trainium2-based clusters under “Project Rainier” to support training and inference at scale.

Investment remains concentrated in centralized data centers. Qualcomm’s wearable chip points toward computation happening closer to the user.

The Push Toward On-Device AI

Snapdragon Wear Elite integrates local AI processing with sensor handling and multi-mode connectivity. Qualcomm said the platform “delivers powerful edge AI with an integrated NPU architecture,” enabling what it calls “true, Personal AI experiences.”

The claim is that some AI tasks can be handled directly on the device rather than routed to remote servers.

Cloud inference carries usage-based costs. Industry analysis notes that AI workloads are billed by compute usage or tokens, meaning expenses scale with volume. Reducing routine cloud calls by running smaller models locally can lower recurring costs. Latency is also a practical concern. Analysts point out that voice assistants and contextual prompts perform better when inference happens locally, because round-trip requests to remote servers introduce delay.
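The billing argument above is simple arithmetic: per-token pricing scales linearly with request volume. The sketch below illustrates that scaling with hypothetical numbers; the prices, request counts and token sizes are illustrative assumptions, not figures from Qualcomm or any cloud provider.

```python
# Illustrative sketch: how per-token cloud billing scales with usage.
# All figures (requests/day, tokens/request, $/million tokens) are
# hypothetical, chosen only to show the linear relationship.

def monthly_cloud_cost(requests_per_day: int, tokens_per_request: int,
                       price_per_million_tokens: float) -> float:
    """Approximate monthly cost of cloud inference for one device."""
    tokens = requests_per_day * 30 * tokens_per_request
    return tokens / 1e6 * price_per_million_tokens

# A device making 200 short assistant calls a day, ~500 tokens each,
# at an assumed $1 per million tokens:
print(f"${monthly_cloud_cost(200, 500, 1.0):.2f}/month per device")  # → $3.00
```

Small per-device sums multiply across millions of devices, which is why routing routine calls to a local model rather than the cloud changes the recurring-cost picture.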

Privacy adds another constraint. The European Union’s General Data Protection Regulation (GDPR) restricts how personal data can be processed and transferred, strengthening the case for keeping some computation on-device.

Still, the chip’s capacity remains limited compared with frontier systems. A two-billion-parameter model is small relative to the largest models deployed in hyperscale data centers. It can support structured prompts, summaries and limited conversation, but does not replace large-scale reasoning systems.
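The two-billion-parameter ceiling maps directly to device memory budgets. A rough back-of-the-envelope calculation, assuming the weights are quantized (a common technique for fitting small models into mobile memory, not something Qualcomm has specified here):

```python
# Rough weight-memory footprint for a 2-billion-parameter model.
# Quantization levels are an assumption for illustration; the article
# does not say what precision Snapdragon Wear Elite uses.

def model_memory_gb(params: float, bits_per_param: int) -> float:
    """Approximate weight memory in gigabytes (1 GB = 1e9 bytes)."""
    return params * bits_per_param / 8 / 1e9

for bits in (16, 8, 4):
    print(f"2B params at {bits}-bit: ~{model_memory_gb(2e9, bits):.1f} GB")
# 16-bit: ~4.0 GB, 8-bit: ~2.0 GB, 4-bit: ~1.0 GB
```

At 4-bit precision the weights fit in roughly a gigabyte, which is plausible for a wearable; frontier models with hundreds of billions of parameters remain orders of magnitude beyond that budget.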

Snapdragon Wear Elite aligns with Qualcomm’s stated product direction. The company has highlighted dedicated NPUs in its Snapdragon X PC platform and recent mobile processors to support on-device AI workloads.

Diversifying Beyond Smartphones

The AI market remains centered on hyperscale compute. NVIDIA’s financial results underscore how much demand is tied to data center accelerators.

Cloud providers continue investing heavily in proprietary silicon to support training and inference at scale. AWS’s Project Rainier demonstrates how custom AI infrastructure can be deployed rapidly.

A wearable chip does not shift that balance on its own.

For Qualcomm, the move expands its silicon exposure. In its most recent quarterly results, the company reported more than $12 billion in revenue, with handset-related business still a major contributor.

Smartphones and licensing remain central to Qualcomm’s financial base. Expanding into AI PCs, automotive systems and wearable categories diversifies that exposure.

If more devices handle some AI tasks locally, each becomes a potential point of chip demand. Qualcomm competes in supplying processors to endpoint devices (phones, PCs, vehicles and wearables), where inference may increasingly occur.

Major partners are signaling interest. In Qualcomm’s announcement, Bjørn Kilburn, general manager of Wear OS at Google, said the platform opens “new possibilities” for always-on intelligent systems.

Executives from Motorola and Samsung Electronics also endorsed the chip, citing performance and battery life improvements.

Uncertainty remains. Earlier attempts to build wearable AI devices, such as the Humane AI Pin, faced criticism over battery life, overheating and unclear everyday use cases before the company ultimately sold its assets.

Many advanced AI functions continue to rely on cloud processing. The timeline for broader local inference remains unclear, but Qualcomm is positioning its architecture across phones, PCs, vehicles and wearables in case more AI processing moves closer to the user.