Intel has announced that elements of Microsoft’s Copilot AI service will soon run locally on PCs, rather than everything being processed in the cloud. To qualify as next-generation AI PCs, machines will need built-in neural processing units (NPUs) delivering over 40 TOPS (trillions of operations per second) — more than any consumer processor currently on the market.
By running more of Copilot locally, Intel aims to reduce lag and improve both performance and privacy. Today, Copilot relies heavily on cloud processing, which introduces delays even for smaller tasks. Windows has yet to make full use of NPUs, unlike ChromeOS and macOS, which already use them for a range of video- and audio-processing features.
Among current NPUs, the Apple M3 leads the pack at 18 TOPS, followed by AMD’s Ryzen 8040 and 7040 chips at 16 and 10 TOPS respectively. Intel’s Meteor Lake and Qualcomm’s Snapdragon X Elite also offer significant AI compute. Intel’s upcoming Lunar Lake chips are expected to triple current NPU speeds.
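As a rough illustration of the gap, the NPU figures above can be checked against the 40 TOPS bar in a few lines of Python. The chip names and numbers are those reported in the article; the threshold check itself is a hypothetical sketch, not Intel’s actual qualification test:

```python
# NPU throughput (TOPS) for current consumer chips, as reported.
NPU_TOPS = {
    "Apple M3": 18,
    "AMD Ryzen 8040": 16,
    "AMD Ryzen 7040": 10,
}

# Reported bar for next-generation AI PCs.
REQUIRED_TOPS = 40

def meets_next_gen_bar(tops: float, required: float = REQUIRED_TOPS) -> bool:
    """Return True if an NPU's throughput exceeds the next-gen AI PC bar."""
    return tops > required

# None of today's consumer NPUs clear the 40 TOPS requirement.
qualifying = [name for name, tops in NPU_TOPS.items() if meets_next_gen_bar(tops)]
print(qualifying)
```

Running this prints an empty list, underscoring the article’s point that no shipping consumer NPU yet meets the requirement.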
Intel has also introduced 300 new AI features optimized for its OpenVINO platform and launched an AI PC development kit built on the ASUS NUC Pro with Meteor Lake silicon. The company has ambitious plans for desktop AI PCs and for next-generation AI PCs that meet the 40 TOPS requirement for advanced processing needs.