MediaTek to integrate Llama 2 directly into its mobile semiconductor chips

Written by MediaTek

Generative AI has many potential applications across domains such as entertainment, education, healthcare, and business. However, most generative AI processing is currently performed in the cloud, which requires high bandwidth, low latency, and a reliable internet connection, and raises data privacy, security, and cost concerns. To overcome these limitations, MediaTek, a leading global fabless semiconductor company powering more than two billion connected edge devices every year, announced it is working closely with Meta’s Llama 2, the company’s next-generation open-source Large Language Model (LLM). Utilizing Meta’s LLM along with MediaTek’s latest APUs and NeuroPilot AI Platform, MediaTek aims to build a complete edge computing ecosystem designed to accelerate AI application development on smartphones, IoT devices, vehicles, smart home products, and other edge devices.

Llama 2 is a generative AI model that can produce realistic and diverse text from natural language prompts. It is trained on 2 trillion tokens of publicly available online data and can handle a wide range of tasks, such as answering questions, writing essays, generating code, and powering chatbots.
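As a rough illustration of the kind of text-generation workload described above, the sketch below loads a Llama 2 chat checkpoint with the open-source Hugging Face transformers library and generates a short completion. It assumes access to the gated meta-llama/Llama-2-7b-chat-hf weights and sufficient memory; it is a generic reference for how Llama 2 is typically invoked, not MediaTek's software stack.

```python
# Minimal sketch: text generation with a Llama 2 chat model via Hugging Face
# transformers. Assumes access to the gated "meta-llama/Llama-2-7b-chat-hf"
# weights; illustrative only, not MediaTek's NeuroPilot toolchain.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # pick fp16/bf16 automatically where supported
    device_map="auto",    # place layers on available GPU/CPU (needs accelerate)
)

prompt = "Explain in two sentences why running an LLM on-device improves privacy."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=120)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```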

By integrating Llama 2 with MediaTek’s APUs and NeuroPilot AI Platform, MediaTek will enable generative AI applications to run directly on-device. Doing so offers several advantages to developers and users, including seamless performance, greater privacy, better security and reliability, lower latency, the ability to work in areas with little to no connectivity, and lower operating costs.
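For a sense of what running directly on-device looks like in practice, here is a hedged sketch using the open-source llama-cpp-python bindings to run a locally stored, quantized Llama 2 model with no network connection. The model filename is a placeholder, and this is a generic illustration rather than the NeuroPilot pipeline.

```python
# Illustrative on-device inference with llama-cpp-python: the quantized GGUF
# file lives on local storage and generation needs no network connection.
# The filename below is a placeholder for whichever Llama 2 build you use.
from llama_cpp import Llama

llm = Llama(model_path="llama-2-7b-chat.Q4_K_M.gguf", n_ctx=2048)

out = llm(
    "Q: Name two benefits of running generative AI on-device. A:",
    max_tokens=96,
    stop=["Q:"],
)
print(out["choices"][0]["text"].strip())
```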

To truly take advantage of on-device Generative AI technology, edge device makers will need to adopt high-performance, low-power AI processors and faster, more reliable connectivity to enhance computing capabilities. Every MediaTek-powered 5G smartphone SoC shipped today is equipped with APUs designed to perform a wide variety of Generative AI features, such as AI Noise Reduction, AI Super Resolution, AI MEMC, and more.

Additionally, MediaTek’s next-generation flagship chipset, to be introduced later this year, will feature a software stack optimized to run Llama 2, as well as an upgraded APU with Transformer backbone acceleration and reduced memory footprint and DRAM bandwidth usage, further enhancing LLM and AIGC performance. These advancements will speed up the development of use cases for on-device Generative AI.
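To see why footprint and DRAM bandwidth matter for on-device LLMs, the back-of-envelope calculation below (an illustrative estimate, not a MediaTek or Meta figure) compares the weight memory of a 7-billion-parameter model at a few common precisions. During token generation most of those weights are streamed from DRAM on every step, so a smaller footprint translates directly into less bandwidth pressure.

```python
# Back-of-envelope weight-memory estimate for a 7B-parameter LLM at common
# precisions. Illustrative only; not MediaTek or Meta figures.
PARAMS = 7e9

for name, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    gib = PARAMS * bits / 8 / 2**30
    print(f"{name}: ~{gib:.1f} GiB of weights")

# Prints roughly 13.0 GiB (FP16), 6.5 GiB (INT8), 3.3 GiB (INT4). Lower-precision
# weights fit more easily in a phone's memory and cut DRAM traffic per token.
```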

“The increasing popularity of Generative AI is a significant trend in digital transformation, and our vision is to provide the exciting community of Llama 2 developers and users with the tools needed to fully innovate in the AI space,” said JC Hsu, Corporate Senior Vice President and General Manager of Wireless Communications Business Unit at MediaTek. “Through our partnership with Meta, we can deliver hardware and software with far more capability in the edge than ever before.”

MediaTek expects Llama 2-based AI applications to become available for smartphones powered by the next-generation flagship SoC, scheduled to hit the market by the end of the year.
