The Offline AI: On-Device LLMs in React Native With AI SDK
Learn how to run on-device LLMs in React Native using Vercel’s AI SDK from Michał Pierzchała's talk at DevAI by Data Science Summit.

On-device LLMs, unlike remotely accessed models, unlock private-by-default, low-latency AI experiences that work anywhere, even offline, which makes them ideal for mobile. In this talk, Michał will show how to run LLMs directly inside React Native apps using Vercel's AI SDK, which provides a robust abstraction layer that simplifies building AI applications: the same interface runs both local and remote models. These capabilities are possible thanks to a set of open-source libraries we created. He'll dive deep into the provider architecture and demonstrate how we integrated it with the MLC LLM Engine and with Apple's Foundation Models, available on their latest mobile devices.
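To make that abstraction concrete, here is a minimal sketch of what the unified interface can look like. The `generateText` call is the AI SDK's real core function; the `mlc` provider import and the model identifier are hypothetical placeholders standing in for the on-device provider the talk describes.

```typescript
// Minimal sketch of "same interface, local or remote" with the AI SDK.
// `generateText` comes from the AI SDK itself; the on-device provider
// below is a hypothetical stand-in for an MLC-backed provider.
import { generateText } from 'ai';
import { mlc } from 'react-native-ai-provider'; // hypothetical package

export async function summarizeNote(note: string): Promise<string> {
  const { text } = await generateText({
    // Swap this single line for a remote provider, e.g. openai('gpt-4o'),
    // and the rest of the call stays identical.
    model: mlc('Llama-3.2-1B-Instruct'), // hypothetical model id
    prompt: `Summarize this note in one sentence:\n${note}`,
  });
  return text;
}
```

Because the model is just a parameter, app code can decide at runtime whether to route a request to an on-device model or a remote one without changing the calling code.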

Learn more about AI
Here's everything we published recently on this topic.
React Native Performance Optimization
Improve React Native apps speed and efficiency through targeted performance enhancements.
New Architecture Migration
Migrate confidently to React Native’s New Architecture to keep shipping features, unlock new platform capabilities, and stay fully compatible with upcoming releases.
Quality Assurance
Combine automated and manual testing with CI/CD integration to catch issues early and deliver reliable React Native releases.
Scalability Engineering
Design and validate React Native architectures that scale, supporting high traffic, modular teams, and long-term performance.