How to Run Any LLM On-Device With React Native
Local, on-device LLMs unlock private-by-default, low-latency AI experiences that work offline, which is ideal for mobile. In this talk, Szymon will show how to run LLMs directly inside React Native apps using an AI SDK that provides a powerful abstraction layer to simplify building AI applications. Join him as he explores react-native-ai, a library that enables local LLM execution. He'll dive into the provider architecture, integrations with MLC LLM Engine and Apple's foundation models, and what's new: improved debugging experiences, tool calling support, and agent pipelines for building AI workflows that run entirely on-device.
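The provider architecture the talk covers follows the AI SDK pattern: each backend (MLC LLM Engine, Apple's foundation models, or a remote API) is wrapped behind one shared model interface, so app code stays the same when the model swaps. Here is a minimal, self-contained sketch of that idea — the names (`LanguageModel`, `fakeLocalModel`, `streamText`) are illustrative stand-ins, not react-native-ai's actual API:

```typescript
// Sketch of the provider pattern: one interface, interchangeable backends.
interface LanguageModel {
  // Stream tokens for a prompt; backends differ, the contract doesn't.
  generate(prompt: string): AsyncGenerator<string>;
}

// A fake "on-device" provider standing in for an MLC-backed model.
function fakeLocalModel(modelId: string): LanguageModel {
  return {
    async *generate(_prompt: string) {
      // A real provider would run local inference here; we echo tokens.
      for (const token of [`[${modelId}]`, 'Hello', 'from', 'on-device!']) {
        yield token;
      }
    },
  };
}

// App-level helper: provider-agnostic, in the spirit of the SDK's streamText().
async function streamText(model: LanguageModel, prompt: string): Promise<string> {
  const parts: string[] = [];
  for await (const token of model.generate(prompt)) {
    parts.push(token);
  }
  return parts.join(' ');
}

async function main() {
  const model = fakeLocalModel('phi-3-mini');
  const text = await streamText(model, 'Say hello');
  console.log(text); // [phi-3-mini] Hello from on-device!
}

main();
```

Because the UI only depends on the `LanguageModel` contract, switching from a local MLC model to Apple's foundation models (or a cloud provider) is a one-line change at the call site.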

Learn more about AI
Here's everything we published recently on this topic.
React Native Performance Optimization
Improve React Native apps speed and efficiency through targeted performance enhancements.
C++ Library Integration for React Native
Wrap existing C-compatible libraries for React Native with type-safe JavaScript APIs.
Shared Native Core for Cross-Platform Apps
Implement business logic once in C++ or Rust and run it across mobile, web, desktop, and TV.
Custom High-Performance Renderers
Build custom-rendered screens with WebGPU, Skia, or Filament for 60fps, 3D, and pixel-perfect UX.