React Native Evals: Making AI Code Quality Measurable

Date
Thursday, March 12, 2026
Time
5 PM - 7 PM [CET]
Location
Online


Callstack engineers discuss React Native Evals, a benchmark for measuring AI coding models on real React Native tasks.


Organizer
Callstack
Speakers
Kewin Wereszczyński
Software Engineer @ Callstack
Artur Morys-Magiera
Software Engineer @ Callstack
Lech Kalinowski
Senior AI Systems Engineer @ Callstack
Piotr Miłkowski
Senior AI System Engineer @ Callstack

Debates about which AI coding model writes the best React Native code usually rely on anecdotes. A single good or bad experience often shapes strong opinions, but those claims are rarely reproducible. React Native Evals was created to change that by introducing a structured, evidence-based way to measure how well AI models handle real React Native development tasks.

In this live stream, Callstack engineers Kewin Wereszczyński, Artur Morys‑Magiera, Lech Kalinowski, and Piotr Miłkowski will walk through the ideas behind the benchmark and the work that went into building it. The discussion will cover how the evals dataset works, the generation and judging pipeline built with TypeScript and Bun, and why reproducibility matters when evaluating AI coding models.
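To make the idea of a generation-and-judging pipeline concrete, here is a minimal sketch in TypeScript, the language the benchmark is built in. All names here (EvalTask, generate, runEvals, the sample task ids) are illustrative assumptions, not the actual React Native Evals API: a real pipeline would call an LLM in generate and use far richer judging than a string check.

```typescript
// Hypothetical generate-then-judge eval loop. Each task carries a prompt and a
// deterministic check, so re-running the suite yields reproducible scores.
interface EvalTask {
  id: string;
  prompt: string;                       // the React Native task given to the model
  check: (code: string) => boolean;     // deterministic judge for reproducibility
}

// Stand-in for a model call; a real pipeline would hit an LLM API here.
async function generate(prompt: string): Promise<string> {
  return `// model output for: ${prompt}`;
}

// Run every task, judge each generation, and report the overall pass rate.
async function runEvals(tasks: EvalTask[]): Promise<number> {
  let passed = 0;
  for (const task of tasks) {
    const code = await generate(task.prompt);
    if (task.check(code)) passed++;
  }
  return passed / tasks.length;         // pass rate in [0, 1]
}

// Two toy tasks standing in for benchmark categories like animations and navigation.
const tasks: EvalTask[] = [
  { id: "animations-01", prompt: "fade in a view", check: (c) => c.includes("fade") },
  { id: "navigation-01", prompt: "push a screen", check: (c) => c.includes("push") },
];

const passRate = await runEvals(tasks);
console.log(passRate);
```

Keeping the judge a pure function of the generated code is one way to get the reproducibility discussed in the stream: the same model output always scores the same way.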

The team will also explore what the early results tell us about current models and where the benchmark is heading next. Expect insights into categories like animations, async state management, and navigation, along with a broader conversation about AI tooling in the React Native ecosystem and the future direction of developer workflows.

Join us on March 12 at 17:00 CET for a technical deep dive into React Native Evals and a wider discussion about AI in development, including topics from the This Week in React newsletter.

Register now
Integrating AI into your React Native workflow?

We help teams leverage AI to accelerate development and deliver smarter user experiences.

Let's chat
Save my spot

Insights

Learn more about AI

Here's everything we published recently on this topic.

AI

We can help you move it forward!

At Callstack, we work with companies big and small, pushing React Native forward every day.

On-device AI

Run AI models directly on iOS and Android for privacy-first experiences with reliable performance across real devices.

AI Knowledge Integration

Connect AI to your product’s knowledge so answers stay accurate, up to date, and backed by the right sources with proper access control.

Generative AI App Development

Build and ship production-ready AI features across iOS, Android, and Web with reliable UX, safety controls, and observability.

AI Vibe Coding Cleanup

Turn AI-generated code from tools like Cursor, Claude Code, Codex, or Replit into production-ready software by tightening structure, validating safety, and making it stable under real-world usage.

React Native Performance Optimization

Improve the speed and efficiency of React Native apps through targeted performance enhancements.

C++ Library Integration for React Native

Wrap existing C-compatible libraries for React Native with type-safe JavaScript APIs.

Shared Native Core for Cross-Platform Apps

Implement business logic once in C++ or Rust and run it across mobile, web, desktop, and TV.

Custom High-Performance Renderers

Build custom-rendered screens with WebGPU, Skia, or Filament for 60fps, 3D, and pixel-perfect UX.