Memory Ownership Models: When JavaScript Meets Native Code

Authors
Kamil Paradowski
Software Engineer @ Callstack

Picture this: You're building a React Native app that processes video frames. Each frame typically arrives as an ArrayBuffer in JavaScript, which you hand off to a native module for GPU work. Everything should be straightforward, yet suddenly the app crashes or the data coming back is corrupted. What actually went wrong?

The answer lies in a fundamental collision between two worlds: JavaScript's garbage-collected, single-threaded paradise and native code's explicit, multi-threaded reality. When ArrayBuffer crosses the bridge between these worlds, memory ownership becomes the battlefield.

In this article, we take a closer look at how each environment manages the lifetime and ownership of that memory, and why that difference matters once both sides want to access the same bytes.

The two worlds

JavaScript: The world of automatic memory

Most JavaScript developers live in a world where memory just works:

const buffer = new ArrayBuffer(1024);
// Use it
// Forget about it
// GC cleans it up eventually

You never think about when memory gets freed. You never worry about dangling pointers. And you definitely never coordinate access between threads. There's only one thread anyway.

Under the hood, the JavaScript engine (V8, JavaScriptCore, Hermes) allocates this buffer in its managed heap.

For an ArrayBuffer, the engine creates two parts: an object header and the backing store.

  • The object header is typically ~16-24 bytes and contains metadata like the buffer’s type, length, and a pointer to the actual data.
  • The backing store is the actual memory where your 1024 bytes live, which often gets allocated separately for larger buffers.

The engine then tracks this object in its garbage collection system, adding it to the object graph so it knows what's still in use.

When your ArrayBuffer becomes unreachable (no references from any GC roots), the collector eventually kicks in. It marks reachable objects, sweeps away the unmarked ones, and optionally compacts memory to reduce fragmentation (moving objects closer together to eliminate gaps left by freed objects). You don't get to choose when this happens.

But this simplicity comes with trade-offs. GC runs when it wants, not when you want. The JavaScript thread can only do one thing at a time. When garbage collection kicks in, your code pauses. And you can't say "free this now" or "keep this alive". You're just along for the ride.

Native code: The world of explicit control

Native code operates under different rules:

void processData(uint8_t* data, size_t length) {
	// Who allocated this?
	// When will it be freed?
	// Can I use it after this function returns?
	// What if another thread modifies it?
}

That pointer data is just a 64-bit integer (on 64-bit systems) containing a virtual memory address. It could point to the stack (local variables), the heap (dynamically allocated memory) or a memory-mapped region (memory mapped directly from a file or shared between processes). It might be read-only or writable. The code or thread that allocated it could free it at any moment, invalidating the pointer. And while you're reading this, another thread might be writing to data[100] right now.

Every raw pointer, a direct memory address without any safety wrapper, is a contract. To use memory safely, you need to know who's responsible for freeing it, how long it's valid, whether it can change while you're using it, if multiple threads can access it, whether it's properly aligned, and what the valid address range is. The compiler won't check any of this for you.

Native code gives you control but demands discipline. The compiler won't save you from use-after-free bugs (accessing memory after it's been freed), which can crash your program or corrupt data. The runtime won't detect data races. And the CPU? It'll happily load whatever garbage is sitting at that address, or crash with a segmentation fault if the OS no longer has that memory address mapped to real memory. It doesn't care.

JSI: Where worlds collide

Meet JavaScript Interface (JSI). When you pass an ArrayBuffer from JavaScript into native code, you move it from one execution model into another. Sometimes everything works smoothly. Sometimes that's exactly where trouble starts.

// JavaScript side
const pixels = new ArrayBuffer(1920 * 1080 * 4); // Full HD RGBA frame
nativeModule.applyFilter(pixels);

On the native side, JSI exposes the raw pointer:

// Native side
void applyFilter(jsi::Runtime& runtime, jsi::ArrayBuffer buffer) {
	uint8_t* pixels = buffer.data(runtime); // Raw pointer!
	size_t size = buffer.size(runtime);
	// Now we have a pointer to JavaScript's heap
	// But who owns it? When does it become invalid?
}

Here's the problem: pixels points into the JavaScript engine’s managed heap, but JavaScript still owns that memory. The GC can reclaim it, JS can mutate it at any moment, and queued native work may end up using a pointer that is no longer valid.

In other words, native code is now relying on memory whose lifetime and state it does not control.

The ownership models: Three approaches

So how do real-world systems handle this? They've converged on three ownership models.

Borrowing

The consumer (native code, in our case) treats received memory as borrowed.

const buffer = new ArrayBuffer(1024);

// Pass to native - it "borrows" the pointer
nativeModule.processAsync(buffer);

// JavaScript can still access it
const view = new Uint8Array(buffer);
view[0] = 255; // 💥 Data race if native is reading!

From native code's perspective:

void processAsync(uint8_t* data, size_t length) {
	// Pointer is valid... for now
	// But JavaScript might:
	// - Modify the data concurrently (data race)
	// - Drop all references (GC might free it)
	// - Pass it to another native function

	backgroundQueue.async([data, length]() {
		// Is this pointer still valid? Nobody knows.
		processImage(data, length);
	});
}

The guarantee here is simple: the pointer is valid during the synchronous call. That's it.

The problem? Once you leave that synchronous context, all bets are off. If native code queues work or stores the pointer, you're in undefined behaviour territory. Things can go wrong in ways you can't predict.

Why use it? Zero-copy performance. No allocation, no copying, just raw pointer access. For synchronous operations, it's the fastest option.

Transfer

Another solution is transferring ownership. One side gives it up, the other takes it.

const buffer = new ArrayBuffer(1024);
// Transfer ownership to native
nativeModule.processWithTransfer(buffer, [buffer]);

// Buffer is now "detached" in JavaScript
console.log(buffer.byteLength); // 0

// Creating a view over it now throws:
// TypeError: Cannot perform operation on detached ArrayBuffer

This is how Web Workers hand off buffers via transferable objects:

const worker = new Worker("processor.js");
const data = new ArrayBuffer(1024);

// Transfer ownership to worker
worker.postMessage(data, [data]);

// data is now unusable in main thread
console.log(data.byteLength); // 0

This approach guarantees that only one side can access the buffer. It's thread-safe by construction. However, it requires runtime support. The JavaScript engine needs to mark the buffer as "detached," prevent all access attempts, coordinate the ownership transfer, and eventually free the memory. That's a lot to ask.

And here's the kicker: current JSI and Hermes don't expose this API. You can't truly detach an ArrayBuffer from JavaScript, which means you can't implement true transfer semantics without runtime changes.

Why use it? Safety. No data races, no use-after-free, no manual coordination. It aligns with JavaScript's typical safety guarantees, if only it existed.

Copying

Then there's copying. It's slow, but stable.

const buffer = new ArrayBuffer(1024);

// Bridge internally copies the buffer
nativeModule.processWithCopy(buffer);

// JavaScript keeps its copy
const view = new Uint8Array(buffer);
view[0] = 255; // Safe, native has separate copy

Native code gets its own allocation:

void processWithCopy(uint8_t* data, size_t length) {
	// `data` is only valid for this call, so take our own copy
	uint8_t* ourCopy = new uint8_t[length];
	memcpy(ourCopy, data, length);

	// Now we can do whatever we want
	backgroundQueue.async([ourCopy, length]() {
		processImage(ourCopy, length);
		delete[] ourCopy;
	});
}

The guarantee? Complete isolation. Each side owns its own memory. No questions asked.

The problem? Performance. Copying a Full HD video frame (~8 MB of RGBA pixels) every frame is expensive. For high-throughput scenarios, that overhead is unacceptable.

Why use it? Safety and simplicity. No coordination needed, no lifetime issues, no data races. Sometimes slow and correct beats fast and broken.

Real-world solutions

Different frameworks have tackled this problem in different ways. Let's see what they did.

ExpoModules

Expo went with a borrowing approach and added an automatic lifetime extension mechanism. TypedArray objects give you direct pointer access, but that's meant for synchronous operations on the JS runtime thread. For async operations, ExpoModules copies data when crossing the JS/native boundary, so each side ends up with independent ownership. It's borrowing with safety nets.

Nitro Modules

Nitro took a different route: a two-tier ownership model. Native-created buffers are thread-safe and owning. JS-received buffers are non-owning, but they get runtime thread validation. For async processing, you have to copy non-owning buffers first. It's a defensive approach that prevents common threading mistakes without adding too much performance overhead.

TurboModules

Right now, TurboModules just copies everything. This whole exploration actually came from discussions about adding ArrayBuffer support to React Native TurboModules. If you want to dive deeper, check out the RFC here.

Conclusion

When JavaScript meets native code at the ArrayBuffer boundary, there are no perfect solutions. Only trade-offs.

Borrowing gives you performance but demands manual thread coordination and lifetime management. Transfer gives you safety but requires runtime features that don't exist yet. Copying gives you simplicity but sacrifices performance for high-throughput scenarios.

The tension remains: JavaScript abstracts memory, native code exposes it. When you connect them, you’re dealing with two different expectations about how data should live and move.

We’ve been running into these challenges directly while working on first-class ArrayBuffer support in Turbo Modules. Even seemingly simple decisions around passing raw buffers raise questions about how to balance safety, performance, and developer control. It’s a surprisingly deep design space, and the trade-offs show up quickly once you start prototyping.

If this is an area you care about, the RFC goes into the details. Have a look and don’t forget to leave your feedback.
