How I Used MCP to Control My Friend's Home

MCP Madness

Recently, everything I see seems to be about MCP (Model Context Protocol), yet at the end of the day, I'm still unsure what it actually is. So, let's figure it out together with a simple use case and a practical approach. Hopefully, by the end of this article, MCP will be an additional tool in your AI and LLM problem-solving toolbox! In this article, you will learn how to build a useful MCP server with a real-world example.

Real-world MCP use case: Claude + smart home

Let's start with the use case we will be working on. My dear friend and co-worker Jan has a bunch of devices in his home that measure things like temperature, person presence, light, and other metrics. He has a server with an API that can return this information in response to a request. But instead of opening a boring page with boring graphs, he would like to ask a nice and fancy LLM about the situation at his home.

Also, Jan doesn't like building things from scratch, so he would like to use a ready-to-go, downloadable desktop LLM client. But here is the problem: how do we inform the LLM about all of those metrics?

There are plenty of techniques for passing extra context and data to a conversation with an AI:

  • RAG (retrieval-augmented generation)
  • System prompt
  • Tool calls

But none of them really satisfies our needs! They are either not dynamic enough or require code that we cannot run ourselves. This is where MCP comes in.

What Is Model Context Protocol (MCP)

“Model Context Protocol” sounds fancy, but let’s strip it down from the jargon.

  • Protocol = a standardized way
  • Context = to pass additional information
  • Model = to a conversation with an LLM

It doesn’t sound that complicated now, right?

Why MCP is cool

Let’s imagine a world without standardized hardware specifications, where every bolt can have any size. You would need custom-made tools for every mechanical project! This is what the current situation looks like with LLMs.

Even if you write a fantastic tool for your AI, there is no mainstream way to share it with others. Of course, you can share the code, and others will re-implement it in their own systems, but it’s like passing blueprints, not tools!

With MCP, you can create a server. That server can have access to resources, like a thermometer. Now you can take that server and hand it to your client. And that's it!

Thanks to standardization and turning that into a “protocol,” your client will know exactly how to ask and what it will get in return! You can now pass the same link to other clients, who will also know how to use it! Quite handy.

By the way, I sneaked the basic MCP architecture into that description: client — server — resource.

But enough of that theory, let’s make some servers!

Client - server - resource

Where to start with MCP

MCP was conceptualized and announced by Anthropic. They also prepared an amazing site with documentation!

As a client, we will use Claude Desktop because it already supports MCP.

As a server, we will use Node. On request, it will query all the devices for their parameters.

The targeted outcome: we will ask Claude about Jan’s home situation, and it will respond.

Preparing boilerplate for server

Before I visit Jan and we finish the project, I want to start with a nice boilerplate to avoid the overhead of reading through the docs on the spot. After following the steps in the documentation, we have a simple Node project.
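For reference, a minimal package.json for such a server could look roughly like this (the exact version numbers and build script are assumptions; check the official quickstart for the up-to-date setup):

```json
{
  "name": "jans-crib",
  "version": "1.0.0",
  "type": "module",
  "scripts": {
    "build": "tsc"
  },
  "dependencies": {
    "@modelcontextprotocol/sdk": "^1.0.0",
    "zod": "^3.23.0"
  },
  "devDependencies": {
    "@types/node": "^20.0.0",
    "typescript": "^5.0.0"
  }
}
```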

Inside the index.ts file, we initialize the server:

import { McpServer } from '@modelcontextprotocol/sdk/server/mcp.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import { z } from 'zod';

// Create server instance
const server = new McpServer({
  name: 'jans-crib',
  version: '1.0.0',
  capabilities: {
    resources: {},
    tools: {},
  },
});

Then, we register a simple tool. For now, we will just inform that every room in Jan’s home is COLD!

// Register the home-status tool
server.tool(
  'get-home-status',
  "Get the status of a room in Jan's home",
  {
    room: z.enum(['kitchen', 'living room', 'bedroom', 'bathroom', 'office']),
  },
  async ({ room }) => {
    return {
      content: [
        {
          type: 'text',
          text: `${room} is COLD`,
        },
      ],
    };
  }
);

As you can see, with MCP, we can give a name and description and inform the client about the parameters it can send to us. It’s just like tools from Vercel’s ai-sdk, but with more portability!
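Under the hood, MCP uses JSON-RPC 2.0 for this exchange. When the user asks about a room, the client sends the server a tools/call request along these lines (the id and arguments here are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get-home-status",
    "arguments": { "room": "kitchen" }
  }
}
```

The server's reply carries the same content array our handler returns, which is how any MCP-aware client knows what to expect without custom glue code.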

As a last element, we provide the function that runs the server:

async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  console.error('jans-crib MCP server running on stdio');
}

main().catch((error) => {
  console.error('Fatal error in main():', error);
  process.exit(1);
});

Now we run npm run build, and let's move on to the other part of the preparation. One thing that surprised me at this point was that this server is responsible for "serving" only the context. It does not expose any HTTP endpoint you could use. So, no localhost and no passing a URL to your client. Instead, your client takes the path to your server build!

Preparing connection on the client side

After installing Claude Desktop, I went into settings → developer → edit config and added the location of my server:

{
    "mcpServers": {
        "jans-crib": {
            "command": "node",
            "args": [
                "/absolute/path/to/jans-crib/build/index.js"
            ]
        }
    }
}

I was surprised that Claude would host it! My biggest worry was all the authorization and authentication, but it turns out I can just program that right into my server's tool handlers.

After that, I restarted the client, and here it is:

MCP server visible in Claude desktop app

After a quick exchange, we can confirm that every room is, in fact, cold.

Chat with Claude about Jan’s rooms

It is possible to use HTTP with your MCP server, but at the time of writing this article, setting this up with the Claude desktop client is very non-intuitive.

Adding actual smart home data to server

Now that the proof of concept is ready, I went to meet Jan to get access to his API, which is hosted on a private local network. Then I found the documentation for his Home Assistant API, along with info about access tokens and other headers required for making authorized requests.

In a very short time, we expanded the MCP server and allowed Claude to read room and outdoor temperatures, check how many people are detected in a room, and check and change the state of the lights (turning them on and off). You can check the repository for the full code!
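To give a flavor of what those tool handlers do, here is a sketch of reading one sensor through Home Assistant's REST API (GET /api/states/<entity_id> with a bearer token). The HA_URL, HA_TOKEN, and entity ids are assumptions standing in for Jan's actual setup:

```typescript
// Assumed connection details -- replace with your own Home Assistant instance.
const HA_URL = process.env.HA_URL ?? 'http://homeassistant.local:8123';
const HA_TOKEN = process.env.HA_TOKEN ?? '';

interface HaState {
  state: string;
  attributes: { unit_of_measurement?: string; friendly_name?: string };
}

// Turn a raw Home Assistant state object into a human-readable line
// that we can return as MCP text content.
function describeState(entityId: string, s: HaState): string {
  const name = s.attributes.friendly_name ?? entityId;
  const unit = s.attributes.unit_of_measurement;
  return `${name}: ${s.state}${unit ? ' ' + unit : ''}`;
}

// Fetch a single entity's state from Home Assistant's REST API.
async function getSensor(entityId: string): Promise<string> {
  const res = await fetch(`${HA_URL}/api/states/${entityId}`, {
    headers: { Authorization: `Bearer ${HA_TOKEN}` },
  });
  if (!res.ok) throw new Error(`Home Assistant returned ${res.status}`);
  return describeState(entityId, (await res.json()) as HaState);
}
```

A tool like get-home-status then just calls getSensor with the right entity id and wraps the string into a text content block.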

Full project repo

The only things slowing us down were weird quirks of Jan’s API, naming conventions, etc. Surprisingly, nothing related to MCP itself!

After each change, all I had to do was rebuild the server and restart the Claude desktop client, which allowed for extremely fast iterations and tests. The code was messy, but we were just playing around, not making a full-blown product.

However, if we improved the code quality and allowed passing in custom entity names, we could easily prepare that server for other people! All they would have to do is fetch the repo, set the env variables, build it, and pass the location to their client.
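The env variables in question would boil down to something like the fragment below (the variable names are hypothetical, matching whatever the server reads at startup):

```json
{
  "HA_URL": "http://homeassistant.local:8123",
  "HA_TOKEN": "your-long-lived-access-token"
}
```

Claude Desktop's config even supports an "env" key per server entry, so these could be passed right next to the command and args.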

Chat with Claude including changing lights and getting lights state thanks to MCP

Is MCP the USB-C for LLM clients?

One comparison I've heard is that MCP is like USB-C for LLM clients: it establishes a standard shape for communication, so every client can easily use it to extend its capabilities. There are already sites that gather servers for your convenience, like https://mcp.so/ or https://mcpmarket.com/

I bet other ideas for standards will come along, but for now, MCP is the most widely known and talked about.

USB-C for AI

Should you go “all in” on MCP?

Now, to be completely honest, this is not a ground-breaking idea or technology. But turning what everyone already uses into a protocol or standard speeds up innovation and lowers the entry bar for many people.

From what I experienced and tested, there is still huge room for improvement. Locally running MCP servers are cool, but I strongly believe that their full power will be unleashed once the HTTP standards are set in place.

Unfortunately (or fortunately, in some sense), I'm too young to remember the days of RSS feeds, but I can see plenty of similarities between these technologies. Very soon, MCP will mature enough for full-blown integrations with context providers.

Paid accounts, MCP extension marketplaces, crazy integrations (like talking with your car?! Why not? Just let your clients connect their MCP client to your car-metrics gathering servers). I know that all of this is coming.

Why prepare a website when you could just expose your MCP server to an MCP search engine and allow clients to talk about your products directly?

Now, pardon my excitement. To answer my own question, yes, I do think that MCP is the future of AI / LLM communication.

What a time to be alive.

Imaginary chat with Hypothetical MCP connected with a car