Anthropic Model Context Protocol (MCP).

Understanding What Lies Ahead.

The MCP Explained

Anthropic dropped MCP (Model Context Protocol) yesterday. I think it's pretty interesting, so I'll try to highlight some of what's happening with it. MCP, in a nutshell, is a standardised way for AI (Artificial Intelligence) to connect with different resources: databases, file systems, and anything else you could think of. A number of great examples of what this technology can do have already been shown online, but I think some of its best features are yet to come. In their own words:

The Model Context Protocol (MCP) is an open protocol that enables seamless integration between LLM applications and external data sources and tools. Whether you’re building an AI-powered IDE, enhancing a chat interface, or creating custom AI workflows, MCP provides a standardized way to connect LLMs with the context they need.

Testing MCP with Claude Desktop

I decided to test it by having Claude build a research task tracker application with React (a JavaScript library for building user interfaces) and Tailwind (a utility-first CSS framework). Having a computing background, I knew exactly what I wanted, and my prompt suggested incorporating glassmorphism (a design trend that uses background blur to create a frosted-glass effect) and modern design elements. On the first try, the AI automatically created the different files associated with the project and populated them with the necessary code. If you’d like to try this out, you can follow the steps here to get set up.
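For reference, the glassmorphism effect mostly comes down to Tailwind's backdrop-blur and opacity utilities. A card in such a UI might look something like this (my own sketch to illustrate the technique, not Claude's actual output; the task names are made up):

```html
<!-- Hypothetical task card: a semi-transparent background plus a backdrop
     blur gives the frosted-glass look over whatever sits behind it -->
<div class="rounded-xl border border-white/20 bg-white/10 backdrop-blur-md p-4 shadow-lg">
  <h3 class="text-lg font-semibold text-white">Literature review</h3>
  <p class="text-sm text-white/70">Status: in progress</p>
</div>
```

The `bg-white/10` and `border-white/20` classes use Tailwind's opacity modifiers, and `backdrop-blur-md` applies the CSS `backdrop-filter` that produces the frosted effect.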

Claude with MCP in action

After a while, I came back to find instructions on how to run the application. However, when I navigated to the folder, installed the dependencies, and executed the run command, I encountered an error. This was easily fixable: I simply pasted the terminal output back into the AI, which fixed the issue inside the file and instructed me to run it again. This time, the program worked, persisting tasks in the browser’s local storage.

Research Task Tracker Application

When I reviewed the code, I was impressed: I had essentially created a research tracker application in about five minutes. I wanted to push it further, though, and see how much it could accomplish, so I tasked it with making the application run on SQLite (a C-language library that implements a small, fast, self-contained SQL database engine). With that command, it adjusted the code accordingly. Again, there were errors, but I pasted them back into the AI input field, and it made corrections directly to the generated code. On the rerun there was an issue connecting to the server, but once that was resolved, the page worked. Now the tasks were being stored inside a database, which could be moved and used elsewhere. This was impressive as well, but I wanted to take it a step further.
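The storage change itself is small: the tasks that previously lived in local storage just need a table. A minimal SQLite schema along these lines would suffice (my reconstruction for illustration; the actual table and column names Claude chose may differ):

```sql
-- Hypothetical tasks table; the column names are my guess,
-- not Claude's actual output.
CREATE TABLE IF NOT EXISTS tasks (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    title      TEXT NOT NULL,
    status     TEXT NOT NULL DEFAULT 'pending',
    created_at TEXT DEFAULT (datetime('now'))
);
```

Because SQLite stores everything in a single file, moving the database elsewhere really is just copying that file.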

Dockerizing the application

I instructed it to turn the application into a Docker (an open platform for developing, shipping, and running applications) application that anyone could run without coding knowledge, simply by using Docker Compose (a tool for defining and running multi-container Docker applications). I didn’t provide specific instructions on how to do this, but the AI figured it out independently. This particular task produced the most errors. However, after I pasted outputs from both the front end and back end several times, it eventually figured it out, and the application worked. The end result was an application running inside Docker that could easily be replicated and used anywhere. The entire process took about 30 to 40 minutes, and I didn’t edit any of the files directly.
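A Compose file for such a setup would look something like this (my own sketch; the service names, ports, and folder layout are assumptions, not what Claude actually generated):

```yaml
# Hypothetical docker-compose.yml: one container for the React front end,
# one for the API server; the SQLite file lives on a named volume so the
# tasks survive container restarts.
services:
  frontend:
    build: ./frontend
    ports:
      - "3000:3000"
    depends_on:
      - backend
  backend:
    build: ./backend
    ports:
      - "4000:4000"
    volumes:
      - db-data:/app/data   # SQLite database file stored here
volumes:
  db-data:
```

With a file like this in place, `docker compose up` is the only command a non-coder needs.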

OrbStack (a Docker alternative)

Research Task Tracker app (Running in Docker)

This showcases the potential of the technology, especially now that it can connect to databases. I also wanted to connect it to GitHub, have it push the code, and perform a few other tasks, but despite my best efforts, that failed for me. If anyone knows what I might have done wrong, please let me know.

Looking Forward

More interestingly, this raises some important discussions in the field of computing. We rely heavily on tools like GitHub to assess a candidate’s job viability, but how useful are metrics like commits if this technology becomes widely adopted? Someone could set up a script to commit to GitHub daily using this technology, making them appear more productive than they actually are.

Additionally, I believe this technology has far-reaching implications. The standardisation of connections to various resources indicates that Anthropic has bigger ideas. One of the first things that came to mind is that these models are excellent at coding, and the different server components and plugins being used are technically code. There is nothing stopping these AI models from generating their own services. If you follow this line of thinking, it suggests that AI models could bootstrap themselves, creating the functionalities they need on the fly without requiring human intervention.

With added security features, like keychains, where sensitive information is not exposed but the AI knows where to access the necessary resources, these models could directly interact with services requiring authentication. This opens up fascinating possibilities for automation and system integration.

Personally, I find this scenario exciting and look forward to seeing how more examples and use cases emerge. I plan to use this technology more in my day-to-day activities to explore its potential. For now, I think its best use case is replacing tools like Cursor and other emerging IDEs (Integrated Development Environments). If you already use VS Code, Claude could essentially act as a direct agent without needing third-party IDE services. This would allow similar functionality without installing multiple IDEs.

More importantly, this technology could simplify interactions between cloud environments and local files. It could enable AI to retrieve knowledge directly from files on your system and use that in prompts, providing a more efficient retrieval system than current vector database interactions.
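As a toy illustration of that idea, gathering local files into prompt context is essentially just a directory walk. Here is a sketch of my own (not part of MCP itself; the function name and the choice to only collect Markdown files are hypothetical):

```typescript
import { readdirSync, readFileSync, statSync } from "node:fs";
import { extname, join } from "node:path";

// Hypothetical helper: collect the text of all .md files directly under a
// folder, labelled by filename, so they can be handed to a model as context.
function gatherContext(dir: string, maxBytes = 10_000): string {
  const chunks: string[] = [];
  for (const name of readdirSync(dir)) {
    const path = join(dir, name);
    if (statSync(path).isFile() && extname(name) === ".md") {
      // Truncate each file so one large document can't eat the whole budget.
      const text = readFileSync(path, "utf8").slice(0, maxBytes);
      chunks.push(`--- ${name} ---\n${text}`);
    }
  }
  return chunks.join("\n\n");
}
```

An MCP filesystem server does something similar behind a standard interface, which is exactly why the model no longer needs a bespoke retrieval pipeline for this kind of lookup.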

In summary, MCP represents a significant step forward, and I am eager to see how it develops, especially in combination with emerging technologies like computer use.

Custom File Listing Tool

To give Claude the ability to list files, I added the following to the configuration file. The filesystem entry is the server given in Anthropic’s example; listfiles is my custom file-listing command.

{
    "mcpServers": {
        "filesystem": {
            "command": "npx",
            "args": ["-y", "@modelcontextprotocol/server-filesystem", "/Users/bakunga/Documents/Projects/"]
        },
        "listfiles": {
            "command": "ls",
            "args": ["/Users/user/Documents"]
        }
    }
}