The accident isn't that we somehow got a protocol to do things we couldn't do before. As other comments point out, MCP (the specification) isn't anything new or interesting.
No, the accident is that the AI Agent wave made interoperability hype, and vendor lock-in old-fashioned.
I don't know how long it'll last, but I sure appreciate it.
The whole hype around AI replacing entire job functions does not have as much traction as the concept of using agents to handle all of the administrative stuff that connects a workflow together.
Any open source model that supports MCP can do it, so there’s no vendor lock in, no need to learn the setup for different workflow tools, and a lot of money saved on seats for expensive SaaS tools.
But the way I see it, AI agents created incentives for interoperability. Who needed an API when everyone's job security came from being a slow desktop user?
Well, your new personal assistant, who charges by the watt-hour, NEEDS it. Just like the CEO will personally drive out to get pizzas for the hackathon because that's practically free labor, everyone now wants everything connected.
For those of us who rode the API wave before integrating became hand-wavey, it sure feels like the world caught up.
I hope it will last, but I don’t know either.
I tried to find a rebuttal to this article from Slack, but couldn't. I'm on a flight with slow wifi though. If someone from Slack wants to chime in that'd be swell, too.
I've made the argument to CFOs multiple times over the years why we should continue to pay for Slack instead of just using Teams, but y'all are really making that harder and harder.
[0]: https://www.reuters.com/business/salesforce-blocks-ai-rivals...
The reality is that Slack isn't that sticky. The only reason I fended off the other business units who've demanded Microsoft Teams through the years is my software-engineering teams' QoL. Slack has polish and is convenient, but now that Slack is becoming inconvenient and not letting me do what I want, I can't justify fending off the detractors. I'll gladly invest the time to swap it out for a platform that respects our ownership and lets us use our data however we need to. We left some money on the table, but I'm glad we didn't bundle and upgrade to Slack Grid and lock ourselves into a three-year enterprise agreement...
There are no new incentives for interoperability. Companies that were already providing API access added MCP servers of varying quality.
The rest couldn't care less, unless they can smell an opportunity to monetize the hype.
For those that don't remember or don't know: everything network-related in Windows used to use its own proprietary setup.
Then one day, a bunch of vendors got together and decided to have a shared standard to the benefit of basically everyone.
It feels like 2 or 3 companies have paid people to flood the internet with content that looks educational but is really just a sales pitch riding the hype wave.
Honestly, I just saw a project manager on LinkedIn telling his followers how MCP, LLMs and Claude Code changed his life. The comments were full of people asking how they can learn Claude Code, like it's the next Python.
Feels less like genuine users and more like a coordinated push to build hype and sell subscriptions.
In my experience, Claude and Gemini can take over tool use and all we need to do is tell them the goal. This is huge: before, we always had to specify the steps to achieve anything on a computer. Writing a fixed program to deal with a dynamic process is hard, while an LLM can adapt on the fly.
(And we know that because there was a brief period when the basics of spreadsheets and databases were part of the curriculum in the West, and people had no problem with that.)
> It (the main benefit?) is the LLM itself, if it knows how to wield tools.
LLMs and their ability to use tools are not a benefit or feature that arose from MCP. There was tool usage/support via various protocols and conventions well before MCP.
MCP doesn't have any novel aspects that are making it successful. It's relatively simple and easy to understand (for humans), and luck was on Anthropic's side. So people were able to quickly write many kinds of MCP servers and it exploded in popularity.
Interoperability and interconnecting tools, APIs, and models across providers are the main benefits of MCP, driven by its wide-scale adoption.
Say I'm building an app and I want my users to be able to play Spotify songs. Yeah, I'll hit the Spotify API. But now, say I've launched my app, and I want my users to be able to play a song from sonofm when they hit play. Alright, now I gotta open up the code, add some if statements, hard-code the sonofm API, ship a new version, and show some update messages.
MCP is literally just a way to make this extensible, so that instead of hardcoding it in, it can be configured at runtime.
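To make that concrete, here's roughly what the runtime side looks like with the official mcp Python SDK (a sketch from memory; "music_provider_server.py" and the "play_song" tool are made-up names). The client just asks whatever server it's been pointed at what tools it has, instead of branching on Spotify vs. sonofm in code:

    # Sketch, assuming the official `mcp` Python SDK client API.
    # The server command and tool name below are hypothetical; swapping
    # music providers becomes a config change rather than an if-statement
    # and a new release.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    server = StdioServerParameters(command="python", args=["music_provider_server.py"])

    async def play(query: str):
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                tools = await session.list_tools()   # discover tools at runtime
                print("provider exposes:", [t.name for t in tools.tools])
                # call whichever "play" tool the configured provider advertises
                return await session.call_tool("play_song", arguments={"query": query})

    asyncio.run(play("some song"))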
You will need to do that anyway. Easier discovery of the API doesn't say much.
The user might want complicated functionality that combines several API calls, plus more code for filtering/sorting/searching that information locally. If you let the LLM write the code by itself, it might take 20 minutes and millions of wasted tokens of the LLM going back and forth in the code to implement the functionality. No user is going to find that acceptable.
Interestingly, ActiveX was quite the security nightmare for very similar reasons, actually, and we had to deal with the infamous "DLL Hell". So, history repeats itself.
(Even if only the former, it would of course be a huge step forward, as I could have the LLM generate schemata. Also, at least, everyone is standardizing on a base protocol now, and a way to pass command names, arguments, results, etc. That's already a huge step forward in contrast to arbitrary Rest+JSON or even HTTP APIs)
To speculate about this, perhaps the informality is the point. A full formal specification of something is somewhere between daunting and Sisyphean, and we're more likely to see supposedly formal documentation that nonetheless is incomplete or contains gaps to be filled with background knowledge or common sense.
A mandatory but informal specification in plain language might be just the trick, particularly since vibe-APIing encourages rapid iteration and experimentation.
Obviously, for HTTP APIs you often see something like an OpenAPI specification or GraphQL, both of which typically allow an API to describe itself. But this is not commonly a thing for non-HTTP APIs, which is something MCP supports.
MCP might be the first standard for self-describing APIs across all protocols (I might be misusing "protocols" here; I think the MCP spec calls them transports), making it slightly more universal. A sketch of what that self-description looks like is below.
I think the author is wrong to discount the importance of an LLM as an interface here, though. I do think the majority of MCP clients will be LLMs. An API might get you 90% of the way there, but if the LLM gets you to 99.9% by handling that last bit of plumbing, it's going to go mainstream.
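For a sense of what that self-description looks like on the wire, each entry in a "tools/list" result is shaped roughly like this (field names from memory of the spec; treat as illustrative, and the forecast tool is a made-up example):

    {
      "name": "get_forecast",
      "description": "Get the weather forecast for a city",
      "inputSchema": {
        "type": "object",
        "properties": {
          "city": { "type": "string" },
          "days": { "type": "integer", "minimum": 1, "maximum": 7 }
        },
        "required": ["city"]
      }
    }

The description and JSON Schema are written for the model to read, and they travel over whatever transport the server uses (stdio, HTTP, ...), which is the "universal" part.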
You mean, like OpenAPI, gRPC, SOAP, and CORBA?
MCP seems like more of an in-between step until the AI models get better. I imagine that in two years, instead of using MCP, we will point the AI at the tool's documentation or OpenAPI spec, and it will ingest the whole context without the middle layer.
I don't have a high opinion of MCP, and the hype it's generating is ridiculous, but the problem it supposedly solves is real. If it can work as an excuse to get providers to expose an API for their functionality like the article hopes, that's exciting for developers.
I don't think this is true.
My Claude Code can:
- open a browser, debug a UI, or navigate to any website
- write a script to interact with any accessible API
All without MCP.
Within a year I expect there to be legitimate "computer use" agents. I expect agent SDKs to overtake LLM APIs as the de facto abstraction for models, and MCP will have limited use isolated to certain platforms, with the caveat that an MCP-equipped agent performs worse than a native computer-use agent.
I mean, that's just saying the same thing: at the end of the day, there are underlying deterministic systems that it uses.
I had similar skepticism initially, but I would recommend you dip a toe in the water before passing judgement.
The conversational/voice AI tech now dropping + the current LLMs + MCP/tools/functions to mix in vendor APIs and private data/services etc. really feels like a new frontier.
It's not 100%, but it's close enough for a lot of use cases now, and it's going to change a lot about how we build apps going forward.
https://layercode.com/ (https://x.com/uselayercode has demos)
Have you used the live mode on the Gemini App (or stream on AI Studio)?
What blocked me initially was watching NDA'd demos a year or two back from a couple of big software vendors on how Agents were going to transform the enterprise ... what they were showing was a complete non-starter to anyone who had worked in a corporate environment, because of security, compliance, HR, silos etc., so I dismissed it.
This MCP stuff solves that: it gives you (the enterprise) control in your own walled garden, whilst getting the gains from LLMs, voice etc. ... the sum of the parts is massive.
It more likely wraps existing apps than integrates directly with them, with the legacy systems becoming data or function providers (I know you've heard that before ... but so far this feels different when you work with it).
MCP is already a useless layer between AIs and APIs; using it when you don't even have GenAI is simply idiotic.
The only redeeming quality of MCP is actually that it has pushed software vendors to expose APIs to users, but just use those directly...
I made this MCP server so that you could chat with real-time data coming from the API - https://github.com/AshwinSundar/congress_gov_mcp. I’ve actually started using it more to find out, well, what the US Congress is actually up to!
Typically, in these kinds of developments there are two key things that need to appear at the same time: 1. Ubiquitous hardware, so e.g. everyone buys a car, or a TV, or a toaster. 2. An "interface" (whether that's a protocol or a UI or an API or a design standard) which is hyper low cognitive load for the user e.g. the flush button on a toilet is probably the best example I've ever seen, but the same can be said for the accelerator + brake + steering wheel combo, or indeed in digital/online it's CSV for me, and you can also say the same about HTTP like this article does.
Obviously these two factors feed into each other in a kind of feedback loop. That is basically what the role of "hype" is, to catalyse that loop.
Maybe I'm not fully understanding the approach, but it seems like if you started relying on third-party MCP servers without the AI layer in the middle, you'd quickly run into backcompat issues. Since MCP servers assume they're being called by an AI, they have the right to make breaking changes to the tools, input schemas, and output formats without notice.
Maybe the author is okay with that and just wants new APIs (for his toaster).
For example, the Kagi MCP server interacts with the Kagi API. Wouldn't you have a better experience just using that API directly, then?
On another note, as the number of python interpreters running on your system increases with the number of MCP servers, does anyone think there will be "hosted" offerings that just provide a sort of "bridge" running all your MCP servers?
The additional API is /list-tools
And all the clients consume /list-tools first, and then the rest of the APIs depending on which tool they want to call.
Locally you just need a consumer/client, right?
On the other hand, in the absence of an existing API, you can implement your MCP server to just [do the thing] itself, and maybe that's where the author sees things trending.
Let's assume I want to write an MCP HTTP server without a library, just an HTTP handler: how do I do it? What's its schema? If I want to call an MCP server from curl, what endpoint do I call? Can someone help me find where this is documented?
MCP clients can query these endpoints (the new vibe term is "invoke tools").
That is almost the entirety of it.
The difference from traditional API endpoints is that they are geared towards LLMs: an LLM can ask a server to list its "tools" and then call those tools at will during execution.
It's a vibe-coded spec for an extremely hype-based space.
After like an hour of searching I finally found the Lifecycle page: https://modelcontextprotocol.io/specification/2025-06-18/bas... and I think it contains the answers I'm looking for. But I feel this should be roughly explained in the introduction.
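For anyone else who lands here: with the Streamable HTTP transport it's a single endpoint you POST JSON-RPC messages to. A rough no-MCP-library sketch, just raw HTTP (the "/mcp" path is hypothetical, and the field names follow my reading of that lifecycle page, so double-check against the spec):

    # Rough sketch of the MCP handshake over Streamable HTTP, no MCP library.
    # Assumptions: endpoint path, headers, and params are from my reading of
    # the 2025-06-18 spec page linked above; servers may differ.
    import requests

    URL = "http://localhost:8000/mcp"   # hypothetical server endpoint
    HEADERS = {
        "Content-Type": "application/json",
        # servers may answer with plain JSON or an SSE stream
        "Accept": "application/json, text/event-stream",
    }

    # 1. initialize: negotiate protocol version and capabilities
    init = requests.post(URL, headers=HEADERS, json={
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            "protocolVersion": "2025-06-18",
            "capabilities": {},
            "clientInfo": {"name": "no-library-client", "version": "0.0.1"},
        },
    })
    print(init.status_code, init.text)

    # the server may hand back a session id to echo on later requests
    session_id = init.headers.get("Mcp-Session-Id")
    if session_id:
        HEADERS["Mcp-Session-Id"] = session_id

    # 2. tell the server initialization is done (a notification, so no "id")
    requests.post(URL, headers=HEADERS, json={
        "jsonrpc": "2.0",
        "method": "notifications/initialized",
    })

    # 3. ask what tools exist; "tools/call" then invokes one of them
    tools = requests.post(URL, headers=HEADERS, json={
        "jsonrpc": "2.0",
        "id": 2,
        "method": "tools/list",
    })
    print(tools.text)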
Agreed that most of the pages feel LLM-generated and borderline unreadable.
Yes, technically you could, but you are "supposed" to just use a library that builds the actual endpoints based on the schema for the version of MCP you are using, and only worry about building your tools to expose to an LLM (it's LLM function calling, but with lots of abstractions to make it more developer-friendly). Rough sketch below.
(Sorry, I know this isn't really a helpful answer)
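To be slightly more helpful, the "just build your tools" route looks roughly like this with the official Python SDK's FastMCP helper (decorator and run details from memory; the "add" tool is a made-up example):

    # Minimal sketch using the official `mcp` Python SDK's FastMCP helper.
    # The library generates the protocol plumbing (initialize, tools/list,
    # tools/call); you only declare the tools.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""
        return a + b

    if __name__ == "__main__":
        mcp.run()   # speaks the protocol over stdio by default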
So yes, adding a tool is trivial; adding an MCP server to your existing application might require some non-trivial work of probably unnecessary complexity.
I’m not familiar with the details but I would imagine that it’s more like:
”An MCP server which re-exposes an existing public/semi-public API should be easy to implement, with as few changes as possible to the original endpoint”
At least that’s the only way I can imagine getting traction.
Interoperability means user portability. And no tech bro firm wants user portability, they want lock in and monopoly.
We had a non-technical team member write an agent to clean up a file share. There are hundreds of programming languages, libraries, and APIs that enabled that before MCP, but now people don't even have to think about it. Is it performant? No. Is it the "best" implementation? Absolutely not. Did it create enormous value in a novel way that was not possible with the resources, time, and technology we had before? 100%. And that's the point.
This has to be BS (or you just think it's true) unless it was like 1,000 files. In my entire career I've seen countless crazy file shares that are barely functional chaos. In nearly every single "cleanup" attempt I've tried to get literally ANYONE from the relevant department to help, with little success. That is just for ME to do the work FOR THEM. I just need context from them. On countless occasions I've had to go to senior management to force someone to simply sit with me for an hour to go over the schema they want to try to implement. SO I CAN DO IT FOR THEM. And they don't want to do it and literally seemed incapable of doing so when forced to. COUNTLESS times. This is how I know AI is being shilled HARD.
If this is true then I bet you anything in about 3-6 months you guys are going to be recovering this file system from backups. There is absolutely no way it was done correctly and no one has bothered to notice yet. I'll accept your downvote for now.
Cleaning up a file share is 50% politics, 20% updating procedures, 20% training, and 10% technical. I've seen companies go code red and practically grind to a halt over a months-long planned file share change. I've seen them rolled back after months of work. I've seen this fracture the file shares into insane duplication (or worse) because, despite the fact it was coordinated, senior managers did not so much as inform their departments (but attended meetings and signed off on things), and by then it was too late to go back because some departments had converted and some had not. I've seen helpdesk staff go home "sick" because they could not take the volume of calls and abuse from angry staff afterwards.
Yes I have trauma on this subject. I will walk out of a job before ever doing a file share reorg again.
You'll roll it out in phases? LOL
You'll run it in parallel? LOL
You'll do some <SUPER SMART> thing? LOL.
Now, I am excited by MCP and would be all in, except for security.
Security is a huge issue.
Forget AI and imagine a system where you call APIs and you get back both data and JS. And that JS executes at global scope with full access to other APIs. And so does the JS from all the other MCP servers. Furthermore, the MCP server may go to arbitrary web pages and download JS. And that JS, e.g. from a stranger's GitHub issue or a web search, gets executed with full API privileges.
This isn't something MCP can fix. It is built into the dice-rolling nature of LLMs: turning predictions into privileged executions. And those dice can be loaded by any MCP server.
Or imagine surfing the web with a 2001 browser with no protections against cross-domain scripting. Then having a page where you choose which init scripts to run, and it cascades from there. You are logged into your bank at the time! This is what worries me. It's not USB-C. It's sort of USB-C, but where you are ordering all your peripherals from Amazon, AliExpress, and Temu, and the house is made of tinder.