MCP: An (Accidentally) Universal Plugin System

https://worksonmymachine.substack.com/p/mcp-an-accidentally-universal-plugin

phh
I agree with the article, and I love how the author is (mis-)using MCP. I just want to rephrase what the accident actually is.

The accident isn't that we somehow got a protocol to do things we couldn't do before. As other comments point out, MCP (the specification) isn't anything new or interesting.

No, the accident is that the AI Agent wave made interoperability hype, and vendor lock-in old-fashioned.

I don't know how long it'll last, but I sure appreciate it.

ljm
I genuinely believe that low-code workflow orchestrators like Zapier or IFTTT will be the first major victims of agentic LLM workflows. Maybe not right now but already it’s easier to write a prompt describing a workflow than it is to join a bunch of actions and triggers on a graph.

The whole hype around AI replacing entire job functions does not have as much traction as the concept of using agents to handle all of the administrative stuff that connects a workflow together.

Any open source model that supports MCP can do it, so there’s no vendor lock-in, no need to learn the setup for different workflow tools, and a lot of money saved on seats for expensive SaaS tools.

sshine
Hype, certainly.

But the way I see it, AI agents created incentives for interoperability. Who needs an API when everyone is job secure via being a slow desktop user?

Well, your new personal assistant who charges by the watt-hour NEEDS it. Just as the CEO will personally drive to pick up pizzas for the hackathon because that’s practically free labor, everyone now wants everything connected.

For those of us who rode the API wave before integrating became hand-wavey, it sure feels like the world caught up.

I hope it will last, but I don’t know either.

mh-
Unfortunately, I think we're equally likely to see shortsighted lock-in attempts like this [0] one from Slack.

I tried to find a rebuttal to this article from Slack, but couldn't. I'm on a flight with slow wifi though. If someone from Slack wants to chime in that'd be swell, too.

I've made the argument to CFOs multiple times over the years why we should continue to pay for Slack instead of just using Teams, but y'all are really making that harder and harder.

[0]: https://www.reuters.com/business/salesforce-blocks-ai-rivals...

jetsnoc
I wasn’t aware of this; it’s extremely shortsighted. My employees’ chats are my company’s data, and I should be able to use them as I see fit. Restricting API access to our own data moves them quickly into the 'too difficult to continue doing business with' category.

The reality is that Slack isn’t that sticky. The only reason I fended off the other business units who've demanded Microsoft Teams through the years is my software-engineering teams' QoL. Slack has polish and is convenient, but now that Slack is becoming inconvenient and not allowing me to do what I want, I can't justify fending off the detractors. I’ll gladly invest the time to swap them out for a platform that respects our ownership and lets us use our data however we need to. We left some money on the table, but I am glad we didn’t bundle and upgrade to Slack Grid and lock ourselves into a three-year enterprise agreement...

dgacmu
I'm happier we went with Zulip each day.
iechoz6H
Well, interoperability requires competition and if there's one thing we've learnt it's that the tech industry loves a private monopoly.
troupo
> But the way I see it, AI agents created incentives for interoperability.

There are no new incentives for interoperability. Companies that were already providing API access added MCP servers of varying quality.

The rest couldn't care less, unless they can smell an opportunity to monetize the hype.

alexpotato
Reminds me of the days of Winsock.

For those that don't remember/don't know: everything network-related in Windows used to use its own proprietary setup.

Then one day, a bunch of vendors got together and decided to have a shared standard to the benefit of basically everyone.

https://en.wikipedia.org/wiki/Winsock

ggambetta
Trumpet Winsock! Brings back memories :)
pyman
I think we're seeing a wave of hype marketing on YouTube, Twitter and LinkedIn, where people with big followings create posts or videos full of buzzwords (MCP, vibe coding, AI, models, agentic) with the sole purpose of promoting a product like Cursor, Claude Code or Gemini Code, or of getting people to use Anthropic's MCP instead of Google's A2A.

It feels like 2 or 3 companies have paid people to flood the internet with content that looks educational but is really just a sales pitch riding the hype wave.

Honestly, I just saw a project manager on LinkedIn telling his followers how MCP, LLMs and Claude Code changed his life. The comments were full of people asking how they can learn Claude Code, like it's the next Python.

Feels less like genuine users and more like a coordinated push to build hype and sell subscriptions.

visarga
The main benefit is not that it made interoperability fashionable, or that it makes things easy to interconnect. It is the LLM itself, if it knows how to wield tools. It's like you build a backend and the front-end is not your job anymore; AI does it.

In my experience Claude and Gemini can take over tool use, and all we need to do is tell them the goal. This is huge; before, we always had to specify the steps to achieve anything on a computer. Writing a fixed program to deal with a dynamic process is hard, while an LLM can adapt on the fly.

freeone3000
The issue holding us back was never that we had to write a frontend — it was the data locked behind proprietary databases and interfaces. Gated behind API keys and bot checks and captchas and scraper protection. And now we can have an MCP integrator for IFTTT and have back the web we were promised, at least for a while.
TeMPOraL
Indeed, the frontend itself is usually the problem. If not for data lock-in, we wouldn't need that many frontends in the first place - most of the web would be better operated through a few standardized widgets plus spreadsheet and database interfaces - and non-tech people would be using it and be more empowered for it.

(And we know that because there was a brief period in time when the basics of spreadsheets and databases were part of the curriculum in the West, and people had no problem with that.)

troupo
So... How do MCPs magically unlock data behind proprietary databases and interfaces?
HumanOstrich
I don't understand what you mean.

> It (the main benefit?) is the LLM itself, if it knows how to wield tools.

LLMs and their ability to use tools are not a benefit or feature that arose from MCP. There was tool usage/support via various protocols and conventions long before MCP.

MCP doesn't have any novel aspects that are making it successful. It's relatively simple and easy to understand (for humans), and luck was on Anthropic's side. So people were able to quickly write many kinds of MCP servers and it exploded in popularity.

Interoperability and interconnecting tools, APIs, and models across providers are the main benefits of MCP, driven by its wide-scale adoption.
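
For concreteness, this is roughly what pre-MCP tool use looked like: you described your functions inline, in each vendor's own envelope format. A hedged sketch in the OpenAI-style chat completions shape (the model name and get_weather tool are illustrative, not from this thread):

    # Pre-MCP, vendor-specific tool use: the tool schema travels inside
    # each request, in this one provider's own envelope shape.
    from openai import OpenAI

    client = OpenAI()

    tools = [{
        "type": "function",
        "function": {
            "name": "get_weather",  # hypothetical tool
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }]

    resp = client.chat.completions.create(
        model="gpt-4o",  # any tool-capable model
        messages=[{"role": "user", "content": "Weather in Oslo?"}],
        tools=tools,
    )
    print(resp.choices[0].message.tool_calls)  # the model asks us to run the tool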

secos
To me it feels like an awkward API that creates opportunities to work around the limitations of a normal API... which to me is not a great thing. Potentially useful, sure, but not great.
jadar
I don’t want to undermine the author’s enthusiasm for the universality of the MCP. But part of me can’t help wondering: isn’t this the idea of APIs in general? Replace MCP with REST and does that really change anything in the article? Or even an Operating System API? POSIX, anyone? Programs? Unix pipes? Yes, MCP is far simpler/universal than any of those things ended up being — but maybe the solution is to build simpler software on good fundamental abstractions rather than rebuilding the abstractions every time we want to do something new.
Jonovono
MCP is not REST. In your comparison, it's more that MCP is a protocol for discovering REST endpoints at runtime and letting users configure which endpoints should be used.

Say I'm building an app and I want my users to be able to play Spotify songs. Yeah, I'll hit the Spotify API. But now, say I've launched my app and I want my users to be able to play a song from sonofm when they hit play. Alright, now I gotta open up the code, add some if statements, hard-code the sonofm API, ship a new version, and show some update messages.

MCP is literally just a way to make this extensible: instead of hardcoding this in, it can be configured at runtime.
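
To sketch the client side of that (using the official `mcp` Python SDK as I read its docs; names like music_server.py and the "play" tool are hypothetical):

    # A minimal sketch: the app launches whatever MCP server the user
    # configured, asks it what tools exist, and calls one. No Spotify
    # vs. sonofm if-statements baked into the app.
    import asyncio
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def play(cmd: list[str], song: str):
        params = StdioServerParameters(command=cmd[0], args=cmd[1:])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                listing = await session.list_tools()
                # pick whichever tool this server advertises for playback
                name = next(t.name for t in listing.tools if "play" in t.name)
                return await session.call_tool(name, {"title": song})

    # the command comes from user config, not from the app's source
    asyncio.run(play(["python", "music_server.py"], "Song 2"))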

Too
That only works if you let the LLM do the interpretation of the MCP descriptions. In the case of TFA, the idea was to use MCP without an LLM, which is essentially the same as any old API.
emporas
>Alright, now I gotta open up the code, add some if statements, hard-code the sonofm API, ship a new version, and show some update messages.

You will need to do that anyway. Easier discovery of the API doesn't say much.

The user might want complicated functionality that combines several API calls, plus more code for filtering/sorting/searching of that information locally. If you let the LLM write the code by itself, it might take 20 minutes and millions of wasted tokens of the LLM going back and forth in the code to implement the functionality. No user is going to find that acceptable.

layer8
HATEOAS was supposed to be that.

https://en.wikipedia.org/wiki/HATEOAS

mort96
Wait was it? HATEOAS is all about hypermedia, which means there must be a human in the loop being presented the rendered hypermedia. MCP seems like it's meant to be for machine<->machine communication, not human<->machine
Jonovono
heh, there was a good convo about HATEOAS and MCP on HN awhile back:

* https://news.ycombinator.com/item?id=43307225

* https://www.ondr.sh/blog/ai-web

kvdveer
The main difference between MCP and REST is that MCP is self-described from the very start. REST may have OpenAPI, but it is a later add-on, and we haven't quite standardised on using it. The first step of exposing an MCP server is describing it; for REST that is an optional step that's often omitted.
Szpadel
Isn't SOAP also self-described?
kerng
When I read about MCP the first time and saw that it requires a "tools/list" API, it reminded me of COM/DCOM/ActiveX from Microsoft, which had things like QueryInterface and IDispatch. And I'm sure that wasn't the first time someone came up with dynamic runtime discovery of the APIs a server offers.

Interestingly, ActiveX was quite the security nightmare for very similar reasons, and we had to deal with the infamous "DLL Hell". So, history repeats itself.

souldeux
And gRPC with reflection, yeah?
xg15
Is it "self-described" in the sense that I can get a list of endpoints or methods, with a human- (or LLM-) readable description for each - or does it supply actual schemata that I could also use with non-AI clients?

(Even if only the former, it would of course be a huge step forward, as I could have the LLM generate schemata. Also, at least everyone is standardizing on a base protocol now, and on a way to pass command names, arguments, results, etc. That's already a huge step forward in contrast to arbitrary REST+JSON or even HTTP APIs)

Spivak
For each tool you get the human description as well as a JSON schema for the parameters needed to call the function.
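
Roughly, one entry in a tools/list result looks like this (a sketch from my reading of the spec; inputSchema is plain JSON Schema, and newer spec revisions add an optional outputSchema):

    # One tool as advertised by a server; field names per the MCP spec.
    tool = {
        "name": "get_forecast",  # hypothetical tool
        "description": "Get the weather forecast for a location.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "latitude": {"type": "number"},
                "longitude": {"type": "number"},
            },
            "required": ["latitude", "longitude"],
        },
    }
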
light_hue_1
But you're describing it in a way that is useless to anything but an LLM. It would have been much better if the description language had been more formalized.
Majromax
> It would have been much better if the description language had been more formalized.

To speculate about this, perhaps the informality is the point. A full formal specification of something is somewhere between daunting and Sisyphean, and we're more likely to see supposedly formal documentation that nonetheless is incomplete or contains gaps to be filled with background knowledge or common sense.

A mandatory but informal specification in plain language might be just the trick, particularly since vibe-APIing encourages rapid iteration and experimentation.

0x696C6961
The description includes an input and output JSON schema.
adverbly
APIs do not necessarily need to tell you everything about themselves. Anyone who has used poorly documented or fully undocumented APIs knows exactly what I'm talking about here.

Obviously, for HTTP APIs you might often see something like an OpenAPI specification or GraphQL, which both typically allow an API to describe itself. But this is not commonly a thing for non-HTTP APIs, which is something that MCP supports.

MCP might be the first standard for self-described APIs across all protocols (I might be misusing "protocols" here; I think the MCP spec calls them transports), making it slightly more universal.

I think the author is wrong to discount the importance of an LLM as an interface here, though. I do think the majority of MCP clients will be LLMs. An API might get you 90% of the way there, but if the LLM gets you to 99.9% by handling that last bit of plumbing, it's going to go mainstream.

caust1c
In my mind the only thing novel about MCP is requiring that the schema be provided as part of the protocol. Like, sure, it's convenient that the shape of the request/response wrappers is all the same; that certainly helps with management using libraries that can wrap dynamic types in static types, but everyone was already doing that with APIs - we just didn't agree on what the envelope's shape should be. BUT, with the requirement that the schema be provided with the protocol, and the carrot of AI models seamlessly consuming it, that was enough of an impetus.
marcosdumay
> the only thing novel about MCP is requiring the schema is provided as part of the protocol

You mean, like OpenAPI, gRPC, SOAP, and CORBA?

doug_durham
Where is the mandatory human-readable prose description of the purpose of the tool in any of those specs? There isn't one. Also, the simplicity of JSON interface descriptions is key.
sneak
You can’t connect to a gRPC endpoint and ask to download the client protobuf, but yes.
jampa
I don't want to sound like a skeptic, but I see way more people talking about how awesome MCP is rather than people building cool things with it. Reminds me of blockchain hype.

MCP seems like a more "in-between" step until the AI models get better. I imagine in 2 years, instead of using an MCP, we will point to the tool's documentation or OpenAPI, and the AI can ingest the whole context without the middle layer.

qsort
Regardless of how good a model gets, it can't do much if it doesn't have access to deterministic tools and information about the state of the world. And that's before you take into account security: you can't have a model running arbitrary requests against production, that's psychotic.

I don't have a high opinion of MCP and the hype it's generating is ridiculous, but the problem it supposedly solves is real. If it can work as an excuse to have providers expose an API for their functionality like the article hopes, that's exciting for developers.

ramoz
> Regardless of how good a model gets

I don't think this is true.

My Claude Code can:

- open a browser, debug a ui, or navigate to any website

- write a script to interact with any type of accessible api

All without MCP.

Within a year I expect there to be legitimate "computer use" agents. I expect agent SDKs to take over LLM APIs as the de facto abstractions for models, and MCP will have limited use isolated to certain platforms - with the caveat that an MCP-equipped agent performs worse than a native computer-use agent.

anon7000
> open a browser, debug a ui, or navigate to any website

I mean, that’s just saying the same thing — at the end of the day, there are underlying deterministic systems that it uses.

mtkd
It's very different to blockchain hype

I had similar skepticism initially, but I would recommend you dip toe in water on it before making judgement

The conversational/voice AI tech now dropping + the current LLMs + MCP/tools/functions to mix in vendor APIs and private data/services etc. really feels like a new frontier

It's not 100%, but it's close enough for a lot of use cases now, and it's going to change a lot of the ways we build apps going forward.

djhn
Is there anything new that’s come out in conversational/voice? Sesame Maya and Miles were kind of impressive demos, but that’s still in ’research preview’. Kyutai presented a really cool low-latency open model, but I feel like we’re still closer to Siri than to actually usable voice interfaces.
mtkd
It's moving very fast:

https://elevenlabs.io/

https://layercode.com/ (https://x.com/uselayercode has demos)

Have you used the live mode on the Gemini App (or stream on AI Studio)?

moooo99
Probably my judgement is a bit fogged. But if I get asked about building AI into our apps just one more time I am absolutely going to drop my job and switch careers
mtkd
That's likely because OG devs have been seeing the hallucination stuff, unpredictability, etc., and questioning how that fits with their carefully curated perfect system.

What blocked me initially was watching NDA'd demos a year or two back from a couple of big software vendors on how Agents were going to transform enterprise ... what they were showing was a complete non-starter to anyone who had worked in a corporate because of security, compliance, HR, silos etc. so I dismissed it

This MCP stuff solves that: it gives you (the enterprise) control in your own walled garden, whilst getting the gains from LLMs, voice etc. ... the sum of the parts is massive.

It more likely wraps existing apps than integrates directly with them, the legacy systems becoming data or function providers (I know you've heard that before ... but so far this feels different when you work with it)

mindwok
Rule of thumb: the companies building the models are not selling hype. Or at least the hype is mostly justified. Everyone else, treat with extreme skepticism.
bryancoxwell
But this whole post is about using MCP sans AI
iLoveOncall
MCP without AI is just APIs.

MCP is already a useless layer between AIs and APIs, using it when you don't even have GenAI is simply idiotic.

The only redeeming quality of MCP is actually that it has pushed software vendors to expose APIs to users, but just use those directly...

ricardobeat
And that’s the whole point - it’s APIs we did not have. Now app developers are encouraged to have a public, user friendly, fully functional API made for individual use, instead of locking them behind enterprise contracts and crippling usage limits.
ashwinsundar
I had a use case - I wanted to know what the congresspeople from my state have done this week. This information is surprisingly hard to just get from the news. I learned about MCP a few months ago and thought that it might be a cool way to interact with the congress.gov API.

I made this MCP server so that you could chat with real-time data coming from the API - https://github.com/AshwinSundar/congress_gov_mcp. I’ve actually started using it more to find out, well, what the US Congress is actually up to!

alex-moon
I always say this whenever anyone asks about whether something is "just hype". One day I will write a blog post on it. Long story short: every piece of new tech is "just hype" until the surrounding ecosystem is built for it. Trains are just hype until you cover the country in railway lines. Telephony is just hype until everyone has a telephone. Email is just hype until everyone has a personal computer (and a reason to sit in front of it every day).

Typically, in these kinds of developments there are two key things that need to appear at the same time: 1. Ubiquitous hardware, so e.g. everyone buys a car, or a TV, or a toaster. 2. An "interface" (whether that's a protocol or a UI or an API or a design standard) which is hyper low cognitive load for the user e.g. the flush button on a toilet is probably the best example I've ever seen, but the same can be said for the accelerator + brake + steering wheel combo, or indeed in digital/online it's CSV for me, and you can also say the same about HTTP like this article does.

Obviously these two factors feed into each other in a kind of feedback loop. That is basically what the role of "hype" is, to catalyse that loop.

alangpierce
> What if you just... removed the AI part?

Maybe I'm not fully understanding the approach, but it seems like if you started relying on third-party MCP servers without the AI layer in the middle, you'd quickly run into backcompat issues. Since MCP servers assume they're being called by an AI, they have the right to make breaking changes to the tools, input schemas, and output formats without notice.

mkagenius
Yes! Once the first integration is done, it will be static unless someone manually changes it.

Maybe the author is okay with that and just wants new APIs (for his toaster).

sureglymop
I've thought of this as well, but in reality, aren't MCP servers mostly just clients for pre-existing APIs?

For example, the Kagi MCP server interacts with the Kagi API. Wouldn't you have a better experience just using that API directly then?

On another note, as the number of Python interpreters running on your system increases with the number of MCP servers, does anyone think there will be "hosted" offerings that just provide a sort of "bridge" running all your MCP servers?

mkagenius
My understanding is MCP = original APIs + 1 more API

The additional API is /list-tools

And all the clients consume the /list-tools first and then the rest of the APIs, depending on which tool they want to call.

sureglymop
Yes. But in order to do that you run the MCP server for that API locally. Is it really worth doing that just to have the additional /list-tools, when it is otherwise basically just a bridge/proxy?
falcor84
From my perspective, remote MCP servers are gradually becoming the norm for external services.
mkagenius
Not quite sure I get what you mean by 'MCP server for that API locally'.

Locally you just need a consumer/client, no?

graerg
This has been my take, and maybe I'm missing something, but my thinking has been that in the ideal case there's an existing API with an OpenAPI spec you can just wrap with your FastMCP instantiation. This seemed neat, but while I was trying to do authenticated requests and tinkering with it in Goose, I ended up just having Goose run curl commands against the existing API routes. So I suspect that with a sufficiently well-documented OpenAPI spec, isn't MCP kinda moot?

On the other hand, in the absence of an existing API, you can implement your MCP server to just [do the thing] itself, and maybe that's where the author sees things trending.

thiht
Can someone help me find an actual explanation of what MCP does? The official MCP documentation completely fails at explaining how it works and what it does. For example the quick start for server developers[1] doesn’t actually explain anything. Sure in the Python examples they add @mcp annotations but WHAT DOES IT DO? I feel like I’m going crazy reading their docs because there’s nothing of substance in there.

Let’s assume I want to write an MCP HTTP server without a library, just an HTTP handler, how do I do it? What’s its schema? If I want to call an MCP server from curl what endpoint do I call? Can someone help me find where this is documented?

[1]: https://modelcontextprotocol.io/quickstart/server

troupo
MCP is a server that exposes API endpoints (new vibe term is "tools")

MCP clients can query these endpoints (new vibe term is "invoke tools")

That is almost the entirety of it.

The difference with traditional API endpoints is: they are geared towards LLMs, so LLMs can ask servers to list "tools" and can call these tools at will during execution.

It's a vibe-coded spec for an extremely hype-based space.
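
To answer the curl part of the question: a hedged sketch of the wire-level flow over the streamable HTTP transport, as I read the spec. The endpoint path, protocol version, and session-header handling may differ per server and spec revision, so verify against the Transports/Lifecycle pages:

    # Talking to an MCP server with plain JSON-RPC over HTTP.
    import requests

    URL = "http://localhost:8000/mcp"  # hypothetical server endpoint
    HEADERS = {
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    }

    # 1. initialize: negotiate protocol version and capabilities
    init = requests.post(URL, headers=HEADERS, json={
        "jsonrpc": "2.0", "id": 1, "method": "initialize",
        "params": {
            "protocolVersion": "2025-03-26",
            "capabilities": {},
            "clientInfo": {"name": "hand-rolled-client", "version": "0.1"},
        },
    })
    sid = init.headers.get("Mcp-Session-Id")
    if sid:  # echo the session id on every later request
        HEADERS["Mcp-Session-Id"] = sid

    # 2. acknowledge initialization (a notification, so it carries no id)
    requests.post(URL, headers=HEADERS, json={
        "jsonrpc": "2.0", "method": "notifications/initialized"})

    # 3. list the tools...
    print(requests.post(URL, headers=HEADERS, json={
        "jsonrpc": "2.0", "id": 2, "method": "tools/list"}).text)

    # ...and call one (name/arguments are hypothetical)
    print(requests.post(URL, headers=HEADERS, json={
        "jsonrpc": "2.0", "id": 3, "method": "tools/call",
        "params": {"name": "get_forecast",
                   "arguments": {"latitude": 48.2, "longitude": 16.4}}}).text)

(Depending on the server, the responses may come back as an SSE stream rather than a single JSON body.)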

thiht
Yes I understand that, but how do I write these endpoints myself without using magic @mcp annotations?

After like an hour of searching I finally found the Lifecycle page: https://modelcontextprotocol.io/specification/2025-06-18/bas... and I think it contains the answers I’m looking for. But I feel this should be roughly explained in the first introduction.

Agree that most of the pages feel LLM-generated, and borderline unreadable.

edgolub
The reason they do not expose the underlying server schema is that you aren't supposed to write your own MCP server from scratch, in the same way you aren't supposed to write your own GraphQL server from scratch.

Yes, technically you could, but you are "supposed" to just use a library that builds the actual endpoints based on the schema for the version of MCP you are using, and only worry about building your tools and exposing them to an LLM so they can be consumed (LLM function calling, but with lots of abstractions to make it more developer-friendly).

troupo
I would use Elixir and ash_ai :)) https://youtu.be/PSrzruaby1M?si=EEEQtQPOSMaLFJcM

(Sorry, I know this isn't really a helpful answer)

neoden
So much scepticism in the comments. I spent last week implementing an MCP server and I must say that "well-designed" is probably an overstatement. One of the principles behind MCP is that "an MCP server should be very easy to implement". I don't know, maybe it's a skill issue, but it's not that easy at all. What is important, imo, is that so many eyes are looking in one direction right now. That means it has a good chance of having all its problems solved very quickly. And second, it's often so hard to gather the critical mass of attention around something to create an ecosystem, but that is happening right now. I wish all the participants patience and luck)
newtwilly
It's pretty easy if you just use the MCP Python library. You just put an annotation on a function and there's your tool. I was able to do it and it works great without me knowing anything about MCP. Maybe it's a different story if you actually need to know the protocol and implement more for yourself.
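
For reference, that flow looks roughly like this with the SDK's FastMCP helper (going from its README; double-check against the current release):

    # The decorator turns a typed Python function into an MCP tool: the
    # type hints become the input schema and the docstring becomes the
    # tool description.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("demo-server")

    @mcp.tool()
    def add(a: int, b: int) -> int:
        """Add two numbers."""
        return a + b

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default
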
neoden
Yes, I am using their Python SDK. But you can't just add MCP to your existing API server if it's not ready for async Python. Probably you would need to deploy it as a separate server and make server-to-server calls to your API. Making authentication work with your corporate IAM provider is a path of trial and error — not all MCP hosts implement it the same way, so you need to compare the behaviours of multiple apps to decide whether it's your setup that fails or bugs in VS Code or something like that. I haven't even started to think about the ability of a server to message back to the client to communicate with the LLM; AFAIK modern clients don't support such a scenario yet, or at least don't support it well.

So yes, adding a tool is trivial; adding an MCP server to your existing application might require some non-trivial work of probably unnecessary complexity.

klabb3
> One of the principles behind MCP is that "an MCP server should be very easy to implement".

I’m not familiar with the details but I would imagine that it’s more like:

”An MCP server which re-exposes an existing public/semi-public API should be easy to implement, with as few changes as possible to the original endpoint”

At least that’s the only way I can imagine getting traction.

mattmanser
We've done it before, it hasn't worked before, and it's only a matter of years if not months before apps start locking down the endpoints so ONLY chatgpt/claude/etc. servers can use them.

Interoperability means user portability. And no tech bro firm wants user portability, they want lock in and monopoly.

inheritedwisdom
Lowering the bar to integrate and communicate is what has historically allowed technology to reach critical mass and enabled adoption. MCP is an evolution in that respect and shouldn’t be disregarded.

We had a non-technical team member write an agent to clean up a file share. There are hundreds of programming languages, libraries, and APIs that enabled that before MCP, but now people don't even have to think about it. Is it performant? No. Is it the "best" implementation? Absolutely not. Did it create enormous value in a novel way that was not possible with the resources, time, and technology we had before? 100%. And that's the point.

citizenpaul
>non technical team member write an agent to clean up a file share

This has to be BS (or you just think it's true) unless it was like 1000 files. In my entire career I've seen countless crazy file shares that are barely functional chaos. In nearly every single "cleanup" attempt I've tried to get literally ANYONE from the relevant department to help, with little success. That is just for ME to do the work FOR THEM. I just need context from them. I've on countless occasions had to go to senior management to force someone to simply sit with me for an hour to go over the schema they want to implement. SO I CAN DO IT FOR THEM, and they don't want to do it and literally seem incapable of doing so when forced to. COUNTLESS times. This is how I know AI is being shilled HARD.

If this is true then I bet you anything in about 3-6 months you guys are going to be recovering this file system from backups. There is absolutely no way it was done correctly and no one has bothered to notice yet. I'll accept your downvote for now.

Cleaning up a file share is 50% politics, 20% updating procedures, 20% training and 10% technical. I've seen companies go code red and practically grind to a halt over a months-long planned file share change. I've seen them rolled back after months of work. I've seen this fracture the file shares into insane duplication (or worse) because, despite the fact it was coordinated, senior managers did not so much as inform their departments (but attended meetings and signed off on things), and now it's too late to go back because some departments converted and some did not. I've seen helpdesk staff go home "sick" because they could not take the volume of calls and abuse from angry staff afterwards.

Yes I have trauma on this subject. I will walk out of a job before ever doing a file share reorg again.

You'll roll it out in phases? LOL

You'll run it in parallel? LOL

You'll do some <SUPER SMART> thing? LOL.

bravesoul2
This is well written and fun. Thanks OP!

Now I am excited by MCP and would be all in, except for security.

Security is a huge issue.

Forget AI and imagine a system where you call APIs and you get both data and JS. And that JS executes at global scope with full access to other APIs. And so do all the other MCP servers. Furthermore, the MCP server may go to arbitrary Web pages and download JS. And that JS, e.g. from a stranger's GitHub issue or a Web search, gets executed with full API privileges.

    <cute animal interject> This isn't something MCP can fix. It is built into the dice rolling nature of LLMs. Turning predictions into privileged executions. And those dice can be loaded by any MCP server.
Or imagine surfing the Web using a 2001 browser with no protections against cross domain scripting. Then having a page where you choose what init scripts to run and then it cascades from there. You are logged into your bank at the time!

This is what worries me. It's not USB-C. It's sort of USB-C, but where you're ordering all your peripherals from Amazon, AliExpress and Temu, and the house is made of tinder.