Show HN: An MCP Server for Understanding AWS Costs

14 points by StratusBen 12 hours ago

Hey all - I work at Vantage, a FinOps platform.

I know AI is at peak hype right now, but it has already changed some of our own dev workflows, so we wanted to give our customers a way to experiment with using AI to make their cloud cost management work more productive.

The MCP Server acts as a connector between LLMs (right now only Claude and Cursor support it, with ChatGPT and Google Gemini coming soon) and your cost and usage data on Vantage, which supports 20+ cloud infra providers including AWS, Datadog, Mongo, etc. (You need a Vantage account to use it, since it's built on the Vantage API.)

Video demo: https://www.youtube.com/watch?v=n0VP2NlUvRU

Repo: https://github.com/vantage-sh/vantage-mcp-server
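
If you want to wire it up to Claude Desktop, it's the usual MCP server registration. Roughly something like this (the exact binary name and token env var may differ, so treat this as a sketch and check the repo README for the real setup):

    {
      "mcpServers": {
        "vantage": {
          "command": "vantage-mcp-server",
          "env": {
            "VANTAGE_BEARER_TOKEN": "<read-only API token>"
          }
        }
      }
    }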

It's really impressive how capable the latest-gen models are with an MCP server and an API. So far we have found it useful for:

Ad-hoc questions: "What's our non-prod cloud spend per engineer if we have 25 engineers?"

Action plans: "Find unallocated spend and look for clues about how it should be tagged"

Multi-tool workflows: "Find recent cost spikes that look like they could have come from eng changes and look for GitHub PRs merged around the same time" (using it in combination with the GitHub MCP server)
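
For that multi-tool case, you just register both servers in the same client config, something like the below (the GitHub entry here is illustrative, based on my reading of the github-mcp-server docs; verify the invocation before copying):

    {
      "mcpServers": {
        "vantage": { "command": "vantage-mcp-server" },
        "github": {
          "command": "docker",
          "args": ["run", "-i", "--rm", "ghcr.io/github/github-mcp-server"]
        }
      }
    }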

Thought I'd share, let me know if you have questions.

andrenotgiant 12 hours ago

What's the difference between connecting an LLM to the data through Vantage vs directly to the AWS cost and usage API's?

  • StratusBen 11 hours ago

    A few things.

    The biggest is giving the LLM context. On Vantage we have a primitive called a "Cost Report" that you can think of as being a set of filters. So you can create a cost report for a particular environment (production vs staging) or by service (front-end service vs back-end service). When you ask questions to the LLM, it will take the context into account versus just looking at all of the raw usage in your account.

    Most of our customers will create these filters, define reports, and organize them into folders and the LLM takes that context into account which can be helpful for asking questions.
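
    To make that concrete, creating one of those scoped reports via the API looks roughly like this. The endpoint path, field names, and the VQL filter are from memory and meant as a sketch, so double-check against the API reference:

        // Sketch: create a Cost Report scoped to one environment via the
        // Vantage API. Field names and filter syntax here are assumptions;
        // verify against the Vantage API docs before relying on them.
        const resp = await fetch("https://api.vantage.sh/v2/cost_reports", {
          method: "POST",
          headers: {
            Authorization: `Bearer ${process.env.VANTAGE_API_TOKEN}`,
            "Content-Type": "application/json",
          },
          body: JSON.stringify({
            title: "Staging - All Services",
            // VQL filter: only AWS costs tagged environment=staging
            filter:
              "costs.provider = 'aws' AND tags.name = 'environment' AND tags.value = 'staging'",
          }),
        });
        console.log(await resp.json());

    The LLM then queries against reports like that one instead of the whole bill.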

    Lastly, we support providers beyond AWS, so you can merge in other associated costs like Datadog, Temporal, Clickhouse, etc.

cat-whisperer 12 hours ago

Is this going to differ from org to org, as resources end up getting intertwined? Or is there a way to standardize it?