AWS AgentCore versus MCP Orchestrator


AWS released Bedrock AgentCore in October 2025. It is a serious product. The Runtime is a microVM-per-session execution environment with up to 8-hour session limits, consumption-based pricing that does not charge for I/O wait, and per-session isolation that handles concurrency without the developer thinking about it. The Gateway service centralizes MCP tool routing, supports REST APIs and Lambda functions and existing MCP servers as targets behind one endpoint, and decouples inbound authentication from target systems through configurable identity providers. The rest of the suite — Memory, Code Interpreter, Browser, Identity, Observability, Policy, Evaluations, Payments, Agent Registry — fills out what a managed agent platform needs to look like in 2026.

It is a good product. AWS has done careful work here.

It also is not the answer to the enterprise MCP problem.

Hosting MCP servers is not the same as governing MCP servers

When I describe Magertron’s MCP Orchestrator to AWS-aligned engineers, the first response is usually some version of “isn’t this just AgentCore?” And the answer is no, but the question is fair, because both products talk about MCP servers and both products involve gateways and both products run things. The surface vocabulary overlaps. The actual problems they solve are different layers of the stack.

AgentCore is a managed runtime substrate. You bring a container, AgentCore runs it. You bring an MCP server, AgentCore hosts it. You bring REST APIs, AgentCore wraps them as MCP tools through Gateway. The premise is that the agents and tools you want to run are yours, you know about them, you registered them with AWS, and AWS will run them well. That premise is correct for greenfield agent builds inside AWS-committed organizations. AgentCore is the right answer for that buyer.

MCP Orchestrator is a Kubernetes-native governance plane for MCP servers. The premise is the inverse: in any enterprise of meaningful size, the MCP servers you most need to govern are the ones you didn’t register with anyone. They are the ones some developer wrapped around their team’s internal REST API in 30 minutes and pointed Claude Desktop at. They are running on dev VMs, under desks, in development namespaces of clusters operations never deployed to. They are shadow infrastructure. They have credentials. And the way you find out about them is not by asking AWS to list its hosted runtimes.

The structural limit on managed hosting

AgentCore Gateway recently added support for existing MCP servers as a target type, including private MCP servers maintained by customers. That is a good capability. The bound on it is exactly the bound any managed service has: you have to know about the server and configure it as a target.

The actual distribution of MCP servers in a 5,000-person enterprise looks like this. A small minority — call it ten percent on a good day — are MCP servers that platform engineering or the AI platform team built deliberately, with review, with security signoff, with intent to be managed. These belong on AgentCore (if you are an AWS shop). They are exactly the workload AgentCore was built for.

The other ninety percent are shadow MCP servers. They were not built by the AI platform team. They were built by application teams, by data engineers, by individual developers who needed Claude or Cursor or Kiro to talk to one internal system one time, and the MCP wrapper was the fastest path. Nobody filed a ticket. Nobody updated a Confluence page. Nobody told AWS. They are not, and will never be, configured as AgentCore Gateway targets, because the people who run them do not know AgentCore Gateway exists. They might not even know what MCP is, formally. They just know “the thing that makes Claude talk to our API works now.”

This is not a critique of AWS. It is a recognition that a managed hosting service inventories what its customers register with it. It cannot inventory what its customers do not know to register, because that is a categorically different operation. It is endpoint inspection, egress traffic analysis, config-file enumeration, and source repo correlation, and a hyperscaler’s hosting service does not do any of those things.

What a governance plane actually does

Inventory is table stakes. Discover what is running, across MCP host config files, endpoint package managers, egress proxies, and source repositories. Aggregate, dedupe, correlate. Produce a single, current, queryable list of every MCP server an agent could talk to in your environment.
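The config-file slice of that discovery is the easiest to picture. A minimal sketch, assuming JSON configs with an "mcpServers" map as Claude Desktop and Cursor use today (the exact paths vary by OS and host app, and the dedupe key here is an illustrative choice, not a product interface):

```python
import json
from pathlib import Path

# Candidate MCP host config locations (illustrative; real discovery
# would enumerate per-OS paths across the whole endpoint fleet).
CANDIDATE_CONFIGS = [
    Path.home() / "Library/Application Support/Claude/claude_desktop_config.json",
    Path.home() / ".cursor/mcp.json",
]

def enumerate_mcp_servers(paths):
    """Aggregate and dedupe MCP server entries found in host config files."""
    inventory = {}
    for path in paths:
        if not path.is_file():
            continue
        try:
            config = json.loads(path.read_text())
        except (json.JSONDecodeError, OSError):
            continue
        for name, spec in config.get("mcpServers", {}).items():
            # Dedupe on the launch command or URL, not the local nickname:
            # two laptops pointing at the same server are one server.
            key = spec.get("url") or " ".join(
                [spec.get("command", ""), *spec.get("args", [])]
            )
            entry = inventory.setdefault(key, {"names": set(), "sources": set()})
            entry["names"].add(name)
            entry["sources"].add(str(path))
    return inventory
```

The same aggregate-and-dedupe shape applies to the other sources; only the extraction step changes.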

After inventory, the harder work: triage. Which discovered servers are sanctioned, which are tolerated, which are deprecated, which are actively dangerous. Federate identity and access policy across them — not because they were built with federated identity in mind, but because policy is now imposed on them retroactively. Watch behavior. An MCP server that historically returned thirty-eight tool definitions and now returns forty-two has either been updated or compromised, and you should be told either way.
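The thirty-eight-to-forty-two check is just a diff over snapshots of the server's advertised tool surface. A minimal sketch, assuming each snapshot maps tool name to its definition as returned by an MCP tools/list call:

```python
def diff_tool_surface(previous, current):
    """Compare two snapshots of a server's advertised tool definitions.

    Returns the changes a governance plane should alert on: tools that
    appeared, tools that disappeared, and tools whose schema changed.
    Updated or compromised, either way someone should be told.
    """
    added = sorted(set(current) - set(previous))
    removed = sorted(set(previous) - set(current))
    changed = sorted(
        name for name in set(previous) & set(current)
        if previous[name] != current[name]
    )
    return {"added": added, "removed": removed, "changed": changed}
```

The interesting policy question is the threshold: a changed schema on a sanctioned server might page someone, while a new tool on a tolerated shadow server might just open a triage ticket.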

Adopt the ones that matter into a managed runtime. This is where MCP Orchestrator and AgentCore stop being alternatives and start being complementary. A governance plane that finds an unmanaged MCP server in shadow infrastructure has three options for what to do with it: leave it where it is and monitor, kill it, or migrate it onto a managed runtime. The managed runtime is fine as AgentCore for AWS-shop customers. It is fine as a customer-managed Kubernetes pod under MCP Orchestrator for sovereignty-constrained customers. It is fine as both for organizations with mixed-environment deployments. The federation problem is the same regardless of where the runtime lives.

When AgentCore is the right answer

I have to be careful here, because the temptation when you build a competing product is to spend the rest of the post explaining why your product is better. Most of the time it is not — it is different.

AgentCore is the right answer if your organization is committed to AWS and your bottleneck is time to first agent in production. The closest equivalent on MCP Orchestrator — set up k3s, deploy the Helm chart, register your first MCP server — takes a couple of hours and a Kubernetes operator who knows what they are doing. AgentCore takes a container and an AWS account.
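For concreteness, that couple of hours looks something like the following. Every name here — the chart repo, the chart itself, the CRD — is illustrative, not the product's actual interface:

```shell
# Single-node k3s for evaluation (production would use an existing cluster)
curl -sfL https://get.k3s.io | sh -

# Deploy the orchestrator via Helm (repo URL and chart name are hypothetical)
helm repo add magertron https://charts.example.com/magertron
helm install mcp-orchestrator magertron/mcp-orchestrator \
  --namespace mcp-system --create-namespace

# Register a first MCP server via a hypothetical custom resource
kubectl apply -n mcp-system -f first-server.yaml
kubectl -n mcp-system get pods   # wait for everything to go Ready
```

That is not hard for a Kubernetes operator, but it is not a container and an AWS account either, and that difference is the whole point of this section.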

AgentCore is the right answer if you need Memory, Code Interpreter, Browser, or Payments as managed services. MCP Orchestrator does not have these and will not be building them. AWS does, and they integrate naturally with the rest of the AWS data plane.

AgentCore is the right answer at the low-to-moderate volume end of the curve, where consumption pricing is unambiguously cheaper than fixed compute. There is a crossover point — well-documented by third-party AgentCore cost analyses — where session-based pricing flips to being more expensive than dedicated capacity, and the location of that crossover depends on session length, concurrency, and resource shape. For most agent builds in their first eighteen months, AgentCore is on the favorable side of that crossover.
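The crossover arithmetic is simple enough to sketch. All numbers below are illustrative placeholders, not AWS list prices — the claim is only that a crossover exists and that its location depends on session shape:

```python
def monthly_consumption_cost(sessions, active_seconds_per_session,
                             vcpu_rate_per_second, vcpus=1.0):
    # Consumption billing charges for active compute, not I/O wait,
    # so only active seconds enter the calculation.
    return sessions * active_seconds_per_session * vcpus * vcpu_rate_per_second

def crossover_sessions(fixed_monthly_cost, active_seconds_per_session,
                       vcpu_rate_per_second, vcpus=1.0):
    # Sessions per month at which consumption pricing equals a fixed node.
    per_session = active_seconds_per_session * vcpus * vcpu_rate_per_second
    return fixed_monthly_cost / per_session

# e.g. a $300/month dedicated node vs. a hypothetical $0.00005 per
# vCPU-second, 60 active seconds per session: crossover at 100,000
# sessions/month. Below that, consumption wins.
```

Longer active sessions or fatter resource shapes pull the crossover down; mostly-idle sessions push it up, which is exactly why agents that spend their time waiting on I/O favor consumption pricing.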

When MCP Orchestrator is the right answer

Sovereignty. If your data cannot live on AWS infrastructure — defense, regulated healthcare with data residency mandates, financial services in jurisdictions with strict data localization, EU customers with GDPR boundaries that AWS region selection does not fully resolve — AgentCore is not on the table. The MCP servers run where the data runs. MCP Orchestrator runs in your cluster, in your VPC, in your colocation rack, in your air-gapped enclave.

Multi-cloud or non-cloud. AgentCore is an AWS product. If your environment is Azure-first, GCP-first, multi-cloud-by-policy, or running primarily on-premises, the conversation ends at the cloud vendor question. MCP Orchestrator runs anywhere Kubernetes runs, which is everywhere.

Existing Kubernetes investment. Most enterprises of meaningful size already operate Kubernetes. They have Prometheus dashboards, Argo pipelines, OPA policies, service meshes, observability stacks. Adding another runtime substrate is a platform decision they will resist. Adding workloads to the Kubernetes they already operate is a Tuesday. MCP Orchestrator is the latter.

Federation. The governance work I described above — discovering shadow MCP servers across endpoint config files, package managers, egress traffic, and source repositories — does not happen inside AgentCore, because AgentCore is hosting infrastructure. It happens at a layer above the runtime, sitting between the enterprise’s existing security and platform tooling and the agent fleet. That layer is what MCP Orchestrator is becoming.

The framing that helps the buyer

The mistake the next twenty MCP infrastructure startups will make is positioning themselves as AgentCore competitors. AWS has more sales coverage, more marketing budget, and more existing customer relationships than any of us. Head-on competition on AWS’s home field is not a winning strategy and is not honest, because for the buyer who is already inside AWS and just wants to run agents, AgentCore probably is the right answer.

The framing that is honest and that is winnable is: AgentCore is a source. Your governance plane sits above it.

A mature enterprise MCP posture twelve months from now will have managed MCP servers running in AgentCore for the AWS-side workloads, managed MCP servers running on customer-operated Kubernetes for the on-prem and sovereignty-constrained workloads, and a federation layer above both that does inventory, governance, and policy across the entire estate. The federation layer treats AgentCore Gateway as a source of truth about what AWS hosts and treats endpoint config-file enumeration as a source of truth about what employees are running. Both are inputs. Neither, alone, is the answer.
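The "both are inputs" idea reduces to a merge over per-source feeds. A minimal sketch, with source names and record shapes as illustrative assumptions — the real feeds would be an AgentCore Gateway target listing on one side and the endpoint scan results on the other:

```python
from dataclasses import dataclass, field

@dataclass
class ServerRecord:
    endpoint: str                       # dedupe key across sources
    sources: set = field(default_factory=set)
    managed: bool = False               # True if any managed runtime hosts it

def federate(*source_feeds):
    """Merge per-source feeds of (endpoint, is_managed) into one estate view."""
    estate = {}
    for source_name, records in source_feeds:
        for endpoint, is_managed in records:
            rec = estate.setdefault(endpoint, ServerRecord(endpoint))
            rec.sources.add(source_name)
            rec.managed = rec.managed or is_managed
    return estate

# Anything seen only by the endpoint scan is shadow by definition.
agentcore = ("agentcore-gateway", [("https://internal.billing/mcp", True)])
laptops = ("endpoint-config-scan", [("https://internal.billing/mcp", False),
                                    ("http://dev-vm:8231/mcp", False)])
estate = federate(agentcore, laptops)
shadow = [e for e, r in estate.items() if not r.managed]
```

The same server showing up in both feeds is the healthy case — the managed runtime and the endpoint scan corroborate each other. The entries with exactly one source are where the governance work starts.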

MCP Orchestrator is being built as that layer. AgentCore is being built as one of its most important sources.

That is the relationship. It is not adversarial. It is architectural.

“We use AgentCore” is not an MCP governance strategy.

“We govern every MCP server in our environment, including the ones in AgentCore, on-prem, and on employee laptops” is the beginning of one.