AWS API Gateway for Modern API Development
Before you write a single line of business logic, a new API requires a lot of setup that has nothing to do with what the API is actually for. Routing, load balancing, SSL termination, authentication middleware, rate limiting, logging pipelines, versioning, deployment configuration - all of it has to exist before your API can do anything useful. AWS API Gateway ties all of these pieces together and can route requests to many different kinds of AWS backends - Lambda, ECS, load balancers, and more.
With API Gateway most of the infrastructure layer becomes a configuration problem instead of an engineering problem. We use it regularly at Absolute Ops when helping customers build or modernize APIs, and it's worth understanding both what it does well and where it runs into limits.
Although out of scope for this post, it's also worth noting that AWS API Gateway is considerably cheaper than Azure's equivalent, API Management, and more robust to boot.

What you're not building
The clearest way to explain API Gateway's value is to list what you don't have to set up yourself.
Authentication is the big one. Instead of writing token validation middleware that every service has to implement, you pick an auth model - IAM, Cognito, a Lambda authorizer, custom JWT, API keys, mTLS - and configure it at the gateway level. Every route behind it inherits that protection without each service needing its own auth code.
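When the built-in options don't fit, a Lambda authorizer is just a small function that inspects the incoming token and returns an IAM policy allowing or denying the call. Here's a minimal sketch of a TOKEN-type authorizer in Python - the token check is a deliberate placeholder; a real authorizer would validate a JWT signature against your identity provider's keys:

```python
# Minimal sketch of a Lambda TOKEN authorizer for a REST API.
# API Gateway passes the token in "authorizationToken" and the
# invoked method's ARN in "methodArn".

def handler(event, context):
    token = event.get("authorizationToken", "")
    # Placeholder check - substitute real JWT validation here.
    effect = "Allow" if token == "valid-token" else "Deny"
    return {
        "principalId": "user",  # identifier for the caller
        "policyDocument": {
            "Version": "2012-10-17",
            "Statement": [{
                "Action": "execute-api:Invoke",
                "Effect": effect,
                "Resource": event["methodArn"],
            }],
        },
    }
```

API Gateway caches the returned policy per token, so the function isn't invoked on every request.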
Rate limiting and throttling work the same way. Burst limits, per-client quotas, usage plans - these are configuration, not code. Building this yourself typically means a custom middleware layer or an advanced load balancer setup. Here it's a few settings.
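For intuition about what those two throttle settings mean: API Gateway uses token-bucket semantics, where the rate is the steady-state requests per second and the burst is the bucket size. A toy model (purely illustrative - the gateway does this for you):

```python
import time

class TokenBucket:
    """Toy model of API Gateway's throttle settings:
    'rate' is steady-state requests/sec, 'burst' is the bucket size."""

    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst  # bucket starts full
        self.last = time.monotonic()

    def allow(self):
        # Refill tokens based on elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # API Gateway would answer 429 Too Many Requests
```

A burst of 2 means two back-to-back requests succeed immediately, after which callers are limited to the steady-state rate.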
Caching, staging, and canary deployments are also built in. You can run dev, test, and prod as separate stages with their own configuration, route a percentage of traffic to a new deployment while the rest stays on the current version, and promote or roll back without touching infrastructure.
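Under the hood, a canary is just a small settings object on the stage. These are the fields API Gateway's canary configuration exposes; the deployment ID and traffic percentage below are made-up values you would swap for your own:

```python
# Shape of a stage's canary configuration (values are illustrative).
# You'd apply this to a stage via the API Gateway API or your IaC tool.
canary_settings = {
    "percentTraffic": 10.0,    # 10% of requests go to the new deployment
    "deploymentId": "abc123",  # hypothetical ID of the canary deployment
    "useStageCache": False,    # keep the canary's cache separate
}
```

Promoting the canary means making that deployment the stage's primary; rolling back means zeroing the percentage. Either way, no infrastructure changes.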
Logging and tracing come for free through CloudWatch and X-Ray - request logs, latency distributions, error breakdowns by stage, integration latency. There's no log pipeline to build.
None of this is magic. It's just that AWS has already built the plumbing, and you're configuring it rather than writing it.

Where it fits in your architecture
API Gateway will integrate with nearly anything you're running on AWS. Lambda is the most common pairing - you define an endpoint, point it at a function, and you have an API with no servers, no containers, no scaling configuration, and no idle cost. For startups and SaaS backends, this combination is hard to beat on operational simplicity.
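With the common proxy integration, the entire "API server" is a handler that receives the HTTP request as an event and returns a status code, headers, and body. A minimal sketch (the greeting logic is obviously a stand-in for real business logic):

```python
import json

# Minimal Lambda handler in API Gateway's proxy-integration format.
# The event carries the HTTP method, path, headers, query string, and body.
def handler(event, context):
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")  # placeholder business logic
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

Point a route at this function and you have a working endpoint - no server, no scaling configuration, no idle cost.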
For teams running containers or VMs, it works equally well as a managed entry point sitting in front of an Application Load Balancer, an ECS or EKS service, or EC2 instances. You can also use it to proxy third-party APIs without exposing your backend directly, which comes up more often than you'd expect when integrating external services.
The practical result is that whether your logic runs in a Lambda function, a container, or a VM that's been running for five years, you can put the same managed API surface in front of it.

A concrete example
Consider a mid-sized SaaS platform with a handful of surfaces: customer-facing APIs, an internal admin API, billing and usage tracking, and some async workflows. The natural AWS-native architecture for this looks roughly like:
- API Gateway as the unified entry point for all API traffic
- Cognito handling user authentication
- Lambda functions for the API business logic
- Step Functions for anything that involves multi-step workflows
- DynamoDB or Aurora Serverless for data
- WAF sitting in front for API protection
- CloudWatch for metrics and logs
The notable thing about this stack is what's missing: no API servers to patch, no custom auth middleware, no log pipeline, no rate limiting code, no scaling configuration. The engineering effort goes into the business logic, not the infrastructure around it.

Where it doesn't fit
API Gateway isn't the right answer everywhere, and it's worth knowing when to reach for something else.
The biggest question to ask is whether you actually need an API gateway to start with. If you just need authentication and have one or two endpoints, you might be better off avoiding the added complexity. An Application Load Balancer with built-in OIDC authentication against your SSO provider will take you a long way without much hassle.
At very high throughput you should carefully consider your options. API Gateway's pricing is per-request, which is attractive at moderate volumes but adds up fast at scale. Cost modeling matters before you commit.
Payload size is a hard limit: requests cap out at 10 MB (or 128 KB for WebSocket APIs). If you're building file upload endpoints or handling large data transfers, you'll need a different path, typically pre-signed S3 URLs.
There are also layers of request caps, including an account-level throttle (by default, 10,000 requests per second per region). Monitor your usage so you don't get unexpected 429s.

The actual tradeoff
API Gateway's value is straightforward: it trades some flexibility and per-request cost for a significant reduction in the infrastructure you have to build and operate. For most API workloads - especially greenfield development, serverless architectures, and teams that would rather ship features than maintain gateway middleware - that tradeoff is clearly worth it.
For high-throughput, cost-sensitive, or highly customized use cases, the calculus changes. The right answer depends on what you're building and what your traffic looks like.
If you're evaluating whether API Gateway is the right fit for your architecture, or trying to modernize an existing API layer, we're happy to walk through the specifics. Get in touch.