What is AWS Transit Gateway?
AWS Transit Gateway (TGW) is a regional network transit hub that connects VPCs, VPN connections, and AWS Direct Connect gateways through a single managed service. Before TGW, connecting N VPCs required up to N*(N-1)/2 individual peering connections — a mesh that becomes operationally unmanageable past five or six VPCs. With Transit Gateway, every VPC attaches once to the hub, and the TGW handles routing between them.
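The scaling difference is easy to quantify. A quick sketch of the mesh formula (illustrative arithmetic only, not an AWS API):

```python
def full_mesh_links(n: int) -> int:
    """Peering connections needed for a full mesh of n VPCs: N*(N-1)/2."""
    return n * (n - 1) // 2

def tgw_attachments(n: int) -> int:
    """With a Transit Gateway, each VPC attaches once to the hub."""
    return n

for n in (3, 6, 10, 20):
    print(f"{n:>2} VPCs: {full_mesh_links(n):>3} peering links vs {tgw_attachments(n)} TGW attachments")
# At 20 VPCs the mesh needs 190 peering connections; the hub needs 20 attachments.
```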
TGW also simplifies hybrid connectivity. Instead of terminating a VPN or Direct Connect on every VPC, you terminate it once on the TGW and route from there. For multi-account AWS environments, this makes the network account pattern viable: one account owns the TGW, and spoke accounts attach their VPCs to it.
Core Concepts
Attachments
An attachment is a logical connection between the TGW and a resource. The supported attachment types are:
- VPC attachment — one ENI per Availability Zone, placed in a designated subnet (typically a /28 "TGW subnet") in each AZ
- VPN attachment — site-to-site VPN, supports two tunnels with BGP or static routing
- Direct Connect Gateway attachment — connects the TGW to a DXGW for hybrid connectivity
- TGW Peering attachment — cross-region TGW-to-TGW connection (static routes only, no BGP)
- Connect attachment — used with SD-WAN appliances via GRE tunnel over an underlying VPC or Direct Connect attachment
Route Tables
TGW has its own route tables, entirely separate from VPC route tables. An attachment's outbound routing decision is made by the TGW route table it is associated with. CIDRs from attachments are populated into route tables via propagation.
Association — each attachment is associated with exactly one TGW route table. When traffic leaves that attachment into the TGW, the TGW looks up the destination in the associated route table.
Propagation — an attachment can propagate its CIDR into one or more TGW route tables. This is how other attachments learn how to reach it. You can also add static routes manually.
By default, if you leave DefaultRouteTableAssociation and DefaultRouteTablePropagation enabled, all attachments share a single route table — every VPC can reach every other VPC. Disabling these defaults and building explicit route tables is the recommended approach for any multi-environment setup.
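The association/propagation mechanics can be captured in a toy model — this is a hypothetical sketch to show the two roles an attachment plays, not the AWS API (the names `prod-rt`, `shared-vpc`, and the CIDRs are illustrative):

```python
# Toy model of TGW route-table mechanics:
#  - association decides which table an attachment's outbound traffic is looked up in
#  - propagation decides which tables learn an attachment's CIDR
route_tables = {"prod-rt": {}, "nonprod-rt": {}}
association = {}  # attachment -> the single route table used for its lookups

def associate(attachment, table):
    association[attachment] = table

def propagate(attachment, cidr, table):
    route_tables[table][cidr] = attachment  # the table learns how to reach cidr

def can_reach(src_attachment, dest_cidr):
    # Simplified exact-CIDR lookup in the source's associated table
    return dest_cidr in route_tables[association[src_attachment]]

associate("prod-vpc", "prod-rt")
associate("dev-vpc", "nonprod-rt")
propagate("shared-vpc", "10.9.0.0/16", "prod-rt")     # shared services reachable
propagate("shared-vpc", "10.9.0.0/16", "nonprod-rt")  # from both domains
propagate("prod-vpc", "10.1.0.0/16", "prod-rt")       # only prod-rt learns prod

print(can_reach("dev-vpc", "10.9.0.0/16"))  # True: shared is propagated everywhere
print(can_reach("dev-vpc", "10.1.0.0/16"))  # False: nonprod-rt never learned prod
```

The point of the model: isolation falls out of *not* propagating a CIDR into a table, rather than from any explicit deny rule.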
Setting Up a Transit Gateway (CLI)
# Create TGW with explicit defaults disabled
aws ec2 create-transit-gateway \
--description "Production TGW" \
--options AmazonSideAsn=64512,\
AutoAcceptSharedAttachments=disable,\
DefaultRouteTableAssociation=disable,\
DefaultRouteTablePropagation=disable,\
VpnEcmpSupport=enable,\
DnsSupport=enable
# Attach a VPC — one subnet per AZ (use dedicated /28 TGW subnets)
aws ec2 create-transit-gateway-vpc-attachment \
--transit-gateway-id tgw-xxxxx \
--vpc-id vpc-aaaaa \
--subnet-ids subnet-111 subnet-222 subnet-333
# Create a named route table
aws ec2 create-transit-gateway-route-table \
--transit-gateway-id tgw-xxxxx \
--tag-specifications 'ResourceType=transit-gateway-route-table,Tags=[{Key=Name,Value=prod-rt}]'
# Associate an attachment with the route table
aws ec2 associate-transit-gateway-route-table \
--transit-gateway-route-table-id tgw-rtb-xxxxx \
--transit-gateway-attachment-id tgw-attach-xxxxx
# Enable CIDR propagation from the attachment into this route table
aws ec2 enable-transit-gateway-route-table-propagation \
--transit-gateway-route-table-id tgw-rtb-xxxxx \
--transit-gateway-attachment-id tgw-attach-xxxxx
# Add a blackhole route to explicitly block a CIDR
aws ec2 create-transit-gateway-route \
--transit-gateway-route-table-id tgw-rtb-xxxxx \
--destination-cidr-block 10.0.0.0/8 \
--blackhole
A few operational notes: TGW attachments are eventually consistent — allow a few minutes after creation before the attachment state becomes available. Subnet selection for VPC attachments matters for AZ-level traffic symmetry; mismatch between the AZ your instance is in and the AZ of the TGW ENI adds cross-AZ data transfer charges.
Route Table Design Patterns
Single Route Table
All attachments share one route table. Every VPC can reach every other VPC and any connected VPN or Direct Connect. Appropriate for small, homogeneous environments where isolation is not a requirement — a fully shared dev environment, or a single-team, multi-VPC setup.
Not appropriate once you have any environment boundary (prod vs nonprod, regulated vs non-regulated, internet-facing vs internal).
Segmented Route Tables (Recommended)
This is the right architecture for multi-environment, multi-account deployments. Define one route table per security domain:
- prod-rt — associated to prod VPC attachments. Propagates prod CIDRs and shared-services CIDRs only. No nonprod routes.
- nonprod-rt — associated to dev/staging VPC attachments. Propagates nonprod CIDRs and shared-services CIDRs. No prod routes.
- shared-rt — associated to the shared-services VPC. Propagates all CIDRs so it can reach everywhere (DNS, monitoring, tooling live here).
- on-prem-rt — associated to VPN/Direct Connect attachments. Contains explicit static routes for what on-prem is allowed to reach. Blackhole everything else.
Blackhole routes are a critical part of this pattern. A blackhole route in a TGW route table drops traffic matching that CIDR. Note that route lookup is longest-prefix match, and at equal prefix length static routes (including blackholes) take priority over propagated routes — so a more specific route still wins over a broader blackhole. Use them to enforce hard boundaries — for example, blackhole all of 10.0.0.0/8 in on-prem-rt and add back only the specific, more-specific CIDRs on-prem should see.
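The longest-prefix-match interaction is worth internalizing. Here is a toy lookup using Python's stdlib `ipaddress` module — an illustrative model of the evaluation order (most specific prefix wins; a blackhole target drops the packet), not the AWS implementation:

```python
import ipaddress

# Hypothetical on-prem-rt contents: a broad blackhole plus one CIDR added back.
routes = [
    ("10.0.0.0/8",   "blackhole"),        # hard boundary
    ("10.50.0.0/16", "tgw-attach-prod"),  # specific CIDR on-prem may reach
]

def lookup(dst_ip):
    dst = ipaddress.ip_address(dst_ip)
    candidates = [(ipaddress.ip_network(cidr), target)
                  for cidr, target in routes
                  if dst in ipaddress.ip_network(cidr)]
    if not candidates:
        return None  # no route: traffic is dropped
    # Longest-prefix match: the most specific matching route wins
    net, target = max(candidates, key=lambda c: c[0].prefixlen)
    return None if target == "blackhole" else target

print(lookup("10.50.1.4"))  # tgw-attach-prod: the /16 is more specific than the /8
print(lookup("10.99.0.1"))  # None: falls into the /8 blackhole
```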
Terraform Example
resource "aws_ec2_transit_gateway" "main" {
  description                     = "Production TGW"
  amazon_side_asn                 = 64512
  auto_accept_shared_attachments  = "disable"
  default_route_table_association = "disable"
  default_route_table_propagation = "disable"
  dns_support                     = "enable"

  tags = { Name = "prod-tgw" }
}

resource "aws_ec2_transit_gateway_vpc_attachment" "prod" {
  transit_gateway_id = aws_ec2_transit_gateway.main.id
  vpc_id             = aws_vpc.prod.id
  subnet_ids         = aws_subnet.prod_tgw[*].id

  transit_gateway_default_route_table_association = false
  transit_gateway_default_route_table_propagation = false

  tags = { Name = "prod-vpc-attach" }
}

resource "aws_ec2_transit_gateway_route_table" "prod" {
  transit_gateway_id = aws_ec2_transit_gateway.main.id

  tags = { Name = "prod-rt" }
}

resource "aws_ec2_transit_gateway_route_table_association" "prod" {
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.prod.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.prod.id
}

resource "aws_ec2_transit_gateway_route_table_propagation" "prod" {
  transit_gateway_attachment_id  = aws_ec2_transit_gateway_vpc_attachment.prod.id
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.prod.id
}

# Blackhole route blocking nonprod ranges from prod-rt
resource "aws_ec2_transit_gateway_route" "prod_blackhole_nonprod" {
  transit_gateway_route_table_id = aws_ec2_transit_gateway_route_table.prod.id
  destination_cidr_block         = "10.100.0.0/16" # nonprod CIDR
  blackhole                      = true
}
Set default_route_table_association = false and default_route_table_propagation = false at TGW creation. Retrofitting segmentation onto a TGW that has been running with the defaults means every existing attachment is already associated with, and propagating into, the shared default table — you end up untangling live routing instead of starting segmented from day one.
Cross-Region Transit Gateway Peering
TGWs in different regions connect via TGW Peering attachments. Unlike VPN or BGP peering, TGW-to-TGW peering uses static routes only — there is no BGP session between TGWs. Each TGW must have a static route pointing to the peer TGW attachment for the remote CIDR.
# In the initiating region
aws ec2 create-transit-gateway-peering-attachment \
--transit-gateway-id tgw-xxxxx \
--peer-transit-gateway-id tgw-yyyyy \
--peer-account-id 111111111111 \
--peer-region eu-west-1
# In the accepting region
aws ec2 accept-transit-gateway-peering-attachment \
--transit-gateway-attachment-id tgw-attach-zzzzz
# Add static route in us-east-1's route table pointing to eu-west-1 CIDR
aws ec2 create-transit-gateway-route \
--transit-gateway-route-table-id tgw-rtb-us \
--destination-cidr-block 10.200.0.0/16 \
--transit-gateway-attachment-id tgw-attach-zzzzz
Use unique ASNs per TGW and non-overlapping CIDR allocations across regions. Cross-region TGW peering data transfer costs $0.02/GB — the same as cross-region VPC peering. The peering attachment is subject to the standard per-attachment TGW bandwidth quota, and cross-region traffic is billed at AWS inter-region rates.
Cross-Account Sharing with AWS RAM
The standard pattern for multi-account networking is: one network account owns the TGW, all other accounts attach their VPCs to it. AWS Resource Access Manager (RAM) enables this.
# In the TGW owner account: share the TGW to specific accounts
aws ram create-resource-share \
--name "prod-tgw-share" \
--resource-arns arn:aws:ec2:us-east-1:111111111111:transit-gateway/tgw-xxxxx \
--principals 222222222222 333333333333
# In a spoke account: create a VPC attachment to the shared TGW
aws ec2 create-transit-gateway-vpc-attachment \
--transit-gateway-id tgw-xxxxx \
--vpc-id vpc-bbbbb \
--subnet-ids subnet-444 subnet-555
Crucially, route table management stays with the TGW owner account. The spoke account cannot modify TGW route tables — they can only create and delete their own attachments. The network account team controls which route tables attachments are associated with and what propagations are enabled.
You can also share TGWs with an entire AWS Organization by using the organization ARN as the principal — with resource sharing for AWS Organizations enabled in RAM, shares are accepted automatically for every account in the org.
Centralized Egress Pattern
A common architecture routes all VPC internet egress through a dedicated egress VPC instead of deploying NAT gateways in every VPC:
- Each VPC has a default route (0.0.0.0/0) pointing to the TGW attachment
- The TGW has a static route for 0.0.0.0/0 in each VPC's associated route table pointing to the egress VPC attachment
- The egress VPC has NAT gateways (or AWS Network Firewall) in each AZ
- The egress VPC routes outbound traffic to its Internet Gateway
This pattern consolidates NAT gateway costs — one set of NAT gateways for the entire environment rather than one set per VPC. It also gives you a single egress point for firewall policy and DLP inspection. The tradeoff is that all egress traffic crosses the TGW, adding $0.02/GB data processing cost, which partially offsets the NAT savings for high-volume workloads.
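The tradeoff is straightforward to model. The sketch below assumes us-east-1 list prices at the time of writing (NAT gateway around $0.045/hr plus $0.045/GB processed; TGW $0.02/GB) — verify current rates before relying on the numbers:

```python
HOURS = 730                      # hours per month
NAT_HOURLY, NAT_GB = 0.045, 0.045  # assumed NAT gateway rates (us-east-1)
TGW_GB = 0.02                      # TGW data processing rate

def per_vpc_nat(vpcs, azs, egress_gb):
    """NAT gateways deployed in every VPC, one per AZ."""
    return vpcs * azs * NAT_HOURLY * HOURS + egress_gb * NAT_GB

def centralized(azs, egress_gb):
    """One set of NAT gateways in the egress VPC; all egress crosses the TGW."""
    return azs * NAT_HOURLY * HOURS + egress_gb * (NAT_GB + TGW_GB)

# 10 VPCs, 3 AZs, 10 TB/month of aggregate egress
print(round(per_vpc_nat(10, 3, 10_000), 2))  # 30 NAT gateways
print(round(centralized(3, 10_000), 2))      # 3 NAT gateways + TGW processing
```

At these assumed volumes the hourly savings from dropping 27 NAT gateways outweigh the added $0.02/GB TGW charge; as egress volume grows, the per-GB TGW cost eats progressively further into that margin.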
Pricing
| Component | Cost |
|---|---|
| VPC attachment | $0.05/hour ($36.50/month) |
| VPN attachment | $0.05/hour ($36.50/month) |
| TGW Peering attachment | $0.05/hour ($36.50/month) |
| Data processed | $0.02/GB |
| Cross-region peering data | $0.02/GB + inter-region rates |
For a typical environment with 10 VPC attachments and 1 VPN attachment:
11 attachments × $0.05/hr × 730 hr = $401.50/month
+ data processing at $0.02/GB
Data processing cost dominates at scale — a VPC transferring 100 TB/month through the TGW adds $2,000/month in processing fees on top of attachment cost. For east-west traffic that stays within a region, there is no additional data transfer charge beyond TGW processing.
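The figures above reduce to a two-term formula — attachments billed hourly plus data billed per GB. A minimal calculator, assuming the $0.05/hr and $0.02/GB rates from the table:

```python
ATTACH_HOURLY = 0.05  # per attachment per hour
DATA_GB = 0.02        # per GB processed
HOURS = 730           # hours per month

def monthly_cost(attachments, gb_processed):
    return attachments * ATTACH_HOURLY * HOURS + gb_processed * DATA_GB

print(round(monthly_cost(11, 0), 2))        # 11 attachments, no data: 401.5
print(round(monthly_cost(11, 100_000), 2))  # plus 100 TB/month: 2401.5
```

This makes the scale point concrete: at 100 TB/month the $2,000 of data processing is roughly five times the attachment cost.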
Auditing TGW Connectivity with VizCon
With multiple VPC attachments, multiple route tables, segmented propagations, and blackhole routes, auditing your TGW routing policy by hand means cross-referencing CLI output across every attachment and route table. It is easy to miss a misconfigured propagation that lets a nonprod VPC reach a prod database, or a missing blackhole that exposes internal services to an on-prem subnet you didn't intend.
VizCon auto-discovers all TGW attachments across your accounts, renders the full hub-and-spoke topology, and surfaces each attachment's routing — associations, propagations, and static routes — in a single diagram. When you're debugging a routing issue or validating a new environment's isolation, you can see the entire TGW fabric at a glance rather than tracing it through a series of aws ec2 describe-transit-gateway-route-tables calls.


