What is AWS Direct Connect?
AWS Direct Connect is a dedicated private network connection between your on-premises infrastructure and AWS. Instead of routing traffic over the public internet, Direct Connect uses a physical Ethernet connection to an AWS Direct Connect location — a colocation facility where your network equipment can cross-connect directly to AWS hardware.
The result is a network path that bypasses the public internet entirely. Traffic takes a deterministic route with consistent latency, predictable bandwidth, and data transfer costs that are substantially lower than internet egress rates at scale.
Direct Connect vs. Site-to-Site VPN
The two are not always competing options — they are often deployed together — but here is how they compare:
| Factor | Direct Connect | Site-to-Site VPN |
|---|---|---|
| Latency | Consistent, low | Variable (internet path) |
| Bandwidth | 1 Gbps, 10 Gbps, 100 Gbps | Up to 1.25 Gbps per tunnel |
| Data transfer cost | $0.02–0.09/GB (cheaper than internet) | Standard internet egress rates |
| Setup time | Weeks (physical provisioning) | Minutes |
| Reliability | High (no public internet) | Depends on ISP |
| SLA | 99.99% (with redundant configuration) | No AWS SLA |
| Encryption | Not encrypted by default (use MACsec or VPN over DX) | IPsec encrypted |
| Best for | Hybrid cloud, large or sustained data transfer | Backup connectivity, remote offices |
One important nuance: Direct Connect is not encrypted in transit by default. For compliance requirements, you can layer IPsec VPN over the Direct Connect path, or use MACsec (Layer 2 encryption) on dedicated 10G/100G connections.
Core Concepts
DX Locations
A DX location is an AWS-owned or partner colocation facility where AWS installs its Direct Connect equipment. Examples include Equinix NY5 in New York, Interxion FRA6 in Frankfurt, and Equinix SY3 in Sydney. To use Direct Connect, your equipment (or your colocation provider's equipment) must be able to cross-connect to AWS gear in one of these locations.
AWS publishes the full list of DX locations by region. Not every AWS region has a DX location in-region — some regions are served via locations in adjacent cities.
Dedicated vs. Hosted Connections
Dedicated connection — A 1G, 10G, or 100G port provisioned directly by AWS on a device in a DX location. Your router sits in the same facility and you run a cross-connect to AWS. Provisioning takes days to weeks because it involves physical infrastructure. You own the entire port and can create multiple Virtual Interfaces on it.
Hosted connection — Bandwidth from 50 Mbps to 10G, provisioned through an AWS Direct Connect Partner. The partner has existing physical infrastructure at the DX location and resells capacity to you. Provisioning is faster (hours to days), and sub-1G bandwidths are only available this way. You get a single Virtual Interface per hosted connection.
Choose dedicated when you need maximum control, 100G bandwidth, MACsec, or want to partition the port across multiple VIFs. Choose hosted when speed of provisioning or sub-1G bandwidth matters more than full control.
Virtual Interfaces (VIFs)
A Virtual Interface is the logical layer on top of the physical connection. It carries traffic via BGP and a tagged VLAN. There are three types:
Private VIF — connects to a single VPC via a Virtual Private Gateway (VGW) attached to that VPC, or via a Direct Connect Gateway (recommended for multi-VPC access). Traffic reaches private IP addresses within the VPC.
Public VIF — accesses AWS public endpoints (S3, DynamoDB, EC2 public IP addresses) over Direct Connect rather than the public internet. Useful for data migration workloads that push large volumes to S3. Note that data transfer out of AWS over a Public VIF is still billed, at the lower DX data-transfer-out rate rather than the internet egress rate; check current pricing for your region.
Transit VIF — connects to a Direct Connect Gateway, which then attaches to Transit Gateways. This is the recommended pattern for any multi-VPC or multi-region architecture.
Direct Connect Gateway
The Direct Connect Gateway (DXGW) is a global resource that decouples the physical DX connection from the VPCs you want to reach. Without a DXGW, each Private VIF connects to one VGW in one VPC in one region. With a DXGW, one Transit VIF on the DX connection reaches VPCs across multiple regions and accounts.
The standard enterprise architecture:
```
On-prem router
└── DX Location (cross-connect)
    └── DX Connection (dedicated/hosted)
        └── Transit VIF (BGP, VLAN 100)
            └── Direct Connect Gateway (global)
                ├── TGW us-east-1 → VPCs (prod, nonprod, shared)
                └── TGW eu-west-1 → VPCs (prod, shared)
```
Each regional TGW associates with the DXGW. On-prem routes propagate through the DXGW into the TGW route tables (subject to route filtering you configure). VPC CIDRs propagate back through the TGW to the DXGW and down to your on-prem router.
Limits to know: a DXGW supports only a small number of associated TGWs (a service quota AWS has raised over time; check the current Direct Connect quotas for your account). A DXGW also cannot route traffic between the gateways attached to it: it is not a transit path between AWS regions. Use TGW peering for that.
BGP Routing
Direct Connect runs BGP for route exchange. You configure a BGP session on your on-prem router with:
- Your ASN: a public ASN if you have one, or a private ASN (64512–65534 range) if not
- AWS ASN: 64512 by default, or a custom ASN you configure on the TGW or VGW
- BGP MD5 auth key: you can supply your own; if you don't, AWS generates one when the VIF is created, so the session always uses MD5 authentication
Your router advertises your on-prem CIDRs to AWS. AWS advertises VPC CIDRs (and optionally AWS public prefixes on a Public VIF) to you.
Route filtering is essential. Never advertise a default route (0.0.0.0/0) from on-prem to AWS over Direct Connect unless you intentionally want AWS traffic to hairpin through your on-prem network. Use prefix lists to control exactly which CIDRs you advertise in each direction.
BGP communities: AWS uses well-known BGP communities to scope route propagation over Direct Connect. On a Public VIF, 7224:9100 marks routes propagated only within the local region, 7224:9200 within the continent, and 7224:9300 globally. AWS tags the routes it advertises with these communities so you can filter them on your router, and you can tag your own advertisements to control how far AWS propagates your prefixes.
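The prefix-list guidance above might look like the following Cisco IOS-style fragment. This is an illustrative sketch only: the prefix-list names, the 10.50.0.0/16 on-prem aggregate, the ASNs, and the 169.254.100.1 peer address are placeholders drawn from this article's examples, not values from any real deployment.

```
! Advertise only the corporate aggregate to AWS; never a default route
ip prefix-list TO-AWS seq 10 permit 10.50.0.0/16
ip prefix-list TO-AWS seq 20 deny 0.0.0.0/0 le 32
!
! Accept only the VPC ranges you expect from AWS
ip prefix-list FROM-AWS seq 10 permit 10.0.0.0/8 le 24
!
router bgp 65000
 neighbor 169.254.100.1 remote-as 64512
 neighbor 169.254.100.1 prefix-list TO-AWS out
 neighbor 169.254.100.1 prefix-list FROM-AWS in
```

Equivalent constructs exist on Juniper, Arista, and other platforms; the point is that both directions of the session carry an explicit filter.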
Redundancy Patterns
AWS ties its SLA tiers to the resiliency architecture you deploy:
99.99% SLA (maximum resiliency): two dedicated connections at each of two or more DX locations. Protects against device failure, connection failure, and a complete facility outage. This is the configuration AWS recommends for critical production workloads.
99.9% SLA (high resiliency): one connection at each of two DX locations. Protects against device and facility failure, with less headroom during maintenance or a second fault.
A single connection at a single DX location carries no published SLA; a device or facility outage there takes the connection down.
DX + VPN backup: run Direct Connect as primary and a Site-to-Site VPN as a warm standby. AWS prefers the Direct Connect path when the same prefixes are advertised over both; on your side, set a higher BGP local preference on routes learned over DX (or AS_PATH-prepend your advertisements over the VPN). When the DX path fails, BGP converges to the VPN automatically.
```
On-prem router
├── DX Connection (primary, local preference 200)
└── Site-to-Site VPN (backup, AS_PATH prepended 3x, local preference 100)
```
This is a cost-effective redundancy pattern when you can accept internet-path performance (and the ~1.25 Gbps per-tunnel VPN cap) during failover rather than paying for a second DX port.
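On the on-prem side, one common way to express the primary/backup split is BGP local preference. A Cisco IOS-style sketch follows; the neighbor addresses are placeholders (169.254.100.1 for the DX peer from this article's examples, 169.254.200.1 standing in for a VPN tunnel inside address):

```
! Higher local preference wins: prefer routes learned over Direct Connect
route-map PREFER-DX permit 10
 set local-preference 200
route-map PREFER-VPN permit 10
 set local-preference 100
!
router bgp 65000
 ! DX transit VIF peer
 neighbor 169.254.100.1 route-map PREFER-DX in
 ! Site-to-Site VPN tunnel peer
 neighbor 169.254.200.1 route-map PREFER-VPN in
```

When the DX session drops, the VPN-learned routes become the only candidates and traffic shifts without manual intervention.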
Setting Up a Virtual Interface (CLI)
```bash
# List available connections
aws directconnect describe-connections

# Create a Transit VIF (pointing to a Direct Connect Gateway)
aws directconnect create-transit-virtual-interface \
  --connection-id dxcon-xxxxxxxx \
  --new-transit-virtual-interface \
virtualInterfaceName=prod-transit-vif,\
vlan=100,\
asn=65000,\
mtu=8500,\
authKey=your-bgp-md5-key,\
amazonAddress=169.254.100.1/30,\
customerAddress=169.254.100.2/30,\
directConnectGatewayId=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx

# Associate the DXGW with a Transit Gateway
aws directconnect create-direct-connect-gateway-association \
  --direct-connect-gateway-id xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  --gateway-id tgw-xxxxx \
  --add-allowed-prefixes-to-direct-connect-gateway cidr=10.0.0.0/8

# Create a Private VIF (pointing directly to a VGW, simpler but less scalable)
aws directconnect create-private-virtual-interface \
  --connection-id dxcon-xxxxxxxx \
  --new-private-virtual-interface \
virtualInterfaceName=legacy-private-vif,\
vlan=200,\
asn=65000,\
authKey=your-bgp-auth-key,\
amazonAddress=169.254.101.1/30,\
customerAddress=169.254.101.2/30,\
virtualGatewayId=vgw-xxxxx
```
MTU: you can enable jumbo frames on Private VIFs (MTU 9001) and Transit VIFs (MTU 8500), which helps workloads that benefit from larger frames. The default is 1500.
Terraform Example
```hcl
# Direct Connect Gateway
resource "aws_dx_gateway" "main" {
  name            = "prod-dxgw"
  amazon_side_asn = "64512"
}

# Associate DXGW with a Transit Gateway
resource "aws_dx_gateway_association" "tgw_us_east" {
  dx_gateway_id         = aws_dx_gateway.main.id
  associated_gateway_id = aws_ec2_transit_gateway.main.id

  allowed_prefixes = [
    "10.0.0.0/8",
    "172.16.0.0/12",
  ]
}

# Transit VIF (created by the DX connection owner or partner)
resource "aws_dx_transit_virtual_interface" "prod" {
  connection_id    = "dxcon-xxxxxxxx"
  name             = "prod-transit-vif"
  vlan             = 100
  address_family   = "ipv4"
  bgp_asn          = 65000
  bgp_auth_key     = var.bgp_auth_key
  amazon_address   = "169.254.100.1/30"
  customer_address = "169.254.100.2/30"
  mtu              = 8500
  dx_gateway_id    = aws_dx_gateway.main.id
}
```
Data Transfer Costs
At sufficient volume, Direct Connect data transfer is substantially cheaper than internet egress:
| Path | Approximate cost |
|---|---|
| Internet egress (from AWS) | $0.09/GB (us-east-1) |
| DX out to on-prem (dedicated) | $0.02/GB (us-east-1) |
| S3 to on-prem via DX (Public VIF) | $0.02/GB (DX data-transfer-out rate, not internet egress) |
| DX port (1G dedicated) | $0.30/hour (~$219/month) |
| DX port (10G dedicated) | $2.25/hour (~$1,642/month) |
Break-even analysis: at $0.07/GB savings on egress traffic and a 1G port costing $219/month, you break even at roughly 3 TB/month transferred out. At 10 TB/month, you save ~$480/month net of port cost. At 100 TB/month, DX pays for itself several times over.
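The break-even arithmetic above can be checked with a quick shell sketch. The figures are the approximate ones from the table (a 1G port at ~$219/month and ~$0.07/GB saved versus internet egress); actual prices vary by region and change over time.

```shell
# Hypothetical figures from the pricing table above
port_monthly=219      # 1G dedicated port, ~$/month
savings_per_gb=0.07   # internet egress rate minus DX rate, $/GB

# Break-even volume in GB/month = port cost / per-GB savings
breakeven_gb=$(awk -v p="$port_monthly" -v s="$savings_per_gb" \
  'BEGIN { printf "%d", p / s }')
echo "Break-even: ${breakeven_gb} GB/month"

# Net monthly savings at 10 TB/month of outbound transfer
net=$(awk -v p="$port_monthly" -v s="$savings_per_gb" \
  'BEGIN { printf "%d", 10000 * s - p }')
echo "At 10 TB/month: \$${net}/month net of port cost"
```

Running this prints a break-even of 3128 GB/month (roughly 3.1 TB) and $481/month net savings at 10 TB, matching the figures in the paragraph above.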
For inbound transfer (data sent from on-prem to AWS), there is no egress charge on either path — DX does not save money on inbound data, only outbound.
Operational Considerations
Letter of Authorization (LOA): for dedicated connections, AWS issues an LOA-CFA document authorizing your cross-connect in the DX location. You submit this to the colocation facility to have the physical cable run. This is the step that takes the most calendar time.
Hosted connections and partner notifications: when a partner provisions a hosted connection for you, AWS sends you an email to accept it in the console. The connection does not activate until you accept.
BFD (Bidirectional Forwarding Detection): enable BFD on your BGP sessions for fast failover. Without BFD, BGP holddown timers (typically 90 seconds) determine how quickly traffic reroutes on link failure. With BFD, failover can occur in under a second.
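AWS enables asynchronous BFD on its side of DX BGP sessions by default (300 ms minimum interval, multiplier 3), so only the customer router needs configuration. A Cisco IOS-style sketch, with a placeholder interface and the article's example peer address:

```
interface GigabitEthernet0/1
 bfd interval 300 min_rx 300 multiplier 3
!
router bgp 65000
 neighbor 169.254.100.1 fall-over bfd
```

With these timers, three missed 300 ms hellos (under one second) tear down the BGP session and trigger reconvergence, instead of waiting out the hold timer.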
Monitoring: CloudWatch metrics for Direct Connect include ConnectionBpsIngress, ConnectionBpsEgress, ConnectionPpsIngress, ConnectionPpsEgress, and ConnectionState. Set alarms on ConnectionState to detect unexpected drops.
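A ConnectionState alarm can be sketched in the same Terraform style used earlier in this article. The connection ID and the `aws_sns_topic.network_alerts` reference are placeholders you would replace with your own resources.

```hcl
# Alarm when the DX connection reports down (ConnectionState: 1 = up, 0 = down)
resource "aws_cloudwatch_metric_alarm" "dx_down" {
  alarm_name          = "dx-connection-down"
  namespace           = "AWS/DX"
  metric_name         = "ConnectionState"
  statistic           = "Minimum"
  period              = 60
  evaluation_periods  = 3
  comparison_operator = "LessThanThreshold"
  threshold           = 1

  dimensions = {
    ConnectionId = "dxcon-xxxxxxxx"
  }

  # Placeholder SNS topic for notifications
  alarm_actions = [aws_sns_topic.network_alerts.arn]
}
```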
Auditing Hybrid Connectivity with VizCon
Direct Connect sits at the edge of your AWS network, and its routing ripples through your DXGW, TGW route tables, VGW associations, and VPC route tables. Auditing that the right on-prem CIDRs can reach the right VPCs — and that no unintended subnets are exposed — requires tracing a routing path through four or five layers of AWS resources.
VizCon auto-discovers your DX connections, VIFs, DXGW associations, and TGW attachments, and renders the full hybrid topology in one diagram. You can see which VPCs are reachable from on-prem, which TGW route tables carry on-prem prefixes, and where the allowed prefix filters on your DXGW associations are drawing the boundary. For teams managing multiple Direct Connect connections across multiple regions, that single topology view eliminates a significant amount of manual verification during audits and incident response.



