Why RDS Connection Refused Happens
AWS RDS connection errors have gotten complicated with all the conflicting advice flying around. I spent four years debugging AWS infrastructure professionally, and this particular error came up constantly. Today, I'll share everything I learned about diagnosing it.
But what is the “connection refused” error, really? In essence, it’s your application failing to reach your RDS instance — with almost zero useful context in the message itself. But it’s much more than that. There are three distinct failure modes hiding underneath that single line of text: security group inbound rules blocking your traffic, network routing that never actually connects your client to the instance, and the RDS instance itself being stopped or unavailable.
Most people start guessing at fixes immediately. Wrong. Diagnose in order. This article walks through each cause with exact console paths and CLI commands — so you know which one broke your setup and how to fix it without burning another hour on Reddit threads.
Check Your Security Group Inbound Rules First
Start here. Security groups are stateful firewalls. A misconfigured inbound rule accounts for roughly 60% of RDS connection problems I’ve encountered — maybe more.
Open the AWS Management Console, navigate to RDS, find your instance, scroll to “Security group rules.” Click the security group name. That jumps you to the EC2 Security Groups dashboard. You need an inbound rule matching three things: TCP protocol, the correct port (3306 for MySQL, 5432 for PostgreSQL, 1433 for SQL Server), and a source that includes your client IP or CIDR block.
Connecting from an EC2 instance? The source should be either that instance’s security group ID — formatted like sg-0abc123def456ghi — or the CIDR range of its subnet. Connecting from a laptop? Your public IP plus /32. That’s just that one IP, nothing else.
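If you're unsure whether a rule's CIDR actually covers your client IP, Python's standard `ipaddress` module settles it in two lines. This is a sanity-check sketch; the addresses below are examples, not your real values:

```python
import ipaddress

def ip_covered(client_ip: str, rule_cidr: str) -> bool:
    """Return True if client_ip falls inside the rule's CIDR block."""
    return ipaddress.ip_address(client_ip) in ipaddress.ip_network(rule_cidr)

# A /32 rule matches exactly one address:
print(ip_covered("203.0.113.7", "203.0.113.7/32"))  # True
print(ip_covered("203.0.113.8", "203.0.113.7/32"))  # False
# A wider block covers a range:
print(ip_covered("10.0.1.5", "10.0.0.0/16"))        # True
```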
Verify it from the CLI too:
```shell
aws ec2 describe-security-groups \
  --group-ids sg-0your-rds-sg-id \
  --region us-east-1
```
Look for the IpPermissions block. Confirm an entry exists with FromPort matching your database port and an IpRanges or UserIdGroupPairs entry matching your client source.
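If you'd rather script that check than eyeball JSON, here's a small Python sketch that walks an IpPermissions list the same way you would by hand. The sg dict is hypothetical sample data shaped like the describe-security-groups output:

```python
# Shape mirrors `aws ec2 describe-security-groups`; IDs and CIDRs are made up.
sg = {
    "GroupId": "sg-0abc123def456ghi",
    "IpPermissions": [
        {"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
         "IpRanges": [{"CidrIp": "203.0.113.7/32"}]},
    ],
}

def has_inbound_rule(sg, port, client_cidr):
    """True if any TCP rule spans `port` and lists `client_cidr` as a source."""
    for perm in sg["IpPermissions"]:
        if perm.get("IpProtocol") != "tcp":
            continue
        if not (perm["FromPort"] <= port <= perm["ToPort"]):
            continue
        if any(r["CidrIp"] == client_cidr for r in perm.get("IpRanges", [])):
            return True
    return False

print(has_inbound_rule(sg, 3306, "203.0.113.7/32"))  # True
```

Note this only checks IpRanges; if your source is another security group, look in UserIdGroupPairs instead.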
Here's a mistake I made early on. I hard-coded my home IP into a security group rule, then spent an afternoon furious that the connection worked fine at home but was refused from the office. My ISP had quietly changed my IP overnight. Don't make my mistake. If you're testing locally, plan to update the /32 rule whenever your network or IP changes, or widen the CIDR and accept the extra exposure. The better option: connect from an EC2 instance inside the same VPC as your source. The security group rule stays static. Problem disappears.
Verify Subnet and VPC Routing for Your RDS Instance
Fixed the security group? Still getting refused? Now you’re checking whether your RDS instance and your client can physically reach each other at the network layer.
Open the RDS console, click your instance, note the VPC and subnet listed under “Connectivity & security.” Then figure out whether that subnet is public or private. Go to VPC > Subnets, find your subnet by ID or name, click the “Route Table” tab at the bottom.
A public subnet has a route table entry pointing 0.0.0.0/0 to an Internet Gateway. A private subnet does not. If your RDS instance is private — which is the correct security posture — your client needs to be inside the same VPC, a peered VPC with proper route table entries, or tunneling in through a bastion host or VPN. That’s what makes private subnets endearing to us infrastructure folks: they eliminate a whole class of exposure. But they do require the right network plumbing.
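The public-versus-private distinction above boils down to one route-table check, which is easy to express in code. This sketch operates on route dicts shaped like aws ec2 describe-route-tables output; the IDs are made up:

```python
def is_public_subnet(routes):
    """Public means the route table sends 0.0.0.0/0 to an Internet Gateway (igw-*)."""
    return any(
        r.get("DestinationCidrBlock") == "0.0.0.0/0"
        and r.get("GatewayId", "").startswith("igw-")
        for r in routes
    )

# Route shapes mirror `aws ec2 describe-route-tables`; IDs are hypothetical.
public_routes  = [{"DestinationCidrBlock": "0.0.0.0/0", "GatewayId": "igw-0aaa111"}]
private_routes = [{"DestinationCidrBlock": "10.0.0.0/16", "GatewayId": "local"}]
print(is_public_subnet(public_routes))   # True
print(is_public_subnet(private_routes))  # False
```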
If your client is an EC2 instance, verify the VPC IDs match between client and RDS. Different VPCs? Confirm VPC peering exists and that both route tables have entries pointing traffic toward the peering connection.
One more thing people consistently overlook: DNS settings on the VPC itself. Go to VPC > Your VPCs, select your VPC, click “Edit DNS attributes.” Both “DNS hostnames” and “DNS resolution” need to be enabled. Without them, an endpoint like mydb.c9akciq32.us-east-1.rds.amazonaws.com won’t resolve to anything — you’ll get a host-not-found error instead of connection refused. I’m flagging it here because I’ve watched teams fix the security group and routing perfectly, then spend another hour stuck on DNS.
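A quick way to tell the DNS failure apart from a true connection refusal is to attempt resolution on its own, before touching the port. A minimal sketch using the standard library:

```python
import socket

def resolves(hostname):
    """Distinguish host-not-found (a DNS problem) from connection refused."""
    try:
        socket.gethostbyname(hostname)
        return True
    except socket.gaierror:  # name could not be resolved
        return False

# Swap in your real RDS endpoint:
# resolves("mydb.c9akciq32.us-east-1.rds.amazonaws.com")
print(resolves("definitely-not-a-real-host.invalid"))  # False
```

If this returns False for your endpoint, fix the VPC DNS attributes first; the security group and routing checks won't matter until the name resolves.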
Confirm the RDS Instance Is Actually Available
Security group looks right. Routing looks right. Still failing. Check whether the instance is actually running.
In the RDS console, your instance should show “available.” If it shows “creating,” “modifying,” “rebooting,” or “storage-optimization” — it’s not accepting connections yet. Wait. Retry when it returns to available.
Check from the CLI:
```shell
aws rds describe-db-instances \
  --db-instance-identifier your-db-name \
  --region us-east-1 \
  --query 'DBInstances[0].[DBInstanceIdentifier,DBInstanceStatus,Endpoint.Address]'
```
Anything other than “available” in that status field means the instance isn’t taking new connections yet.
Multi-AZ setup? There’s an additional scenario worth knowing. The instance status reads available, but the endpoint is mid-failover to the standby replica. During that window — usually 60 to 120 seconds — the endpoint’s DNS record is still flipping over to the new primary’s IP. The endpoint URL stays identical. AWS handles the IP swap behind the scenes. But the delay is real. Wait and retry. The RDS console shows you whether a failover happened; check your instance’s recent events under “Logs & events.”
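Since the failover window is bounded, the pragmatic fix is retrying with backoff instead of failing hard. A sketch, assuming you wrap whatever connect call your driver exposes (pymysql.connect, psycopg2.connect, etc.):

```python
import time

def connect_with_retry(connect, attempts=6, base_delay=2.0, sleep=time.sleep):
    """Retry a connect() callable with exponential backoff.

    Delays of 2, 4, 8, 16, 32 seconds (~62s total) roughly cover the
    60-120 second DNS window of a Multi-AZ failover.
    """
    for attempt in range(attempts):
        try:
            return connect()
        except (ConnectionRefusedError, TimeoutError):
            if attempt == attempts - 1:
                raise  # still down after the window: a real outage, not failover
            sleep(base_delay * (2 ** attempt))
```

The `sleep` parameter is injectable so the backoff can be tested without actually waiting.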
Test Connectivity End to End and Validate the Fix
Before you celebrate, confirm the fix actually works. Fully. End to end.
From the machine that was failing, run a port reachability test. On Linux or macOS:
```shell
nc -zv your-db.c9akciq32.us-east-1.rds.amazonaws.com 3306
```
Swap in your actual RDS endpoint and your database port. Success looks like: Connection to your-db.c9akciq32.us-east-1.rds.amazonaws.com port 3306 [tcp/mysql] succeeded!
On Windows, telnet works the same way: telnet your-db-endpoint 3306. An open port gives you a blank screen or a connection message. A timeout or refusal means the security group or routing is still misconfigured somewhere.
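If neither nc nor telnet is available on the box, a few lines of Python run the same reachability check from any OS:

```python
import socket

def port_open(host, port, timeout=3.0):
    """TCP reachability check, roughly what `nc -zv host port` does."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, or unreachable
        return False

# Swap in your real endpoint and port:
# port_open("your-db.c9akciq32.us-east-1.rds.amazonaws.com", 3306)
```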
Port is reachable? Now try the actual database connection — MySQL CLI, a Python script with SQLAlchemy, psql, whatever your stack uses. Port reachable but database connection failing means the problem shifted. It’s authentication or database config now, not networking. Different fix entirely.
Then check CloudWatch. Open the RDS dashboard, go to Metrics, find “DatabaseConnections.” If that metric jumps above zero after your fix, the connection is live. Stays at zero — something is still wrong.
One final gotcha worth flagging — and this one is rarer but real. Security groups correct, connectivity tests passing, connections still failing. Check your Network ACLs. Security groups are stateful, so return traffic is allowed automatically. Network ACLs are stateless and can silently block return traffic. Navigate to VPC > Network ACLs, select the ACL attached to your RDS subnet, and verify that inbound rules permit your database port on TCP and outbound rules permit the ephemeral port range (1024-65535) the reply traffic uses. I hit this on a production environment at 11pm, and no tool I know works well for spotting it quickly. So check it explicitly. Don’t make my mistake.
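The statelessness is easier to see in code. This sketch evaluates NACL entries the way AWS does: ascending rule-number order, first match decides, and an unmatched packet hits the implicit deny. The entries are hypothetical but mirror the shape of aws ec2 describe-network-acls output:

```python
def nacl_allows(entries, port, egress):
    """Evaluate NACL entries in rule-number order; first match decides."""
    for e in sorted(entries, key=lambda e: e["RuleNumber"]):
        if e["Egress"] != egress:
            continue
        if e["Protocol"] not in ("6", "-1"):  # "6" is TCP, "-1" is all traffic
            continue
        pr = e.get("PortRange", {"From": 0, "To": 65535})
        if pr["From"] <= port <= pr["To"]:
            return e["RuleAction"] == "allow"
    return False  # the implicit deny (the final * rule)

# Hypothetical ACL: inbound 3306 allowed, but outbound only allows 3306 back.
entries = [
    {"RuleNumber": 100, "Egress": False, "Protocol": "6",
     "PortRange": {"From": 3306, "To": 3306}, "RuleAction": "allow"},
    {"RuleNumber": 100, "Egress": True, "Protocol": "6",
     "PortRange": {"From": 3306, "To": 3306}, "RuleAction": "allow"},
]
print(nacl_allows(entries, 3306, egress=False))  # True: the request gets in
print(nacl_allows(entries, 40000, egress=True))  # False: the reply can't get out
```

The second call is the trap: the client sent from an ephemeral port, so the database's reply targets that port, and an outbound rule scoped only to 3306 drops it.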
And that's the full sweep: you've worked through all three failure modes in diagnostic order. Your connection should be working.