RDS Connectivity Issue with EKS Cluster - ruby-on-rails

I have an RDS PostgreSQL database which is open to all connections, as shown in the screenshot attached below, and it is in the same VPC as my EKS cluster (also visible).
I am running a Rails app in my EKS cluster and trying to create a database, and I have already set the RDS cluster endpoint in the environment variables.
Command I am using: kubectl exec -it pod/app-b65785bd5-r8mpj -- bundle exec rails db:create
Both the EKS cluster and RDS are in the same VPC (vpc-0f9737b08c3269c4d), and I have also whitelisted the EKS cluster IP address in the security group of the RDS database. The following is the error I am getting.
Error Screenshot
RDS Database Screenshot
RDS Security Group In-bound Rules
RDS Security Group Out-bound Rules

Given that your RDS hostname resolved to 10.0.2.75, your VPC CIDR range is most likely 10.0.0.0/16. Modify the inbound rules in your DB security group to allow traffic from 10.0.0.0/16.
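If you prefer the AWS CLI, a minimal sketch of adding that rule (the security group ID below is a placeholder for the SG attached to your RDS instance):

# sg-0123456789abcdef0 is a placeholder; use the SG attached to your RDS instance
aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp \
    --port 5432 \
    --cidr 10.0.0.0/16

Once the rule is in place, re-run the rails db:create command from the pod.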

Related

Problems with Docker swarm manager on Google Cloud and Oracle Cloud VPC

My scenario has 7 nodes: 4 running in AWS (each in a different account), 1 in Linode, 1 in Google Cloud and 1 in Oracle Cloud. Every node uses its external IP, and I checked the firewall ports at each provider and made sure the firewall is disabled on the VM. I also edited the hosts file on each node to ensure they can reach one another, and they all ping each other fine.
All machines running in AWS and Linode can join the swarm either as a worker or as a manager, but the machines running in Google Cloud and Oracle Cloud can only join as workers.
Using one AWS node as Leader, I got the following error messages...
docker node ls on leader
trying to join node from Oracle
trying to join node from Google Cloud
Finally, I tried making the Google Cloud node the leader of a new swarm, and tried to join the Linode and Oracle nodes to it, and got the following error message
trying to join a new swarm
In this last attempt, the node that I tried to add reports that it is part of a swarm, but when I run docker node ls on the leader, no new nodes are added...
Can anyone who has already used Google Cloud or Oracle Cloud to run Docker and Swarm help me figure out what else I need to configure, or which additional ports or protocols I need to allow? I have already tried permitting all traffic from the nodes' IPs... in theory, everything should be allowed...
My best regards,
Leonardo Lima
Google Cloud Platform applies implied firewall rules and also adds default ingress rules when a new VPC is created. If you don't explicitly allow ingress traffic to the specific ports on the node(s) within the VPC, the connection will time out. Therefore, you need to allow traffic to the node on the manager port (2377) from 0.0.0.0/0 (any source). These are the networking configurations to review before trying to understand why the node can't join as a manager.
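A minimal sketch of opening that port with the gcloud CLI, assuming the default network and a made-up rule name of allow-swarm-manager (adjust both; you may also want to open the other swarm ports 7946/tcp, 7946/udp and 4789/udp for node discovery and overlay traffic):

# "allow-swarm-manager" and "default" are placeholders for your rule name and network
gcloud compute firewall-rules create allow-swarm-manager \
    --network default \
    --allow tcp:2377 \
    --source-ranges 0.0.0.0/0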

Not able to connect aws redis with ec2 on same VPC

I have created an AWS ElastiCache Redis server and configured it with an EC2 instance, but somehow I am not able to connect to Redis from the EC2 instance.
I have assigned the same security group to both as well.
Here is my configuration
Error I am facing is Redis::CannotConnectError: Error connecting to Redis on some-prod.dhgdjw.0001.usw2.cache.amazonaws.com:6379 (Redis::TimeoutError)
Any help will be highly appreciated.
You mention that you use the same security group for both, but you do not need such a setup. Your Redis SG should be a separate one (just a suggestion), since it only needs to allow traffic inside your VPC.
Verify a few things...
Go to your ElastiCache dashboard.
Select Redis and click on the cluster, then click on Modify.
You will then see the security group attached to this cluster; you can attach one or many groups to the cluster.
Click on the edit icon and verify that it allows traffic on port 6379 from your VPC CIDR (e.g. 10.0.0.0/16) if the instance and Redis are in the same VPC; if not, allow the public IP of the instance in the Redis SG.
You can also allow your own public IP to check whether it is accessible. Install the Redis client and try this command.
redis-cli -h some-prod.dhgdjw.0001.usw2.cache.amazonaws.com ping
PONG
If you get PONG back, it means the SG allows the traffic.
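If that rule is missing, a rough sketch of adding it with the AWS CLI (the security group ID and CIDR below are placeholders for your Redis SG and your VPC range):

# sg-0abc1234def567890 and 10.0.0.0/16 are placeholders
aws ec2 authorize-security-group-ingress \
    --group-id sg-0abc1234def567890 \
    --protocol tcp \
    --port 6379 \
    --cidr 10.0.0.0/16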

Kubernetes container connection to RDS instance in separate VPC

I have a Kubernetes cluster running in Amazon EC2 inside its own VPC, and I'm trying to get Dockerized services to connect to an RDS database (which is in a different VPC). I've figured out the peering and routing table entries so I can do this from the minion machines:
ubuntu@minion1:~$ psql -h <rds-instance-name>
Password:
So that's all working. The problem is that when I try to make that connection from inside a Kubernetes-managed container, I get a timeout:
ubuntu@pod-1234:~$ psql -h <rds-instance-name>
…
To get the minion to connect, I configured a peering connection, set up the routing tables from the Kubernetes VPC so that 10.0.0.0/16 (the CIDR for the RDS VPC) maps to the peering connection, and updated the RDS instance's security group to allow traffic to port 5432 from the address range 172.20.0.0/16 (the CIDR for the Kubernetes VPC).
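For reference, the route entry looks roughly like this with the AWS CLI (the route table and peering connection IDs below are placeholders for the Kubernetes VPC's route table and the peering connection):

# rtb-... and pcx-... are placeholders
aws ec2 create-route \
    --route-table-id rtb-0123456789abcdef0 \
    --destination-cidr-block 10.0.0.0/16 \
    --vpc-peering-connection-id pcx-0123456789abcdef0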
With the help of Kelsey Hightower, I solved the problem. It turns out it was a Docker routing issue. I've written up the details in a blog post, but the bottom line is to add a NAT rule on each minion like so:
$ sudo iptables -t nat -I POSTROUTING -d <RDS-IP-ADDRESS>/32 -o eth0 -j MASQUERADE
Did you modify the source/destination checks as well?
Since your instance will be sending and receiving traffic for IPs other than the one assigned by your subnet, you'll need to disable source/destination checks.
See the image:
https://coreos.com/assets/images/media/aws-src-dst-check.png
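The check can also be disabled from the CLI; a minimal sketch, assuming i-0123456789abcdef0 stands in for the minion's instance ID:

# the instance ID is a placeholder for the minion instance
aws ec2 modify-instance-attribute \
    --instance-id i-0123456789abcdef0 \
    --no-source-dest-check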

Connecting rails with storage services on a coreos cluster

How can a Rails application communicate with a Postgres DB on a CoreOS cluster? Obviously I can't hardcode storage locations in database.yml.
Can you store and retrieve it from etcd? You can read from etcd over the docker0 bridge: http://coreos.com/docs/distributed-configuration/getting-started-with-etcd/#reading-and-writing-from-inside-a-container
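A minimal sketch of that pattern, assuming a classic CoreOS setup where docker0 is 172.17.42.1 and etcd's v2 API listens on port 4001 (both values and the key names are assumptions; adjust to your cluster):

# on the CoreOS host: publish the database location under an illustrative key
etcdctl set /services/postgres/host 10.0.1.50
etcdctl set /services/postgres/port 5432

# inside the Rails container: read it back over the docker0 bridge
curl http://172.17.42.1:4001/v2/keys/services/postgres/host

Rails can then build database.yml (or DATABASE_URL) from those values at container start.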

What is the best way to connect to a remote database server that can only be accessed from a different ec2 instance?

How would I go about connecting to a database that can only be accessed through an SSH tunnel to an EC2 instance? The current route would be:
My ubuntu laptop -> ec2 instance -> postgres database server
I have complete control over the ec2 instance.
I only have access to port 5432 of the remote database server via the ec2 instance. It lives on a different server.
I have been accessing the database using the terminal but would prefer to be lazy and use something like pgAdmin or RazorSQL. I am assuming I can do an ssh tunnel to my ec2 instance, then some sort of port forward to the database server but I haven’t been able to get beyond the ssh tunnel.
A double hop ssh tunnel will not work because I don’t have ssh access to the DB server.
Thanks!
You want to do something like this, where ec2-dbserver is your database server (inside EC2) and ec2-host is the host that you can SSH to:
ssh -L 5432:ec2-dbserver:5432 user@ec2-host
You should then be able to point pgAdmin III at localhost:5432.
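To sanity-check the tunnel before pointing a GUI at it, you can connect with psql through the forwarded port (the user and database names below are placeholders):

psql -h localhost -p 5432 -U dbuser -d mydatabase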
