Accessing Redis from a Docker container

Simple question - I hope.
I am trying to communicate with a Redis instance running on Amazon ElastiCache from a Docker container on an Amazon EC2 machine. I can connect to the Redis instance from the EC2 host without problems (which would seem to indicate that this is not a security-group/port problem), but I get a timeout when trying to connect from within the Docker container. What am I missing?
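One way to narrow down where the timeout occurs is to run the same connectivity check from the EC2 host and from inside a throwaway container. This is a hedged diagnostic sketch, not from the original question; the ElastiCache endpoint is a placeholder for your own:

```shell
# Placeholder endpoint -- substitute your cluster's actual address.
REDIS_HOST=my-redis.abc123.0001.use1.cache.amazonaws.com

# From the EC2 host (works, per the question):
redis-cli -h "$REDIS_HOST" ping

# The same check from inside a container; if this times out while the
# host check succeeds, the container's network mode or the security-group
# source rules (which must cover the container's traffic) are the suspects.
docker run --rm redis:7 redis-cli -h "$REDIS_HOST" ping
```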

Related

SSH into a container running on Amazon Lightsail container service

I'm totally stumped on how to approach this.
I have two running containers on Amazon Lightsail's container service, but I have no idea how to access them with SSH.
Is this even possible? I need command-line access to check some things in the running container.
The container I want to access has port 22 open.

How can I establish a VPN connection for a Docker container running in AWS Batch/Fargate?

I have a Dockerised Python script managed by AWS Batch/Fargate (triggered by EventBridge) which reads from a DB that requires an OpenVPN connection (since it's not within the same VPC as the Docker container). How can I do this?
I found a Docker image for OpenVPN, but the documentation instructs me to use the --net argument with docker run to indicate the VPN through which traffic should flow. I don't think I can do this within the AWS stack, since Fargate seems to spin up the container behind the scenes.
I'd be super grateful for any help on this, thanks all!

Connect to AWS DocumentDB from within a docker container

B"H
I have a Docker container on EC2 attempting to connect to DocumentDB. DocumentDB needs to be accessed from within the VPC network.
When attempting to connect to DocumentDB in "none" network mode, the connection fails, but when I (hack and) switch the container to host network mode, it does work. For simple deployments and replicating my containers, though, that's a problem.
Any idea how to connect to DocumentDB (without SSH tunneling) from within Docker hosted on EC2?
If I understand correctly, you are running the container in "none" networking mode. None means you want to disable all networking for your container. The most frequently used modes are bridge and host.
You can also refer to the post below, which describes how to run a Docker container in ECS and connect to DocumentDB:
https://aws.amazon.com/blogs/database/deploy-a-containerized-application-with-amazon-ecs-and-connect-to-amazon-documentdb-securely/
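As a sketch of the point above (the endpoint, credentials, and image tag are illustrative, not from the question): bridge mode is the default, so simply not specifying `--network none` should give the container outbound access to the VPC:

```shell
# Illustrative only -- substitute your own cluster endpoint and credentials.
# Bridge networking is the default, so no --network flag is required;
# the failing case in the question was the explicit "none" mode.
docker run --rm mongo:5.0 mongosh \
  "mongodb://user:pass@docdb.cluster-xxxx.us-east-1.docdb.amazonaws.com:27017/" \
  --eval 'db.runCommand({ ping: 1 })'
```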

Local Docker connection to Kubernetes Cluster

I want to connect a Docker container running locally to a service running on a Kubernetes cluster. To do so, I have exposed the service by reserving some static IP addresses.
I have also saved those IP addresses in local DNS, in the /etc/hosts file:
123.123.123.12 host1
456.456.456.45 host2
I want to link my container to those hosts so that all the traffic is routed to those addresses and can be processed by the cluster. I am using Docker's link feature, but it isn't working.
Should I connect directly using the IP? How should I do this?
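One detail worth noting: containers do not inherit the host's /etc/hosts, so entries added there are invisible inside the container. Docker's `--add-host` flag injects equivalent entries into the container's own /etc/hosts (the image name below is illustrative; the addresses are the ones from the question):

```shell
# --add-host writes host:IP pairs into the container's /etc/hosts,
# making the hostnames resolvable inside the container without
# touching the host's file or using --link.
docker run --rm \
  --add-host host1:123.123.123.12 \
  --add-host host2:456.456.456.45 \
  my-client-image
```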
There's no difference doing this whether the client is or isn't in Docker. However you have the service exposed from Kubernetes, you'd make the same connection to it from a process running on an external host or from a process running in a Docker container on that host.
Say, as in the example in the Kubernetes documentation, you're running a NodePort service that's accessible on port 31496 on every node in the cluster, and you're trying to connect to it from outside the cluster. Perhaps, as in the question, 123.123.123.12 is some node in the cluster. A typical setup would be to get the location of the service from an environment variable (JavaScript process.env.THE_SERVICE_URL; Ruby ENV['THE_SERVICE_URL']; Python os.environ['THE_SERVICE_URL']; ...).
When you're developing, you could set that variable in your local shell:
export THE_SERVICE_URL=http://123.123.123.12:31496
cd here && ./kubernetes_client_script.py
When you go to deploy your application, you can set the same environment variable:
docker run -e THE_SERVICE_URL=http://123.123.123.12:31496 me:k8s-client

ECS Service other than HTTP keeps restarting

I installed an Nginx Docker container service through AWS ECS, which is running without any issue. However, every other container service, such as CentOS, Ubuntu, MongoDB, or Postgres, installed through AWS ECS keeps restarting (de-registering, re-registering, or stuck in a pending state) in a loop. Is there a way to install these container services using AWS ECS without any issue on the ECS-Optimized Amazon Linux AMI? Also, is there a way to register Docker containers in AWS ECS that were manually pulled and run from Docker Hub?
Usually, if a container is restarting over and over again, it's because it's not passing the health check that you set up. MongoDB, for example, does not use the HTTP protocol, so if you set it up as a service in ECS with an HTTP health check, it will not be able to pass the health check and will get killed off by ECS for failing it.
My recommendation would be to launch such services without a health check, either as standalone tasks or with your own health-check mechanism.
If the service you are trying to run does in fact have an HTTP interface and it's still failing the health check and getting killed, then you should do some debugging to verify that the instance has the right security-group rules to accept traffic from the load balancer. Additionally, you should verify that the ports you define in your task definition match the port of the health check.
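As a sketch of "your own healthcheck mechanism": ECS task definitions support a container-level healthCheck that runs a command inside the container rather than an HTTP probe from the load balancer. The MongoDB command below is illustrative; adapt it to the service you run:

```json
"healthCheck": {
    "command": ["CMD-SHELL", "mongosh --eval 'db.runCommand({ ping: 1 })' || exit 1"],
    "interval": 30,
    "timeout": 5,
    "retries": 3,
    "startPeriod": 60
}
```

With a command-based check like this, a non-HTTP service can report healthy to ECS without ever answering an HTTP request.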