ECS Service other than HTTP keeps restarting - docker

I installed an Nginx Docker container service through AWS ECS, and it is running without any issue. However, every other container service, such as centos, ubuntu, mongodb or postgres, installed through AWS ECS keeps restarting (de-registering, re-registering or stuck in a pending state) in a loop. Is there a way to install these container services using AWS ECS without any issue on the ECS-Optimized Amazon Linux AMI? Also, is there a way to register Docker containers in AWS ECS that were manually pulled and run from Docker Hub?

Usually, if a container is restarting over and over again, it's because it's not passing the health check you set up. MongoDB, for example, does not use the HTTP protocol, so if you set it up as a service in ECS with an HTTP health check, it will never pass the health check and will get killed off by ECS for failing it.
My recommendation would be to launch such services without a health check, either as standalone tasks or with your own health check mechanism.
If the service you are trying to run does in fact have an HTTP interface and it's still failing the health check and getting killed, then you should do some debugging to verify that the instance has the right security group rules to accept traffic from the load balancer. Additionally, verify that the ports you define in your task definition match the port of the health check.
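If you do want ECS itself to watch a non-HTTP service such as MongoDB, one flavour of "your own health check mechanism" is a command-based health check on the container definition rather than a load balancer probe. A minimal sketch with boto3; the family name, image tag and mongosh ping command are assumptions, not something from the question:

import boto3

ecs = boto3.client("ecs")

# Register a task definition whose health check runs a command inside the
# container instead of an HTTP probe; ECS treats the container as healthy as
# long as the command exits with status 0.
ecs.register_task_definition(
    family="mongodb",                    # hypothetical family name
    networkMode="bridge",
    containerDefinitions=[
        {
            "name": "mongodb",
            "image": "mongo:6",          # assumed image tag
            "memory": 512,
            "essential": True,
            "portMappings": [{"containerPort": 27017}],
            "healthCheck": {
                # Assumes mongosh ships in the image.
                "command": ["CMD-SHELL", "mongosh --eval 'db.adminCommand({ping: 1})' || exit 1"],
                "interval": 30,
                "timeout": 5,
                "retries": 3,
                "startPeriod": 60,
            },
        }
    ],
)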

Related

How can I establish a VPN connection for a Docker container running in AWS Batch/Fargate?

I have a Dockerised Python script managed by AWS Batch/Fargate (triggered by EventBridge) which reads from a DB that requires an OpenVPN connection (since it's not within the same VPC as the Docker container) - how can I do this?
I found a Docker image for OpenVPN, but the documentation instructs me to use the --net argument with docker run to indicate the VPN container through which traffic should flow. I don't think I can do this within the AWS stack, since it seems to spin up the container behind the scenes.
I'd be super grateful for any help on this, thanks all!

Not able to retrieve AWS_CONTAINER_CREDENTIALS_RELATIVE_URI inside the container from fargate task

I'm running a Docker container in a Fargate ECS task.
In my Docker container I have enabled an SSH server so that I can log in to the container directly if I have to debug something. That is working fine: I can SSH to my task IP and investigate my issues.
But now I've noticed an issue when accessing any AWS service from an SSH session inside the container: after logging in via SSH I found that configuration files such as ~/.aws/credentials and ~/.aws/config are missing, and I can't issue any CLI commands, e.g. checking the caller identity, which is supposed to be my task ARN.
The strange thing is, if I connect this same task to an ECS instance, I don't have any such issues. I can see my task ARN and all the rest of the services, so the ECS task agent is working fine.
Coming back to the SSH connectivity: I notice I'm getting 404 page not found from curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI. How can I make SSH access have the same capabilities as ECS instance access? If I could access AWS_CONTAINER_CREDENTIALS_RELATIVE_URI in my SSH session, then I think everything would work.
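One likely explanation (not a confirmed answer): the ECS agent injects AWS_CONTAINER_CREDENTIALS_RELATIVE_URI into the environment of the task's main process, but an SSH login shell starts with a fresh environment, so the variable is empty and curl ends up hitting 169.254.170.2/ and getting a 404. A hedged sketch of a workaround is to read the variable back out of PID 1's environment, assuming the SSH user is allowed to read /proc/1/environ (e.g. root):

import urllib.request

# Recover the credentials URI from the main process's environment, since the
# SSH session does not inherit the variables set by the ECS agent.
with open("/proc/1/environ", "rb") as f:
    env = dict(
        entry.split(b"=", 1)
        for entry in f.read().split(b"\x00")
        if b"=" in entry
    )

relative_uri = env[b"AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"].decode()

# Same endpoint the AWS SDKs use; returns temporary credentials for the task role.
creds = urllib.request.urlopen("http://169.254.170.2" + relative_uri).read()
print(creds.decode())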

Run new docker container (service) from another container on some command

Is there any way to do this:
run one service (container) with the main application - the server (a Flask application);
the server allows running other services, which are also Flask applications;
but I want to run each new service in a separate container?
For example, I have an endpoint /services/{id}/run at the server, where each id is some service id. The Docker image is the same for all services; each service runs on a separate port.
I would like something like this:
request to the server - <host>/services/<id>/run -> the application at the server runs some magic command / sends a message somewhere -> the service with that id starts in a new container.
I know that at least locally I can use docker-in-docker or simply mount the Docker socket in a container and work with Docker inside that container. But I would like to find a way that works across multiple machines (each service can run on another machine).
For Kubernetes: I know how to create and run pods and deployments, but I can't find how to run a new container on a command from another container. Can I somehow communicate with k8s from a container to run a new container?
Generally:
can I run a new container from another container without docker-in-docker and without mounting the Docker socket;
can I do it with or without Kubernetes?
Thanks in advance.
I've compiled all of the links that were in the comments under the question. I would advise taking a look at them:
Docker:
StackOverflow: control Docker from another container.
The link explaining the security considerations is not working, but I've managed to retrieve it from the Web Archive: Don't expose the Docker socket (not even to a container)
Exposing dockerd API
Docker Engine Security
Kubernetes:
Access Clusters Using the Kubernetes API
Kubeflow in the context of machine learning deployments
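To make the Kubernetes route concrete: from inside a pod you can call the API server with the official Python client and create a Job (or Pod) running the service image. A minimal sketch, assuming the pod's service account has RBAC permission to create Jobs and that the image, port and namespace below are placeholders:

from kubernetes import client, config

# Use the in-cluster service account token mounted into the pod.
config.load_incluster_config()

def run_service(service_id: str) -> None:
    # Launch the per-service container as a Job; image and port are assumptions.
    job = client.V1Job(
        metadata=client.V1ObjectMeta(name=f"service-{service_id}"),
        spec=client.V1JobSpec(
            template=client.V1PodTemplateSpec(
                spec=client.V1PodSpec(
                    restart_policy="Never",
                    containers=[
                        client.V1Container(
                            name="service",
                            image="my-flask-service:latest",
                            ports=[client.V1ContainerPort(container_port=5000)],
                            env=[client.V1EnvVar(name="SERVICE_ID", value=service_id)],
                        )
                    ],
                )
            )
        ),
    )
    client.BatchV1Api().create_namespaced_job(namespace="default", body=job)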

How to run one-off Docker containers locally (triggered from within a container)

When hosting applications on Heroku I often trigger one-off dynos via the Heroku API from within the code to do heavy lifting in the background. I recently set up some stacks on AWS and followed a similar pattern by using AWS ECS run task.
I am not using long-running queue workers for this, as hardware resources vary heavily according to the specific task and the workload usually occurs in peaks.
For local development, I usually skipped this topic by either executing the background tasks within the running container or triggering the background command manually from the console. What would be a good approach for running one-off containers locally?
ECS supports scheduled tasks; if you know when your peaks are planned, you can use scheduled tasks to launch Fargate containers on a schedule.
If you don't, what we did was write a small API Gateway -> Lambda function that dynamically launches Fargate containers, with a few variables such as CPU/memory/port defined in the POST to the API Gateway endpoint. Alternatively, pre-create task definitions and just pass the task definition name to the API, which is another option if you know what the majority of your "settings" for the container should be.
You can simply call the ECS RunTask API from inside the container.
All you need is to set up the ECS task role with RunTask permissions and to have either the AWS CLI or any AWS SDK in the container to make the RunTask call.
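A minimal sketch of that approach with boto3; the cluster, task definition and subnet names are placeholders, and the task role needs ecs:RunTask (plus iam:PassRole for the roles referenced by the task definition):

import boto3

ecs = boto3.client("ecs")

# Fire-and-forget a one-off task; credentials come from the ECS task role.
response = ecs.run_task(
    cluster="my-cluster",
    launchType="FARGATE",
    taskDefinition="background-worker",   # family or family:revision
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])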
You can pass the Docker socket as a volume:
volumes:
- /var/run/docker.sock:/var/run/docker.sock
After that you can run docker commands inside the container and they will be executed by the Docker daemon on the host machine.
In particular you can run
docker run ...
or
docker start ...
(You may have to install the Docker CLI in your container via commands in the Dockerfile.)
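If you would rather do this from application code than shell out to the docker CLI, the Docker SDK for Python talks to the same mounted socket. A small sketch; the image name and port mapping are made up:

import docker

# from_env() picks up /var/run/docker.sock mounted into the container.
client = docker.from_env()

# Starts a sibling container on the host's Docker daemon, not a nested one.
container = client.containers.run(
    "my-flask-service:latest",      # hypothetical image
    detach=True,
    name="service-42",
    ports={"5000/tcp": 8042},       # host port is an example
)
print(container.id)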

Unable to run a health check on a docker image deployed to Pivotal Cloud Foundry

I'm unable to run a health check other than process on a docker image deployed to Pivotal Cloud Foundry.
I can deploy fine with health-check-type=process, but that isn't terribly useful. Once the container is up and running I can access the health check HTTP endpoint at /nuxeo/runningstatus, but PCF doesn't seem to be able to check that endpoint, presumably because I'm deploying a pre-built Docker container rather than an app via source or jar.
I've modified the timeout to be something way longer than it needs to be, so that isn't the problem. Is there some other way of monitoring dockers deployed to PCF?
The problem was that the Docker container exposed two ports: one on which the health check endpoint was accessible and another that could be used for debugging. PCF always tried to run the health check against the debug port.
There is no way, in PCF, to specify which port the health check should run against. It chooses among the exposed ports and, for a reason I don't know, always chose the one intended for debugging.
I tried reordering the ports in the Dockerfile, but that had no effect. Ultimately I just removed the debug port from being exposed in the Dockerfile and things worked as expected.
