Rancher container starts with {"NetworkDisabled": true} by default

I use a Jenkins cloud deployed on AWS via Rancher. When I inspect the Docker container's network settings, I see {"NetworkDisabled": true} by default, and I am not able to figure out how to enable it.
My container always starts with {"NetworkDisabled": true}. Because of this, my Selenium tests are not able to connect to the Chrome container.
Is there any way to set this parameter to false, and where should I update the config?
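A quick way to narrow this down from the CLI; the container and image names below are placeholders, and the Jenkins Docker cloud's container settings (a "Network" field, if your plugin version exposes one) would be the place to override the network:

    # Confirm what the daemon recorded for the agent container
    docker inspect --format '{{.Config.NetworkDisabled}}' jenkins-agent

    # Run the same image by hand with an explicit network to see whether the
    # flag comes from the image itself or from the provisioning config
    docker run -d --name net-test --network bridge jenkins-agent-image
    docker inspect --format '{{.Config.NetworkDisabled}}' net-test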

Related

SSH into Amazon Lightsail container service running container

I'm totally stumped on how to approach this.
I have two containers running on Amazon Lightsail's container service, but I have no idea how to access them with SSH.
Is this even possible? I need a command line to check some things in the running container.
Port 22 is open on the container I want to access.
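One thing that does work without a shell is pulling deployment state and logs through the CLI; a minimal sketch, with hypothetical service and container names:

    # Current state of the container service, including its public endpoint
    aws lightsail get-container-services --service-name my-service

    # Fetch a running container's logs to inspect things without SSH
    aws lightsail get-container-log --service-name my-service --container-name app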

Enabling Kubernetes on Docker Desktop breaks access to external service

I'm using Docker Desktop for Mac.
I have built a Docker image for a Node.js app that connects to an external MongoDB database via URI (the db is running on an AWS instance that I'm connected to over VPN). This works fine - I run the container and the app can connect to the database. Happy days.
Then...
I enable Kubernetes on Docker Desktop. I apply a deployment.yml to run the container, but the deployment fails when trying to connect to the db. From my app's logs (I'm using mongoose):
MongooseServerSelectionError: connect EHOSTUNREACH [MY DB IP] +30005ms
Interestingly...
I now can no longer connect to the db by running my Docker container either; I get the same error.
I have to disable Kubernetes, restart Docker Desktop (twice), prune my previous container and network, and re-run my container. Then it works again.
As soon as I re-enable Kubernetes, the db becomes unreachable again.
Any ideas why this is and/or how to fix it?
So the issue for us turned out to be an IP range clash. Exactly the same as described in this SO question:
Change Kubernetes docker-for-desktop cluster network ip
Unfortunately, like that user, we haven't been able to find a solution.
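A rough way to check for the overlap yourself; the DB address below is a placeholder, and the grep assumes the controller-manager's flags show up in the dump:

    # First IP of the Kubernetes service range (10.96.0.1 by default)
    kubectl get svc kubernetes -o jsonpath='{.spec.clusterIP}'

    # Pod/cluster CIDR, if the controller-manager flag is visible
    kubectl cluster-info dump | grep -m1 -e cluster-cidr

    # If the DB's address (say 10.96.4.7) falls inside either range, packets
    # meant for the VPN are routed into the cluster instead, which matches
    # the EHOSTUNREACH above.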

Not able to retrieve AWS_CONTAINER_CREDENTIALS_RELATIVE_URI inside the container from fargate task

I'm running a Docker container in a Fargate ECS task.
In my Docker container I have enabled an SSH server, so that I can log in to the container directly if I have to debug something. That works fine: I can SSH to my task IP and investigate issues.
But now I've noticed an issue when accessing any AWS service from inside the container via SSH: when I log in via SSH, configuration files such as ~/.aws/credentials and ~/.aws/config are missing, and I can't issue any CLI commands, e.g. checking the caller identity, which is supposed to be my task ARN.
The strange thing is, if I connect to this same task on an ECS instance, I don't have any such issues. I can see my task ARN and all the rest of the services, so the ECS task agent is working fine.
Coming back to SSH connectivity: I notice I'm getting 404 page not found from curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI. How can I make SSH access have the same capabilities as ECS instance access? If I could read AWS_CONTAINER_CREDENTIALS_RELATIVE_URI in my SSH session, I think everything would work.
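In case it's useful: the credentials URI arrives as an environment variable on the container's entrypoint process, and a fresh sshd login shell does not inherit it (hence the empty path and the 404). Assuming the variable is still present in PID 1's environment, a sketch for recovering it inside the SSH session:

    # /proc/1/environ is NUL-separated; split it and re-export the variable
    export "$(tr '\0' '\n' < /proc/1/environ | grep AWS_CONTAINER_CREDENTIALS_RELATIVE_URI)"

    # Both of these should now succeed
    curl "169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
    aws sts get-caller-identity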

ECS Service other than HTTP keeps restarting

I installed an Nginx Docker container service through AWS ECS, which is running without any issue. However, every other container service, such as CentOS, Ubuntu, MongoDB, or Postgres, installed through AWS ECS keeps restarting (de-registering, re-registering, or sitting in a pending state) in a loop. Is there a way to install these container services using AWS ECS without issue on the ECS-optimized Amazon Linux AMI? Also, is there a way to register Docker containers in AWS ECS that were manually pulled and run from Docker Hub?
Usually if a container is restarting over and over again, it's because it's not passing the health check that you set up. MongoDB, for example, does not use the HTTP protocol, so if you set it up as a service in ECS with an HTTP health check, it will never pass and ECS will keep killing it off for failing.
My recommendation would be to launch such services without a health check, either as standalone tasks or with your own health check mechanism.
If the service you are trying to run does in fact have an HTTP interface and it's still failing the health check and getting killed, you should do some debugging to verify that the instance has the right security group rules to accept traffic from the load balancer. Additionally, verify that the ports you define in your task definition match the port of the health check.
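One option along those lines is a container-level health check in the task definition, which avoids the HTTP assumption entirely; a sketch for MongoDB, with an illustrative command and timings:

    "healthCheck": {
        "command": ["CMD-SHELL", "mongo --eval 'db.adminCommand({ping: 1})' || exit 1"],
        "interval": 30,
        "timeout": 5,
        "retries": 3
    }

ECS runs the command inside the container and only marks the task unhealthy if it keeps failing, so a non-HTTP service like MongoDB is judged by something it can actually answer.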

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application whose deployment is based on Docker images.
I use GitLab CI to:
Test each service
Build each service
Dockerize each service (build a Docker image)
Run integration tests (docker-compose starts all services on dedicated ports, then the integration tests run)
Stop the production images and run the new images
I did this for each service, but then I ran into an issue.
The container for the integration tests is started from within a GitLab CI job, and each job uses a Docker-based runner. I also mount my host's Docker socket to be able to use Docker in Docker.
So my Gradle Docker image is started by the GitLab runner. Then Docker is installed in it and all images are started using docker-compose.
One microservice listens on port 10004. Within the docker-compose file there is an 11004:10004 port mapping.
My integration tests try to connect to port 11004, but this does not work right now.
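For reference, that mapping as it would appear in the compose file; the service name is hypothetical:

    services:
      my-service:
        ports:
          - "11004:10004"   # host port 11004 -> container port 10004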
When I attach to the container that runs docker-compose while it executes the integration tests, I cannot connect manually either, by calling
wget ip:port
I just get the message "connected" and it keeps waiting for a response. Neither can my tests connect successfully, and my service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my Docker-in-Docker instance.
When I do not use docker-compose, it works. docker-compose seems to set up a special default network that does something weird.
Setting the network to host also works...
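A related workaround, given that the runner container and the compose services share the host's daemon through the mounted socket: skip the host port mapping and join the compose network directly. Project and service names here are hypothetical:

    # From inside the runner container: attach it to the compose default
    # network (a container's hostname defaults to its ID), then address the
    # service by name on its internal port
    docker network connect myproject_default "$(hostname)"
    wget -qO- http://my-service:10004/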
So has anyone had a similar experience when using docker-compose?
The same setup works flawlessly in Docker for Mac, but my server runs Debian 8.
My solution for now is to use a shell runner to avoid Docker-in-Docker issues. It works there as well.
So Docker in Docker combined with docker-compose seems to have an ugly bug.
I'm writing this while sitting in the subway, but I hope this description of my issue is sufficient to talk about experiences. I don't think we need any source code to find bad configurations, because it works without Docker in Docker and on a Mac.
I figured out that Docker in Docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner, so docker-compose is run on my host and everything works flawlessly.
I can reuse the same runner for starting Docker images in production as I do for integration testing, so the easy fix has another benefit for me.
The result is a best practice to avoid pitfalls:
Only use Docker in Docker when there is a real need, for example to ensure fast I/O communication between your host Docker image and the Docker image of interest.
Have fun using docker (in docker (in docker)) :]
