Kubernetes networking internet-pod - docker

I have an application that management wants to migrate from Docker to K8s.
This application is a black box from our point of view, and it expects to communicate with a database over port 5993 using the gRPC protocol.
In Docker I simply execute:
docker run -p 5993:5993 ...
And everything works as expected. I am a newbie in K8s and my question is how I can move this setup to K8s properly.
I've lost a lot of time reading about port-forward, Services, and NodePort, but I do not understand which approach is correct and which solution should be used.

Since you can write the docker run command, presumably you can construct a docker-compose file for the blackbox system; you can then start with kompose to convert that docker-compose file into Kubernetes manifests. Note that the converted spec is not always 100% runnable, but it will give you a head start in this case.
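To give an idea of what the result looks like, here is a minimal sketch of the Kubernetes objects that roughly correspond to docker run -p 5993:5993 (the image name blackbox:latest is a placeholder):
# a Deployment runs the container, roughly what `docker run` did
apiVersion: apps/v1
kind: Deployment
metadata:
  name: blackbox
spec:
  replicas: 1
  selector:
    matchLabels:
      app: blackbox
  template:
    metadata:
      labels:
        app: blackbox
    spec:
      containers:
      - name: blackbox
        image: blackbox:latest   # placeholder image name
        ports:
        - containerPort: 5993
---
# a ClusterIP Service makes port 5993 reachable inside the cluster;
# change `type` to NodePort (or LoadBalancer) if it must also be reachable
# from outside the cluster, which is what -p did on the Docker host
apiVersion: v1
kind: Service
metadata:
  name: blackbox
spec:
  type: ClusterIP
  selector:
    app: blackbox
  ports:
  - port: 5993
    targetPort: 5993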

Related

Use docker image in another docker image

I have two docker images:
CLI tool
Webserver
The CLI tool has a very heavy Dockerfile which takes hours to build. I am trying to call the CLI tool from the webserver, but I'm not sure how to go from here. Is there a way to make the command created in 1 available in 2?
At this point I tried working with volumes, but no luck. Thanks!
The design of Docker sort-of assumes that containers communicate through a network, not through the command line. So the cleanest solution is to create a simple microservice that wraps the CLI tool and can be called through HTTP.
As a quick and dirty hack, you could also use sshd as such a microservice without writing any code.
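For illustration, a minimal docker-compose sketch of that layout, with the CLI tool behind a thin HTTP wrapper (service and image names are placeholders; the webserver would call something like http://cli-wrapper:8080/run over the shared compose network):
services:
  cli-wrapper:
    image: my-cli-tool-http-wrapper   # the heavy CLI image plus a thin HTTP layer
  webserver:
    image: my-webserver
    depends_on:
      - cli-wrapper                   # both services share the default compose network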
An alternative that doesn't involve the network is to make the socket of the Docker daemon available in the webserver container using a bind mount:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Then you should be able to communicate with the host daemon from within the container, provided that you have installed the docker command line tool in the image. However, note that this makes your application strongly dependent on Docker, which might not be ideal. Also note that it essentially gives the container root access to the host system!
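For example (a sketch; image names and the command are placeholders): with the socket mounted and the docker CLI installed in the webserver image, the webserver container can start the CLI tool's image as a sibling container on the host daemon:
# start the webserver with the host's Docker socket mounted
docker run -d -v /var/run/docker.sock:/var/run/docker.sock my-webserver
# from inside the webserver container: this starts the CLI tool's image
# on the *host* daemon, as a sibling container rather than a nested one
docker run --rm my-cli-tool --some-flag input.txt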
(Note that this is different from Docker-in-Docker, which is running a second Docker daemon inside a container and is generally not recommended except for specialized use cases.)

Does docker-compose configuration cover 100% of the docker CLI?

Trying to figure out the difference between docker and docker-compose, it looks like the docker-compose CLI effectively provides a means of running the docker CLI indirectly via configuration (What is the difference between docker and docker-compose).
Is there anything that you can do with the docker CLI that COULDN'T be specified in docker-compose.yml?
The docker CLI offers more options than docker-compose.yml does (e.g. docker history to inspect an image's history, just to name one). But the latter is meant for a very different purpose, namely making the deployment of multi-container applications easier.
So, to my knowledge, if we just look at the aspects of starting and configuring containers, you can do everything with docker-compose that "plain" docker can do, but in a much more comfortable way.
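To illustrate the overlap, here is a typical docker run invocation and roughly the same container expressed in a docker-compose.yml (names are arbitrary examples):
# docker run -d --name web -p 8080:80 -e FOO=bar nginx:alpine
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"       # host:container, same as -p
    environment:
      FOO: bar          # same as -e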

Docker swarm get deployment status

After running docker stack deploy to deploy some services to a swarm, is there a way to programmatically test whether all containers started correctly?
The purpose would be to verify in a staging CI/CD pipeline that the containers are actually running and didn't fail on startup. Restart is disabled via restart_policy.
I was looking at docker stack services; is the Replicas column useful for this purpose?
$ docker stack services --format "{{.ID}} {{.Replicas}}" my-stack-name
lxoksqmag0qb 0/1
ovqqnya8ato4 0/1
Yes, there are ways to do it, but it's manual and you'd have to be pretty comfortable with the docker CLI. Docker does not provide an easy built-in way to verify that docker stack deploy succeeded. There is an open issue about it.
Fortunately for us, the community has created a few tools that address docker's shortcomings in this regard. Some of the most notable ones:
https://github.com/issuu/sure-deploy
https://github.com/sudo-bmitch/docker-stack-wait
https://github.com/ubirak/docker-php
Issuu, authors of sure-deploy, have a very good article describing this issue.
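If you do want to roll a check by hand, here is a rough sketch of reading the Replicas column (the stack name is an example; the tools above do a more robust version of this, with retries and timeouts):
# exit non-zero if any service of the stack has not reached its desired replica count
docker stack services --format '{{.Name}} {{.Replicas}}' my-stack-name \
  | awk '{ split($2, r, "/"); if (r[1] != r[2]) { print $1, "not converged"; bad = 1 } } END { exit bad }'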
Typically in CI/CD I see everyone using docker or docker-compose. A container runs the same in docker as it does in docker swarm with respect to "does this container work by itself as intended".
That being said, if you still wanted to do integration testing in a multi-tier solution with swarm, you could do various things in automation. Note this would all be done on a single node swarm to make testing easier (docker events doesn't pull node events from all nodes, so tracking a single node is much easier for ci/cd):
Have something monitoring docker events, e.g. docker events -f service=<service-name> to ensure containers aren't dying.
Always have healthchecks in your containers. They are the #1 way to ensure your app is healthy (at the container level), and you'll see them succeed or fail in docker events. You can put them in Dockerfiles, service create commands, and stack/compose files; see the sketch after this list. Here are some great examples.
You could attach another container to the same network to test your services remotely, one by one, using the tasks.<service-name> DNS entries. This will bypass the VIP and let you talk to a specific replica.
You might get some stuff out of docker inspect <service-id or task-id>
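To make the healthcheck point above concrete, a minimal sketch of a healthcheck in a stack/compose file (the image and the curl URL are placeholders for whatever "healthy" means for your service):
services:
  api:
    image: my-api:latest
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:8080/health"]
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 15s    # grace period before failures count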
Another solution might be to use docker service scale - it will not return until the service has converged to the specified number of replicas, or until it times out.
export STACK=devstack # swarm stack name
export SERVICE_APP=yourservice # service name
export SCALE_APP=2 # desired amount of replicas
docker stack deploy -c docker-compose.yml $STACK --with-registry-auth  # compose file path is an example
docker service scale ${STACK}_${SERVICE_APP}=${SCALE_APP}
One drawback of that method is that you need to provide the service names and their replica counts (but these can be extracted from the compose spec file using jq).
Also, in my use case I had to specify a timeout by prepending the timeout command, i.e. timeout 60 docker service scale, because docker service scale was waiting for its own timeout even if some containers failed, which could potentially slow down continuous delivery pipelines.
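Putting those pieces together, a rough sketch of how this could look in a pipeline (stack name, service names and replica counts are examples):
#!/bin/sh
# fail the pipeline if any service has not converged within 60 seconds
STACK=devstack
for spec in app=2 worker=1; do
  timeout 60 docker service scale "${STACK}_${spec%=*}=${spec#*=}" || exit 1
done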
References
Docker CLI: docker service scale
jq - command-line JSON processor
GNU Coreutils: timeout command
You can call this for every service; it returns when the service has converged (all OK):
docker service update STACK_SERVICENAME

Docker in docker and docker compose block one port for no reason

Right now I am setting up an application that has a deployment based upon docker images.
I use gitlab ci to:
Test each service
Build each service
Dockerize each service (build a docker image for it)
Run integration tests (start docker compose that starts all services on special ports, run integration tests)
Stop prod images and run new images
I did this for each service, but I ran into an issue.
When I start my docker containers for the integration tests, this happens within a GitLab CI job. For each job a docker-based runner is used. I also mount my host's docker socket to be able to use docker in docker.
So my gradle docker image is started by the GitLab runner. Then docker is installed and all images are started using docker compose.
One microservice listens to port 10004. Within the docker compose file there is a 11004:10004 port mapping.
My integration tests try to connect to port 11004. But this does not work right now.
When I attach to the container that runs docker compose while it is executing the integration tests, I cannot connect manually either, by calling
wget <ip>:<port>
I just get the message that it connected and is waiting for a response. Neither do my tests connect successfully. My service does not log any message about a new connection.
When I execute this wget command from my host shell, it works.
It's a public IP, and from within my container I can also connect to other ports using telnet and wget. Just one port of one service is broken when I try to connect from my docker-in-docker instance.
When I do not use docker compose, it works. Docker compose seems to set up a special default network that does something weird.
Setting the network mode to host also works...
So did anyone also make such an experience when using docker compose?
The same setup works flawlessly in Docker for Mac, but my server runs on Debian 8.
My solution for now is to use a shell runner to avoid docker in docker issues. It works there as well.
So docker in docker combined with docker compose seems to have an ugly bug.
I'm writing this while sitting in the subway, but I hope the description of my issue is sufficient to talk about experiences. I don't think we need any source code to find bad configurations, because it works without docker in docker and on a Mac.
I figured out that docker in docker still has some weird behaviors. I fixed my issue by adding a new GitLab CI runner that is a shell runner. Therefore docker-compose runs on my host and everything works flawlessly.
I can reuse the same runner for starting docker images in production as I do for integration testing, so the easy fix has another benefit for me.
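A rough sketch of what that job could look like in .gitlab-ci.yml, assuming a runner registered with the shell executor and tagged "shell" (tag, file and script names are placeholders):
integration-tests:
  stage: test
  tags:
    - shell          # route the job to the shell runner, not a docker-based one
  script:
    - docker-compose -f docker-compose.integration.yml up -d
    - ./run-integration-tests.sh
  after_script:
    - docker-compose -f docker-compose.integration.yml down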
The result is a best practice to avoid pitfalls:
Only use docker in docker when there is a real need,
for example to ensure fast I/O communication between your host docker image and the docker image of interest.
Have fun using docker (in docker (in docker)) :]

Bootstrapping the docker daemon

In the official Kubernetes multinode Docker guide, it is mentioned that you need another Docker instance:
A bootstrap Docker instance which is used to start etcd and flanneld, on which the Kubernetes components depend
So what is a bootstrap instance, and how do you make sure that it keeps running across restarts?
The documentation gives a detailed explanation as to the purpose of the bootstrap instance of Docker:
This guide uses a pattern of running two instances of the Docker daemon: 1) A bootstrap Docker instance which is used to start etcd and flanneld, on which the Kubernetes components depend 2) A main Docker instance which is used for the Kubernetes infrastructure and user's scheduled containers
This pattern is necessary because the flannel daemon is responsible for setting up and managing the network that interconnects all of the Docker containers created by Kubernetes. To achieve this, it must run outside of the main Docker daemon. However, it is still useful to use containers for deployment and management, so we create a simpler bootstrap daemon to achieve this.
In summary, the special bootstrap docker daemon runs the bits that Kubernetes depends on, freeing up the normal docker daemon to be managed by Kubernetes. This is a trick that leverages the fact that both etcd and flanneld can be run as containers. Alternatively, one would have to set them up locally as services.
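Conceptually, the pattern looks roughly like this (flags, paths and the image name are illustrative, not copied from the kube-deploy scripts):
# start a second "bootstrap" daemon on its own socket and storage directory,
# with its own networking features disabled so flannel can manage the main daemon's network
dockerd \
  -H unix:///var/run/docker-bootstrap.sock \
  -p /var/run/docker-bootstrap.pid \
  --iptables=false --ip-masq=false --bridge=none \
  --data-root=/var/lib/docker-bootstrap &
# etcd and flanneld are then started as containers against that socket
docker -H unix:///var/run/docker-bootstrap.sock run -d --net=host my-etcd-image   # placeholder image name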
As for ensuring the bootstrapping docker daemon survives a restart, the answer lies within the code. Here's where it's being called when running the master.sh script.
https://github.com/kubernetes/kube-deploy/blob/master/docker-multinode/master.sh#L36
https://github.com/kubernetes/kube-deploy/blob/master/docker-multinode/docker-bootstrap.sh#L20
So the code attempts to set up a service for the extra docker daemon process.
