Containerize Spring Boot microservices - Docker

For a project I'm trying to put my microservices inside containers.
Right now I can successfully put a jar file inside a Docker container and run it.
I know how Docker images and containers work, but I'm very new to microservices. A friend of mine asked me to put his Spring Boot microservices in a Docker environment. Right now this is my plan.
Put every microservice in its own container and manage them with Docker Compose, so that you can run and configure them all at the same time. Maybe later add some high availability with docker compose scale, or try something out with Docker Swarm.
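A first docker-compose.yml for that setup might look roughly like this (service names, build paths and ports are just placeholders):
version: "3"
services:
  user-service:
    build: ./user-service    # each microservice has its own Dockerfile
    ports:
      - "8080:8080"
  order-service:
    build: ./order-service
    ports:
      - "8081:8081"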
My question now is: how do you put one single service inside a container? Do you create a jar/war file from the service and put that inside a container, exposing the port the service listens on?
For my test jar (a simple hello world I found online) I used this Dockerfile:
FROM openjdk:8
ADD /jarfiles/test.jar test.jar
EXPOSE 8080
ENTRYPOINT ["java", "-jar" , "test.jar"]

You need to package your Spring Boot application as a Docker image. The conversion can be done with a Docker Maven plugin, or you can build the image with the docker command itself.
If you use the docker command you need a Dockerfile that contains the steps for creating the image.
Once your image is ready you can run it on the Docker engine; a running image is a container. That is basically the virtualization.
There is a set of docker commands for creating and running images.
Install the Docker engine and start it with
service docker start
and then use the docker commands as usual.
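For example, building and running one service image could look like this (the image tag and port are placeholders):
# build the image from the Dockerfile in the current directory
docker build -t my-service .
# run it, publishing the port the service listens on inside the container
docker run -d -p 8080:8080 my-service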

Related

How to start docker daemon in my custom Docker image?

I am trying to create a custom Docker image which I will use in my GitLab build pipeline (following this guide, as I would like to run my GitLab runners on AWS Fargate: https://docs.gitlab.com/runner/configuration/runner_autoscale_aws_fargate/).
One of the prerequisites is to create your own custom Docker image that has everything the build pipeline needs to execute.
I need to add Docker to my Docker image.
I am able to install Docker; however, I do not understand how to start the Docker service, as the error I am getting is
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
each time a docker command is used.
I tried to start Docker from the startup.sh script used as the Docker entrypoint, using rc-service (Alpine-based image) or systemctl (Amazon Linux 2), but without any luck.
Any help is appreciated. Thanks.
To run Docker in Docker you need to configure the job with the docker-dind service in order to build images. But that approach is limited and requires privileged mode, so I recommend using kaniko instead: it is very easy to configure and requires nothing more than the kaniko executor image.
https://docs.gitlab.com/ee/ci/docker/using_kaniko.html
If you really need to use DinD (Docker in Docker), see:
https://docs.gitlab.com/ee/ci/docker/using_docker_build.html
Kaniko is the simplest and safest way to run docker build.
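A minimal .gitlab-ci.yml job using the kaniko executor, roughly along the lines of the linked documentation, might look like this (the stage name and destination tag are examples):
build-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  # registry credentials still need to be written to /kaniko/.docker/config.json,
  # as shown in the linked GitLab documentation
  script:
    - /kaniko/executor
      --context "${CI_PROJECT_DIR}"
      --dockerfile "${CI_PROJECT_DIR}/Dockerfile"
      --destination "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHORT_SHA}"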

How to write a file to the host in advance and then start the Docker container?

My task is to deploy a third-party OSRM service on Amazon ECS Fargate.
The OSRM container needs to be given a file of geodata at startup.
The problem is that Amazon ECS Fargate does not provide access to the host file system, nor the ability to attach files and folders during container deployments.
Therefore, I would like to create an intermediate image that stores the geodata file at build time, so that the container can use it at startup instead of relying on volumes.
Thanks!
As I understand it, Amazon ECS is a plain container orchestrator and does not implement Docker Swarm, so things like docker configs are off the table.
However, you should be able to do something like this:
ID=$(docker create --name my-osrm osrm-base-image)
docker cp ./file.ext $ID:/path/in/container
docker start $ID
The solution turned out to be quite simple.
From this Dockerfile, I built an image on my local machine and pushed it to Docker Hub:
FROM osrm/osrm-backend:latest
COPY data /data
ENTRYPOINT ["osrm-routed","--algorithm","mld","/data/andorra-latest.osm.pbf"]
After that, I launched this image in AWS ECS without any extra settings or volumes.
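For reference, building and pushing such an image looks roughly like this (the repository name is a placeholder):
# build the image with the geodata baked in
docker build -t <dockerhub-user>/osrm-andorra .
# push it to Docker Hub so ECS can pull it
docker push <dockerhub-user>/osrm-andorra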

Azure Function in Docker container

I'm running an HTTP Azure Function (V2) inside a Docker container. I used a Dockerfile to build my container and it's running, but I have some doubts:
Why is the Azure Functions Dockerfile different from a .NET Core web project Dockerfile? There is no ENTRYPOINT, so how is it running?
When an HTTP-trigger function runs in a Linux Docker container, is it served through some web server or self-hosted? I believe it is self-hosted. Am I correct?
The relevant base Dockerfile should be this one: https://github.com/Azure/azure-functions-docker/blob/master/host/2.0/alpine/amd64/dotnet.Dockerfile
As you can see there, the WebHost is started, which should also answer your second question: yes, it is self-hosted.
CMD [ "/azure-functions-host/Microsoft.Azure.WebJobs.Script.WebHost" ]

Docker inside docker with gitlab-ci.yml

I have created a gitlab runner.
I have chosen the Docker executor and an ubuntu default image.
I have put this at the top of my .gitlab-ci.yml file:
image: microsoft/dotnet:latest
I was thinking that GitLab CI would load the ubuntu image by default if there is no "image" directive in the .gitlab-ci.yml file.
But there is something strange: I am now wondering whether GitLab CI creates an ubuntu container and then creates a dotnet container inside that ubuntu container.
Here is a very ugly test I did on the GitLab server: I removed the /usr/bin/docker file and replaced it with a script that logs its arguments.
This is very strange, because jobs keep working and there is nothing in my log file.
Thanks
The ubuntu image is indeed used if you don't specify an image, but you did specify one, so your jobs run in the dotnet container without ever spinning up an ubuntu one.
Your test behaves the way it does because docker is only the client, while dockerd is the daemon that the GitLab runner actually talks to.
If you want to check what's going on, you should rather run docker ps to get a list of running containers.
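For example, running this on the runner host while a job is executing should show the microsoft/dotnet job container (plus the runner's helper container), not an ubuntu one:
# list running containers with image, name and status
docker ps --format "table {{.Image}}\t{{.Names}}\t{{.Status}}"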

Start service using systemctl inside docker container

In my Dockerfile I am trying to install multiple services and want to have them all start up automatically when I launch the container.
One of the services is mysql, and when I launch the container I don't see the mysql service starting up. When I try to start it manually, I get the error:
Failed to get D-Bus connection: Operation not permitted
Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
COPY start.sh start.sh
CMD ["/bin/bash", "start.sh"]
My start.sh file:
service mariadb start
Docker build:
docker build --tag="pbellamk/mariadb" .
Docker run:
docker run -it -d --privileged=true pbellamk/mariadb bash
I have checked the centos:systemd image and that doesn't help either. How do I launch the container with the services started using systemctl/service commands?
When you do docker run with bash as the command, the init system (e.g. systemd) doesn't get started (nor does your start script, since the command you pass overrides the CMD in the Dockerfile). Try changing the command to /sbin/init, start the container in daemon mode with -d, and then look around in a shell using docker exec -it <container id> sh.
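A sketch of that suggestion, reusing the image name from the question (the container name is arbitrary):
# run the init system as PID 1 instead of bash
docker run -d --privileged --name mariadb-test pbellamk/mariadb /sbin/init
# then look around inside the running container
docker exec -it mariadb-test sh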
Docker is designed around the idea of a single service/process per container. Although it definitely supports running multiple processes in a container and in no way stops you from doing that, you will eventually run into areas where multiple services in a container don't quite map to what Docker or external tools expect. Things like scaling of services, or using Docker Swarm across hosts, only support the concept of one service per container.
Docker Compose allows you to compose multiple containers into a single definition, which means you can use more of the standard, prebuilt containers (httpd, mariadb) rather than building your own. Compose definitions map to Docker Swarm services fairly easily. Also look at Kubernetes and Marathon/Mesos for managing groups of containers as a service.
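For instance, a minimal docker-compose.yml that combines such prebuilt images might look like this (the password value is just a placeholder):
version: "3"
services:
  web:
    image: httpd:2.4
    ports:
      - "80:80"
  db:
    image: mariadb
    environment:
      MYSQL_ROOT_PASSWORD: example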
Process management in Docker
It's possible to run systemd in a container, but it requires --privileged access to the host and the /sys/fs/cgroup volume mounted, so it may not be the best fit for most use cases.
The s6-overlay project provides a more Docker-friendly process management system using s6.
It's fairly rare that you actually need ssh access into a container, but if that's a hard requirement then you are going to be stuck building your own containers and using a process manager.
You can avoid running a systemd daemon inside a Docker container altogether. You can even avoid writing a special start.sh script - that is another benefit of using the docker-systemctl-replacement script.
Its systemctl.py can parse the normal *.service files to know how to start and stop services. You can register it as the CMD of an image, in which case it will look for all the systemctl-enabled services - those will be started and stopped in the correct order.
The current test suite includes test cases for the LAMP stack, including CentOS, so it should run fine in your specific setup.
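A sketch of that pattern applied to the Dockerfile from the question, assuming systemctl.py from the docker-systemctl-replacement project has been downloaded next to the Dockerfile:
FROM centos:7
RUN yum -y install mariadb mariadb-server
# replace the real systemctl with the docker-systemctl-replacement script
COPY systemctl.py /usr/bin/systemctl
# mark the service as enabled so the replacement starts it when the container boots
RUN systemctl enable mariadb
EXPOSE 3306
# run the replacement as CMD; it starts all enabled services in order
CMD ["/usr/bin/systemctl"]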
I found this project:
https://github.com/defn/docker-systemd
which can be used to create an image based on the stock ubuntu image but with systemd and multi-user mode.
My use case is the first one mentioned in its Readme. I use it to test the installer script of my application, which is installed as a systemd service. The installer creates a systemd service, then enables and starts it. I need CI tests for the installer. The test should create the installer, install the application on an Ubuntu container, and connect to the service from outside.
Without systemd the installer would fail, and it would be much more difficult to write the test with Vagrant. So, there are valid use cases for systemd in Docker.

Resources