Logging to host from Docker container does not work - docker

I have a microservices-based application and the services work fine if I deploy them on a host machine. But now I'd like to learn Docker, so I started to use containers on a Linux-based machine. Here is a sample Dockerfile; it is really simple:
FROM openjdk:11-jdk-slim
MAINTAINER BeszterceKK
COPY ./tao-elszamolas-config.jar /usr/src/taoelszamolas/tao-elszamolas-config.jar
WORKDIR /usr/src/taoelszamolas
ENV SPRING_PROFILES_ACTIVE prod
EXPOSE 9001
ENTRYPOINT ["java", "-jar", "tao-elszamolas-config.jar", "-Dlog4j.configurationFile=file:/tao-elszamolas/services/tao-config/log4j2- prod.xml", "-DlogFileLocation=/tao-elszamolas/logs"]
My problem is that I try to write my Spring Boot application's log to the host machine. This is why I use data volumes. In the end, this is the command I use to run the container:
docker run -d --name=tao-elszamolas-config-server --publish=9001:9001 -v /tao-elszamolas/logs:/tao-elszamolas/logs -v /tao-elszamolas/services/tao-config/log4j2-prod.xml:/tao-elszamolas/services/tao-config/log4j2-prod.xml tao-elszamolas-config:latest
But in the longer term, all of the services will go under docker-compose. This is just for testing, something like a proof of concept.
My first question is: why is it not writing the log to the right place (into one of the defined volumes)? That is what I set in the Log4j2 config XML. If I use the config XML locally, without Docker, everything works fine. When I log into the container, I can see the mounted volumes and I can "cd" into them. And I can also do this:
touch something.txt
So the file is created and can be seen from both the container and the host machine. What am I doing wrong? I think the application can pick up the log config, because when I set an internal folder as the location of the log file, it logs inside the container.
I also temporarily set the permissions of the whole volume (and its children) to 777 to test whether permissions were the problem. They were not. Any help would be very much appreciated!
My second question: is there any good web-based tool on Linux with which I can manage my containers (start them, stop them, etc.)? I googled and found some, but I am not sure which one is the best and free for basic needs, and which one is secure enough.

UPDATE:
Managed to resolve this problem after spending a couple of nights on it.
I had multiple problems. First of all, the order of the system properties in the Dockerfile ENTRYPOINT section wasn't quite right. The
-Dsomething=something
must come before "-jar"; otherwise it does not work in Docker. I haven't found any official documentation stating that, but it is in fact standard java launcher behavior rather than anything Docker-specific: anything placed after the jar file name is passed to the application's main method as a program argument, not to the JVM. So the right ENTRYPOINT definition looks like this:
ENTRYPOINT ["java", "-DlogFileLocation=/tao-elszamolas/logs", "-jar", "tao-elszamolas-config.jar"]
Secondly, when I mounted some folders into the container with the docker run command like this:
-v /tao-elszamolas/logs:/tao-elszamolas/logs
the log file wasn't written if the folder didn't already exist inside the Docker container. But if I create that folder at some point before the ENTRYPOINT in the Dockerfile, then the logging is fine and the system writes its logs to the host machine. I also didn't find any documentation stating this, but this is my experience.
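For illustration, a minimal sketch of the relevant part of the Dockerfile with both fixes applied (paths taken from the question):
# create the mount target so it exists even before the volume is attached
RUN mkdir -p /tao-elszamolas/logs
# system properties must come before -jar
ENTRYPOINT ["java", "-DlogFileLocation=/tao-elszamolas/logs", "-jar", "tao-elszamolas-config.jar"]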

Just to provide some steps for verification:
Both Log4j and Spring Boot in general should not be aware of any Docker-related things, like volumes, mapped folders and so forth.
Instead, configure the logging of the application as if it works without Docker at all: if you want a local file, make sure that the application indeed produces the log file in a folder of your choice.
The next step would be mapping the folder with volumes in docker / docker-compose.
But first please validate the first step:
docker ps    # to see the container id
docker exec -it <CONTAINER_ID> bash
# now check the log file from within the docker container itself, even without volumes
If the log file does not exist inside the container, it's a Java issue and you should configure logging properly. If it does exist there but not on the host, it's a Docker issue.
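Once that checks out, the mapping step in docker-compose could look something like the fragment below. This is only a sketch, not a full file; the service name is hypothetical, while the image name and paths are taken from the question:
version: "3"
services:
  config-server:
    image: tao-elszamolas-config:latest
    ports:
      - "9001:9001"
    volumes:
      # bind-mount the host log folder over the path the app logs to
      - /tao-elszamolas/logs:/tao-elszamolas/logs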

You have a space in your entrypoint after log4j2- and before prod.xml:
ENTRYPOINT ["java", "-jar", "tao-elszamolas-config.jar", "-Dlog4j.configurationFile=file:/tao-elszamolas/services/tao-config/log4j2- prod.xml", "-DlogFileLocation=/tao-elszamolas/logs"]
That might be the problem.

Related

Create a Dockerfile from NiFi Docker Container

I'm pretty new to using Docker. I need to deploy a NiFi instance through my employer, but the internal service we need to use requires a Dockerfile, not an image.
The service we're using requires the Dockerfile because each time the repository we're using is updated, the service is pointed to the Dockerfile and initiates the build process from it, then runs/operates the container.
I've already set up the NiFi flow the way it needs to operate; I'm just unsure how to get a Dockerfile from an already existing container (or whether that is even possible).
I was looking into this myself; apparently there is no real way to do it exactly, but by inspecting the docker container you can get pretty much all the commands used to create it. The main thing missing is the OS used, which is easy to find: you can spawn a bash shell in the container and run something like uname -a, then base your own docker image on that. Usually you can find the image on GitHub anyway, though.
docker inspect <image>
or you can do it through the docker desktop UI
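If what you actually need are the build steps rather than the runtime configuration, docker history lists the command recorded for each image layer (the flag just disables truncation of long commands):
docker history --no-trunc <image>
That output, together with the base image, is usually enough to reconstruct an approximate Dockerfile.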
You can use the Dockerfile that is in NiFi source code, see in this directory: https://github.com/apache/nifi/tree/main/nifi-docker/dockerhub

Docker backup container with startup parameters

I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes; some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar and load it on a different system and do a docker start, it has lost all of its settings.
The procedure as described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which will create docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server, just run docker run mycontainer, and have it keep the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
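For example, docker inspect accepts Go templates, so you can pull out individual settings instead of scanning the full JSON (the container name below is hypothetical):
# environment variables the container was created with
docker container inspect --format '{{json .Config.Env}}' mycontainer
# volumes and bind mounts
docker container inspect --format '{{json .Mounts}}' mycontainer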
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy, and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than error-prone, user-entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or for debugging, but not for reproducibility.
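As a sketch of that approach, a hypothetical docker-compose.yml capturing the same kinds of settings you would otherwise pass to docker run by hand (all names and paths here are examples):
version: "3"
services:
  myservice:
    image: myimage:latest
    ports:
      - "8080:8080"
    volumes:
      - /host/data:/data
    restart: unless-stopped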
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. If you really want to go down the path of saving the instance's current status, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the content of the volumes anyway; for that, I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
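For the volumes themselves, a common pattern (also shown in the volumes documentation) is to run a throwaway container that mounts the volume and tars it onto the host. The volume and archive names below are examples:
docker run --rm -v myvolume:/data -v "$(pwd)":/backup busybox \
  tar czf /backup/myvolume.tar.gz -C /data .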

Extracting log files inside Docker container as non-root

I've been playing with Docker for a while now, and I have a problem extracting the log file directory of a service that runs within a container.
My Dockerfile looks like this:
ENV HOME=/software/service
ENV LOGS=$HOME/logs
COPY Service.jar $HOME/Service.jar
WORKDIR $HOME
CMD java -jar Service.jar
I created a stub service for this; all it does is create a log file named log.log inside the LOGS directory and write to it every 2 seconds.
What I wanted to achieve is to back up the log.log file on my Docker Linux host. After some reading about multiple options, I came across two popular solutions for persisting data:
Using volumes with the docker run -v options
Creating a data container that holds the data
Option 2 will not help much here since I want to view the logs on my Linux host machine, so I chose option 1.
The problem with option 1 is that it creates the logs with root permissions, which means I have to log in as root to be able to delete them. That is a problem when not everyone should have the root user, and deleting logs is something that happens commonly.
So I read a little more and found many "workarounds" for this problem; one was mounting my /etc/group and /etc/passwd files inside the container and using the -u option, and others were similar to this.
My main question is: is there any convenient and standard solution for this issue, i.e. extracting the log files with/without the -v option while giving the entire group rwx permission on them?
Thanks!
Since you want the logs to be on your host, you need some kind of volume sharing, and the "-v" flag is definitely the simplest thing you can do.
As per the permissions issue, I see two options:
the -u flag with passwd/group bindmounting that you mentioned
passing the desired user and group IDs as environment variables and making the daemon running in the container chown the file upon creation (but of course this is not always possible).
I think option 1, while tricky, is the easiest to apply.
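A sketch of option 1, assuming the service writes its logs under /software/service/logs as in the question; the host log path and image name are examples. The host user's UID/GID are passed so that created files land owned by that user rather than root:
docker run \
  -u "$(id -u):$(id -g)" \
  -v /etc/passwd:/etc/passwd:ro \
  -v /etc/group:/etc/group:ro \
  -v /host/logs:/software/service/logs \
  myimage
Files created by the service then belong to your own UID/GID, so no root access is needed to delete them.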
Another option is to simply copy the logs to the host:
docker cp <container-name>:/software/service/logs .

Sharing files between container and host

I'm running a docker container with a volume /var/my_folder. The data there is persistent: when I close the container, it is still there.
But I also want to have the data available on my host, because I want to work on the code with an IDE, which is not installed in my container.
So how can I have a folder /var/my_folder on my host machine which is also available in my container?
I'm working on Linux Mint.
I appreciate your help.
Thanks. :)
Link : Manage data in containers
The basic run command you want is ...
docker run -dt --name containerName -v /path/on/host:/path/in/container imageName
The problem is that mounting the volume will, for your purposes, hide whatever already exists at that path inside the container.
The best way to overcome this is to create the files you want to share (inside the container) AFTER mounting.
The ENTRYPOINT command is executed on docker run. Therefore, if your files are generated as part of your entrypoint script, and not as part of your build, then they will be available from the host machine once mounted.
The solution, therefore, is to run the commands that create the files in the ENTRYPOINT script.
Failing this, copy the files to another directory during the build and then copy them back in your ENTRYPOINT script, as sketched below.
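A sketch of that last pattern, with hypothetical paths and script name: stash the files somewhere safe at build time, then have the entrypoint restore them into the mounted path before handing off to the real command.
# Dockerfile (excerpt)
COPY my_folder/ /opt/seed/
COPY entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]

# entrypoint.sh
#!/bin/sh
# the bind mount is already in place when this runs,
# so files copied here become visible on the host as well
cp -r /opt/seed/. /var/my_folder/
exec "$@"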

Is it possible to use a "blank" docker container without any install on it?

I'm new to Docker and I think I have understood that Docker is a software virtualization tool (as opposed to OS virtualization). I understand from this image that Docker provides a very blank environment with a given file structure and executes on the host kernel. What we need to do is put our application and its dependencies (with no OS) into it to get a very light, portable container for our app.
But it seems there is a dark side of Docker: each Dockerfile begins with a "FROM ".
I saw this and this but I'm not sure I understand. It sounds like Docker is close to a kind of simplified OS virtualizer.
I was interested in the advantage of image size. But if we have to install an OS on each image, my "portable" application will quickly become quite heavy.
Is there really no way to use a "blank" image?
You can start with FROM scratch which is an empty filesystem.
Please see the section on Creating a Base Image if you'd like to spin up your own minimal root file system.
You might be surprised how many dependencies your application actually has on the root file system, and in the end, it is usually more efficient to use one of the standard root file systems in your FROM statement, as Charles Duffy commented above.
empty/Dockerfile
FROM scratch
WORKDIR /
build and check size
docker build empty/ -t empty
docker images | grep empty
This may be a bit too late, but I just had a use case where I needed to create a bare-bones container that I could launch as part of a multi-container docker-compose setup and get into afterwards via /bin/bash. Keep in mind that a docker container must run a service, and the container exists only for as long as that service is running. So I created this container with just Python in it and copied in a 2-line Python script that just makes it sleep. Here's what I did.
1. Create the python script wait_service.py with the following code:
import time
time.sleep(1000)
2. Create the Dockerfile with just the following lines:
FROM python:2.7
RUN mkdir -p /test
WORKDIR /test
COPY wait_service.py /test/
CMD python wait_service.py
3. Build and run the container. Using the container id, I could then get inside it. Please adjust the sleep time based on how long you want to keep this container.
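Spelled out, those build/run/exec steps might look like this (the image and container names are arbitrary):
docker build -t wait-service .
docker run -d --name wait-service wait-service
docker exec -it wait-service /bin/bash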
Your application has to have some underlying OS; without one, there is no way for it to start.
I think the most basic one in the docker index is busybox, so FROM busybox will give you a very minimal setup.
Docker also uses a lot of caching for each of its layers. So every docker image that uses FROM centos:centos7 at the top will share one single copy of the minimal centos7 base image.
The base images are very minimalistic, so there is nothing to worry about.
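You can see that sharing for yourself: build two images from the same base and look at the shared size Docker reports. A rough sketch, where the app1/ and app2/ directories are hypothetical build contexts whose Dockerfiles both start with FROM centos:centos7:
docker build -t app1 app1/
docker build -t app2 app2/
docker system df -v    # the SHARED SIZE column counts the common base layers only once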
