I have set up my Keycloak identity server by running a .yml file that uses the Docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many, many more) aren't found, and neither are commands like apt-get or yum, which I tried to use to install packages.
According to this question, the underlying OS of the container (Red Hat Universal Base Image) uses the command microdnf to manage software, but unfortunately when I tried to use this command for any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please propose a workaround for my case? I just need a text editor command like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Exec'ing into the container without --user will do so as the jboss user, which has fewer privileges.
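For example, a minimal sketch (the container name keycloak is an assumption; take the real name or ID from docker ps):
docker exec -it --user root keycloak bash
# once inside as root, microdnf should work, e.g. to install a text editor
# (the exact package name may differ in the UBI minimal repos):
microdnf install -y vim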
It looks like you are trying to use an approach from the non-Docker (old school) world in the Docker world, and that's not the right fit. Usually you don't need to go into the container and edit any config file there - that change will very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
An example of how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true enables the option of running the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option - you would have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
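Putting both ideas together, a sketch of a docker run invocation (the container name, port mapping and host path are illustrative, not taken from your .yml file):
# -e sets configuration via an environment variable; -v bind-mounts your config over the default one
docker run -d --name keycloak \
  -p 8080:8080 \
  -e PROXY_ADDRESS_FORWARDING=true \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  jboss/keycloak:9.0.0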
Related
I am creating a Docker container from the ubuntu:16.04 image using the Python docker package. I am passing tty as True and detach as True to the client.containers.run() function, and the container starts with the /sbin/init process. The container is created successfully, but the problem is that the login prompt on my host machine's console is replaced with the container's login prompt. As a result, I am not able to log in on the machine's console; SSH connections to the machine work fine.
This happens even when I run my Python script after connecting to the machine over SSH. I tried different options like setting tty to False, setting stdout to False, and setting the environment variable TERM to xterm in the container, but nothing helps.
It would be really great if someone could suggest a solution for this problem.
My script is very simple:
import docker

client = docker.from_env()
container = client.containers.run(
    'ubuntu:16.04', '/sbin/init', privileged=True,
    detach=True, tty=True, stdin_open=True, stdout=False, stderr=False,
    environment=['TERM=xterm'])
I am not using any dockerfile.
I have been able to figure out that this problem happens when I start the container in privileged mode. When I do this, the /sbin/init process launches /sbin/agetty processes, which causes /dev/tty to be attached to the container. I need to figure out a way to start /sbin/init in such a way that it does not create /sbin/agetty processes.
/sbin/init in Ubuntu is a service called systemd. If you look at the linked page, it does a ton of things - it configures various kernel parameters, mounts filesystems, configures the network, launches getty processes, and so on. Many of these things require changing host-global settings, and if you launch a container with --privileged you're allowing systemd to do that.
I'd give two key recommendations on this command:
Don't run systemd in Docker. If you really need a multi-process init system, supervisord is popular, but prefer single-process containers. If you know you need some init(8) (process ID 1 has some responsibilities), then tini is another popular option (see the sketch after these recommendations).
Don't directly run bare Linux distribution images. Whatever software you're trying to run, it's almost assuredly not in an alpine or ubuntu image. Build a custom image that has the software you need and run that; you should set up its CMD correctly so that you can docker run the image without any manual setup.
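As a sketch of the tini route: Docker ships a tini-based init that you can enable with the --init flag, so PID 1 reaps zombies and forwards signals without pulling in systemd (the image and container names here are hypothetical):
# run the image's normal command under a minimal init instead of systemd
docker run -d --init --name my-app my-custom-image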
Also remember that the ability to run any Docker command at all implies unrestricted root-level access over the host. You're seeing some of that here, where a --privileged container is taking over the host's console; it's also very easy to read and edit files like the host's /etc/shadow and /etc/sudoers. There's nothing technically wrong with the kind of script you're showing, but you need to be extremely careful about standard security concerns.
I have a microservices-based application and the services work fine if I deploy them on a host machine. But now I'd like to learn Docker, so I started to use containers on a Linux-based machine. Here is a sample Dockerfile; it is really simple:
FROM openjdk:11-jdk-slim
MAINTAINER BeszterceKK
COPY ./tao-elszamolas-config.jar /usr/src/taoelszamolas/tao-elszamolas-config.jar
WORKDIR /usr/src/taoelszamolas
ENV SPRING_PROFILES_ACTIVE prod
EXPOSE 9001
ENTRYPOINT ["java", "-jar", "tao-elszamolas-config.jar", "-Dlog4j.configurationFile=file:/tao-elszamolas/services/tao-config/log4j2- prod.xml", "-DlogFileLocation=/tao-elszamolas/logs"]
My problem is that I am trying to write my Spring Boot application's log to the host machine; this is why I use data volumes. In the end, this is the command I use to run the container:
docker run -d --name=tao-elszamolas-config-server --publish=9001:9001 -v /tao-elszamolas/logs:/tao-elszamolas/logs -v /tao-elszamolas/services/tao-config/log4j2-prod.xml:/tao-elszamolas/services/tao-config/log4j2-prod.xml tao-elszamolas-config:latest
But in the longer term all of the services will go under docker-compose. This is just for a test, something like a proof of concept.
The first question is: why is it not writing the log to the right place (in one of the volumes defined)? That is what I set in the Log4j2 config XML, and if I use the config XML locally without Docker, everything works fine. When I log into the container, I can see the mounted volumes and I can "cd" into them. And I can also do this:
touch something.txt
So the file is created and can be seen both from the container and from the host machine. What am I doing wrong? I think the application can pick up the log config, because when I set an internal folder as the location of the log file, it logs the stuff inside the container.
I also temporarily set the permissions of the whole volume (and its children) to 777 to test whether permissions were the problem, but they were not. Any help would be very much appreciated!
My second question: is there any good web-based tool on Linux where I can manage my containers - start them, stop them, etc.? I googled it and found some, but I am not sure which one is the best and free for basic needs, and which one is secure enough.
UPDATE:
I managed to resolve this problem after spending a couple of nights on it.
I had multiple problems. First of all, the order of the system properties in the Dockerfile ENTRYPOINT section wasn't quite right. The
-Dsomething=something
options must come before "-jar"; otherwise they are treated as program arguments for the application instead of JVM options and are ignored. I haven't found any official documentation stating that, but this is how it is working for me. So the right ENTRYPOINT definition looks like this:
ENTRYPOINT ["java", "-DlogFileLocation=/tao-elszamolas/logs", "-jar", "tao-elszamolas-config.jar"]
Secondly, when I mounted some folders into the container with the docker run command, like this:
-v /tao-elszamolas/logs:/tao-elszamolas/logs
the log file wasn't written if the folder inside the Docker container did not already exist. But if I create that folder at some point before the ENTRYPOINT in the Dockerfile (for example with a RUN mkdir -p /tao-elszamolas/logs line), then the logging is fine and the system writes its logs to the host machine. I also didn't find any documentation stating these facts, but this is my experience.
Just to provide some steps for verification:
Neither Log4j nor Spring Boot in general should be aware of any Docker-related things, like volumes, mapped folders and so forth.
Instead, configure the logging of the application as if it worked without Docker at all; so if you want a local file, make sure that the application indeed produces the log file in a folder of your choice.
The next step would be mapping the folder with volumes in docker / docker-compose.
But first please validate the first step:
docker ps    # to see the container id
docker exec -it <CONTAINER_ID> bash
# now check the log file from within the docker container itself, even without volumes,
# e.g. ls -l /tao-elszamolas/logs (the location configured for the log file)
If the log file does not exist inside the container, it's a Java issue and you should configure logging properly. If it does exist there but does not show up on the host, it's a Docker issue.
You have a space in your entrypoint after log4j2- and before prod.xml:
ENTRYPOINT ["java", "-jar", "tao-elszamolas-config.jar", "-Dlog4j.configurationFile=file:/tao-elszamolas/services/tao-config/log4j2- prod.xml", "-DlogFileLocation=/tao-elszamolas/logs"]
It might be a problem.
How can I configure a proxy for Docker containers?
First of all,
I tried the approach of setting /etc/systemd/system/docker.service.d/http-proxy.conf (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it doesn't work for Docker containers; it seems this only takes effect for commands like docker pull.
Secondly,
I have a lot of Docker containers, and I don't want to use 'docker run -e http_proxy=xxx...' every time I start a container.
So I was hoping there is a way to automatically load a global configuration file when a container starts. I googled it and found that you can set the file ~/.docker/config.json (How to configure docker container proxy?), but this way still does not work for me.
(My host machine is CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I feel that it may be related to my Docker version, or to Docker being started by the systemd service, so that ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying configuration files will allow all my containers to automatically get the environment variables http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118 when they start (like the Dockerfile ENV instruction does). I want to know if there is such a way. If it can be done, I can make the containers use the host's proxy; after all, my host's proxy is working properly.
But I was wrong: I tried to run a container and set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, but when I ran 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' However, the host machine (CentOS 7) can successfully execute the wget, and I can successfully ping the host from inside the container. I don't know why; it might be related to the firewall and the 8118 port.
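Since the firewall and the proxy's listen address are only guesses at this point, here are a couple of hedged checks that could be run on the CentOS 7 host (8118 is the port from the question):
ss -lntp | grep 8118          # is the proxy listening on HostIP/0.0.0.0, or only on 127.0.0.1?
firewall-cmd --list-ports     # is 8118/tcp open in firewalld?
# if it is not open, something like this would allow it (assuming the default zone is the active one):
firewall-cmd --add-port=8118/tcp --permanent && firewall-cmd --reload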
It is Over,
OMG.. I have no other way, can anyone help me?
==============================
ps:
You can see from the screenshots below that I actually want to install goa and goagen but got an error, probably for network reasons, so I wanted to enable the proxy and try again... hence the problem above.
1. my go docker container: screenshot "go docker wget"
2. my host: screenshot "my host wget"
You need version 17.07 or more recent to automatically pass the proxy to containers you start, via the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
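For reference, a sketch of what that ~/.docker/config.json could look like with the HostIP:8118 proxy from the question (only honoured by Docker 17.07+, as noted above; merge this into an existing config.json instead of overwriting it if you already have one):
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy":  "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy":    "localhost,127.0.0.1"
    }
  }
}
EOF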
I've been facing the same problem for months now and I don't have an adequate solution.
I'm running several containers based on different images. Some of them were started using Portainer with some arguments and volumes. Some of them were started using the CLI and docker start with some arguments and parameters.
Now all these settings are stored somewhere, because if I stop and restart such a container, everything works well again. But if I do a commit, back it up with tar and load it on a different system and do a docker start, it has lost all of its settings.
The procedure described here: https://linuxconfig.org/docker-container-backup-and-recovery does not work in my case.
Now I'm thinking about writing my own web application which would create docker compose files based on my settings, rather than just doing a docker start with the correct params. This web application should also take care of the volumes (just folders) and do an incremental backup of them with borg to a remote server.
But actually this is only an idea. Is there a way to "extract" a docker compose file from a running container, so that I can redeploy a container 1:1 to another server and just have to run docker run mycontainer and it will have the same settings?
Or do I have to write my web app? Or have I missed some page on Google and there is already such a solution?
Thank you!
To see the current configuration of a container, you can use:
docker container inspect $container_id
You can then use those configurations to run your container on another machine. There is no easy import/export of these settings to start another container that I'm aware of.
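For example, to pull out just the pieces you would typically need to reconstruct the docker run command (a sketch; $container_id as above):
docker container inspect --format '{{json .Config.Env}}' $container_id                # environment variables
docker container inspect --format '{{json .Mounts}}' $container_id                    # volumes / bind mounts
docker container inspect --format '{{json .HostConfig.PortBindings}}' $container_id   # published ports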
Most people use a docker-compose.yml to define how they want a container run. They also build images with a Dockerfile and transfer them with a registry server rather than a save/load.
The docker-compose.yml can be used with docker-compose or docker stack deploy and allows the configuration of the container to be documented as a configuration file that is tracked in version control, rather than as error-prone, manually entered settings. Running containers by hand or starting them with a GUI is useful for a quick test or debugging, but not for reproducibility.
You would like to back up the instance, but the commands you're providing back up the image. I'd suggest updating your Dockerfile to solve the issue. In case you really want to go down the path of saving the instance's current state, you should use the docker export and docker import commands.
Reference:
https://docs.docker.com/engine/reference/commandline/import/
https://docs.docker.com/engine/reference/commandline/export/
NOTE: docker export does not export the contents of volumes anyway; I suggest you refer to https://docs.docker.com/engine/admin/volumes/volumes/
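A sketch of that export/import round trip (the container and image names are hypothetical, and remember that volume contents are not included):
docker export my-container | gzip > my-container.tar.gz
# copy the archive to the other machine, then turn it back into an image:
gunzip -c my-container.tar.gz | docker import - my-container:backup
# the imported image has no CMD/ENTRYPOINT metadata, so specify the command explicitly:
docker run -d my-container:backup /some/command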
I'm using docker compose to create a basic environment for my websites (at the moment only locally, so I don't care about security issues). At the moment I'm using 3 different containers:
for nginx
for php
for mysql
I can obviously log in to any container to run commands. For example, I can ssh into the php container to verify the PHP version or run a PHP script, but the question is - is it possible to have a configuration where I could run commands for all running containers from, for example, one single SSH container?
For example I would like to run commands like this:
php -v
nginx restart
mysql
after logging to one common SSH for all services.
Is it possible at all? I know there is the exec command, so I could prefix each command with the container name, but that wouldn't be flexible enough to use, and with more containers it would become more and more difficult.
So the question is: is it possible at all, and if yes, how could it be achieved?
Your question was:
Is it possible at all?
and the answer is:
No
This is due to the combination of the two restrictions you are giving. Your first restriction is:
Use SSH not Exec
It is definitely possible to have an SSH daemon running in each container and to set up the security so that you can run ssh commands, e.g. in a passwordless mode;
see e.g. Passwordless SSH login
Your second restriction is:
one common SSH for all services
and this would now be the tricky part. You'd have to:
create one common SSH server, e.g. in one special container for this purpose, or in one of the existing containers
create communication to or between containers
make sure that the ssh server knows which command is for which container
All in all, this would be so complicated in comparison to a simple bash or python script that can do the same with exec commands that "no" is IMHO a better answer than trying to solve the academic problem of "might there be some tricky/fancy solution for doing this".
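As a sketch of that "simple bash script" alternative (the container names php, nginx and mysql are assumptions based on the compose setup in the question; adjust them to whatever docker ps shows):
#!/usr/bin/env bash
# route a command to the right container via docker exec instead of a shared SSH container
service="$1"; shift
case "$service" in
  php|nginx|mysql) docker exec -it "$service" "$@" ;;
  *) echo "usage: $0 {php|nginx|mysql} command [args...]" >&2; exit 1 ;;
esac
Saved as e.g. ./c, the examples from the question would then look something like ./c php php -v, ./c nginx nginx -s reload, and ./c mysql mysql.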