Docker - running commands from all containers

I'm using docker compose to create a basic environment for my websites (at the moment only locally, so I don't care about security issues). At the moment I'm using 3 different containers:
for nginx
for php
for mysql
I can obviously log in to any container to run commands. For example, I can SSH into the PHP container to verify the PHP version or run a PHP script. But the question is: is it possible to have a configuration where I could run commands on all containers by, for example, running one SSH container?
For example I would like to run commands like this:
php -v
nginx restart
mysql
after logging in to one common SSH session for all services.
Is it possible at all? I know there is the exec command, so I could prefix each command with the container's name, but that wouldn't be flexible enough to use, and with more containers it would become more and more cumbersome.
So the question is: is it possible at all, and if yes, how could it be achieved?

Your question was:
Is it possible at all?
and the answer is:
No
This is due to the two restrictions you are giving in combination. Your first restriction is:
Use SSH not Exec
It is definitely possible to have an SSH daemon running in each container and set up the security so that you can run SSH commands in e.g. a passwordless mode;
see e.g. Passwordless SSH login
Your second restriction is:
one common SSH for all services
and this would now be the tricky part. You'd have to:
create one common SSH server, e.g. in one special container for this purpose, or on one of the existing containers
create communication to or between containers
make sure that the ssh server knows which command is for which container
All in all, this would be so complicated in comparison to a simple bash or Python script that can do the same with exec commands that IMHO the "no" is a better answer than trying to solve the academic problem of "might there be some tricky/fancy way of doing this".
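If you do go the script route, a minimal sketch could look like this (it assumes your compose services are named php, nginx, and mysql, and that docker-compose is on your PATH; adjust the names to your compose file):
#!/usr/bin/env bash
# run-in.sh: dispatch a command to the matching compose service.
# Usage: ./run-in.sh php -v
#        ./run-in.sh nginx -t
#        ./run-in.sh mysql -u root -p
set -euo pipefail

cmd="${1:?usage: $0 <php|nginx|mysql> [args...]}"
shift

case "$cmd" in
  php|nginx|mysql) service="$cmd" ;;  # service name matches the command here
  *) echo "unknown command: $cmd" >&2; exit 1 ;;
esac

# Run the command inside the already-running service container.
docker-compose exec "$service" "$cmd" "$@"
Supporting a new container is then just one more case branch (or a lookup table) instead of a whole new SSH setup.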

Related

Keycloak Docker image basic unix commands not available

I have set up my Keycloak identity server by running a .yml file that uses the Docker image jboss/keycloak:9.0.0.
Now I want to get inside the container and modify some files in order to do some testing.
Unfortunately, after I got inside the running container, I realized that some very basic UNIX commands like sudo or vi (and many, many more) aren't found (nor are commands like apt-get or yum, which I tried to use to install packages, without success).
According to this question, it seems that the underlying OS of the container (Red Hat Universal Base Image) uses the command microdnf to manage software, but unfortunately when I tried to use this command for any action I got the following message:
error: Failed to create: /var/cache/yum/metadata
Could you please suggest a workaround for my case? I just need to use a text editor command like vi, and root privileges for my user (so commands like sudo, su, or chmod). Thanks in advance.
If you still, for some reason, want to exec into the container, try adding --user root to your docker exec command.
Just exec'ing into the container without --user will do so as the user jboss, and that user seems to have fewer privileges.
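For example (assuming the running container is named keycloak; check docker ps for the real name):
# Open a root shell in the running Keycloak container
docker exec -it --user root keycloak /bin/bash

# Inside the container, microdnf should now work; the exact package
# providing vi on the UBI base image may vary (vim-minimal is a guess)
microdnf install vim-minimal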
It looks like you are trying to use an approach from the non-Docker (old school) world in the Docker world. That's not right. Usually you have no need to go into the container and edit any config file there; that change will very likely be lost (it depends on the container configuration). Containers are usually configured via environment variables or volumes.
Example how to use TLS certificates: Keycloak Docker HTTPS required
https://hub.docker.com/r/jboss/keycloak/ is also a good starting point to check the available environment variables, which may help you achieve what you need. For example, PROXY_ADDRESS_FORWARDING=true enables an option that lets you run the Keycloak container behind a load balancer without touching any config file.
I would also say that adding your own config files at build time is not the best option: you would have to maintain your own image. Just use volumes and "override" the default config file(s) in the container with your own config file(s) from the host OS file system, e.g.:
-v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml
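Put together, a full docker run invocation with such an override might look like the following (the container name, port mapping, and env var are illustrative, not requirements):
# Run Keycloak with a custom standalone-ha.xml mounted over the default one
docker run -d --name keycloak \
  -p 8080:8080 \
  -e PROXY_ADDRESS_FORWARDING=true \
  -v /host-os-path/my-custom-standalone-ha.xml:/opt/jboss/keycloak/standalone/configuration/standalone-ha.xml \
  jboss/keycloak:9.0.0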

Detach container from host console

I am creating a Docker container with the ubuntu:16.04 image using the Python docker package. I am passing tty as True and detach as True to the client.containers.run() function. The container starts with the /sbin/init process. The container is created successfully. But the problem is that the login prompt on my host machine's console is replaced with the container's login prompt. As a result, I am not able to log in on the machine's console. SSH connections to the machine work fine.
This happens even when I run my Python script after connecting to the machine over SSH. I tried different options like setting tty to False, setting stdout to False, and setting the environment variable TERM to xterm in the container, but nothing helped.
It would be really great if someone can suggest a solution for this problem.
My script is very simple:
import docker

client = docker.from_env()
container = client.containers.run('ubuntu:16.04', '/sbin/init', privileged=True,
                                  detach=True, tty=True, stdin_open=True,
                                  stdout=False, stderr=False,
                                  environment=['TERM=xterm'])
I am not using any dockerfile.
I have been able to figure out that this problem happens when I start the container in privileged mode. If I do this, the /sbin/init process launches /sbin/agetty processes, which causes /dev/tty to be attached to the container. I need to figure out a way to start /sbin/init such that it does not create /sbin/agetty processes.
/sbin/init in Ubuntu is a service called systemd. If you look at the linked page, it does a ton of things: it configures various kernel parameters, mounts filesystems, configures the network, launches getty processes, and so on. Many of these things require changing host-global settings, and if you launch a container with --privileged you're allowing systemd to do that.
I'd give two key recommendations on this command:
Don't run systemd in Docker. If you really need a multi-process init system, supervisord is popular, but prefer single-process containers. If you know you need some init(8) (process ID 1 has some responsibilities) then tini is another popular option; see the sketch after these recommendations.
Don't directly run bare Linux distribution images. Whatever software you're trying to run, it's almost assuredly not in an alpine or ubuntu image. Build a custom image that has the software you need and run that; you should set up its CMD correctly so that you can docker run the image without any manual setup.
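As a sketch of the tini route: recent Docker versions can inject tini for you via the --init flag, so you don't even have to bake it into the image (the image name below is illustrative):
# --init makes Docker run tini as PID 1, which reaps zombie processes
# and forwards signals to your actual application process
docker run --init --rm my-app-image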
Also remember that the ability to run any Docker command at all implies unrestricted root-level access over the host. You're seeing some of that here where a --privileged container is taking over the host's console; it's also very very easy to read and edit files like the host's /etc/shadow and /etc/sudoers. There's nothing technically wrong with the kind of script you're showing, but you need to be extremely careful with standard security concerns.

how to configure docker containers proxy?

First of all,
I tried the approach of setting '/etc/systemd/system/docker.service.d/http-proxy.conf' (https://docs.docker.com/config/daemon/systemd/#httphttps-proxy), and it really works for the Docker daemon, but it doesn't work for Docker containers; it seems this only takes effect for commands like 'docker pull'.
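For reference, what I did follows the documented example, just with my proxy address:
# Daemon-side proxy drop-in (affects the daemon, e.g. docker pull,
# not the containers it starts); HostIP:8118 is my local proxy
mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://HostIP:8118"
Environment="HTTPS_PROXY=http://HostIP:8118"
EOF
systemctl daemon-reload
systemctl restart docker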
Secondly,
I have a lot of Docker containers, and I don't want to use a 'docker run -e http_proxy=xxx...' command every time I start a container.
So I wondered whether there is a way to automatically load a global configuration file when a container starts. I googled it and found the suggestion to set the file '~/.docker/config.json' (How to configure docker container proxy?), but this way still does not work for me.
(My host machine runs CentOS 7; here is my docker -v:
Docker version 1.13.1, build 6e3bb8e/1.13.1)
I suspect it may be related to my Docker version, or to Docker being started by the systemd service, so that ~/.docker/config.json does not take effect.
Finally,
I just hope that modifying configuration files will let all my containers configure environment variables automatically when they start (that is, auto-set the environment variables 'http_proxy=http://HostIP:8118' and 'https_proxy=http://HostIP:8118' when a container starts, like the Dockerfile ENV instruction). I want to know if there is such a way. If it can be realised, I can make the containers use the host's proxy; after all, the proxy on my host is working properly.
But I was wrong. I tried running a container and then set http_proxy=http://HostIP:8118 and https_proxy=http://HostIP:8118, but when I used the command 'wget facebook.com' I got 'Connecting to HostIP:8118... failed: No route to host.' However, the host machine (CentOS 7) can successfully execute the wget, and I can successfully ping the host from inside the container. I don't know why; it might be related to the firewall and port 8118.
That is where I am stuck.
I have no other ideas; can anyone help me?
==============================
PS:
As you can see from the screenshots below, I actually want to install goa and goagen, but I get an error, probably for network reasons, so I wanted to enable the proxy to try again; hence the problem above.
1. My Go docker container: [screenshot: go docker wget]
2. My host: [screenshot: my host wget]
You need version 17.07 or more recent to automatically pass the proxy to containers you start using the config.json file. The 1.13 releases are long out of support.
This is well documented by Docker:
https://docs.docker.com/network/proxy/
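For completeness, on a supported Docker version the client-side configuration described there would be something like the following (HostIP:8118 taken from the question):
# Docker 17.07+ reads ~/.docker/config.json when creating containers
# and injects these proxy variables into them automatically
mkdir -p ~/.docker
cat > ~/.docker/config.json <<'EOF'
{
  "proxies": {
    "default": {
      "httpProxy": "http://HostIP:8118",
      "httpsProxy": "http://HostIP:8118",
      "noProxy": "localhost,127.0.0.1"
    }
  }
}
EOF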

Deploy Rails app using capstrano from inside a docker container

I managed to dockerize my Rails application for development and it works great. Before this I had a deploy setup using Capistrano. Now I would like to try to deploy using the same Capistrano, but executed from within the Docker container. My question is: can I use the same SSH key from my host machine, or should I generate a new key inside the container? The latter option does not sound good to me, since the key would have to be recreated whenever the container gets destroyed. I am aware that in the long run I would probably be better off setting up the production server to run Docker and installing through docker-machine, but so far I'd like to keep the setup I already have in production.
Has anyone else tried this?
I think you should mount your device's SSH key into the container (as long as the container is not accessible from the network). In addition to your argument, you can more easily share your image with others, as they can simply mount their own key themselves.
You can mount your SSH key into the container at runtime.
docker run -v /path/to/host/ssh-key:/path/to/container/ssh-key <image> <command>
The key will then be available in the container at /path/to/container/ssh-key.
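A concrete, illustrative invocation, mounting the host key read-only where the container's user expects it (the image name and deploy command are assumptions):
# Mount the host's SSH key read-only into the container's root user,
# then run the Capistrano deploy from inside it
docker run --rm \
  -v ~/.ssh/id_rsa:/root/.ssh/id_rsa:ro \
  my-rails-app bundle exec cap production deploy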

Docker and jenkins

I am working with Docker and Jenkins, and I'm trying to do two main tasks:
Control and manage docker images and containers (run/start/stop) with jenkins.
Set up a development environment in a docker image then build and test my application which is in the container using jenkins.
While I was surfing the net I found many solutions:
Run Jenkins as a container and link it with other containers.
Run Jenkins as a service and use the Jenkins plugins provided to support Docker.
Run Jenkins inside the container which contains the development environment.
So my question is: what is the best solution, or can you suggest another approach?
One more question: I heard about running a container inside a container. Is it good practice, or is it better to avoid it?
Running Jenkins as a containerized service is not a difficult task. There are many images out there that allow you to do just that. It took me just a couple of minutes to make Jenkins 2.0-beta-1 run in a container, compiling from source (the image can be found here). I particularly like this approach; you just have to make sure to use a data volume or a data container as jenkins_home to make your data persist.
Things become a little bit trickier when you want to use this Jenkins - in a container - to build and manage containers itself. To achieve that, you need to implement something called docker-in-docker, because you'll need a docker daemon and client available inside the Jenkins container.
There is a very good tutorial explaining how to do it: Docker in Docker with Jenkins and Supervisord.
Basically, you will need to make the two processes (Jenkins and Docker) run in the container, using something like supervisord. It's doable and proclaims to have good isolation, etc., but it can be really tricky, because the docker daemon itself has some dependencies that need to be present inside the container as well. So only using supervisord and running both processes is not enough: you'll need to make use of the DinD project itself to make it work... AND you'll need to run the container in privileged mode... AND you'll need to deal with some strange DNS problems...
For my personal taste, that sounded like too many workarounds to make something simple work, and having two services running inside one container seems to break Docker good practices and the principle of separation of concerns, something I'd like to avoid.
My opinion got even stronger when I read this: Using Docker-in-Docker for your CI or testing environment? Think twice. It's worth mentioning that this last post is from the DinD author himself, so he deserves some attention.
My final solution is: run Jenkins as a containerized service, yes, but treat the docker daemon as part of the provisioning of the underlying server, not least because your docker cache and images are data that you'll probably want to persist, and they are fully owned and controlled by the daemon.
With this setup, all you need to do is mount the docker daemon socket in your Jenkins image (which also needs the docker client, but not the service):
$ docker run -p 8080:8080 -v /var/run/docker.sock:/var/run/docker.sock -v local/folder/with/jenkins_home:/var/jenkins_home namespace/my-jenkins-image
Or with a docker-compose volumes directive:
---
version: '2'
services:
  jenkins:
    image: namespace/my-jenkins-image
    ports:
      - '8080:8080'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - local/folder/with/jenkins_home:/var/jenkins_home
  # other services ...
