Shared memory across docker containers

If WebSphere MQ is used as an XA (distributed transaction) transaction manager via the Java MQ classes, rather than JTA, the Java application and WMQ both need to reside on the same host machine. I have been told this is because shared memory is used as the inter-process communication mechanism: the Java application and WebSphere MQ both need access to the same shared memory to make XA work.
If we deploy WMQ in a Docker container and keep our Java application in another Docker container, both on the same host, will we be able to use WMQ as an XA coordinator?
Will we need any special container configuration to get this working? Can we allow the two containers to use common shared memory?
Regards,
Yash

You can use common IPC namespaces via the --ipc option for run and create:
docker run -d --name=wmq wmq
docker run -d --ipc=container:wmq app
Or, less securely, the host's IPC namespace:
docker run -d --ipc=host wmq
docker run -d --ipc=host app
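As a quick sanity check that the two containers really do share an IPC namespace, you can create a System V shared memory segment in one and list it from the other. A minimal sketch, assuming both images include the util-linux ipcmk and ipcs tools and that the second container was started with --name=app:
docker exec wmq ipcmk -M 1048576
docker exec app ipcs -m
If the namespaces are shared, the segment created in wmq shows up in app's listing.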
I'm not sure of MQ's explicit support for either setup for XA, but IBM does support MQ in Docker.

Related

Docker: network between Linux and Windows containers

I have a Windows container with an ASP.NET Web API app (not Core) and a second (Linux) container with SQL Server.
For the Linux container I created a new network:
docker network create budget-app-network
and created container:
docker run -d --name budget-db -p 11433:1433 --network budget-app-network --network-alias mssql budget-db
When I try to start the Windows container with:
docker run -d --name budget-app -p 888:80 --network budget-app-network budget-app
I get an error saying:
docker: Error response from daemon: network budget-app-network not found.
I can't find how to connect the web API to the database. How can I make them communicate? I believe it would work if I had two Linux or two Windows containers rather than a mix.
Background
When you run Windows and Linux containers on a Windows host, you have two Docker engines running: one running natively on Windows, which runs the Windows containers, and one inside a virtual machine (Hyper-V), which runs the Linux containers. This is discussed in the following thread on GitHub.
Solution options
Because the two engines are effectively separate hosts, you need to manage the network accordingly.
The easiest approach is to let the containers communicate through their published ports, via the Windows host (routing the traffic through the host's public IP); a sketch follows these options.
Alternatively, you can use docker-compose as described in this post and let docker-compose create the network bridge between the VM containers and the Windows containers.
Finally, you have the option to create a swarm by installing Linux and Windows VMs (Hyper-V) and building a mixed-OS swarm. This is the most complicated option, and the drawback is the additional overhead of the extra Windows machine running in Hyper-V. The details are described in Microsoft's documentation.
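For the published-ports approach, each container simply targets the host's IP and the other container's published port. A minimal sketch, where <host-ip> is the Windows host's LAN IP and the DB_HOST/DB_PORT environment variables are illustrative assumptions rather than part of the original images:
docker run -d --name budget-db -p 11433:1433 budget-db
docker run -d --name budget-app -p 888:80 -e DB_HOST=<host-ip> -e DB_PORT=11433 budget-app
The web app then reaches SQL Server at <host-ip>:11433 even though the two containers sit on different Docker engines.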

Scenarios for Docker daemon and client / CLI on separate boxes?

What would be some use case for keeping Docker clients or CLI and Docker daemon on separate machines?
Why would you keep the two separate?
You should never run the two separately. The only exception is with very heavily managed docker-machine setups where you're confident that Docker has set up all of the required security controls. Even then, I'd only use that for a local VM when necessary (as part of Docker Toolbox; to demonstrate a Swarm setup) and use more purpose-built tools to provision cloud resources.
Consider this Docker command:
docker run --rm -v /:/host busybox vi /host/etc/shadow
Anyone who can run this command can change any host user's password to anything of their choosing and easily take over the whole system. (There are probably more direct ways to root the host.) The only requirement to run this command is access to the Docker socket.
This means: anyone who can access the Docker socket can trivially root the host. If it's network accessible, anyone who can reach port 2375 on your system can take it over.
This isn't an acceptable security position for the mild convenience of not needing to ssh to a remote server to run docker commands. The common system-automation tools (Ansible, Chef, Salt Stack) can all invoke Docker as required, and using one of these tools is almost certainly preferable to trying to configure TLS for Docker.
If you run into a tutorial or other setup advising you to start the Docker daemon with a -H option to publish the Docker socket over the network (even just to the local system), be aware that it's a massive security vulnerability, equivalent to disabling your root password.
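For reference, the kind of daemon invocation to watch out for looks something like this (shown only so you can recognize and avoid it):
dockerd -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2375
This listens on TCP port 2375 with no authentication at all, so everything said above applies to anyone who can reach that port.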
(I hinted above that it's possible to use TLS encryption on the network socket. This is a tricky setup, and it involves sharing around a TLS client certificate that has root-equivalent power over the host. I wouldn't recommend trying it; ssh to the target system or use an automation tool to manage it instead.)

Access Docker daemon on Host without knowing Host OS

I use docker-compose to spin up a few containers as part of an application I'm developing. One of the containers needs to start a Docker swarm service on the host machine. On Docker for Windows and Docker for Mac, I can connect to the host Docker daemon via the REST API using the "host.docker.internal" DNS name, and this works great. However, if I run the same compose file on Linux, "host.docker.internal" does not work (yet; it seems it may be coming in the next version of Docker). To make matters worse, on Linux I can use network mode "host" to work around the issue, but that isn't supported on Windows or Mac.
How can I either:
Create a docker-compose file or structure a containerized application so that it differs slightly based on the host platform (Windows|Mac|Linux), without having to create multiple docker-compose.yml files or different application code?
Access the host docker daemon in a consistent way regardless of the host OS?
If it matters, the container that is accessing the host's Docker daemon is using the Docker Python SDK and making API calls to Docker over TCP without TLS (this is for development only).
Update w/ Solution Detail
For a little more background: there's a web application (ASP.NET Core/C#) that allows users to upload a zip file. The zip file contains, among other things, an exported Docker image file. There's also an nginx container in front of all of this to allow for SSL termination and load balancing. The web application pulls out the Docker image, then, using the Docker daemon's HTTP API, loads the image, re-tags it, and pushes it to a private Docker repository (which is running somewhere on the developer's network, external to Docker). After that, it posts a message to a message queue, where a separate Python application uses the Python Docker library to deploy the image to a Docker swarm.
For development purposes, the applications all run as containers and thus need to interact with Docker running on the host machine as a standalone swarm node. SoftwareEngineer's answer led me down the right path. I mapped the Docker socket from the host into the web application container at first, but ran into a limitation of .NET Core that won't be resolved until .NET 5: there's no clean way of doing HTTP over a Unix socket.
I worked around that issue by eventually realizing that nginx can reverse-proxy HTTP traffic to a Unix socket. I set up all containers (including the dynamically loaded swarm service from the zips) to be part of an overlay network, giving them all access to each other and allowing me to hit an HTTP endpoint that controls the host machine's docker/swarm daemon.
The last hurdle I ran into was that nginx couldn't write to the mapped-in /var/run/docker.sock file, so I modified nginx.conf to allow it to run as root within the container.
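A minimal sketch of that reverse-proxy piece (the file name, listen port, and upstream name are illustrative assumptions, not the poster's exact configuration):
cat > docker-proxy.conf <<'EOF'
upstream docker {
    server unix:/var/run/docker.sock;
}
server {
    listen 2375;
    location / {
        proxy_pass http://docker;
    }
}
EOF
docker run -d --name docker-proxy \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v "$PWD/docker-proxy.conf:/etc/nginx/conf.d/default.conf" \
  nginx
Containers that share a network with docker-proxy can then reach the host daemon over plain HTTP at docker-proxy:2375; as noted above, the nginx worker may need to run as root to be allowed to use the mounted socket.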
As far as I can tell, the Docker socket is available at the path /var/run/docker.sock on all systems. I have personally verified this with a recent Linux distro (Ubuntu), and with Windows 10 Pro running Docker for Windows (2.2.0) from both WSL2 (Ubuntu and Alpine) and the Windows cmd and PowerShell CLIs. From memory, it works with OSX too, and I used to do the same thing in WSL1.
Mapping this into a container is achieved on any terminal with the -v, --volume, or --mount flags. So,
docker container run -v /var/run/docker.sock:/var/run/docker.sock <image>
mounts the socket into an identical path within the container. This means you can access the socket using the standard Docker client (docker) from within the container with no extra configuration. Using this path inside a Linux container is recommended because it is the standard location and is likely to be less confusing to anyone maintaining your code in the future (including yourself).
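As a quick check, you can mount the socket into a container that ships the Docker CLI and list the host's containers from inside it (a sketch; docker:cli is assumed here as the official CLI-only image variant):
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock docker:cli docker ps
The output should match what docker ps prints on the host itself.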

Sensu-Client inside Docker container

I created a customized Docker image based on Ubuntu 14.04 with the sensu-client package inside.
Everything went fine, but now I'm wondering how I can trigger the checks to run against the host machine.
For example, I want to be able to check the processes running on the host machine, not only the ones running inside the container.
Thanks
It depends on what checks you want to run. A lot of system-level checks work fine if you run the sensu container with the --net=host and --privileged flags (a combined run command is sketched below).
--net=host not only gives you the same hostname and IP as the host system; all TCP connections and interface metrics will also match between container and host.
--privileged gives the container full access to system metrics like disk, memory, and CPU.
The tricky part is checking external process metrics, since Docker isolates processes even from a privileged container, but you can share the host's root filesystem as a Docker volume (-v /:/host) and patch the check to use chroot or to read /host/proc instead of /proc.
Long story short: some checks will just work, for others you need to patch or develop your own approach, but Sensu in Docker is one possible way.
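Putting those flags together, the run command might look like this (a sketch; <sensu-client-image> is a placeholder for your image):
docker run -d --name sensu-client \
  --net=host --privileged \
  -v /:/host:ro \
  <sensu-client-image>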
An unprivileged Docker container cannot check processes outside of its own container, because Docker uses kernel namespaces to isolate it from all other processes running on the host. This is by design: see the Docker security documentation.
If you would like to run a super-privileged Docker container that has this namespace isolation disabled, you can run:
docker run -it --rm --privileged --pid=host alpine /bin/sh
Doing so removes an important security layer that docker provides and should be avoided if possible. Once in the container, try running ps auxf and you will see all processes on the host.
I don't think this is possible right now.
If the processes on the host are themselves running inside Docker, you can mount the socket and get their status from the sensu container.
Add a sensu-client to the host machine? You might want to split it out so you have granularity between problems in the containers vs. problems with your hosts.
Otherwise, you would have to set up some way to report from the inside, either using something low-level (system calls etc.) or setting something up from outside to catch the calls and report status back.
HTHs
Most, if not all, sensu plugins hardcode the path to the proc files. One option is to mount the host's proc files at a different path inside the Docker container and modify the sensu plugins to support this alternate location.
This is my base docker container that supports modifying the sensu plugins proc file location.
https://github.com/sstarcher/docker-sensu
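A sketch of that approach (the mount point and the PROC_PATH environment variable are illustrative assumptions; stock plugins would still need patching to honor them):
docker run -d \
  -v /proc:/host_proc:ro \
  -e PROC_PATH=/host_proc \
  <patched-sensu-image>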

How to monitor java application memory usage in Docker

I run a Java web application on Tomcat in a Docker container.
Is there any way to monitor the memory usage of the Java application? I tried to use jconsole with the process ID of the container, but it tells me "Invalid process id".
I also enabled JMX in Tomcat, but don't know how to bind to it. I can use VisualVM from my local machine to bind to the host, but cannot find a way to bind to the Docker container inside the host.
Is there any good way to achieve this?
Thanks
To connect to a Java process running in a Docker container running in boot2docker with VisualVM, you can try the following:
Start your java process using the following options:
java -Dcom.sun.management.jmxremote.port=<port> \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.rmi.port=<port> \
-Djava.rmi.server.hostname=<boot2docker_ip> \
<Main>
You need to run your image with --expose <port> -p <port>:<port>.
Then "Add JMX Connection" in visualvm with <boot2docker_ip>:<port>.
It shouldn't be much different without boot2docker.
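Putting it together with concrete values (9010 is an arbitrary port choice, myimage and Main are placeholders, and 192.168.99.100 is the default boot2docker IP):
docker run -d --expose 9010 -p 9010:9010 myimage \
java -Dcom.sun.management.jmxremote.port=9010 \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.rmi.port=9010 \
-Djava.rmi.server.hostname=192.168.99.100 \
Main
Then add a JMX connection in VisualVM to 192.168.99.100:9010.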
To monitor its usage, you need its real process ID. If you are running Tomcat directly in the container, then it should be:
DOCKER_ROOT_PROC=$(docker inspect -f "{{ .State.Pid }}" my_container)
If you are using something like Phusion's baseimage, then your java process will be a child of that process. To see the hierarchy use:
pstree $DOCKER_ROOT_PROC
Once you have that, you can write a script that runs
ps -o pid,cmd --no-headers --ppid $DOCKER_ROOT_PROC
recursively to find the Java process you want to monitor (with some regular expression filtering, of course). Finally, you can use this to get your Java application's memory usage in kilobytes:
ps -o vsz -p $JAVAPROCESS
I don't know if this can be used with jconsole, but it is a way of monitoring the memory usage.
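A minimal sketch tying those steps together (it only inspects direct children of the container's root process; as noted above, a robust script would walk the tree recursively, e.g. under Phusion's baseimage):
DOCKER_ROOT_PROC=$(docker inspect -f "{{ .State.Pid }}" my_container)
JAVAPROCESS=$(ps -o pid,cmd --no-headers --ppid $DOCKER_ROOT_PROC | awk '/java/ { print $1; exit }')
ps -o vsz= -p $JAVAPROCESS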
To monitor Docker containers I recommend Google's cAdvisor project. That way you have a general solution for monitoring Docker containers: just run your app, whatever it is, in a container and check things like CPU and memory usage. cAdvisor provides an HTTP API as well as a web UI.
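cAdvisor itself runs as a container; its run command looks roughly like this (the exact mounts may vary with the Docker version):
docker run -d --name=cadvisor -p 8080:8080 \
  -v /:/rootfs:ro \
  -v /var/run:/var/run:rw \
  -v /sys:/sys:ro \
  -v /var/lib/docker/:/var/lib/docker:ro \
  google/cadvisor:latest
The web UI is then available on port 8080 of the host.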
I tried Pierre's answer (also given here), but it didn't work for me.
In the end I could only connect using an SSH tunnel.
cAdvisor, mentioned above, will not help with monitoring Tomcat running inside the container. You may want to take a look at the SPM Client docker container, which does exactly that! It has agents for monitoring a number of different applications running in Docker - Elasticsearch, Solr, Tomcat, MySQL, and so on: https://github.com/sematext/docker-spm-client
For memory usage monitoring of your application in Docker, you can also launch an ejstatd inside your Docker container, calling the following from the ejstatd folder before launching your main container process:
mvn -Djava.rmi.server.hostname=$HOST_HOSTNAME exec:java -Dexec.args="-pr 2222 -ph 2223 -pv 2224" &
Then expose those 3 ports to the Docker host when running your image:
docker run -e HOST_HOSTNAME=$HOSTNAME -p 2222:2222 -p 2223:2223 -p 2224:2224 myimage
Then you will be able to connect to this special jstatd daemon using JVisualVM, for example, adding a "Remote Host" with your Docker hostname as the "Host name" and adding a "Custom jstatd Connection" (in the "Advanced Settings") with "2222" as the "Port".
Disclaimer: I'm the author of this open source tool.
