I run a Java web application on Tomcat in a Docker container.
Is there any way to monitor the memory usage of the Java application? I tried to use jconsole with the process id of the Docker container, but it reports an invalid process id.
I also enabled JMX in Tomcat, but I don't know how to bind to it. I can use VisualVM from my local machine to bind to the host, but I can't find a way to bind to the Docker container inside the host.
Is there any good way to achieve this?
Thanks
To connect VisualVM to a Java process running in a Docker container under boot2docker, you can try the following:
Start your java process using the following options:
java -Dcom.sun.management.jmxremote.port=<port> \
-Dcom.sun.management.jmxremote.authenticate=false \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.rmi.port=<port> \
-Djava.rmi.server.hostname=<boot2docker_ip> \
<Main>
You need to run your image with --expose <port> -p <port>:<port>.
Then "Add JMX Connection" in visualvm with <boot2docker_ip>:<port>.
It shouldn't be much different without boot2docker.
To monitor its usage, you need to get its real process ID. If you are running Tomcat directly in the container, then it should be:
DOCKER_ROOT_PROC=`(docker inspect -f "{{ .State.Pid }}" my_container)`
If you are using something like Phusion's baseimage, then your java process will be a child of that process. To see the hierarchy use:
pstree $DOCKER_ROOT_PROC
Once you have that, you can use the following in your script, recursively, to find the java process you want to monitor (with some regular expression filtering, of course):
ps -o pid,cmd --no-headers --ppid $DOCKER_ROOT_PROC
Then, finally, you can use this to get your Java application's memory usage in kilobytes:
ps -o vsz -p $JAVAPROCESS
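Putting it all together, a rough sketch (assuming bash and docker on the host; the container name my_container and the simple *java* match are placeholders for your own names and filtering):
#!/bin/bash
# Get the container's root process ID on the host.
DOCKER_ROOT_PROC=$(docker inspect -f "{{ .State.Pid }}" my_container)
# Walk the process tree depth-first, printing the PID of any java process.
find_java() {
  local pid cmd
  while read -r pid cmd; do
    if [[ "$cmd" == *java* ]]; then
      echo "$pid"
    else
      find_java "$pid"
    fi
  done < <(ps -o pid=,cmd= --ppid "$1")
}
JAVAPROCESS=$(find_java "$DOCKER_ROOT_PROC" | head -n 1)
ps -o vsz= -p "$JAVAPROCESS"   # virtual memory size in kilobytes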
I don't know if this can be used with jconsole, but it is a way of monitoring the memory usage.
To monitor Docker containers I recommend Google's cAdvisor project. That way you have a general solution for monitoring Docker containers: just run your app, whatever it is, in a container and check things like CPU and memory usage. cAdvisor gives you an HTTP API as well as a web UI.
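As a sketch, the quick-start invocation from cAdvisor's README looked roughly like this (check the project page for the current recommended flags); the web UI then appears on port 8080:
docker run -d --name=cadvisor \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:ro \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  google/cadvisor:latest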
I tried Pierre's answer (also given here), but it didn't work for me.
In the end I was able to connect using an SSH tunnel.
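In case it helps someone, a sketch of such a tunnel, assuming the JMX port is 9010 and user/docker-host are placeholders for your own credentials and host:
ssh -L 9010:localhost:9010 user@docker-host
After that, VisualVM or jconsole on the local machine can connect to localhost:9010.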
cAdvisor, mentioned above, will not help with monitoring Tomcat running inside the container. You may want to take a look at the SPM Client docker container, which does exactly that! It has agents for monitoring a number of different applications running in Docker - Elasticsearch, Solr, Tomcat, MySQL, and so on: https://github.com/sematext/docker-spm-client
For the memory usage monitoring of your application in Docker, you can also launch an ejstatd inside your Docker container. Before launching your main container process, run the following from the ejstatd folder:
mvn -Djava.rmi.server.hostname=$HOST_HOSTNAME exec:java -Dexec.args="-pr 2222 -ph 2223 -pv 2224" &
Then expose those 3 ports to the Docker host:
docker run -e HOST_HOSTNAME=$HOSTNAME -p 2222:2222 -p 2223:2223 -p 2224:2224 myimage
You will then be able to connect to this special jstatd daemon using JVisualVM, for example, adding a "Remote Host" with your Docker hostname as "Host name" and adding a "Custom jstatd Connections" entry (in the "Advanced Settings") with "Port" set to "2222".
Disclaimer: I'm the author of this open source tool.
Related
I want to use Zabbix to monitor my server (just one so far). In order to keep things neat, I've decided to run it in Docker containers. I just have doubts about the usage of the agent in a container. As far as I understand, it should be able to monitor the host itself, but containers are usually isolated. So what's the point of running the agent in a container?
And if there is a reason to do so, should the network mode for agent's container be "host"?
Intro:
I've just done a fully Dockerized Zabbix 6.2 installation using Zabbix's GitHub Docker-Compose repo. My experience was that the Docker install was the better path, but others might of course have different views.
Although it looks really daunting (there are a lot of components in it), Zabbix's Docker-Compose repo is the quickest and least painful way to fire up a Zabbix installation; much easier to set up than a manual config.
I used their repo to configure an all-singing-all-dancing Zabbix infrastructure on a Raspberry Pi 4 with 8GB RAM using a 64-bit ARM version of Ubuntu 20.04 LTS. It would have taken ages to get the same results with a manual config.
There was one issue regarding connectivity that I note at the end, but once you get past that it's plug-n-chug.
Configuration:
Below is a very general outline of the process of configuring Zabbix using their Docker-Compose repo.
Server Infrastructure
The basic form of raising the components is:
docker-compose -f docker-compose_v3_ubuntu_pgsql_latest.yaml --profile all up -d
NOTE: 172.16.238.3 was the default IP of the Zabbix Server in my testing; it should be the same for you, but validate the IP.
Agents:
Starting an Agent is as simple as:
docker run --add-host=zabbix-server:172.16.238.3 -p 10050:10050 -d --privileged --name myHost-zabbix-agent -e ZBX_SERVER_HOST="zabbix-server" -e ZBX_PASSIVE_ALLOW="true" zabbix/zabbix-agent:ubuntu-6.0-latest
Just change "myHost-zabbix-agent" and add the new Zabbix Agent in the Web interface.
To get the IP of a new Zabbix agent raised with the above command:
docker ps
Then get the random id for it and:
docker exec -u root -it (random ID for agent from docker ps) bash
Once inside the container, reveal its IP with:
hostname -I
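Alternatively, a one-liner sketch that reads the same IP straight from Docker's metadata, without entering the container (using the container name from the run command above):
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' myHost-zabbix-agent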
Use this IP for the Agent's interface in the Zabbix server's web interface. As you've rightly remarked, since the agent runs in a container, it's isolated, and the default IP of 127.0.0.1 won't work: you need a routable IP for the Zabbix Server to reach the Agent on.
Then move on to the next host, changing the hostname in the docker run command above, get the IP, and add it in the Zabbix Server's web interface.
Conclusion:
There's nothing stopping you from tailoring the configuration (Zabbix has made it very tweakable), but using Zabbix's Docker-Compose GitHub repo enables you to get some decent monitoring in place quickly with little effort and reduces the grunt work to the bare minimum; important if you have a lot of hosts.
There was one issue with configuring the Agents' connectivity: Docker inserted an iptables rule which broke connectivity by NATing the traffic, but I documented how to get around the problem here:
Dockerized Zabbix: Server Can't Connect to the Agents by IP
Hope this saves you some cycles-
I'm using an Nginx docker container as the base image for an application. I'm redirecting Nginx logs to syslog, but I'm not sure of the best way to have the busybox syslogd started. It all works if I start it manually; I just need it to run as a daemon automatically when the container runs.
Seeing that nginx is in init.d, I tried this in my Dockerfile:
RUN ln -s /bin/busybox syslogd /etc/init.d/syslogd || :
But syslogd still didn't run on start-up. Since the documentation says that only one CMD is allowed, I have the following hack:
FROM nginx:mainline-alpine
CMD nginx & busybox syslogd -n
This works, locally at least, but I'm wondering what the proper solution is. By default the container already symlinks the log files to stdout and stderr, but I don't want to use Docker's syslog logging driver, because the application will be deployed to Kubernetes, so I need a self-contained solution that will work in the pod. Thank you!
Have your container log to stdout, but collect the logs elsewhere.
One option is to configure Docker itself to send container logs to syslog:
docker run --log-driver=syslog --log-opt syslog-address=udp://... ... nginx
Since the Docker daemon itself is configuring this, the syslog-address needs to be something that can be reached from the host. If you're running syslogd in a separate container, this option needs to point at a published port.
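For instance, a rough sketch of that separate-container variant (the -O target and <host_ip> are assumptions; by default busybox syslogd writes to /var/log/messages):
docker run -d --name syslog -p 514:514/udp busybox syslogd -n -O /proc/1/fd/1
docker run -d --log-driver=syslog --log-opt syslog-address=udp://<host_ip>:514 nginx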
Another option is to use the standard Docker JSON-format logging, but use another tool to forward the logs elsewhere. This has the downside of needing an additional tool, but the upside that docker logs keeps working unmodified. Fluentd is a prominent open-source option. (Logstash is another, but it doesn't seem to have a direct Docker integration.)
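A sketch of the Fluentd variant, assuming a Fluentd daemon is already listening on its default port 24224:
docker run -d --log-driver=fluentd \
  --log-opt fluentd-address=localhost:24224 \
  --log-opt tag=nginx \
  nginx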
I have a server running RHEL. I am creating a docker container inside the server, also from a RHEL image.
My goal is to log in to the docker container with a separate IP address, as if it were a VM.
So if the IP of the server is 192.168.1.10 and the IP of the container inside the server is 192.168.1.15, I want to be able to log in to both 192.168.1.10 and 192.168.1.15 as if they were separate VMs. How can I achieve that?
Thanks for your help in advance.
Short answer: you'll need to start the container running sshd. It could be as simple as adding /usr/sbin/sshd to the run command.
Longer answer: this is not really the way docker is supposed to be used. What you probably really want is a fully functional system, with sshd started via systemd. That is a multi-process "fat" container, and generally not considered a best practice.
Options are (see the sketch after this list):
Use the docker exec command
Use the docker attach command
Start/set up sshd inside the container itself (not recommended, though)
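A quick sketch of the first two options, with mycontainer as a placeholder name:
docker exec -it mycontainer /bin/bash   # open a new shell inside the running container
docker attach mycontainer               # attach to the container's main process (Ctrl-p Ctrl-q detaches)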
The link below details this process nicely:
https://phoenixnap.com/kb/how-to-ssh-into-docker-container
Note: this isn't my link; I found it while browsing the internet.
If WebSphere MQ is used as an XA (distributed transaction) transaction manager using the Java MQ classes, not JTA, the Java application and WMQ both need to reside on the same host machine. I have been told this is because shared memory is used as the inter-process communication mechanism: the Java application and WebSphere MQ both need access to the shared memory to make XA work.
If we deploy WMQ in a docker container and keep our Java application in another docker container, both on the same host, will we be able to use WMQ as an XA coordinator?
Will we have to use certain special configuration of the container to get it working? Can we allow the two containers to use common shared memory?
Regards,
Yash
You can share a common IPC namespace via the --ipc option to docker run and docker create:
docker run -d --name=wmq wmq
docker run -d --ipc=container:wmq app
Or, less securely, the host IPC namespace:
docker run -d --ipc=host wmq
docker run -d --ipc=host app
I'm not sure of MQ's explicit support for either setup for XA, but IBM does support MQ in Docker.
I created a customized Docker image based on Ubuntu 14.04 with the sensu-client package inside.
Everything went fine, but now I'm wondering how I can trigger the checks to run against the host machine.
For example, I want to be able to check the processes that are running on the host machine, not only the ones running inside the container.
Thanks
It depends on what checks you want to run. A lot of system-level checks work fine if you run the sensu container with the --net=host and --privileged flags.
--net=host not only lets you see the same hostname and IP as the host system; all the tcp connections and interface metrics will match between container and host.
--privileged gives the container full access to system metrics like hdd, memory, and cpu.
The tricky part is checking external process metrics, as docker isolates them even from a privileged container, but you can share the host's root filesystem as a docker volume (-v /:/host) and patch checks to use chroot, or to use /host/proc instead of /proc.
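Combining those flags, a sketch might look like this (sensu-client-image is a placeholder; mounting the root filesystem read-only is safer):
docker run -d --net=host --privileged -v /:/host:ro sensu-client-image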
Long story short, some checks will just work; for others you need to patch them or develop your own approach. But sensu in docker is one possible way.
An unprivileged docker container cannot check processes outside of its own container, because docker uses kernel namespaces to isolate it from all other processes running on the host. This is by design: see the docker security documentation.
If you would like to run a super privileged docker container that has this namespace disabled you can run:
docker run -it --rm --privileged --pid=host alpine /bin/sh
Doing so removes an important security layer that docker provides, and it should be avoided if possible. Once in the container, try running ps auxf and you will see all processes on the host.
I don't think this is possible right now.
If the processes on the host are themselves running inside docker, you can mount the docker socket and get their status from the sensu container.
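A sketch of that approach (sensu-client-image is a placeholder); inside the container, a check could then query the Docker API over the mounted socket, for example to list containers:
docker run -d -v /var/run/docker.sock:/var/run/docker.sock sensu-client-image
curl --unix-socket /var/run/docker.sock http://localhost/containers/json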
Add a sensu-client to the host machine? You might want to split it out so you have granularity between problems in the containers vs. problems with your hosts.
Otherwise, you would have to set up some way to report from the inside: either using something low level (system calls, etc.) or setting up something from outside to catch the calls and report status back.
HTHs
Most if not all sensu plugins hardcode the path to the proc files. One option is to mount the host's proc files at a different path inside the docker container and modify the sensu plugins to support this other location.
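For example, a sketch of such a mount (my-sensu-image and the /host_proc path are arbitrary placeholders); the patched plugins would then read /host_proc instead of /proc:
docker run -d -v /proc:/host_proc:ro my-sensu-image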
This is my base docker container that supports modifying the sensu plugins proc file location.
https://github.com/sstarcher/docker-sensu