I have a simple Docker image built on Ubuntu with a dummy Laravel PHP application.
I use supervisord to start nginx and php-fpm.
So far so good; everything works fine on my local machine or anywhere else a Docker executable exists.
I'm trying to run the same image in a Jelastic environment, but I'm getting supervisord errors:
2017-01-21 14:34:30,283 INFO exited: cron (exit status 1; not expected)
2017-01-21 14:34:30,333 INFO exited: fpm (exit status 78; not expected)
2017-01-21 14:34:32,336 INFO spawned: 'cron' with pid 1216
2017-01-21 14:34:32,338 INFO spawned: 'fpm' with pid 1217
2017-01-21 14:34:32,341 INFO exited: cron (exit status 1; not expected)
2017-01-21 14:34:32,386 INFO exited: fpm (exit status 78; not expected)
I've contacted support, and they told me that cron and php-fpm were already running under systemd, so they logged in to my node, fixed something, and now everything is running.
I'm wondering how this aligns with the "Native Docker™ support" tagline found everywhere in the documentation.
Anyway, I've set up a new sample app for support to investigate (image: https://hub.docker.com/r/rozhok/jelastic-laravel-docker/, sources: https://github.com/rozhok/jelastic-laravel-docker), tried to deploy it, and suddenly everything worked fine.
So my questions are:
How do I avoid supervisord and systemd clashes when deploying to Jelastic? Remember, I want to have the same image for all my environments, and I don't want to prepare "special" images for Jelastic.
Are there any other caveats we should know about? Maybe Docker support should be documented in a bit more detail.
It seems that you need to add only one extra line to your Dockerfile:
RUN systemctl disable php-fpm
This prevents php-fpm from being spawned by your systemd process; your service will then definitely be started by supervisord, which makes the image compatible with both Jelastic and a plain Docker host such as your local machine.
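For context, here is a minimal sketch of where that line lands in a Dockerfile. The unit name (php-fpm vs. php7.0-fpm, depending on the Ubuntu release) and the supervisord paths are assumptions based on the question's setup, not something Jelastic prescribes:

# stop systemd from managing php-fpm so supervisord is the only owner
# (per the logs above, cron may need the same treatment)
RUN systemctl disable php-fpm
# typical foreground supervisord entry point for plain Docker hosts
CMD ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]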
I have downloaded the Karate Chrome Docker image from Docker Hub. When I start it with the command below, Chrome does not start when I spin up the container.
docker run --name karate --rm -p 9222:9222 -p 5900:5900 -e KARATE_SOCAT_START=true --cap-add=SYS_ADMIN ptrthomas/karate-chrome
I am able to spin up the container on my personal MacBook, but when I try to spin up the same image on my organization's MacBook, it throws an error. Do you know what it could be? Is it because of a VPN? I disconnected the VPN and tried, but got the same response.
Expected Result: Able to spin up a container with Chrome.
Actual Result: Not able to open Chrome.
Below are some logs.
INFO reaped unknown pid 2764 (exit status 0)
INFO reaped unknown pid 2783 (terminated by SIGTRAP)
INFO success: chrome entered RUNNING state, process has stayed up for > than 1 seconds
INFO reaped unknown pid 2824 (exit status 0)
INFO exited: chrome (terminated by SIGTRAP; not expected)
INFO reaped unknown pid 2810 (exit status 0)
INFO reaped unknown pid 2815 (exit status 0)
INFO spawned: 'chrome' with pid 2828
INFO reaped unknown pid 2801 (terminated by SIGPIPE)
INFO reaped unknown pid 2802 (terminated by SIGTRAP)
These logs keep repeating, and each time the pid changes.
Most likely this is because the Docker image is not yet built for Apple silicon (ARM / M1 / M2).
You should be able to build your own Docker image by looking at the source. We welcome contributions to make this work across all kinds of hardware.
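As a possible stopgap (an assumption on my part, not something the project documents), Docker Desktop on Apple silicon can often run amd64 images under emulation via the --platform flag, though Chrome may still be unstable that way:

docker run --platform linux/amd64 --name karate --rm -p 9222:9222 -p 5900:5900 \
  -e KARATE_SOCAT_START=true --cap-add=SYS_ADMIN ptrthomas/karate-chrome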
I'm trying to create a Docker container with systemd enabled and install auditd on it.
I'm using the standard centos/systemd image provided on Docker Hub.
But when I try to start auditd, it fails.
Here is the list of commands I ran to create and get into the Docker container:
docker run -d --rm --privileged --name systemd -v /sys/fs/cgroup:/sys/fs/cgroup:ro centos/systemd
docker exec -it systemd bash
Now, inside the docker container:
yum install audit
systemctl start auditd
I'm receiving the following error:
Job for auditd.service failed because the control process exited with error code. See "systemctl status auditd.service" and "journalctl -xe" for details.
Then I run:
systemctl status auditd.service
And I'm getting this info:
auditd[182]: Error sending status request (Operation not permitted)
auditd[182]: Error sending enable request (Operation not permitted)
auditd[182]: Unable to set initial audit startup state to 'enable', exiting
auditd[182]: The audit daemon is exiting.
auditd[181]: Cannot daemonize (Success)
auditd[181]: The audit daemon is exiting.
systemd[1]: auditd.service: control process exited, code=exited status=1
systemd[1]: Failed to start Security Auditing Service.
systemd[1]: Unit auditd.service entered failed state.
systemd[1]: auditd.service failed.
Do you guys have any ideas on why this is happening?
Thank you.
See this discussion:
At the moment, auditd can be used inside a container only for aggregating
logs from other systems. It cannot be used to get events relevant to the
container or the host OS. If you want to aggregate only, then set
local_events=no in auditd.conf.
Container support is still under development.
Also see this:
local_events
This yes/no keyword specifies whether or not to include local events. Normally you want local events so the default value is yes. Cases where you would set this to no is when you want to aggregate events only from the network. At the moment, this is useful if the audit daemon is running in a container. This option can only be set once at daemon start up. Reloading the config file has no effect.
So, at least as of Thu, 19 Jul 2018 14:53:32 -0400 (the date of that post), this feature is not supported; you have to wait.
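If aggregating logs from other systems is all you need, the change suggested above would look roughly like this inside the container. The config path /etc/audit/auditd.conf is the stock location but still an assumption, and the local_events option only exists in reasonably recent audit versions:

# switch auditd to aggregate-only mode so it stops requesting local kernel events
# (if your auditd.conf has no local_events line, add "local_events = no" instead)
sed -i 's/^local_events.*/local_events = no/' /etc/audit/auditd.conf
systemctl restart auditd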
Does anyone know how to pass custom Kafka configuration options to the landoop/fast-data-dev Docker image?
Since there is no way to pass a custom config file and/or config params, what I've tried so far is to mount my own server.properties config file into /opt/confluent/etc/kafka by adding the following to my docker-compose file:
landoop:
  hostname: 'landoop'
  image: 'landoop/fast-data-dev:latest'
  expose:
    - '3030'
  ports:
    - '3030:3030'
  environment:
    - RUNTESTS=0
    - RUN_AS_ROOT=1
  volumes:
    - ./docker/landoop/tmp:/tmp
    - ./docker/landoop/opt/confluent/etc/kafka:/opt/confluent/etc/kafka
However, this causes Kafka to throw the following logs:
landoop_1 | 2017-09-28 11:53:03,886 INFO exited: broker (exit status 1; not expected)
landoop_1 | 2017-09-28 11:53:04,749 INFO spawned: 'broker' with pid 281
landoop_1 | 2017-09-28 11:53:05,851 INFO success: broker entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
landoop_1 | 2017-09-28 11:53:11,867 INFO exited: rest-proxy (exit status 1; not expected)
landoop_1 | 2017-09-28 11:53:12,604 INFO spawned: 'rest-proxy' with pid 314
landoop_1 | 2017-09-28 11:53:13,024 INFO exited: schema-registry (exit status 1; not expected)
landoop_1 | 2017-09-28 11:53:13,735 INFO spawned: 'schema-registry' with pid 341
landoop_1 | 2017-09-28 11:53:13,739 INFO success: rest-proxy entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
In addition, when I go to http://localhost:3030/kafka-topics-ui/, I see the following:
KAFKA REST
/api/kafka-rest-proxy
CONNECTIVITY ERROR
Any suggestions? Thank you.
There are a couple of things I did that simplified the whole process. This is valid only for a dev environment.
1. Start the container with an interactive shell as the entry point.
2. Start the container on the host network.
3. Make the necessary changes to the server.properties file, including the host IP where Docker is running (as mentioned in (2), Docker is running on the host network).
4. If you want to make any advanced configuration, you may do it now.
5. Run the actual entry point, /usr/local/bin/setup-and-run.sh.
Actual commands used:
Start the container:
sudo docker run -it --entrypoint /bin/bash --net=host --rm -e ADV_HOST=HOST-IP landoop/fast-data-dev:latest
Add the following to /run/broker/server.properties:
advertised.host.name = HOST-IP
advertised.port = 9092
Run /usr/local/bin/setup-and-run.sh
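Scripted, steps 2 and 3 of the commands above might look like this, run inside the container's interactive shell. HOST-IP remains a placeholder, and this assumes /run/broker/server.properties already exists at that point, as the steps imply:

# append the advertised listener settings, then start all services
cat >> /run/broker/server.properties <<'EOF'
advertised.host.name = HOST-IP
advertised.port = 9092
EOF
/usr/local/bin/setup-and-run.sh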
As of today, with the latest version of landoop/fast-data-dev, it is possible to specify custom Kafka configuration options by converting the option name to uppercase, replacing dots with underscores, and prepending KAFKA_.
For example, if you want to set specific values for log.retention.bytes and log.retention.hours, you should add the following to the environment section of your docker-compose file:
environment:
  KAFKA_LOG_RETENTION_BYTES: 1073741824
  KAFKA_LOG_RETENTION_HOURS: 48
  ADV_HOST: 127.0.0.1
  RUNTESTS: 0
  BROWSECONFIGS: 1
You can also specify configuration options for the other services this way (schema registry, connect, rest proxy). Check the docs for details: https://hub.docker.com/r/landoop/fast-data-dev/.
Once the container is up, you can confirm this by looking at the configuration file at the following path inside the container:
/run/broker/server.properties
or through the Landoop UI at the following URL, if you've set BROWSECONFIGS to 1 in the environment parameters:
http://127.0.0.1:3030/config/broker/server.properties
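For a quick check from the host instead of the UI (the container name landoop is taken from the compose file in the question):

# confirm the overrides landed in the generated broker config
docker exec landoop grep 'log.retention' /run/broker/server.properties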
I am a novice with Docker and wanted to use Sensu for monitoring containers. I have set up a Sensu server and a Sensu client (where my Docker containers are running) using the material below:
http://devopscube.com/monitor-docker-containers-guide/
I get the Sensu client information in the Uchiwa dashboard when running the command below:
docker run -d --name sensu-client --privileged \
-v $PWD/load-docker-metrics.sh:/etc/sensu/plugins/load-docker-metrics.sh \
-v /var/run/docker.sock:/var/run/docker.sock \
usman/sensu-client SENSU_SERVER_IP RABIT_MQ_USER RABIT_MQ_PASSWORD CLIENT_NAME CLIENT_IP
However, when I try to fire up a new container from the same host machine, I do not get the client's information in the Uchiwa dashboard.
It would be great if anyone who has used Sensu with Docker to monitor containers could offer guidance.
Thanks for your time.
Please see the logs of the sensu-client below:
[ec2-user@ip-172-31-0-89 sensu-client]$ sudo su
[root@ip-172-31-0-89 sensu-client]# docker logs sensu-client
/usr/lib/python2.6/site-packages/supervisor-3.1.3-py2.6.egg/supervisor/options.py:296: UserWarning: Supervisord is running as root and it is searching for its configuration file in default locations (
including its current working directory); you probably want to specify a "-c" argument specifying an absolute path to a configuration file for improved security.
'Supervisord is running as root and it is searching '
2017-01-09 04:11:47,210 CRIT Supervisor running as root (no user in config file)
2017-01-09 04:11:47,212 INFO supervisord started with pid 12
2017-01-09 04:11:48,214 INFO spawned: 'sensu-client' with pid 15
2017-01-09 04:11:49,524 INFO success: sensu-client entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2017-01-09 04:11:49,530 INFO exited: sensu-client (exit status 0; expected)
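As an aside, the UserWarning at the top of these logs is about how supervisord was launched, not about the missing client; it can be silenced by passing an explicit config path. The path below is an assumption about the image layout:

supervisord -n -c /etc/sensu/supervisord.conf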
I'm trying to get my head around something that's been working on CentOS+Vagrant but not on our provider's RHEL (Red Hat Enterprise Linux Server release 6.5 (Santiago)). A sudo service docker restart gives this:
Stopping docker: [ OK ]
Starting cgconfig service: Error: cannot mount cpuset to /cgroup/cpuset: Device or resource busy
/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf [FAILED]
Starting docker: [ OK ]
The service starts okay enough, but images cannot run; a mounting-failed error is shown when I try, and the startup log also gives a warning or two. Regarding the kernel warning: CentOS gives the same one and has no problems, as EPEL should resolve this:
WARNING: You are running linux kernel version 2.6.32-431.17.1.el6.x86_64, which might be unstable running docker. Please upgrade your kernel to 3.8.0.
2014/08/07 08:58:29 docker daemon: 1.1.2 d84a070; execdriver: native; graphdriver:
[1233d0af] +job serveapi(unix:///var/run/docker.sock)
[1233d0af] +job initserver()
[1233d0af.initserver()] Creating server
2014/08/07 08:58:29 Listening for HTTP on unix (/var/run/docker.sock)
[1233d0af] +job init_networkdriver()
[1233d0af] -job init_networkdriver() = OK (0)
2014/08/07 08:58:29 WARNING: mountpoint not found
Anyone had any success overcoming this problem or should I throw in the towel and wait for the provider to update to RHEL 7?
I have the same issue.
(1) Check cgconfig status:
# /etc/init.d/cgconfig status
If it has stopped, restart it:
# /etc/init.d/cgconfig restart
and check that cgconfig is running.
(2) Check whether cgconfig is on:
# chkconfig --list cgconfig
cgconfig 0:off 1:off 2:off 3:off 4:off 5:off 6:off
If cgconfig is off, turn it on.
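The command for that (not shown above) would be:
# chkconfig cgconfig on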
(3) If it still does not work, some cgroup modules may be missing: in the kernel .config file (via make menuconfig), add those modules to the kernel, then recompile and reboot.
After that, it should be OK.
I ended up asking the same question at Google Groups and, with some help, finally found a solution. What worked for me was this:
umount cgroup
sudo service cgconfig start
The project of making Docker work was put on hold all the same; later there was a problem with network connections for the containers. That took too much time to solve, and we had to give up.
So I spent the whole day trying to get Docker to work on my VPS and ran into this same error. Basically, what it came down to was that OpenVZ didn't support Docker containers until a couple of months ago; specifically, this RHEL update:
https://openvz.org/Download/kernel/rhel6/042stab105.14
Assuming this is your problem, or some variation of it, the burden of solving it is on your host. They will need to follow these steps:
https://openvz.org/Docker_inside_CT
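If you can run commands on the host, you can check whether its kernel is already new enough (042stab105.14 or later, per the link above), since OpenVZ kernels carry the stab revision in the kernel release string:

uname -r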
In my case
/etc/rc.d/rc.cgconfig start
was generating
Starting cgconfig service: Error: cannot mount cpu,cpuacct,memory to /cgroup/cpu_and_mem: Device or resource busy
/usr/sbin/cgconfigparser; error loading /etc/cgconfig.conf: Cgroup mounting failed
Failed to parse /etc/cgconfig.conf
I had to use:
/etc/rc.d/rc.cgconfig restart
and it automagically unmounted and remounted the groups:
Stopping cgconfig service: Starting cgconfig service:
It seems like the cgconfig service is not running, so check it!
# /etc/init.d/cgconfig status
# mkdir -p /cgroup/cpuacct /cgroup/memory /cgroup/devices /cgroup/freezer /cgroup/net_cls /cgroup/blkio
# cat /etc/cgconfig.conf | tail | grep "=" | awk '{print "mount -t cgroup -o",$1,$1,$NF}' > cgroup_mount.sh
# sh ./cgroup_mount.sh
# /etc/init.d/cgconfig restart
# /etc/init.d/docker restart
This situation occurs when the kernel is booted with cgroup_disable=memory and /etc/cgconfig.conf contains memory = /cgroup/memory;
This causes only /cgroup/cpuset to be mounted instead of the full set.
Solution: either remove cgroup_disable=memory from your kernel boot options or comment out memory = /cgroup/memory; from cgconfig.conf.
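A quick way to check whether this applies to your machine is to inspect the kernel command line:

# prints any cgroup_disable=... option the kernel was booted with
grep -o 'cgroup_disable=[^ ]*' /proc/cmdline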
The cgconfig service startup uses mount and umount, which requires an extra privilege bump from Docker.
See the --privileged=true flag here for more info.
I was able to overcome this issue by starting my container with:
docker run -it --privileged=true my-image
Tested on CentOS 6 and CentOS 6.5.