Testing with Jenkins & Docker, I don't completely understand what is happening with my containers and images.
First, I created my first Docker container from jenkins/jenkins:lts:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
And I received the typical message from the Jenkins installation with the initial password:
INFO:
*************************************************************
*************************************************************
*************************************************************
Jenkins initial setup is required. An admin user has been created and a password generated.
Please use the following password to proceed to installation:
xxxxxxxxxxxxxxxxxxxxxxxxxxxx
This may also be found at: /var/jenkins_home/secrets/initialAdminPassword
*************************************************************
*************************************************************
*************************************************************
I completed the installation process and played with Jenkins for a while. Everything was OK.
My confusion came when repeating the process from the beginning. I deleted my container and created the same container a second time.
docker container stop myjenkins <- Stop container
docker container rm myjenkins <- Remove myjenkins container
docker image rm 95bf220e341a <- Remove jenkins/jenkins image
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
But this time, Jenkins doesn't show me a new initial password:
Jun 18, 2019 7:43:17 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@83bb567: defining beans [authenticationManager]; root of factory hierarchy
<-- I was expecting the message just here -->
Jun 18, 2019 7:43:17 PM org.springframework.context.support.AbstractApplicationContext prepareRefresh
INFO: Refreshing org.springframework.web.context.support.StaticWebApplicationContext@5bfdcaf3: display name [Root WebApplicationContext]; startup date [Tue Jun 18 19:43:17 UTC 2019]; root of context hierarchy
Jun 18, 2019 7:43:17 PM org.springframework.context.support.AbstractApplicationContext obtainFreshBeanFactory
INFO: Bean factory for application context [org.springframework.web.context.support.StaticWebApplicationContext@5bfdcaf3]: org.springframework.beans.factory.support.DefaultListableBeanFactory@1f98db0a
Jun 18, 2019 7:43:17 PM org.springframework.beans.factory.support.DefaultListableBeanFactory preInstantiateSingletons
INFO: Pre-instantiating singletons in org.springframework.beans.factory.support.DefaultListableBeanFactory@1f98db0a: defining beans [filter,legacy]; root of factory hierarchy
Jun 18, 2019 7:43:18 PM jenkins.InitReactorRunner$1 onAttained
INFO: Completed initialization
Jun 18, 2019 7:43:19 PM hudson.WebAppMain$3 run
INFO: Jenkins is fully up and running
I tried docker system prune -a, but nothing changed. Every time I recreated my container, I couldn't get the initial admin password message again.
What's happening? If I delete a container, how do Docker/Jenkins know that this is not the first time I've tried to install Jenkins?
-v jenkins_home:/var/jenkins_home
You're mapping /var/jenkins_home to a named volume, so the Docker image itself stays (as intended) immutable. Recreating the container makes no difference: if you don't remove that volume, Jenkins' config data remains intact the next time the image is started.
Also, this means your bootstrap password should be available on the Docker host inside the jenkins_home volume, at:
secrets/initialAdminPassword under the volume's mount point (typically /var/lib/docker/volumes/jenkins_home/_data/secrets/initialAdminPassword on a Linux host)
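If you want to reproduce the first-run experience, a minimal sketch (assuming the named volume really is called jenkins_home, as in the run command above) is to either read the password still sitting in the volume, or remove the volume together with the container:
# Read the password kept in the existing volume
docker exec myjenkins cat /var/jenkins_home/secrets/initialAdminPassword

# Or start over completely: remove the container AND the volume, then run again
docker container rm -f myjenkins
docker volume rm jenkins_home
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts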
Related
On my AWS Ubuntu machine, I started a Jenkins build and the machine somehow hung, so I forcefully stopped the instance and then started it again. After that, Jenkins is not starting up. I'm getting the output below.
jenkins.service - LSB: Start Jenkins at boot time
Loaded: loaded (/etc/init.d/jenkins; generated)
Active: failed (Result: exit-code) since Tue 2020-12-01 15:56:48 UTC; 4min 6s ago
Docs: man:systemd-sysv-generator(8)
Process: 27605 ExecStart=/etc/init.d/jenkins start (code=exited, status=7)
Dec 01 15:56:47 ip-172-31-7-133 systemd[1]: Starting LSB: Start Jenkins at boot time...
Dec 01 15:56:47 ip-172-31-7-133 jenkins[27605]: Correct java version found
Dec 01 15:56:47 ip-172-31-7-133 jenkins[27605]: * Starting Jenkins Automation Server jenkins
Dec 01 15:56:48 ip-172-31-7-133 jenkins[27605]: ...fail!
Dec 01 15:56:48 ip-172-31-7-133 systemd[1]: jenkins.service: Control process exited, code=exited status=7
Dec 01 15:56:48 ip-172-31-7-133 systemd[1]: jenkins.service: Failed with result 'exit-code'.
Dec 01 15:56:48 ip-172-31-7-133 systemd[1]: Failed to start LSB: Start Jenkins at boot time.
I imagine it can't start because there is already a Jenkins process running which is consuming that same port. You should be able to verify that by looking at the Jenkins logfile.
You can do:
ps auxw | grep jenkins
If it returns a process, then you can kill -9 PID.
For example:
[user@server ~]$ ps auxw | grep jenkins
jenkins 4656 0.3 33.2 5070780 2716228 ? Ssl Nov05 144:15 /etc/alternatives/java -Djava.awt.headless=true -DJENKINS_HOME=/var/lib/jenkins -jar /usr/lib/jenkins/jenkins.war --logfile=/var/log/jenkins/jenkins.log --webroot=/var/cache/jenkins/war --httpPort=8080 --debug=5 --handlerCountMax=100 --handlerCountMaxIdle=20
user 14665 0.0 0.0 119416 908 pts/0 S+ 23:08 0:00 grep --color=auto jenkins
[user@server ~]$ kill -9 4656
Then try and start your Jenkins instance. Depending on how your Jenkins server is setup you would most likely need to do the above via sudo.
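If you'd rather confirm the port conflict directly before killing anything, a quick check (a hedged suggestion; 8080 is simply the HTTP port shown in the process listing above) is:
sudo ss -ltnp | grep ':8080'    # or: sudo lsof -i :8080
# If a java/jenkins process shows up as the listener, that is the process holding the port.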
I don't know if this applies to you, but it was my problem. It is similar to Terry Sposato's answer.
We had a scenario where our Ubuntu node was running a standalone instance of Jenkins as well as an SSH-started child instance managed from another node.
We saw these sorts of service start errors if the SSH-started child instance was running.
I resolved this by:
1. Accessing the parent Jenkins instance and selecting disconnect for the Ubuntu node
2. Immediately invoking sudo /etc/init.d/jenkins start on the Ubuntu node to start its standalone instance
After that, the Jenkins parent in step 1 would eventually reconnect to the Ubuntu node.
We have not seen this sort of behavior with our CentOS node, which also runs a standalone and a child instance of Jenkins. I suspect it's a defect in Ubuntu's LSB startup scripts; my debugging showed that the problem happened when /etc/init.d/jenkins sourced /lib/lsb/init-functions.
UPDATE 3 (Solution):
I installed the latest Windows updates on my host and specified the exact servercore image to match my updated Windows Server version:
mcr.microsoft.com/windows/servercore:10.0.17763.973
When running the container everything worked as expected.
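For reference, a minimal sketch of the working setup, assuming nothing changed except pinning the base image tag (the rest mirrors the Dockerfile from the original question below):
FROM mcr.microsoft.com/windows/servercore:10.0.17763.973
WORKDIR C:/nginx
COPY /. ./
CMD ["nginx"]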
Original question:
I cannot figure out why nginx doesn't start in my container running on Windows Server 2019.
Nothing is written to the nginx error.log and inspecting the System Event using this answer doesn't provide any information regarding nginx.
When I run nginx directly on the server (i.e. without the container) it starts up fine.
Here are the contents of the dockerfile:
FROM mcr.microsoft.com/windows/servercore:ltsc2019
WORKDIR C:/nginx
COPY /. ./
CMD ["nginx"]
I run the container using the following command:
docker run -d --rm -p 80:80 --name nginx `
-v C:/Data/_config/nginx/conf:C:/nginx/conf `
-v C:/Data/_config/nginx/temp:C:/nginx/temp `
-v C:/Data/_config/nginx/logs:C:/nginx/logs `
nginx-2019
If I exec into the running container I can see that the directory structure is as expected:
Microsoft Windows [Version 10.0.17763.1040]
(c) 2018 Microsoft Corporation. All rights reserved.
C:\nginx>dir
Volume in drive C has no label.
Volume Serial Number is 72BD-907D
Directory of C:\nginx
02/27/2020 06:05 AM <DIR> .
02/27/2020 06:05 AM <DIR> ..
02/27/2020 06:05 AM <DIR> conf
02/27/2020 05:11 AM <DIR> contrib
02/27/2020 05:11 AM <DIR> docs
02/27/2020 05:11 AM <DIR> html
02/27/2020 05:55 AM <DIR> logs
02/27/2020 05:14 AM <DIR> conf
01/21/2020 03:30 PM 3,712,512 nginx.exe
01/21/2020 04:41 PM <DIR> temp
1 File(s) 3,716,608 bytes
9 Dir(s) 21,206,409,216 bytes free
UPDATE 1:
As part of my troubleshooting process I started up a clean VM on Azure and after installing Docker and recreating the Docker image using the exact same files, it starts up as expected.
This means that the issue is specific to my server and not a general issue.
UPDATE 2:
By removing the --rm from the run command I find the following info by running docker ps -a after the container exits:
Exited (3221225785) 4 seconds ago
I can't find any info on what the exit code means.
I was having the same issue, but for me it wasn't Docker or nginx, it was the image.
The image mcr.microsoft.com/windows/servercore:ltsc2019 was updated on 2/11/2020, and both container and host must have the same update (KB4532691, I think) or some processes may fail silently.
I updated my host, and all is well.
See microsoft-windows-servercore and you-might-encounter-issues-when-using-windows-server-containers-with-t for more info.
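To check whether the host and container builds actually match, a quick comparison (assuming the container from the question is running and named nginx) is:
cmd /c ver                      # run on the host
docker exec nginx cmd /c ver    # run inside the container
# Both should report the same 10.0.17763.xxxx revision; a mismatch points to the KB-level skew described above.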
I'm trying to see docker logs with the --details flag.
I read the docs, but I see no difference with or without the flag: https://docs.docker.com/engine/reference/commandline/logs/
For example, this command echoes the date every second.
$ docker run --name test -d busybox sh -c "while true; do $(echo date); sleep 1; done"
e9d836000532
This command shows the logs:
$ docker logs e9d836000532
Sun Jan 26 16:01:55 UTC 2020
...
This command adds nothing more than a "space on the left":
$ docker logs --details e9d836000532
...
Sun Jan 26 16:01:55 UTC 2020
From docker documentation:
The docker logs --details command will add on extra attributes, such
as environment variables and labels, provided to --log-opt when
creating the container.
Currently you just get an extra space on the left when you use docker logs --details because you probably did not pass any --log-opt options when you created your container.
For your interest, --log-opt passes extra options to the logging driver, either Docker's default json-file driver or another driver selected with --log-driver.
Try out this one:
https://docs.docker.com/config/containers/logging/fluentd/
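As a minimal sketch of --details actually showing something (assuming the default json-file logging driver, whose labels option is used here):
docker run -d --name test-details \
  --label app=demo \
  --log-opt labels=app \
  busybox sh -c "while true; do date; sleep 1; done"

docker logs --details test-details
# Each line should now carry the extra attribute as a prefix, e.g.:
# app=demo Sun Jan 26 16:01:55 UTC 2020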
I have 3 CentOS VMs. I have installed Zookeeper, Marathon, and Mesos on the master node, and only Mesos on the other 2 VMs. The master node has no mesos-slave running on it. I am trying to run Docker containers, so I specified "docker,mesos" in the containerizers file. One of the mesos-agents starts fine with this configuration, and I have been able to deploy a container to that slave. However, the second mesos-agent simply fails with this configuration (it works if I take out the containerizers file, but then it doesn't run containers). Here are some of the logs and information that has come up:
Here are some "messages" in the log directory:
Apr 26 16:09:12 centos-minion-3 systemd: Started Mesos Slave.
Apr 26 16:09:12 centos-minion-3 systemd: Starting Mesos Slave...
WARNING: Logging before InitGoogleLogging() is written to STDERR
[main.cpp:243] Build: 2017-04-12 16:39:09 by centos
[main.cpp:244] Version: 1.2.0
[main.cpp:247] Git tag: 1.2.0
[main.cpp:251] Git SHA: de306b5786de3c221bae1457c6f2ccaeb38eef9f
[logging.cpp:194] INFO level logging started!
[systemd.cpp:238] systemd version `219` detected
[main.cpp:342] Inializing systemd state
[systemd.cpp:326] Started systemd slice `mesos_executors.slice`
[containerizer.cpp:220] Using isolation: posix/cpu,posix/mem,filesystem/posix,network/cni
[linux_launcher.cpp:150] Using /sys/fs/cgroup/freezer as the freezer hierarchy for the Linux launcher
[provisioner.cpp:249] Using default backend 'copy'
[slave.cpp:211] Mesos agent started on (1)@172.22.150.87:5051
[slave.cpp:212] Flags at startup: --appc_simple_discovery_uri_prefix="http://" --appc_store_dir="/tmp/mesos/store/appc" --authenticate_http_readonly="false" --authenticate_http_readwrite="false" --authenticatee="crammd5" --authentication_backoff_factor="1secs" --authorizer="local" --cgroups_cpu_enable_pids_and_tids_count="false" --cgroups_enable_cfs="false" --cgroups_hierarchy="/sys/fs/cgroup" --cgroups_limit_swap="false" --cgroups_root="mesos" --container_disk_watch_interval="15secs" --containerizers="docker,mesos" --default_role="*" --disk_watch_interval="1mins" --docker="docker" --docker_kill_orphans="true" --docker_registry="https://registry-1.docker.io" --docker_remove_delay="6hrs" --docker_socket="/var/run/docker.sock" --docker_stop_timeout="0ns" --docker_store_dir="/tmp/mesos/store/docker" --docker_volume_checkpoint_dir="/var/run/mesos/isolators/docker/volume" --enforce_container_disk_quota="false" --executor_registration_timeout="1mins" --executor_shutdown_grace_period="5secs" --fetcher_cache_dir="/tmp/mesos/fetch" --fetcher_cache_size="2GB" --frameworks_home="" --gc_delay="1weeks" --gc_disk_headroom="0.1" --hadoop_home="" --help="false" --hostname_lookup="true" --http_authenticators="basic" --http_command_executor="false" --http_heartbeat_interval="30secs" --initialize_driver_logging="true" --isolation="posix/cpu,posix/mem" --launcher="linux" --launcher_dir="/usr/libexec/mesos" --log_dir="/var/log/mesos" --logbufsecs="0" --logging_level="INFO" --max_completed_executors_per_framework="150" --oversubscribed_resources_interval="15secs" --perf_duration="10secs" --perf_interval="1mins" --qos_correction_interval_min="0ns" --quiet="false" --recover="reconnect" --recovery_timeout="15mins" --registration_backoff_factor="1secs" --revocable_cpu_low_priority="true" --runtime_dir="/var/run/mesos" --sandbox_directory="/mnt/mesos/sandbox" --strict="true" --switch_user="true" --systemd_enable_support="true" --systemd_runtime_directory="/run/systemd/system" --version="false" --work_dir="/var/lib/mesos"
[slave.cpp:541] Agent resources: cpus(*):1; mem(*):919; disk(*):2043; ports(*):[31000-32000]
[slave.cpp:549] Agent attributes: [ ]
[slave.cpp:554] Agent hostname: node3
[status_update_manager.cpp:177] Pausing sending status updates
[state.cpp:62] Recovering state from '/var/lib/mesos/meta'
[state.cpp:706] No committed checkpointed resources found at '/var/lib/mesos/meta/resources/resources.info'
[status_update_manager.cpp:203] Recovering status update manager
[docker.cpp:868] Recovering Docker containers
[containerizer.cpp:599] Recovering containerizer
[provisioner.cpp:410] Provisioner recovery complete
[group.cpp:340] Group process (zookeeper-group(1)@172.22.150.87:5051) connected to ZooKeeper
[group.cpp:830] Syncing group operations: queue size (joins, cancels, datas) = (0, 0, 0)
[group.cpp:418] Trying to create path '/mesos' in ZooKeeper
[detector.cpp:152] Detected a new leader: (id='15')
[group.cpp:699] Trying to get '/mesos/json.info_0000000015' in ZooKeeper
[zookeeper.cpp:259] A new leading master (UPID=master@172.22.150.88:5050) is detected
Failed to perform recovery: Collect failed: Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1; stderr='Cannot connect to the Docker daemon. Is the docker daemon running on this host?'
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/meta/slaves/latest
This ensures agent doesn't recover old live executors.
Step 2: Restart the agent.
Apr 26 16:09:13 centos-minion-3 systemd: mesos-slave.service: main process exited, code=exited, status=1/FAILURE
Apr 26 16:09:13 centos-minion-3 systemd: Unit mesos-slave.service entered failed state.
Apr 26 16:09:13 centos-minion-3 systemd: mesos-slave.service failed.
Logs from docker:
$ sudo systemctl status docker
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled; vendor preset: disabled)
  Drop-In: /usr/lib/systemd/system/docker.service.d
           └─flannel.conf
   Active: inactive (dead) since Tue 2017-04-25 18:00:03 CDT; 24h ago
     Docs: docs.docker.com
 Main PID: 872 (code=exited, status=0/SUCCESS)
Apr 26 18:25:25 centos-minion-3 systemd[1]: Dependency failed for Docker Application Container Engine.
Apr 26 18:25:25 centos-minion-3 systemd[1]: Job docker.service/start failed with result 'dependency'
Logs from flannel:
[flanneld-start: network.go:102] failed to retrieve network config: client: etcd cluster is unavailable or misconfigured
You have the answer in your logs:
Failed to perform recovery: Collect failed:
Failed to run 'docker -H unix:///var/run/docker.sock ps -a': exited with status 1;
stderr='Cannot connect to the Docker daemon. Is the docker daemon running on this host?'
To remedy this do as follows:
Step 1: rm -f /var/lib/mesos/meta/slaves/latest
This ensures agent doesn't recover old live executors.
Step 2: Restart the agent.
Mesos keeps its state/metadata on local disk. When it's restarted, it tries to load this state. If the configuration changed and is not compatible with the previous state, it won't start.
Just bring Docker back to life by fixing the problems with flannel and etcd, and everything will be fine.
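A sketch of that recovery sequence, assuming the systemd unit names etcd, flanneld, docker and mesos-slave on these CentOS nodes (adjust the names to your setup):
# Bring the Docker daemon's dependencies back up first
sudo systemctl start etcd
sudo systemctl start flanneld
sudo systemctl start docker

# Confirm the daemon answers the same call the agent makes during recovery
sudo docker -H unix:///var/run/docker.sock ps -a

# Only if the agent still refuses to recover its old state, follow the hint from its own log:
sudo rm -f /var/lib/mesos/meta/slaves/latest
sudo systemctl restart mesos-slave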
Add the following flag when starting the agent:
--reconfiguration_policy=additive
more details here: http://mesos.apache.org/documentation/latest/agent-recovery/
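With the file-per-flag layout the question already uses (a containerizers file in the agent's config directory, commonly /etc/mesos-slave with the Mesosphere packages; that path is an assumption here), this would look roughly like:
echo 'additive' | sudo tee /etc/mesos-slave/reconfiguration_policy
sudo systemctl restart mesos-slave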
Trying an example from the "Docker in Action" book.
$docker run -d --name wp2 --link wpdb:mysql -p 80 --read-only wordpress:4
... should have triggered this error ...
Read-only file system: AH00023: Couldn't create the rewrite-map mutex
(file /var/lock/apache2/rewrite-map.1)
but it did not.
It triggered a file descriptor error ...
$docker logs wp2
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
Wed Dec 9 23:15:21 2015 (21): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:21 2015 (30): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:21 2015 (39): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:21 2015 (48): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (62): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (76): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (90): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (104): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (118): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (132): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (146): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (160): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:15:22 2015 (164): Fatal Error Unable to create lock file: Bad file descriptor (9)
The book suggested that we could make this work using volumes like so ...
$docker run -d --name wp3 --link wpdb:mysql -p 80 -v /var/lock/apache2/ -v /var/run/apache2/ --read-only wordpress:4
305e62e18d926a54ac7d1a0fb775f61efdb61486d9d9245933c3b18055bd9856
container "seems" to start ok
but it did not ...
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6bd4d90f594b mysql:5 "/entrypoint.sh mysql" 21 minutes ago Up 21 minutes 3306/tcp wpdb
the logs say this ...
$docker logs wp3
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
Wed Dec 9 23:31:57 2015 (22): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:57 2015 (31): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:57 2015 (40): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:57 2015 (49): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:57 2015 (63): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (77): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (91): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (105): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (119): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (133): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (147): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (161): Fatal Error Unable to create lock file: Bad file descriptor (9)
Wed Dec 9 23:31:58 2015 (165): Fatal Error Unable to create lock file: Bad file descriptor (9)
I'm not sure why this is happening.
The book I am reading says that this should work.
I was not able to find any examples of anyone else who was getting this particular error.
Removing the --read-only flag entirely does work.
$docker run -d --name wp3 --link wpdb:mysql -p 80 -v /var/lock/apache2/ -v /var/run/apache2/ wordpress:4
990874c73691c42d3c04aceb19f83a698f90a2f9ddcf1c07fb3cc9b9f1986723
$docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
990874c73691 wordpress:4 "/entrypoint.sh apach" 5 seconds ago Up 4 seconds 0.0.0.0:32773->80/tcp wp3
6bd4d90f594b mysql:5 "/entrypoint.sh mysql" About an hour ago Up About an hour 3306/tcp wpdb
This is similar to @allingeek's solution, but I couldn't get that to work without explicitly allowing access to /tmp:
docker run -d --name wp --read-only -v /run/lock/apache2/ -v /run/apache2/ -v /tmp/ --link wpdb:mysql -p 80 wordpress:4
Without -v /tmp/ I still got the "Bad file descriptor" log output.
The quick fix for this issue is to use an older version of the WordPress image. It seems that they changed their file locking mechanisms between 4.2 and 4.3. So, the commands become:
$docker run -d --name wp2 --link wpdb:mysql -p 80 --read-only wordpress:4.2
$docker run -d --name wp3 --link wpdb:mysql -p 80 -v /var/lock/apache2/ -v /var/run/apache2/ --read-only wordpress:4.2
Going deeper, it looks like the WordPress image changed the locations where it writes these files. To discover the differences I took the following steps:
Start up a wordpress:4 container without the read-only file system
Examine the file system changes to the container
Change the example to create the volumes at the new locations
This analysis ran as follows:
# Create the writable container
$ docker run -d --name wp10 --link wpdb:mysql -p 80 wordpress:4
# Examine the differences
$ docker diff wp10
C /run
C /run/apache2
A /run/apache2/apache2.pid
C /run/lock
C /run/lock/apache2
C /tmp
# Update the example for the new locations
$ docker run -d --name wp15 --read-only -v /run/lock/apache2/ -v /run/apache2/ --link wpdb:mysql -p 80 wordpress:4
As you can see, the image moved the PID and lock files from /var to /run, and added a write dependency on /tmp. Understanding this analysis is important if you are going to translate this tactic to a different example.
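A variant of the same tactic (an assumption on my part rather than something the book shows, and it requires a Docker release new enough to support --tmpfs) is to give those writable paths in-memory tmpfs mounts instead of anonymous volumes:
# Same idea as wp15, but the writable paths found via docker diff get tmpfs mounts
docker run -d --name wp16 --read-only \
  --tmpfs /run \
  --tmpfs /tmp \
  --link wpdb:mysql -p 80 wordpress:4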
Answer to the comment above: no, it does not fix the issue. WordPress comes up and stays up, but fails to connect to MySQL; in turn the agent fails and exits.
Sorry in advance, I don't have enough reputation to comment above.
tl;dr
First run this, to start from a known point:
docker rm -f $(docker ps -aq) # CAUTION: this removes ALL your containers!!!
Then run this script:
#!/bin/bash
# CLIENT_ID must be set below (or adapt the script to take it as a command-line argument)
docker-name() {
docker inspect --format '{{ .Name }}' "$@"
}
docker-ip() {
docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@"
}
CLIENT_ID=DUKE
DB_CID=$(docker run -d -e MYSQL_ROOT_PASSWORD=ch2demo mysql:5)
MAILER_CID=$(docker run -d dockerinaction/ch2_mailer)
if [ ! -n "$CLIENT_ID" ]; then
echo "Client ID not set"
exit 1
fi
# NOTE: using wordpress:4.2, not latest/4.3, because the read-only dirs changed
WP_CID=$(docker create \
--link $DB_CID:mysql \
--name wp_$CLIENT_ID \
-p 80 \
-v /var/lock/apache2/ \
-v /var/run/apache2/ \
-e WORDPRESS_DB_NAME=$CLIENT_ID \
--read-only wordpress:4.2)
docker start $WP_CID
AGENT_CID=$(docker create \
--name agent_$CLIENT_ID \
--link $WP_CID:insideweb \
--link $MAILER_CID:insidemailer \
dockerinaction/ch2_agent)
docker start $AGENT_CID
echo " Client ID: $CLIENT_ID"
echo " MySQL ID: $(docker-name $DB_CID) IP: $(docker-ip $DB_CID)"
echo " Mailer ID: $(docker-name $MAILER_CID) IP: $(docker-ip $MAILER_CID)"
echo " Wordpress ID: $(docker-name $WP_CID) IP: $(docker-ip $WP_CID)"
echo " Agent ID: $(docker-name $AGENT_CID) IP: $(docker-ip $AGENT_CID)"
output:
Client ID: DUKE
MySQL ID: /thirsty_sammet IP: 172.17.0.2
Mailer ID: /sleepy_snyder IP: 172.17.0.3
Wordpress ID: /wp_DUKE IP: 172.17.0.4
Agent ID: /agent_DUKE IP:
Run:
docker ps -a
IMAGE COMMAND STATUS PORTS NAMES
dockerinaction/ch2_agent "/watcher/watcher.sh" Exited (0) 2 minutes ago agent_DUKE
wordpress:4.2 "/entrypoint.sh apach" Up 2 minutes 0.0.0.0:32773->80/tcp wp_DUKE
dockerinaction/ch2_mailer "/mailer/mailer.sh" Up 2 minutes 33333/tcp sleepy_snyder
mysql:5 "/entrypoint.sh mysql" Up 2 minutes 3306/tcp thirsty_sammet
So WordPress comes up and stays up, but the agent fails and exits:
docker logs agent_DUKE
nc: can't connect to remote host (172.17.0.4): Connection refused
WordPress is failing to connect to MySQL but does not exit:
docker logs wp_DUKE
WordPress not found in /var/www/html - copying now...
Complete! WordPress has been successfully copied to /var/www/html
Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 10
MySQL Connection Error: (2002) Connection refused
Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 10
MySQL Connection Error: (2002) Connection refused
Warning: mysqli::mysqli(): (HY000/2002): Connection refused in - on line 10
MySQL Connection Error: (2002) Connection refused
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.4. Set the 'ServerName' directive globally to suppress this message
AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.17.0.4. Set the 'ServerName' directive globally to suppress this message
[Sun Jan 03 23:37:38.773659 2016] [mpm_prefork:notice] [pid 1] AH00163: Apache/2.4.10 (Debian) PHP/5.6.12 configured -- resuming normal operations
[Sun Jan 03 23:37:38.773802 2016] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
Environment: the Docker host is a VMware Ubuntu 14.04 x64 desktop VM on a Windows 7 x64 host.
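One minimal check worth adding (a hedged suggestion, not part of the original example: the database may simply not have finished initializing when WordPress and the agent first tried to connect):
# Watch the MySQL container's log for "ready for connections" (container name taken from the docker ps output above)
docker logs thirsty_sammet

# Confirm the --link alias resolves from inside the WordPress container
docker exec wp_DUKE getent hosts mysql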