I have a Spring Boot application that needs to publish messages to an ActiveMQ message broker. The Spring Boot application runs in one ECS Fargate container and ActiveMQ runs in another. The ActiveMQ container starts normally, but then shuts down on its own. I don't understand what is happening; it used to work properly before. Here is the Dockerfile for the ActiveMQ image.
FROM bellsoft/liberica-openjdk-alpine:13
ENV ACTIVEMQ_VERSION 5.16.3
ENV ACTIVEMQ apache-activemq-$ACTIVEMQ_VERSION
ENV ACTIVEMQ_HOME /opt/activemq
# Download and unpack ActiveMQ, then hand ownership to an unprivileged user
RUN apk add --update curl && \
    rm -rf /var/cache/apk/* && \
    mkdir -p /opt && \
    curl -s -S https://archive.apache.org/dist/activemq/$ACTIVEMQ_VERSION/$ACTIVEMQ-bin.tar.gz | tar -xvz -C /opt && \
    mv /opt/$ACTIVEMQ $ACTIVEMQ_HOME && \
    addgroup -S activemq && \
    adduser -S -H -G activemq -h $ACTIVEMQ_HOME activemq && \
    chown -R activemq:activemq $ACTIVEMQ_HOME && \
    chown -h activemq:activemq $ACTIVEMQ_HOME
# MQTT, AMQP, web console, STOMP, STOMP-over-WS, OpenWire
EXPOSE 1883 5672 8161 61613 61614 61616
USER activemq
WORKDIR $ACTIVEMQ_HOME
# Run the broker in the foreground, binding the web console to all interfaces
CMD ["/bin/sh", "-c", "bin/activemq console -Djetty.host=0.0.0.0"]
Here is the log trace:
container/active_mq/*** INFO | Apache ActiveMQ 5.16.3 (localhost, ID:ip-10-225-92-248.us-east-2.compute.internal-39929-1674489574787-0:1) started
container/active_mq/*** INFO | For help or more information please see: http://activemq.apache.org
container/active_mq/*** WARN | Store limit is 102400 mb (current store usage is 0 mb). The data directory: /opt/activemq/data/kahadb only has 17179 mb of usable space. - resetting to maximum available disk space: 17179 mb
container/active_mq/*** WARN | Temporary Store limit is 51200 mb (current store usage is 0 mb). The data directory: /opt/activemq/data only has 17179 mb of usable space. - resetting to maximum available disk space: 17179 mb
container/active_mq/*** INFO | ActiveMQ WebConsole available at http://0.0.0.0:8161/
container/active_mq/*** INFO | ActiveMQ Jolokia REST API available at http://0.0.0.0:8161/api/jolokia/
container/active_mq/41d16203b4434c03aee498e15c952dfc INFO | Apache ActiveMQ 5.16.3 (localhost, ID:ip-10-225-93-206.us-east-2.compute.internal-35717-1674398503454-0:1) is shutting down
container/active_mq/*** INFO | Connector openwire stopped
container/active_mq/*** INFO | Connector amqp stopped
container/active_mq/*** INFO | socketQueue interrupted - stopping
container/active_mq/*** INFO | Connector stomp stopped
container/active_mq/*** INFO | Could not accept connection during shutdown : null (null)
container/active_mq/*** INFO | Connector mqtt stopped
container/active_mq/*** INFO | Connector ws stopped
container/active_mq/*** INFO | PListStore:[/opt/activemq/data/localhost/tmp_storage] stopped
container/active_mq/*** INFO | Stopping async queue tasks
container/active_mq/*** INFO | Stopping async topic tasks
container/active_mq/*** INFO | Stopped KahaDB
container/active_mq/41d16203b4434c03aee498e15c952dfc INFO | Apache ActiveMQ 5.16.3 (localhost, ID:ip-10-225-93-206.us-east-2.compute.internal-35717-1674398503454-0:1) uptime 1 day 1 hour
container/active_mq/*** INFO | Apache ActiveMQ 5.16.3 (localhost, ID:***) is shutdown
container/active_mq/*** INFO | Closing org.apache.activemq.xbean.XBeanBrokerFactory$1#25641d39: startup date [Sun Jan 22 14:41:35 GMT 2023]; root of context hierarchy
Related
I have spent the entire day trying to figure out why my Scala app running on Windows is unable to make a successful connection to HBase running in a Docker container.
I can shell into the container and run the HBase shell, create tables, etc.
I can also port-forward to localhost:16010 and see the HBase UI. Some additional details of the setup follow.
Env:
Scala app: Windows (host)
HBase: Docker container
Docker container details
FROM openjdk:8
ENV HBASE_VERSION=2.4.12
RUN apt-get update
RUN apt-get install -y netcat
# Download and unpack HBase under /opt
RUN mkdir -p /var/hbase && \
    cd /opt && \
    wget -q https://archive.apache.org/dist/hbase/${HBASE_VERSION}/hbase-${HBASE_VERSION}-bin.tar.gz && \
    tar xzf hbase-${HBASE_VERSION}-bin.tar.gz
WORKDIR /opt/hbase-${HBASE_VERSION}
COPY hbase-site.xml conf
# Start HBase, then tail the logs to keep the container in the foreground
CMD ./bin/start-hbase.sh && tail -F logs/hbase*.log
In hbase-site.xml, hbase.cluster.distributed and hbase.unsafe.stream.capability.enforce are both set to false.
The HBase container is up, running, and accessible. I also confirmed ZooKeeper is reachable within the container as well as from the host using echo ruok | nc localhost 2181; echo
Running the container as follows:
docker run -it -p 2181:2181 -p 2888:2888 -p 3888:3888 -p 16010:16010 -p 16000:16000 -p 16020:16020 -p 16030:16030 -p 8080:8080 -h hbb hbase-1
Scala app
// Imports assumed for this snippet (standard HBase client API)
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.ConnectionFactory

val conf: Configuration = HBaseConfiguration.create()
conf.set("hbase.zookeeper.quorum", "hbb")
conf.set("hbase.zookeeper.property.clientPort", "2181")
conf.set("hbase.master", "hbb")
conf.set("hbase.cluster.distributed", "false")
// conf.set("hbase.client.pause", "1000")
// conf.set("hbase.client.retries.number", "2")
// conf.set("zookeeper.recovery.retry", "1")
val connection = ConnectionFactory.createConnection(conf)
Part of the stack trace:
1043 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0] DEBUG org.apache.zookeeper.ClientCnxn - zookeeper.disableAutoWatchReset is false
3947 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0-SendThread(hbb:2181)] INFO org.apache.zookeeper.ClientCnxn - Opening socket connection to server hbb:2181. Will not attempt to authenticate using SASL (unknown error)
3949 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0-SendThread(hbb:2181)] WARN org.apache.zookeeper.ClientCnxn - Session 0x0 for server hbb:2181, unexpected error, closing socket connection and attempting reconnect
java.nio.channels.UnresolvedAddressException
at sun.nio.ch.Net.checkAddress(Net.java:100)
at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:620)
at org.apache.zookeeper.ClientCnxnSocketNIO.registerAndConnect(ClientCnxnSocketNIO.java:277)
at org.apache.zookeeper.ClientCnxnSocketNIO.connect(ClientCnxnSocketNIO.java:287)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1021)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1064)
3953 [ReadOnlyZKClient-hbb:2181#0x30b6ffe0-SendThread(hbb:2181)] DEBUG org.apache.zookeeper.ClientCnxnSocketNIO - Ignoring exception during shutdown input
java.net.SocketException: Socket is not connected
at sun.nio.ch.Net.translateToSocketException(Net.java:122)
at sun.nio.ch.Net.translateException(Net.java:156)
at sun.nio.ch.Net.translateException(Net.java:162)
at sun.nio.ch.SocketAdaptor.shutdownInput(SocketAdaptor.java:401)
at org.apache.zookeeper.ClientCnxnSocketNIO.cleanup(ClientCnxnSocketNIO.java:200)
at org.apache.zookeeper.ClientCnxn$SendThread.cleanup(ClientCnxn.java:1250)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1174)
I have tried changing the hbase.zookeeper.quorum and hbase.master properties on the client side to localhost / 127.0.0.1, as well as editing the hosts file with the container ID. No luck yet.
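For what it's worth, the java.nio.channels.UnresolvedAddressException in the stack trace above suggests the hostname hbb cannot be resolved from the Windows host at all. The kind of hosts-file mapping I was attempting looks like this (a sketch; the path is the standard Windows location, and all the container's ports are published to localhost):
# C:\Windows\System32\drivers\etc\hosts
127.0.0.1 hbb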
Would greatly appreciate some guidance on this :)
I'm using supervisord to run multiple services in a container. I want an LDAP service for my web application, so I installed OpenDJ and set it up with the following:
Dockerfile
RUN dpkg -i $APP_HOME/packages/opendj_3.0.0-1_all.deb && \
/opt/opendj/setup \
--cli \
--backendType je \
--baseDN dc=test,dc=net \
--ldapPort 389 \
--adminConnectorPort 4444 \
--rootUserDN cn=Directory\ Manager \
--rootUserPassword 123456 \
--no-prompt \
--noPropertiesFile \
--acceptLicense \
--doNotStart
supervisord.conf
[program:ldap]
command=/opt/opendj/bin/start-ds
priority=1
When running my customized image, I got the following exit message for ldap:
2020-05-25 06:46:03,486 INFO exited: ldap (exit status 0; expected)
I logged into the container and checked the process status with supervisorctl status all and ps -aux respectively:
$supervisorctl status all
ldap EXITED May 25 06:46 AM
$ps -aux
root 97 3.4 5.9 3489048 240248 pts/0 Sl 06:15 0:08 /usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -server -Dorg.opends.server.scriptName=start-ds org.opends.server.core.DirectoryServer --configClass org.opends.server.extensions.ConfigFileHandler
I found that the ldap program is brought up by the start-ds shell script; that shell process exits, while the actual LDAP server it spawns keeps running outside supervisor's control.
When supervisor stops its subprocesses, the LDAP server can't be stopped gracefully.
So my question is: how do I configure supervisor so that it controls the LDAP server process started by start-ds?
There is a --nodetach option that you should use in such cases: https://github.com/ForgeRock/opendj-community-edition/blob/master/resource/bin/start-ds#L60
Reference Doc says:
Options
The start-ds command takes the following options:
-N | --nodetach
Do not detach from the terminal and continue running in the foreground. This option cannot be used with the -t, --timeout option.
Default: false
The relevant statement in the start-ds script is:
exec "${OPENDJ_JAVA_BIN}" ${OPENDJ_JAVA_ARGS} ${SCRIPT_NAME_ARG} \
    org.opends.server.core.DirectoryServer \
    --configClass org.opends.server.extensions.ConfigFileHandler \
    --configFile "${CONFIG_FILE}" "${@}"
The start-ds script passes the option through, so running /opt/opendj/bin/start-ds -N executes:
/usr/lib/jvm/java-8-openjdk-amd64/jre/bin/java -server -Dorg.opends.server.scriptName=start-ds org.opends.server.core.DirectoryServer --configClass org.opends.server.extensions.ConfigFileHandler --configFile /opt/opendj/config/config.ldif -N
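Putting that together, pointing the supervisor program entry at start-ds with --nodetach keeps the server in the foreground under supervisor's control (a minimal sketch based on the config above):
[program:ldap]
command=/opt/opendj/bin/start-ds --nodetach
priority=1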
I am trying not to use root inside my Docker container, but Gunicorn is not starting.
FROM python:2.7
RUN apt update && \
apt install -y python-pip gcc python-dev libpq-dev && \
pip install --upgrade pip && \
pip install gunicorn && \
pip install eventlet && \
pip install psycopg2
RUN addgroup [username_group] && \
useradd -rm -d /home/[home] -s /bin/bash -g [username_group] -G sudo -u 1000 [username] # uid taken from host system
# USER [username]  # if this line is uncommented, it doesn't work
COPY ./web2py /home/[home]/web2py
WORKDIR /home/[home]/web2py
EXPOSE 80 443
CMD gunicorn -b 0.0.0.0:80 -w 3 wsgihandler
This is the output:
[container] | [2019-01-28 20:21:58 +0000] [6] [INFO] Starting gunicorn 19.9.0
[container] | [2019-01-28 20:21:58 +0000] [6] [ERROR] Retrying in 1 second.
[container] | [2019-01-28 20:21:59 +0000] [6] [ERROR] Retrying in 1 second.
[container] | [2019-01-28 20:22:00 +0000] [6] [ERROR] Retrying in 1 second.
[container] | [2019-01-28 20:22:01 +0000] [6] [ERROR] Retrying in 1 second.
[container] | [2019-01-28 20:22:02 +0000] [6] [ERROR] Retrying in 1 second.
[container] | [2019-01-28 20:22:03 +0000] [6] [ERROR] Can't connect to ('0.0.0.0', 80)
Using the same UID as on the host solved the permission issues I was having with volumes. But since I can't use sudo in a Dockerfile, I am not sure how to get the server running without leaving the container running as root.
I was receiving this error in a Kubernetes cluster running on EKS after upgrading from 1.23 to 1.24.
The issue was migration between dockershim and containerd, where dockershim allowed binding on port 80, and containerd did not, unless an additional flag was specified (--sysctl net.ipv4.ip_unprivileged_port_start=0). (Similar to this issue.)
According to the containerd team (here), the bug is that dockershim shouldn't have been allowing binding to that port without the flag in the first place.
In my case, I resolved the issue by changing the port to a non-80 port, and let our Ingress controller handle routing to the new port. I would guess you could set the mentioned flag (and not update the port used) and resolve the issue that way.
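In plain Docker the same principle typically applies: an unprivileged user cannot bind ports below 1024 by default, so the simplest fix is to serve on a high port and publish it as 80 from the host. A sketch against the Dockerfile above (the port number is my choice):
USER [username]
EXPOSE 8000
CMD gunicorn -b 0.0.0.0:8000 -w 3 wsgihandler
Then run it with docker run -p 80:8000 ... so clients still reach the app on port 80.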
I have created a Dockerfile like the one below; it just adds an application to Tomcat's webapps directory.
FROM tomcat:9-alpine
ADD ./Spring3HibernateApp.war /usr/local/tomcat/webapps/
VOLUME /usr/local/tomcat/webapps
EXPOSE 8080
CMD ["catalina.sh","run"]
I built a new image and named it test-app:0.1:
docker build -t test-app:0.1 .
Then I spin up a new container as below, mounting a host directory over the container's data directory so that I can make changes and list the webapps content of the container.
docker run -d --name=tomcat-01 -p 80:8080 --net=bridge -v /vol2/docker/sampleapp/tomcat-webapps:/usr/local/tomcat/webapps test-app:0.1
My problem: when I look at /vol2/docker/sampleapp/tomcat-webapps, I find it blank. I expected it to show the contents of the container's /usr/local/tomcat/webapps. Instead of persisting the container's data and exposing it in the host directory, the mount effectively wipes it out.
Am I missing anything?
If I simply remove -v from the above command, it works fine and I can see the contents inside the default Docker volume location; I just don't get the same result when I add -v.
Is my understanding wrong?
I am referring to "Mount a host directory as a data volume" at https://docs.docker.com/engine/tutorials/dockervolumes/
The same command works fine when I use the mysql image and spin up a new container from it:
docker run -d --name=mysql-01 -p 3306:3306 -e MYSQL_ROOT_PASSWORD=password -e MYSQL_DATABASE=employeedb --net=bridge -v /vol2/docker/sampleapp/mysql-data:/var/lib/mysql mysql
This time, I am able to see the container's data inside /vol2/docker/sampleapp/mysql-data.
When you mount a volume you overwrite the existing directory inside the container. If you are looking to deploy .war or .jar files from outside the container, you would want to do the following:
FROM tomcat:9-alpine
VOLUME /usr/local/tomcat/webapps
EXPOSE 8080
CMD ["catalina.sh","run"]
Build it: docker build -t test-app:0.1 .
Then run your container like so: docker run -d --name=tomcat-01 -p 80:8080 -v /vol2/docker/sampleapp/tomcat-webapps:/usr/local/tomcat/webapps test-app:0.1, placing the Spring3HibernateApp.war in the /vol2/docker/sampleapp/tomcat-webapps directory.
Once you do this you can run docker logs -f tomcat-01 and watch as Tomcat deploys the app, like my sample.war below.
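Concretely, the deploy step is just a copy into the mounted host directory (paths taken from the question):
cp Spring3HibernateApp.war /vol2/docker/sampleapp/tomcat-webapps/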
Using CATALINA_BASE: /usr/local/tomcat
Using CATALINA_HOME: /usr/local/tomcat
Using CATALINA_TMPDIR: /usr/local/tomcat/temp
Using JRE_HOME: /usr/lib/jvm/java-1.8-openjdk/jre
Using CLASSPATH: /usr/local/tomcat/bin/bootstrap.jar:/usr/local/tomcat/bin/tomcat-juli.jar
25-Feb-2017 21:58:23.710 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server version: Apache Tomcat/9.0.0.M17
25-Feb-2017 21:58:23.717 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server built: Jan 10 2017 20:59:20 UTC
25-Feb-2017 21:58:23.717 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Server number: 9.0.0.0
25-Feb-2017 21:58:23.717 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Name: Linux
25-Feb-2017 21:58:23.717 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log OS Version: 3.13.0-93-generic
25-Feb-2017 21:58:23.717 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Architecture: amd64
25-Feb-2017 21:58:23.717 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Java Home: /usr/lib/jvm/java-1.8-openjdk/jre
25-Feb-2017 21:58:23.718 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Version: 1.8.0_111-internal-alpine-r0-b14
25-Feb-2017 21:58:23.718 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log JVM Vendor: Oracle Corporation
25-Feb-2017 21:58:23.718 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_BASE: /usr/local/tomcat
25-Feb-2017 21:58:23.718 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log CATALINA_HOME: /usr/local/tomcat
25-Feb-2017 21:58:23.719 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.config.file=/usr/local/tomcat/conf/logging.properties
25-Feb-2017 21:58:23.719 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.util.logging.manager=org.apache.juli.ClassLoaderLogManager
25-Feb-2017 21:58:23.719 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djdk.tls.ephemeralDHKeySize=2048
25-Feb-2017 21:58:23.720 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.protocol.handler.pkgs=org.apache.catalina.webresources
25-Feb-2017 21:58:23.720 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.base=/usr/local/tomcat
25-Feb-2017 21:58:23.721 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Dcatalina.home=/usr/local/tomcat
25-Feb-2017 21:58:23.721 INFO [main] org.apache.catalina.startup.VersionLoggerListener.log Command line argument: -Djava.io.tmpdir=/usr/local/tomcat/temp
25-Feb-2017 21:58:23.721 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent Loaded APR based Apache Tomcat Native library 1.2.10 using APR version 1.5.2.
25-Feb-2017 21:58:23.721 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR capabilities: IPv6 [true], sendfile [true], accept filters [false], random [true].
25-Feb-2017 21:58:23.722 INFO [main] org.apache.catalina.core.AprLifecycleListener.lifecycleEvent APR/OpenSSL configuration: useAprConnector [false], useOpenSSL [true]
25-Feb-2017 21:58:23.725 INFO [main] org.apache.catalina.core.AprLifecycleListener.initializeSSL OpenSSL successfully initialized (OpenSSL 1.0.2j 26 Sep 2016)
25-Feb-2017 21:58:23.838 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["http-nio-8080"]
25-Feb-2017 21:58:23.861 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
25-Feb-2017 21:58:23.868 INFO [main] org.apache.coyote.AbstractProtocol.init Initializing ProtocolHandler ["ajp-nio-8009"]
25-Feb-2017 21:58:23.870 INFO [main] org.apache.tomcat.util.net.NioSelectorPool.getSharedSelector Using a shared selector for servlet write/read
25-Feb-2017 21:58:23.874 INFO [main] org.apache.catalina.startup.Catalina.load Initialization processed in 653 ms
25-Feb-2017 21:58:23.908 INFO [main] org.apache.catalina.core.StandardService.startInternal Starting service Catalina
25-Feb-2017 21:58:23.908 INFO [main] org.apache.catalina.core.StandardEngine.startInternal Starting Servlet Engine: Apache Tomcat/9.0.0.M17
25-Feb-2017 21:58:23.951 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deploying web application archive /usr/local/tomcat/webapps/sample.war
25-Feb-2017 22:00:25.223 INFO [localhost-startStop-1] org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [120,610] milliseconds.
25-Feb-2017 22:00:25.253 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive /usr/local/tomcat/webapps/sample.war has finished in 121,302 ms
25-Feb-2017 22:00:25.258 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [http-nio-8080]
25-Feb-2017 22:00:25.270 INFO [main] org.apache.coyote.AbstractProtocol.start Starting ProtocolHandler [ajp-nio-8009]
25-Feb-2017 22:00:25.278 INFO [main] org.apache.catalina.startup.Catalina.start Server startup in 121403 ms
Lastly, the MySQL volume mount works the way it does because the base image already declared the volume when it was built (VOLUME /var/lib/mysql). If you wanted the same thing to occur with the Tomcat app, you would have to copy their Dockerfile and add a VOLUME /usr/local/tomcat/webapps/ to it, like below.
FROM openjdk:8-jre-alpine
ENV CATALINA_HOME /usr/local/tomcat
ENV PATH $CATALINA_HOME/bin:$PATH
RUN mkdir -p "$CATALINA_HOME"
WORKDIR $CATALINA_HOME
# let "Tomcat Native" live somewhere isolated
ENV TOMCAT_NATIVE_LIBDIR $CATALINA_HOME/native-jni-lib
ENV LD_LIBRARY_PATH ${LD_LIBRARY_PATH:+$LD_LIBRARY_PATH:}$TOMCAT_NATIVE_LIBDIR
RUN apk add --no-cache gnupg
# see https://www.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/KEYS
# see also "update.sh" (https://github.com/docker-library/tomcat/blob/master/update.sh)
ENV GPG_KEYS 05AB33110949707C93A279E3D3EFE6B686867BA6 07E48665A34DCAFAE522E5E6266191C37C037D42 47309207D818FFD8DCD3F83F1931D684307A10A5 541FBE7D8F78B25E055DDEE13C370389288584E7 61B832AC2F1C5A90F0F9B00A1C506407564C17A3 79F7026C690BAA50B92CD8B66A3AD3F4F22C4FED 9BA44C2621385CB966EBA586F72C284D731FABEE A27677289986DB50844682F8ACB77FC2E86E29AC A9C5DF4D22E99998D9875A5110C01C5A2F6059E7 DCFD35E0BF8CA7344752DE8B6FB21E8933C60243 F3A04C595DB5B6A5F1ECA43E3B7BBB100D811BBE F7DA48BB64BCB84ECBA7EE6935CD23C10D498E23
RUN set -ex; \
for key in $GPG_KEYS; do \
gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
done
ENV TOMCAT_MAJOR 9
ENV TOMCAT_VERSION 9.0.0.M17
# https://issues.apache.org/jira/browse/INFRA-8753?focusedCommentId=14735394#comment-14735394
ENV TOMCAT_TGZ_URL https://www.apache.org/dyn/closer.cgi?action=download&filename=tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz
# not all the mirrors actually carry the .asc files :'(
ENV TOMCAT_ASC_URL https://www.apache.org/dist/tomcat/tomcat-$TOMCAT_MAJOR/v$TOMCAT_VERSION/bin/apache-tomcat-$TOMCAT_VERSION.tar.gz.asc
RUN set -x \
\
&& apk add --no-cache --virtual .fetch-deps \
ca-certificates \
tar \
openssl \
&& wget -O tomcat.tar.gz "$TOMCAT_TGZ_URL" \
&& wget -O tomcat.tar.gz.asc "$TOMCAT_ASC_URL" \
&& gpg --batch --verify tomcat.tar.gz.asc tomcat.tar.gz \
&& tar -xvf tomcat.tar.gz --strip-components=1 \
&& rm bin/*.bat \
&& rm tomcat.tar.gz* \
\
&& nativeBuildDir="$(mktemp -d)" \
&& tar -xvf bin/tomcat-native.tar.gz -C "$nativeBuildDir" --strip-components=1 \
&& apk add --no-cache --virtual .native-build-deps \
apr-dev \
gcc \
libc-dev \
make \
"openjdk${JAVA_VERSION%%[-~bu]*}"="$JAVA_ALPINE_VERSION" \
openssl-dev \
&& ( \
export CATALINA_HOME="$PWD" \
&& cd "$nativeBuildDir/native" \
&& ./configure \
--libdir="$TOMCAT_NATIVE_LIBDIR" \
--prefix="$CATALINA_HOME" \
--with-apr="$(which apr-1-config)" \
--with-java-home="$(docker-java-home)" \
--with-ssl=yes \
&& make -j$(getconf _NPROCESSORS_ONLN) \
&& make install \
) \
&& runDeps="$( \
scanelf --needed --nobanner --recursive "$TOMCAT_NATIVE_LIBDIR" \
| awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
| sort -u \
| xargs -r apk info --installed \
| sort -u \
)" \
&& apk add --virtual .tomcat-native-rundeps $runDeps \
&& apk del .fetch-deps .native-build-deps \
&& rm -rf "$nativeBuildDir" \
&& rm bin/tomcat-native.tar.gz
# verify Tomcat Native is working properly
RUN set -e \
&& nativeLines="$(catalina.sh configtest 2>&1)" \
&& nativeLines="$(echo "$nativeLines" | grep 'Apache Tomcat Native')" \
&& nativeLines="$(echo "$nativeLines" | sort -u)" \
&& if ! echo "$nativeLines" | grep 'INFO: Loaded APR based Apache Tomcat Native library' >&2; then \
echo >&2 "$nativeLines"; \
exit 1; \
fi
ADD ./Spring3HibernateApp.war /
VOLUME ${CATALINA_HOME}
EXPOSE 8080
CMD ["catalina.sh", "run"]
When you mount a host directory into the container, anything inside the container at that path is no longer available. It is still there in the underlying image or volume, but it will be superseded by your volume mount.
That is simply how host volume mounts work: Whatever is in the directory on your host is slapped into place inside the container, and takes precedence over whatever was previously at that path (if anything was).
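You can see the shadowing with a quick experiment (the host path is arbitrary):
mkdir -p /tmp/empty
docker run --rm -v /tmp/empty:/usr/local/tomcat/webapps tomcat:9-alpine ls /usr/local/tomcat/webapps
The ls prints nothing, because the empty host directory hides the webapps content baked into the image.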
I am afraid your VOLUME /usr/local/tomcat/webapps clears this directory.
You need to do it differently.
How do I deploy a MEAN stack application in Docker?
I have an error in the MongoDB connection, so the MEAN stack web application is not responding.
Here are my steps:
Pulled the image from DockerHub:
sudo docker pull crissi/airlineinsurance
Verified the images:
sudo docker images
Ran the MongoDB container:
sudo docker run -d -p 27017:27017 --name airlineInsurance -d mongo
Verified it is running:
sudo docker ps -l
Ran the application container:
sudo docker run -d -P crissi/airlineinsurance
Verified with:
sudo docker ps -l
Checked the logs:
sudo docker logs 8efba551fdc6
The resulting log is as follows:
[nodemon] 1.11.0
[nodemon] to restart at any time, enter `rs`
[nodemon] watching: *.*
[nodemon] starting `node server.js`
Server running at http://127.0.0.1:9000
Server running at https://127.0.0.1:9030
/app/node_modules/mongodb/lib/server.js:261
process.nextTick(function() { throw err; })
^
MongoError: failed to connect to server [localhost:27017] on first connect
at Pool.<anonymous> (/app/node_modules/mongodb-core/lib/topologies/server.js:313:35)
at emitOne (events.js:96:13)
at Pool.emit (events.js:188:7)
at Connection.<anonymous> (/app/node_modules/mongodb-core/lib/connection/pool.js:271:12)
at Connection.g (events.js:291:16)
at emitTwo (events.js:106:13)
at Connection.emit (events.js:191:7)
at Socket.<anonymous> (/app/node_modules/mongodb-core/lib/connection/connection.js:165:49)
at Socket.g (events.js:291:16)
at emitOne (events.js:96:13)
at Socket.emit (events.js:188:7)
at emitErrorNT (net.js:1281:8)
at _combinedTickCallback (internal/process/next_tick.js:74:11)
at process._tickCallback (internal/process/next_tick.js:98:9)
[nodemon] app crashed - waiting for file changes before starting...
I have included the Dockerfile for your reference:
# Tells Docker which base image to start from.
FROM node
# Adds files from the host file system into the Docker container.
ADD . /app
# Sets the current working directory for subsequent instructions.
WORKDIR /app
RUN npm install
RUN npm install -g bower
RUN bower install --allow-root
RUN npm install -g nodemon
# Expose a port to allow external access.
EXPOSE 9030
# Start the MEAN application.
CMD ["nodemon", "server.js"]
It depends on how you define your Dockerfile.
Since your app involves multiple processes (your app + mongodb), you could use supervisor to launch both.
See this example using a supervisord.conf like:
[supervisord]
nodaemon=true
[program:mongod]
command=/usr/bin/mongod --smallfiles
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
[program:nodejs]
command=nodejs /opt/app/server/server.js
Replace the nodejs command with your own application.
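With that config (note nodaemon=true), the container's CMD launches supervisord, which in turn starts both programs. A sketch of the Dockerfile lines, assuming the Debian default paths for the supervisor package:
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
CMD ["/usr/bin/supervisord", "-c", "/etc/supervisor/conf.d/supervisord.conf"]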