Update: cleaned up the post to state the problem and the solution directly.
PROBLEM:
Docker Tomcat was installed and running properly, except for a 403 Access Denied error in the Manager App. It also seemed that my Docker Tomcat could not find my tomcat-users.xml configuration.
SOLUTION
Thanks to Farhad and Sanket for the answers.
[Files]:
Dockerfile
FROM tomcat:8.5.11
MAINTAINER Borgy Manotoy <borgymanotoy@ujeaze.com>
# Update apt and install the nano editor (this RUN step can be removed)
RUN apt-get update && apt-get install -y \
nano \
&& mkdir -p /usr/local/tomcat/conf
# Copy configurations (Tomcat users, Manager app)
COPY tomcat-users.xml /usr/local/tomcat/conf/
COPY context.xml /usr/local/tomcat/webapps/manager/META-INF/
Tomcat Users Configuration (conf/tomcat-users.xml)
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<user username="admin" password="password" roles="manager-gui,manager-script" />
</tomcat-users>
Application Context (webapps/manager/META-INF/context.xml)
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true" >
<!--
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
-->
</Context>
[STEPS & COMMANDS]:
Build Docker Image
docker build -t borgymanotoy/my-tomcat-docker .
Run the image (my-tomcat-docker) and map host port 8088 to container port 8080
docker run --name my-tomcat-docker-container -p 8088:8080 -it -d borgymanotoy/my-tomcat-docker
Open a bash shell in the container (to check the files inside it)
docker exec -it my-tomcat-docker-container bash
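A quick way to verify the setup, as a minimal sketch (the container name, port and admin/password credentials are the ones used above; adjust to your own):
# inside the container: confirm the copied configuration is in place
docker exec -it my-tomcat-docker-container cat /usr/local/tomcat/conf/tomcat-users.xml
# from the Docker host: the manager's text interface should list the deployed apps
curl -u admin:password http://localhost:8088/manager/text/list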
First you need to expose your application's port from the container, so you can connect to it from the Docker host or the network.
docker run -d -p 8000:8080 tomcat:8.5.11-jre8
You need to change 2 files in order to access the manager app from a remote host. (A browser on the Docker host is considered remote; only packets received on the container's loopback interface are considered local by Tomcat.)
/usr/local/tomcat/webapps/manager/META-INF/context.xml Note the commented section.
<Context antiResourceLocking="false" privileged="true" >
<!--
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
-->
</Context>
/usr/local/tomcat/conf/tomcat-users.xml as you stated in the question.
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<user username="admin" password="password" roles="manager-gui,manager-script" />
</tomcat-users>
To make changes to files in the container you can build your own image, but I suggest using Docker volumes or bind mounts.
Also make sure you restart the container so the changes take effect.
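A minimal bind-mount sketch (assuming the two files sit in the current directory on the host; the port mapping and image tag follow the docker run above, and my-tomcat is just an example container name):
docker run -d -p 8000:8080 \
  -v $(pwd)/tomcat-users.xml:/usr/local/tomcat/conf/tomcat-users.xml \
  -v $(pwd)/context.xml:/usr/local/tomcat/webapps/manager/META-INF/context.xml \
  --name my-tomcat tomcat:8.5.11-jre8
# after editing the mounted files on the host, restart so Tomcat rereads them
docker restart my-tomcat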
Specify the port when you do docker run (I believe mine/tomcat-version is your image name),
docker run -p 8000:8080 -it -d --name MyContainerName mine/tomcat-version
then access the manager page using,
http://<ipaddress>:8000/manager/html
To get the host IP address for Docker you need to execute docker-machine ip.
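For example (192.168.99.100 is only an illustrative value; use whatever address the command prints):
docker-machine ip
# prints e.g. 192.168.99.100, then browse to http://192.168.99.100:8000/manager/html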
Additional info: you can also get into the container using the command below, if you want to check things like the Tomcat logs, conf files, etc.
docker exec -it MyContainerName bash
Although this is quite late, I wanted to leave my 2 cents.
I took this solution to the next level by building a sample continuous-integration setup that deploys WARs to the Docker Tomcat just by running mvn clean install from the project IDE, while the Docker Tomcat container keeps running.
This solves the problem of having to restart the Tomcat container every time a new build is available, by taking advantage of Tomcat's auto-deploy.
It uses a shared volume, so you can drop multiple WARs into the shared volume and a script picks them up and deploys them to the Tomcat webapps directory (a sketch of such a script follows below).
It comes with a standard user 'admin' to access the manager GUI.
It is available on a public Docker repo: docker run -p 8080:8080 -d --name tom -v <YOUR_VOLUME>:/usr/local/stagingwebapps wintersoldier/tomcat_ci:1.0
It picks up any WAR files dropped into the shared volume and instantly deploys them to the Tomcat server, with the option to deploy via the GUI as well.
Here is a sample application with the required Maven changes and Dockerfile to explore.
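As an illustration only, here is a minimal sketch of what such a deploy script could look like (this is an assumption about the mechanism, not the actual script shipped in wintersoldier/tomcat_ci; the paths follow the /usr/local/stagingwebapps volume shown above):
#!/bin/sh
# watch the staging volume and copy any new WAR into Tomcat's webapps directory
STAGING=/usr/local/stagingwebapps
WEBAPPS=/usr/local/tomcat/webapps
while true; do
  for war in "$STAGING"/*.war; do
    [ -e "$war" ] || continue        # no WARs present yet
    cp "$war" "$WEBAPPS"/            # Tomcat auto-deploys anything placed here
    mv "$war" "$war.deployed"        # mark as handled so it is not copied twice
  done
  sleep 5
done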
Related
I am trying a basic Docker test in a GCP compute instance. I pulled a Tomcat image from the official repo, then ran a command to start the container. The command is:
docker run -te --rm -d -p 80:8080 tomcat
It created a container for me with the ID below.
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
If I do docker ps, I get the output below:
3f8ce49393c708f4be4d3d5c956436e000eee6ba7ba08cba48ddf37786104a37
However, the Tomcat admin console does not open. The reason is that the Tomcat image tries to create its config files under /usr/local, which is a read-only file system, so the config files are not created.
Is there a way to ask Docker to create the files in a different location? Or, is there any other way to handle it?
Thanks in advance.
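One thing that could be tried, as a sketch only and not a verified fix: mount writable named volumes over the directories Tomcat needs to write to (Docker pre-populates a new named volume with the image's content at that path), for example:
# hypothetical: give Tomcat writable volumes for the paths it writes to at startup
docker run -d -p 80:8080 \
  -v tomcat_conf:/usr/local/tomcat/conf \
  -v tomcat_logs:/usr/local/tomcat/logs \
  -v tomcat_temp:/usr/local/tomcat/temp \
  -v tomcat_work:/usr/local/tomcat/work \
  --name tomcat tomcat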
I have been able to successfully run Apache Ignite with a custom config using the command:
docker run -it --net=host -v "pathToLocalDirectory"/config:/opt/ignite/apache-ignite/config -e "CONFIG_URI=file:///opt/ignite/apache-ignite/config/default-config.xml" apacheignite/ignite
But when I run my java project in IntelliJ I get the message
"IP finder returned empty addresses list. Please check IP finder configuration and make sure multicast works on your network...".
Note: the Java client project works if I run the Ignite server using the Windows batch file.
Also, I have published port 47500 as well; the result is the same.
Try running it using docker run -it --net=host (don't mount the volumes).
If that doesn't work, it means that either something is incorrect with your Docker setup OR you are configuring discovery differently for clients and servers.
Check the IP addresses listed in your client discovery section.
Get a shell inside the container and check what is actually mounted:
run docker exec -it container-name /bin/bash
Check that /opt/ignite/apache-ignite/config/default-config.xml is there and contains the correct discovery info (a consolidated check is sketched after the links below).
Check that the ignite log (located in /opt/ignite/apache-ignite/work/log/) specifies that the correct config is being used.
It will have a line like so: [INFO][main][IgniteKernal] Config URL: file:/opt/ignite/apache-ignite/config/default-config.xml
If you don't see the mounted config file, try mounting more simply:
docker run -d -v /local/dir/config.xml:/config-file.xml -e CONFIG_URI=/config-file.xml apacheignite/ignite
more info:
https://apacheignite.readme.io/docs/docker-deployment
https://apacheignite.readme.io/docs/tcpip-discovery
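A minimal sketch consolidating the checks above (container-name is a placeholder for your actual Ignite container; adjust the log file name if needed):
# open a shell in the running Ignite container
docker exec -it container-name /bin/bash
# inside the container: is the mounted config there, and does it contain the discovery settings?
ls -l /opt/ignite/apache-ignite/config/default-config.xml
grep -i discovery /opt/ignite/apache-ignite/config/default-config.xml
# did Ignite actually pick that file up? Look for the "Config URL:" line
grep "Config URL" /opt/ignite/apache-ignite/work/log/*.log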
I am using Jenkins (2.32.2) Docker container with the Publish over ssh plugin (1.17) and I have added a new server manually.
The newly added server is another Docker container (both running with docker-compose) and I am using a password to connect to it, and everything works just fine when doing it manually, but the problem is when I'm rebuilding the image.
I am already using a volume for the Jenkins home directory and it works just fine. The problem is only with the initial installation (e.g. image build, not a container restart).
It seems like the problem is with the secret key, and I found out that I also need to copy some keys when creating my image.
See the credentials section of the Publish over SSH documentation.
I tried to copy all the "secrets" directory and the following files: secret.key, secret.key.not-so-secret, identity.key.enc - but I still can't connect after a fresh install.
What am I missing?
Edited:
I just tried to copy the whole jenkins_home directory in my Dockerfile and it works, so I guess the problem is with the first load or something? Maybe Jenkins changes the key/salt on the first load?
Thanks.
Try to push the Jenkins config out to the Docker host, or to the OS where the Docker host is installed:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
or
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v $(pwd)/local/conf:/var/jenkins_home jenkins
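A slightly more explicit sketch using a named volume, so that everything under /var/jenkins_home (including secrets/ and the stored credentials) survives rebuilding the image; the volume and container names are just examples:
# create a named volume once; it persists independently of containers and images
docker volume create jenkins_home
# run Jenkins with its home directory on that volume
docker run --name myjenkins -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home jenkins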
I'd like to dockerize my StrongLoop LoopBack based Node server and start using Process Manager (PM) to keep it running.
I've been using RancherOS on AWS which rocks.
I copied (but didn't add anything to) the following Dockerfile as a template for my own Dockerfile:
https://hub.docker.com/r/strongloop/strong-pm/~/dockerfile/
I then:
docker build -t somename .
(Dockerfile is in .)
It now appears in:
docker images
But when I try to start it, it exits right away:
docker run --detach --restart=no --publish 8701:8701 --publish 3001:3001 --publish 3002:3002 --publish 3003:3003 somename
AND if I run the strong-pm image instead, after opening the ports on AWS, it works as above with strongloop/strong-pm rather than somename
(I can browse aws-instance:8701/explorer)
Also, these instructions to deploy my app https://strongloop.com/strongblog/run-create-node-js-process-manager-docker-images/ require:
slc deploy http://docker-host:8701/
but Rancher doesn't come with npm (or curl) installed, and when I bash in (see below), slc isn't installed either, so it seems like slc needs to run "outside" the VM:
docker exec -it fb94ddab6baa bash
If you're still reading, nice. Essentially, I'm trying to add a Dockerfile to my Git repo that will deploy my app server (including pulling code from repos) on any Docker box.
The workflow for the strongloop/strong-pm docker image assumes you are deploying to it from a workstation. The footprint for npm install -g strongloop is significantly larger than strong-pm alone, which is why the docker image has only strong-pm installed in it.
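A sketch of that workstation-side workflow (docker-host stands for the address of the machine running the strong-pm container, as in the question; my-loopback-app is a hypothetical application directory):
# on your workstation, not inside the container
npm install -g strongloop
# from the application's directory: build and deploy to the strong-pm instance
cd my-loopback-app
slc build
slc deploy http://docker-host:8701/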
I have a dockerized web application that I'm running in an HA setup. I have a cron job that runs dockup every midnight to back up the important information stored on other containers. Now I would like to back up and aggregate the logs from my web application too. The problem is, how do I do that? If I use the VOLUME key in the Dockerfile to expose /logs to the host machine, wouldn't there be a collision, because there would be two /logs directories on the dockup container?
I have checked dockup; it does not have a /logs directory. It seems it uses /var/logs for log output.
$ docker run -it --name dockup borja/dockup bash
Otherwise, yes, it would be a problem, because the volume would be mounted under the mentioned name and the current container's processes would also log to that folder. Not good.
Use a logging container like fluentd, which in this tutorial also offers writing to S3 buckets, like dockup does. The tutorial can be found here.
Tweak your container, e.g. with symbolic links, to log or relay the logs to a different volume.
Access the logs not through the containers but through native Docker, and either copy them to S3 yourself or run dockup on your locally mounted log file:
$ docker logs container/name > logfile.log
$ docker run --rm \
--env-file env.txt \
-v $(pwd)/logfile.log:/customlogs/logfile.txt \
--name dockup borja/dockup
Now you can use the folder /customlogs/ as your backup path inside env.txt.
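For the option of relaying the logs to a different volume, a minimal sketch could look like this (mywebapp and the weblogs volume are placeholders; the idea is that the web app writes its logs into a named volume that dockup then backs up under a non-colliding path):
# named volume shared between the web application and the backup container
docker volume create weblogs
# the web application writes its logs into the shared volume (mywebapp is hypothetical)
docker run -d --name webapp -v weblogs:/logs mywebapp
# dockup sees the same logs under /customlogs, which does not collide with its own directories
docker run --rm --env-file env.txt -v weblogs:/customlogs --name dockup borja/dockup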