How to enable https on a Tomee docker container? - docker

I'm running a TomEE (8.0.1) Docker image and I would like to enable SSL on it.
I have seen these topics:
https://mkyong.com/tomcat/how-to-configure-tomcat-to-support-ssl-or-https/
how to make java - tomee application HTTPS?
How to enable HTTPS on Tomcat in a Docker Container?
The first and second approaches are what I tried, but they did not work, even after restarting my container.
The third is not the way I want to do it: my idea is to configure my server and keep it in my repository as an image.
Below is the configuration I added to my server.xml:
<!-- To generate a keystore certificate, here is the command:
keytool -genkey -alias fnsanzabandi -keyalg RSA -keystore fnsanzabandikeystore
-->
<Connector port="8443" protocol="HTTP/1.1" SSLEnabled="true"
           maxThreads="150" scheme="https" secure="true"
           clientAuth="false" sslProtocol="TLS"
           keystoreFile="conf/fnsanzabandikeystore"
           keystorePass="changeit" />
Maybe I missed something, or there is something else to do in the TomEE case.
Can you please help me?
Thank you in advance.

Ok. My problem was silly.
After that configuration, I kept running my container with only port 8080 published, like this:
docker run --name tomee3 -p 8080:8080 fayabobo/tomee:1.2
That is why my HTTPS port 8443 was not accessible. To resolve the problem, I just ran the container with port 8443 published instead:
docker run --name tomee4 -p 8443:8443 fayabobo/tomee:1.2
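To serve both plain HTTP and HTTPS from the same container, both connector ports can be published at once; a sketch (the container name here is just an example):

```shell
# Publish both the HTTP (8080) and HTTPS (8443) connector ports
docker run --name tomee5 -p 8080:8080 -p 8443:8443 fayabobo/tomee:1.2
```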
And it worked.

Related

Enable SSL between GUACD and Guacamole web server (Tomcat)

I'm trying to figure out how and where to set the right configuration to get SSL working between guacd and the Guacamole server (the Tomcat web server).
I am using a Docker environment and I am a bit confused about where to put the configuration. Let me explain what I've understood, and I hope someone can clarify it for me.
Do guacamole.properties and guacd.conf have to be in the same $GUACAMOLE_HOME directory (in the guacamole container)? Or does guacamole.properties go inside the guacamole container and guacd.conf inside the guacd container? (If yes, under which directory in the guacd container?)
Below are the container commands:
docker run --name guacd_ssl --restart=always -v /opt/docker_data/guacd:/opt/local -e GUACD_LOG_LEVEL=debug -p 57822:4822 -d guacamole/guacd
docker run --name guacamole-1.2.0-SSL --restart=always -e MYSQL_DATABASE=guacamole_db -e MYSQL_USER=guacamole_user -e MYSQL_PASSWORD=password --link guacd_ssl:guacd --link db_guacamole:mysql -v /opt/docker_data/guacamole:/opt/local -e GUACAMOLE_HOME=/opt/local -e GUACD_PORT=57822 -e GUACD_SSL=true -d -p 8090:8080 guacamole/guacamole:latest
Now, where should the certificates be placed? In /opt/docker_data/guacamole (host dir) or in /opt/docker_data/guacd (host dir)?
Configuration files:
guacd.conf
[ssl]
server_certificate = /opt/local/cert.pem
server_key = /opt/local/key.pem
guacamole.properties
guacd-ssl: true
Can you help me understand?
Regards
To enable SSL for guacd in a Docker environment, you will need to copy the SSL certificate and key into the guacd container. You can do so by creating a customized image on top of the guacd image, or via a volume mount. If you want to take the first option, you can find the guacd Dockerfile here.
guacamole.properties and guacd.conf are two different files.
guacamole.properties is the configuration file for guacamole-client, while guacd.conf is the configuration file for guacamole-server (guacd). Usually, you would place both files in /etc/guacamole/. For Docker, the situation is slightly different.
In Docker, the default GUACAMOLE_HOME for the guacamole-client container is located at /root/.guacamole. You can find the guacamole.properties file there.
For guacd, you can place your guacd.conf in /etc/guacamole/.
For the certificate and key, you can place them anywhere you like, as long as you reference their paths in guacd.conf.
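For the customized-image route mentioned above, a minimal sketch (the certificate file names are assumptions; the destination paths just need to match guacd.conf and the guacd.conf location given above):

```dockerfile
# Build atop the official guacd image and bake in the TLS material
FROM guacamole/guacd
# These destination paths must match server_certificate / server_key in guacd.conf
COPY cert.pem /opt/local/cert.pem
COPY key.pem  /opt/local/key.pem
COPY guacd.conf /etc/guacamole/guacd.conf
```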

How to allow HTTPS connections from both localhost and container towards an ASP.NET Core Web API application?

I am trying to use Docker for an existing application and I have the following issue. When the API is trying to get the Identity Server metadata from the container, it fails with the following:
web_api | System.InvalidOperationException: IDX20803: Unable to obtain configuration from: 'https://host.docker.internal:5500/.well-known/openid-configuration'.
web_api | ---> System.IO.IOException: IDX20804: Unable to retrieve document from: 'https://host.docker.internal:5500/.well-known/openid-configuration'.
web_api | ---> System.Net.Http.HttpRequestException: The SSL connection could not be established, see inner exception.
web_api | ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.
This is indeed confirmed by the host browser (certification error in Chrome).
If I access the same metadata using localhost instead of host.docker.internal, it works as expected.
I have used the instructions from here to create and trust a localhost certificate that is also used by the identity server:
dotnet dev-certs https -ep %USERPROFILE%\.aspnet\https\aspnetapp.pfx -p { password here }
dotnet dev-certs https --trust
I assume these instructions create the certificate only for localhost, but I am trying to get a solution that also works for host.docker.internal.
Question: How to allow HTTPS connections from both localhost and container towards an ASP.NET Core Web API application?
I think you are right: dotnet dev-certs only generates certs for localhost, and as far as I can tell it is not configurable. So it seems you will have to generate your own self-signed cert and trust it. Assuming you're on Windows, one way to do it is with PowerShell's New-SelfSignedCertificate:
#create a SAN cert for both host.docker.internal and localhost
$cert = New-SelfSignedCertificate -DnsName "host.docker.internal", "localhost" -CertStoreLocation cert:\localmachine\my
#export it for docker container to pick up later
$password = ConvertTo-SecureString -String "123123" -Force -AsPlainText
Export-PfxCertificate -Cert $cert -FilePath C:\https\aspnetapp.pfx -Password $password
# trust it on your host machine
$store = New-Object System.Security.Cryptography.X509Certificates.X509Store "TrustedPublisher","LocalMachine"
$store.Open("ReadWrite")
$store.Add($cert)
$store.Close()
Assuming you use Microsoft-supplied base images for your apps, to hint Kestrel to pick the new cert up you will probably have to run docker like so:
docker pull your_docker_image
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="123123" -e ASPNETCORE_Kestrel__Certificates__Default__Path=\https\aspnetapp.pfx -v %USERPROFILE%\.aspnet\https:C:\https\ your_docker_image
docker run --rm -it -p 8000:80 -p 8001:443 -e ASPNETCORE_URLS="https://+;http://+" -e ASPNETCORE_HTTPS_PORT=8001 -e ASPNETCORE_Kestrel__Certificates__Default__Password="123123" -e ASPNETCORE_Kestrel__Certificates__Default__Path=/https/aspnetapp.pfx -v %USERPROFILE%\.aspnet\https:/https/ <your image>
Note I'm exporting the cert to C:\https which then gets mounted onto the container.
You might have to play around with paths and domain names but hopefully that gives you a starting point.
OpenSSL is another possible solution here that would be cross-platform as well
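As a sketch of that OpenSSL route (requires OpenSSL 1.1.1+ for -addext; the working directory, file names, and password are placeholders, and you would still need to trust the resulting cert on the host):

```shell
# Create a working directory for the dev certificates
mkdir -p /tmp/dev-certs && cd /tmp/dev-certs
# Self-signed cert with SANs for both localhost and host.docker.internal
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=localhost" \
  -addext "subjectAltName=DNS:localhost,DNS:host.docker.internal" \
  -keyout key.pem -out cert.pem
# Bundle key + cert into a PFX that Kestrel can load
openssl pkcs12 -export -inkey key.pem -in cert.pem \
  -passout pass:123123 -out aspnetapp.pfx
```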
Update: since Docker containers are often Linux-based, this answer might not be a complete solution. Check out my other answer on the same topic; that one leverages OpenSSL to perform the task and goes into how to embed self-signed certs into Docker images at build time.

How to get the docker apereo/cas server working on localhost?

I tried the following:
docker pull apereo/cas
docker run -p 80:8080 -p 443:8443 -d --name="cas" apereo/cas:v5.2.2
I then do:
docker exec -it cas /bin/bash
I set the /etc/cas/cas.properties file with:
cas.tgc.crypto.encryption.key=
cas.tgc.crypto.signing.key=
cas.webflow.crypto.signing.key=
cas.webflow.crypto.encryption.key=
(and fill in the autogenerated keys populated after running ./bin/run-cas.sh)
I then run:
keytool -genkey -keyalg RSA -alias cas -keystore thekeystore \
    -storepass changeit -validity 9999 -keysize 2048
Problem is, when I try rerunning ./bin/run-cas.sh, I get the error:
The Tomcat connector configured to listen on port 8443 failed to
start. The port may already be in use or the connector may be
misconfigured.
Is there something else I need to do to get started by getting CAS running on my local machine?
When you run the container, it executes whatever ENTRYPOINT or CMD is in the Dockerfile. In the case of apereo/cas, that would be CMD ["/cas-overlay/bin/run-cas.sh"]. What's likely happening is that this script runs CAS in some capacity and binds port 8443.
Now when you docker exec into the container, you're running an additional process in the container, separate from the main one. You can see what I'm talking about if you run ps aux inside the container: you will see the CMD running via bash. When you then run run-cas.sh, it attempts to bind port 8443 (again), but that port is already allocated in the container, and so you get the error.
Try this: docker run -p 80:8080 -p 443:8443 -it --name="cas" apereo/cas:v5.2.2 /bin/bash. When you add /bin/bash to the end, you override the CMD in the Dockerfile, so the initial run-cas.sh doesn't run and 8443 isn't allocated to a process. This is similar in function to using docker exec, except you get a shell before the initial command executes. Then you should be able to do all the things you're trying to do (I know nothing about CAS) without the port 8443 error.
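Putting the steps above together, a sketch of the full workflow (the /etc/cas/thekeystore path is the usual CAS default and an assumption here, as is the run-cas.sh location from the image's CMD):

```shell
# Start with bash instead of the image CMD, so run-cas.sh does not grab port 8443
docker run -p 80:8080 -p 443:8443 -it --name cas apereo/cas:v5.2.2 /bin/bash

# Inside the container: edit cas.properties, generate the keystore where CAS
# expects it, then start CAS manually
keytool -genkey -keyalg RSA -alias cas -keystore /etc/cas/thekeystore \
    -storepass changeit -validity 9999 -keysize 2048
/cas-overlay/bin/run-cas.sh
```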

Jenkins with publish over ssh - unable to migrate server configuration

I am using a Jenkins (2.32.2) Docker container with the Publish over SSH plugin (1.17), and I have added a new server manually.
The newly added server is another Docker container (both are run with docker-compose). I am using a password to connect to it, and everything works just fine when doing it manually; the problem is when I rebuild the image.
I am already using a volume for the Jenkins home directory and it works just fine. The problem occurs only on the initial installation (i.e. image build, not a container restart).
It seems like the problem is with the secret key, and I found out that I also need to copy some keys when creating my image.
See the credentials section at Publish over ssh documentation
I tried to copy the whole "secrets" directory and the following files: secret.key, secret.key.not-so-secret, identity.key.enc, but I still can't connect after a fresh install.
What am I missing?
Edited:
I just tried to copy the whole jenkins_home directory in my Dockerfile and it works, so I guess the problem is with the first load or something. Maybe Jenkins changes the key/salt on the first load?
Thanks.
Try pushing the Jenkins config out to the Docker host, or to the OS where the Docker host is installed:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
or
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v "$(pwd)/local/conf":/var/jenkins_home jenkins
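Alternatively, if the goal is a self-contained image as described in the question, the key material can be baked in at build time. A hypothetical sketch (the file list comes from the question; the base image tag and the COPY --chown flag, which needs Docker 17.09+, are assumptions):

```dockerfile
FROM jenkins:2.32.2
# Copy the master key material so saved credentials can be decrypted after a rebuild
COPY --chown=jenkins:jenkins secret.key secret.key.not-so-secret /var/jenkins_home/
COPY --chown=jenkins:jenkins secrets/ /var/jenkins_home/secrets/
```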

Docker Tomcat users configuration not working

Update: cleaned up to directly indicate the problem and the solution.
PROBLEM:
Docker Tomcat was properly installed and running, except for the 403 Access error in the Manager App. It also seemed that my Docker Tomcat could not find my tomcat-users.xml configuration.
SOLUTION
Thanks to Farhad and Sanket for the answers.
[Files]:
Dockerfile
FROM tomcat:8.5.11
MAINTAINER Borgy Manotoy <borgymanotoy#ujeaze.com>
# Update Apt and then install Nano editor (RUN can be removed)
RUN apt-get update && apt-get install -y \
nano \
&& mkdir -p /usr/local/tomcat/conf
# Copy configurations (Tomcat users, Manager app)
COPY tomcat-users.xml /usr/local/tomcat/conf/
COPY context.xml /usr/local/tomcat/webapps/manager/META-INF/
Tomcat Users Configuration (conf/tomcat-users.xml)
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<user username="admin" password="password" roles="manager-gui,manager-script" />
</tomcat-users>
Application Context (webapps/manager/META-INF/context.xml)
<?xml version="1.0" encoding="UTF-8"?>
<Context antiResourceLocking="false" privileged="true" >
<!--
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
-->
</Context>
[STEPS & COMMANDS]:
Build Docker Image
docker build -t borgymanotoy/my-tomcat-docker .
Run Image (my-tomcat-docker and set port to 8088)
docker run --name my-tomcat-docker-container -p 8088:8080 -it -d borgymanotoy/my-tomcat-docker
Go to the container's bash (to check files inside the container through bash)
docker exec -it my-tomcat-docker-container bash
First you need to publish your application's port from the container, so you can connect to it from the Docker host/network.
docker run -d -p 8000:8080 tomcat:8.5.11-jre8
You need to change 2 files in order to access the manager app from a remote host. (A browser on the Docker host is considered remote; only packets received on the container's loopback are considered local by Tomcat.)
/usr/local/tomcat/webapps/manager/META-INF/context.xml (note the commented-out section):
<Context antiResourceLocking="false" privileged="true" >
<!--
<Valve className="org.apache.catalina.valves.RemoteAddrValve"
allow="127\.\d+\.\d+\.\d+|::1|0:0:0:0:0:0:0:1" />
-->
</Context>
/usr/local/tomcat/conf/tomcat-users.xml as you stated in the question.
<tomcat-users xmlns="http://tomcat.apache.org/xml"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://tomcat.apache.org/xml tomcat-users.xsd"
version="1.0">
<role rolename="manager-gui"/>
<role rolename="manager-script"/>
<user username="admin" password="password" roles="manager-gui,manager-script" />
</tomcat-users>
To make changes to files in the container, you can build your own image, but I suggest using Docker volumes or bind mounts.
Also make sure you restart the container so the changes take effect.
Please specify the port when you do a docker run, like this (I believe mine/tomcat-version is your image name):
docker run -p 8000:8080 -it -d --name MyContainerName mine/tomcat-version
then access the manager page using,
http://<ipaddress>:8000/manager/html
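A quick way to verify the credentials and roles from the Docker host is the manager's text interface (admin/password as defined in tomcat-users.xml above; use localhost if the port is published on the host you are on, otherwise the docker-machine IP):

```shell
# Lists deployed applications; requires the manager-script role
curl -u admin:password http://localhost:8000/manager/text/list
```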
To get the host IP address when using Docker Machine, execute docker-machine ip.
Additional info: you can also get into the container using the command below, if you want to check things like the Tomcat logs, conf files, etc.:
docker exec -it MyContainerName bash
Although this is quite late, I wanted to leave my 2 cents.
I took this solution to the next level by building a sample continuous integration setup that deploys WARs to the Docker Tomcat just by running mvn clean install from the project IDE while the Tomcat container is running.
This solves the problem of having to restart the Tomcat container every time a new build is available, and takes advantage of Tomcat's auto-deploy.
It uses a shared volume, so you can drop multiple WARs into the shared volume and a script picks them up and deploys them to Tomcat's webapps.
It comes with a standard user 'admin' for access to the Manager GUI.
It is available on the public Docker repo: docker run -p 8080:8080 -d --name tom -v <YOUR_VOLUME>:/usr/local/stagingwebapps wintersoldier/tomcat_ci:1.0
It picks up any WAR files dropped into the shared volume and instantly deploys them to the Tomcat server, with an option to deploy via the GUI as well.
Here is a sample application with the required Maven changes and Dockerfile to explore.
