How to enable HTTPS on Tomcat in a Docker Container?

I'm new to Tomcat and Docker, and I am stuck trying to enable HTTPS on my website. First, on the server, not in any container:
a) I generated a CSR
b) Acquired a commercial SSL certificate
c) Placed the certificates in a folder on the server /etc/docker/certs
d) Then created my Docker containers with the configuration below
I can use the command docker exec -it <container-id> sh to navigate my container. I can edit server.xml and web.xml, but I realize I should install the certificates at the OS level, outside the container, if I want the HTTPS configuration to persist past individual containers. In other words, I should be able to remove a container and create another one without needing to reinstall the SSL certificate.
How can I do this? Any ideas? Thanks in advance! Below are my configurations:
1. Database
docker run -d --name=example-db --restart=always --net=example-net --mount type=volume,src=mydbdata,target=/example-db --hostname=example-db -e POSTGRES_DB=mydb -e POSTGRES_USER=myuser -e POSTGRES_PASSWORD=secret myapp/db
2. Application
docker run -d --name=example-app --restart=always --mount type=volume,src=mydata,target=/example-app -p 80:8080 --net=example-net -e DB_HOST=example-db -e DB_NAME=mydb -e DB_USER=myuser -e DB_PASSWORD=secret myapp/myapp
Again thanks for your help.
Art

You can map the external certs into a container at docker run time using bind mounts. Assuming your certs are in /etc/docker/certs on the host, and you want them to be at /etc/ssl/certs in the container, then add either of the following:
-v /etc/docker/certs:/etc/ssl/certs:ro
or
--mount type=bind,src=/etc/docker/certs,dst=/etc/ssl/certs,readonly
Your Tomcat config would use /etc/ssl/certs as its path in this case.
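For example, combining that with the application container from your question, the run command could look something like this (a sketch: the -p 443:8443 mapping assumes your Tomcat HTTPS connector listens on 8443, so adjust it to whatever your server.xml actually uses):
# same as the application container above, plus the certs bind mount and an HTTPS port mapping
docker run -d --name=example-app --restart=always \
  --mount type=volume,src=mydata,target=/example-app \
  --mount type=bind,src=/etc/docker/certs,dst=/etc/ssl/certs,readonly \
  -p 80:8080 -p 443:8443 \
  --net=example-net \
  -e DB_HOST=example-db -e DB_NAME=mydb -e DB_USER=myuser -e DB_PASSWORD=secret \
  myapp/myapp
The HTTPS connector in server.xml would then point at the certificate files under /etc/ssl/certs inside the container, and because those files live on the host, they survive removing and recreating the container.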

Related

Can't save file on remote Jupyter server running in docker container

I'm trying to work in Jupyter Lab running via Docker on a remote machine, but I can't save any of the files I open.
I'm working with a Jupyter Docker Stack. I've installed docker on my remote machine and successfully pulled the image.
I set up port forwarding in my ~/.ssh/config file:
Host mytunnel
HostName <remote ip>
User root
ForwardAgent yes
LocalForward 8888 localhost:8888
When I fire up the container, I use the following script:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
The container is running:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c8fc3c720af1 jupyter/tensorflow-notebook "tini -g -- start-no…" 8 minutes ago Up 8 minutes 0.0.0.0:8888->8888/tcp, :::8888->8888/tcp adoring_khorana
I get the regular Jupyter URL back:
http://127.0.0.1:8888/lab?token=<token>
But when I access the server in my browser, the Save option is disabled.
I've tried some of the solutions proposed elsewhere on SO, but no luck.
Is this something to do with connecting over SSH? Does the Jupyter server think it's not a secure connection?
It is possible that the problem is related to the SSH configuration, but I think it is more likely a permissions issue with your volume mount.
Please try reviewing your Docker container logs, looking for permission-related errors. You can do that using the following:
docker container logs <container id>
See the output provided by your docker run command too.
In addition, try opening a shell in the container:
docker exec -it <container id> /bin/bash
and see if you are able to create a file in the default work directory:
touch /home/jovyan/work/test_file
Finally, the Jupyter docker stacks repository has a troubleshooting page almost entirely devoted to permissions issues.
Consider especially the solutions provided in the Additional tips and troubleshooting commands for permission-related errors section and, as suggested, try launching the container with your OS user:
docker run \
-p 8888:8888 \
-e JUPYTER_ENABLE_LAB=yes \
--user "$(id -u)" --group-add users \
-v "${PWD}":/home/jovyan/work jupyter/tensorflow-notebook
After that, as also suggested in the mentioned documentation, check whether the volume is properly mounted using the following command:
docker inspect <container_id>
In the output, note the value of the RW field, which indicates whether the volume is writable (true) or not (false).
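If the full docker inspect output is too long to scan, you can also print just the mount information (optional; the format string is a standard Go template):
docker inspect -f '{{ json .Mounts }}' <container_id>
Look for the entry whose Destination is /home/jovyan/work and check that its RW value is true.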

Docker container not showing files in the shared volume

I hope someone can help me to locate the issue. I created a SQL Server 2019 container using this code:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v /SqlDockerVol/userdatabase:/userdatabase -v /SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
The problem I am having is that the container is not showing the files I saved in the /sqlbackups folder.
I am using Ubuntu 20.04.
I logged into the SQL19 container like this:
docker exec -it SQL19 /bin/bash
then issued ls sqlbackups to confirm.
Do I need to set any permissions on the host folder? I am not familiar with Linux.
Thanks
I suspect you need to pass an absolute path to your folder:
/SqlDockerVol/userdatabase
Is that a full absolute path?
If it is relative, change it to:
$(pwd)/SqlDockerVol/userdatabase
Please check this Docker shared folder with Linux
And technically you need something like:
docker run --name SQL19 -p 1433:1433 -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=zzzzz258*" -v $(pwd)/SqlDockerVol/userdatabase:/userdatabase -v $(pwd)/SqlDockerVol/sqlbackups:/sqlbackups -d mcr.microsoft.com/mssql/server:2019-latest
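As a quick sanity check (using the names and paths from your question), you can compare what the host directory contains with what the container sees:
# on the host: the directory the bind mount should point at
ls -la /SqlDockerVol/sqlbackups
# inside the container: the same files should appear here
docker exec -it SQL19 ls -la /sqlbackups
If the host listing is empty while your backups sit somewhere else, the path in the -v option is pointing at the wrong directory.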

Why do I keep seeing the nginx index.html on localhost when I run my Docker image?

I installed and ran nginx on my Linux machine to understand the configuration, etc. After a while I decided to remove it safely by following this thread, in order to use it in Docker instead.
Following this documentation, I ran this command:
sudo docker run --name ngix -d -p 8080:80 pillalexakis/myrestapi:01
And I saw nginx's homepage at localhost.
Then I deleted all nginx images & stopped all containers, and I also ran this command:
sudo docker system prune -a
But now I restarted my service with this command:
sudo docker run -p 192.168.2.9:7777:8085 phillalexakis/myfirstapi:01
and I keep seeing the nginx index.html at localhost.
How can I totally remove it?
Note: I'm new to Docker and I might have missed a lot of things. Let me know what extra Docker commands I should run in order to provide better information.
Assuming your host has been prepared as below:
your files (index.html, js, etc) under folder - /myhost/nginx/html
your nginx configuration - /myhost/nginx/nginx.conf
Solution
Option 1: map your files (as a volume) on the fly from outside the Docker image via the Docker CLI.
This is the command
docker run -it --rm -d -p 8080:80 --name web \
-v /myhost/nginx/html:/usr/share/nginx/html \
-v /myhost/nginx/nginx.conf:/etc/nginx/nginx.conf \
nginx
Option 2: copy your files into the Docker image by building your own image via a Dockerfile.
This is your Dockerfile under /myhost/nginx
FROM nginx:latest
COPY ./html/index.html /usr/share/nginx/html/index.html
This is the command to build your docker image
cd /myhost/nginx
docker build -t pillalexakis/nginx .
This is the command to run your docker image
docker run -it --rm -d -p 8080:80 --name web \
pillalexakis/nginx
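Either way, a quick way to verify what is actually being served on the mapped port (assuming port 8080 as in the commands above):
# which container publishes port 8080
docker ps --filter "publish=8080"
# what page that port returns
curl -s http://localhost:8080 | head
If plain localhost (port 80) still shows the default nginx index.html while no container publishes port 80, the page is most likely coming from something outside Docker left over from the earlier native nginx install.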

Is it a good idea to deploy a production server using Docker?

I created a Dockerfile that installs my PHP application and its dependencies, then uses Composer to install vendor packages, all in a Docker container.
This container will link to MongoDB and Nginx to run.
It's fine for development, but my question is: is it OK to deploy this to my production environment?
Consider that on my production server I will install Docker and then run the commands below:
docker run --name MongoDB -d --rm mongodb:latest
docker run --name app --link MongoDB:mongodb -p 9000:9000 -d --rm myrepo/myapp:latest
docker run --link app:app --name Nginx --rm -d nginx:latest
And then I point domain.com at my production server, which uses these Docker containers to run my app.
Is this OK and stable?
You may want to use Docker Compose to set up the three as services and deploy the whole thing as one stack using docker-compose up.
See this page for a sample.
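A minimal sketch of what that compose file could look like, based on the images in your commands (the service names, the published nginx port, and the depends_on ordering are assumptions to adapt; with Compose the services share a network and reach each other by service name, so --link is not needed):
version: "3.8"
services:
  mongodb:
    image: mongodb:latest        # or the official mongo image, if that is what you actually use
    restart: always
  app:
    image: myrepo/myapp:latest
    ports:
      - "9000:9000"
    depends_on:
      - mongodb
    restart: always
  nginx:
    image: nginx:latest
    ports:
      - "80:80"                  # assumption: publish nginx on port 80 of the host
    depends_on:
      - app
    restart: always
Save this as docker-compose.yml and start everything with docker-compose up -d.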

Cannot run container after committing changes

Just basic and simple steps illustrating what I have tried:
docker pull mysql/mysql-server
sudo docker run -i -t mysql/mysql-server:latest /bin/bash
yum install vi
vi /etc/my.cnf -> bind-address=0.0.0.0
exit
docker ps
docker commit <container-id> new_image_name
docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=secret -d new_image_name
docker ps -a shows STATUS: Exited (1)
Please let me know what I did wrong.
Instead of trying to modify an existing image, try using (for testing) MYSQL_ROOT_HOST=%.
That would allow root login from any IP. (As seen in docker-library/mysql issue 241)
sudo docker run --name mysql -p 3306:3306 -e MYSQL_ROOT_PASSWORD=123456 -e MYSQL_ROOT_HOST=% -d mysql/mysql-server:latest
The README mentions:
By default, MySQL creates the 'root'@'localhost' account.
This account can only be connected to from inside the container, requiring the use of the docker exec command as noted under Connect to MySQL from the MySQL Command Line Client.
To allow connections from other hosts, set this environment variable.
As an example, the value "172.17.0.1", which is the default Docker gateway IP, will allow connections from the Docker host machine.
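Once the container is running with MYSQL_ROOT_HOST=%, you can check the remote-login part from the host itself (assuming the mysql client is installed there; the password is the MYSQL_ROOT_PASSWORD from the command above):
# connect through the published port instead of docker exec
mysql -h 127.0.0.1 -P 3306 -u root -p
If that prompt accepts the password and connects, root login from outside the container is working.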
