I set up a Keycloak container following this guide:
https://hub.docker.com/r/jboss/keycloak
I used the following command:
docker run -d --name keycloak --net keycloak-network -p 8443:8443 \
  -e KEYCLOAK_USER=******* -e KEYCLOAK_PASSWORD=******** \
  --mount type=bind,source=/home/cert/keycloak.key,target=/etc/x509/https/tls.key \
  --mount type=bind,source=/home/cert/keycloak.crt,target=/etc/x509/https/tls.crt \
  jboss/keycloak
It worked like a charm until the SSL certificate expired.
Inside the container, the files are located under the following path:
/etc/x509/https
I thought it would be possible to update the SSL certificates simply by copying the new ones over.
UPDATE:
The TLS files are bind-mounted as volumes:
"Type": "bind",
"Source": "/home/cert/keycloak.key",
"Destination": "/etc/x509/https/tls.key",
I copied the new files to the folder /home/cert, replacing the original ones. After that I restarted the container, but when accessing the homepage it still serves the old, outdated certificate.
UPDATE2:
I also ran the recommended script:
sudo docker exec -it keycloak /bin/bash
cd /opt/jboss/tools/
bash x509.sh
Which returned
Creating HTTPS keystore via OpenShift's service serving x509 certificate secrets..
HTTPS keystore successfully created at: /opt/jboss/keycloak/standalone/configuration/keystores/https-keystore.jks
After that I restarted the container, but it is still serving the old certificate.
Can anyone help me?
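For reference, one thing worth trying is recreating the container instead of merely restarting it, so the image's entrypoint runs its certificate setup again against the renewed files. This is a sketch, not a verified fix, reusing the run command from above:

```shell
# Sketch: remove the container and recreate it with the same command, so the
# HTTPS keystore is rebuilt from the (renewed) bind-mounted certificate files.
docker rm -f keycloak
docker run -d --name keycloak --net keycloak-network -p 8443:8443 \
  -e KEYCLOAK_USER=******* -e KEYCLOAK_PASSWORD=******** \
  --mount type=bind,source=/home/cert/keycloak.key,target=/etc/x509/https/tls.key \
  --mount type=bind,source=/home/cert/keycloak.crt,target=/etc/x509/https/tls.crt \
  jboss/keycloak
```

If the keystore is only generated when it does not already exist, it may also be necessary to delete the generated https-keystore.jks (at the path printed by x509.sh) before recreating the container.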
Related
I'm trying to figure out how and where to set the right configuration to get SSL working between guacd and the Guacamole server (Tomcat web server).
I am using a Docker-based environment and I am a bit confused about where to put the configuration. Let me explain what I've understood, and I hope someone can clarify.
Do guacamole.properties and guacd.conf both have to be in the same $GUACAMOLE_HOME directory (in the guacamole container)? Or does guacamole.properties go inside the guacamole container and guacd.conf inside the guacd container? (If so, under which directory in the guacd container?)
Below are the container commands:
docker run --name guacd_ssl --restart=always \
  -v /opt/docker_data/guacd:/opt/local \
  -e GUACD_LOG_LEVEL=debug \
  -p 57822:4822 -d guacamole/guacd
docker run --name guacamole-1.2.0-SSL --restart=always \
  -e MYSQL_DATABASE=guacamole_db -e MYSQL_USER=guacamole_user -e MYSQL_PASSWORD=password \
  --link guacd_ssl:guacd --link db_guacamole:mysql \
  -v /opt/docker_data/guacamole:/opt/local \
  -e GUACAMOLE_HOME=/opt/local -e GUACD_PORT=57822 -e GUACD-SSL=true \
  -d -p 8090:8080 guacamole/guacamole:latest
Now, where should the certificates be placed? In /opt/docker_data/guacamole (host dir) or in /opt/docker_data/guacd (host dir)?
Configuration files:
guacd.conf
[ssl]
server_certificate = /opt/local/cert.pem
server_key = /opt/local/key.pem
guacamole.properties
guacd-ssl: true
Can you help me understand?
Regards
To enable SSL for guacd in a Docker environment, you will need to copy the SSL certificate and key into the guacd container. You can do so by creating a customized image on top of the guacd image, or via a volume mount. If you want to take the first option, you can find the guacd Dockerfile here.
guacamole.properties and guacd.conf are two different files.
guacamole.properties is the configuration file for guacamole-client, while guacd.conf is the configuration file for guacamole-server (guacd). Usually you would place both files in /etc/guacamole/. For Docker, the situation is slightly different.
In docker, the default GUACAMOLE_HOME for the guacamole-client container is located at /root/.guacamole. You can find the guacamole.properties file here.
For guacd, you can place your guacd.conf in /etc/guacamole/.
For the certificate and key, you can place them anywhere you like, as long as you reference their paths in guacd.conf.
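Putting the pieces together, here is a sketch of a guacd run command that bind-mounts guacd.conf plus the certificate and key at the paths the question's guacd.conf references. The /etc/guacamole/guacd.conf target follows from the answer above; the host-side file names are assumptions:

```shell
# Sketch: mount guacd.conf into /etc/guacamole/ and the cert/key at the
# paths named in guacd.conf ([ssl] server_certificate / server_key).
docker run --name guacd_ssl --restart=always \
  -v /opt/docker_data/guacd/guacd.conf:/etc/guacamole/guacd.conf:ro \
  -v /opt/docker_data/guacd/cert.pem:/opt/local/cert.pem:ro \
  -v /opt/docker_data/guacd/key.pem:/opt/local/key.pem:ro \
  -e GUACD_LOG_LEVEL=debug \
  -p 57822:4822 -d guacamole/guacd
```

With this layout the certificates live under /opt/docker_data/guacd on the host, i.e. with the guacd container, not the guacamole one.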
Using Ubuntu Linux with Docker installed. No VM.
I have built a Docker image with a Vue.js application. To enable hot reload I start the Docker container with:
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
It starts up fine and I can access it from my host browser on localhost:8081. But when I make changes to the source files and save them, they are not reflected in my browser until I press F5 (hot reload does not work).
Some details below:
package.json
"scripts": {
"dev": "webpack-dev-server --inline --progress --config build/webpack.dev.conf.js",
"start": "npm run dev",
build/webpack.dev.conf.js
devServer: {
clientLogLevel: 'warning',
...
hot: true,
...
watchOptions: {
//poll: config.dev.poll,
//aggregateTimeout: 500, // delay before reloading
poll: 100 // enable polling since fsevents are not supported in docker
}
I tried modifying the watchOptions, but it had no effect.
EDIT:
Based on below answer I have tried to pass: CHOKIDAR_USEPOLLING=true as an environment variable to docker run:
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -e "CHOKIDAR_USEPOLLING=true" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
But it has no effect - I am still not able to hot reload my changes. The linked article also says:
Update/Clarification: This problem only occurs when running your
Docker engine inside a VM. If you are on Linux for both Docker and for
coding you do not have this problem.
So I don't think that answer applies to my setup - I am running Ubuntu Linux directly on my machine, where Docker is installed. No VM setup.
Another update - based on the comment below on changing the port mapping:
# Hot reload works!
docker run -it -p 8080:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
# Hot reload fails!
#docker run -it -p 8081:8080 -e "HOST=0.0.0.0" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
So if I map port 8080:8080 instead of 8081:8080, hot reload works! Note that the application comes up in both cases when I access it in my host browser on localhost on the aforementioned ports. It's just that hot reload only works when I map the application to port 8080 on my host.
But why??
Now if I do:
PORT='8081'
docker run -it -p "${PORT}:${PORT}" -e "HOST=0.0.0.0" -e "PORT=${PORT}" -v ${PWD}:/app/ -v /app/node_modules --name my-frontend my-frontend-image
Hot reload of course works. But I am still not sure why I cannot map the internal container port 8080 to 8081 on the host.
Btw, I don't see the problem at all if I use vue-cli-service serve instead - everything works out of the box.
I am not a Vue.js user at all and have never worked with it, but I use Docker heavily in my development workflow, and in the past I experienced a similar issue.
In my case, the JavaScript sent to the browser was trying to connect to the container's internal port 8080. But since the port mapped on the host was 8081, the JS in the browser could not reach port 8080 inside the container, and therefore hot reload was not working.
So it seems to me that you have the same scenario as me. You need to configure the hot reload in your Vue.js app to listen on the same port you want to use on the host, or just use the same port for both, as you already concluded works.
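If keeping the 8081:8080 mapping matters, webpack-dev-server (v3) also has a public option that tells the in-browser client which host:port to reach back on. A sketch, assuming the version in the image supports the --public flag and that the "dev" npm script forwards extra arguments to webpack-dev-server:

```shell
# Sketch (assumption: webpack-dev-server v3 with --public support):
# tell the hot-reload client to connect via the host-mapped port 8081.
docker run -it -p 8081:8080 -e "HOST=0.0.0.0" \
  -v ${PWD}:/app/ -v /app/node_modules \
  --name my-frontend my-frontend-image \
  npm run dev -- --public localhost:8081
```

The equivalent can also be set as devServer.public in build/webpack.dev.conf.js.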
If watchOptions doesn't work, you can try the other option:
environment:
- CHOKIDAR_USEPOLLING=true
As per docs here:
“If watching does not work for you, try out this option. Watching does not work with NFS and machines in VirtualBox.”
Reference:
https://daten-und-bass.io/blog/enabling-hot-reloading-with-vuejs-and-vue-cli-in-docker/
It's been asked a long time ago, but I had the same problem and then noticed there is a sockPort option within the devServer config object that lets you set the port used by the websocket connection that communicates with the server for live/hot reloading purposes.
What I did was set this option via an environment variable, and it worked just fine when accessing the dev server from outside the container.
I installed ownCloud with Docker as follows:
docker pull owncloud
docker run -v /var/www/owncloud:/var/www/html -d -p 80:80 owncloud
That works. I also set up a client with access to the server, which works as well.
The issue: when I copy a file to the volume from the command line, it is copied into the container (also good), BUT MY CLIENTS ARE NOT SYNCED. It looks like clients only sync if the web interface is used.
Any idea how to fix this?
thanks ralfg
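For what it's worth, ownCloud only syncs files that are registered in its file cache, and files copied straight into the data directory bypass that. A manual rescan with ownCloud's occ CLI usually makes them visible; a sketch, where the container ID and the www-data web-server user are assumptions about this particular image:

```shell
# Sketch: rescan the data directory so directly-copied files enter the
# file cache and become visible to syncing clients.
docker exec -u www-data <container-id> php occ files:scan --all
```

<container-id> is the ID shown by docker ps for the ownCloud container.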
I am using Windows 10 and I have installed Docker and pulled nginx:
docker pull nginx
I started nginx with this command:
docker run -dit --rm --name nginx -p 9001:80 nginx
And a simple page is available on localhost:9001.
I would like to pass an nginx.conf file to nginx. I would also like to give it a root folder, so that on localhost:9001 I see the static page D:/nginx/website_files/index.html. The folder website_files also contains other static pages.
How to pass nginx.conf and folder path to nginx in Docker on Windows?
I started using Kitematic and pulled hello-world-nginx. With it I was able to browse files by clicking Volumes -> /website_files. Other static files can be added under the path that opens. After that, nginx can be restarted; restarting increments the port by 1. The port number can be checked with docker ps.
To change the nginx config file, after starting nginx I run docker cp D:/nginx/multi.conf b3375f37a95c:/etc/nginx/nginx.conf, where b3375f37a95c is the container ID obtained from the docker ps command. After that, nginx should be restarted from Kitematic.
If you only want to edit nginx.conf instead of completely replacing it, you can first fetch the current conf file with docker cp b3375f37a95c:/etc/nginx/nginx.conf D:/nginx/multi.conf, edit multi.conf, and then copy it back as before.
You can use a host volume mapping:
-v <host-directory>:<container-path>
for example:
docker run -dit --rm -v d:/data:/data -p 9001:80 nginx
Try this in PowerShell:
PS C:\> docker run --name myNGinX -p 80:80 -p 443:443 -v C:\Docker\Volumes\myNGinX\www\:/etc/nginx/www/:ro -v C:\Docker\Volumes\myNGinX\nginx.conf:/etc/nginx/conf.d/nginx.conf:ro -d nginx
Late to the answer-party, and shameless self-promotion, but I created a repo using Docker-compose having an Nginx proxy server and 2 other websites all in Containers.
Check it out here
I am using a Jenkins (2.32.2) Docker container with the Publish over SSH plugin (1.17), and I have added a new server manually.
The newly added server is another Docker container (both run with docker-compose), and I am using a password to connect to it. Everything works just fine when done manually; the problem is when I rebuild the image.
I am already using a volume for the Jenkins home directory, and it works just fine. The problem occurs only on the initial installation (i.e. image build, not a container restart).
It seems like the problem is with the secret key, and I found out that I also need to copy some keys when creating my image.
See the credentials section of the Publish over SSH documentation.
I tried copying the whole "secrets" directory along with the following files: secret.key, secret.key.not-so-secret, identity.key.enc - but I still can't connect after a fresh install.
What am I missing?
Edited:
I just tried copying the whole jenkins_home directory in my Dockerfile and it works, so I guess the problem is with the first load or something? Maybe Jenkins changes the key/salt on the first load?
Thanks.
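For context, Jenkins encrypts stored credentials against keys kept under secrets/: credentials.xml is only decryptable together with the matching secrets/master.key and secrets/hudson.util.Secret. So when moving a setup into a fresh image, those three files need to travel as a set. A sketch, where backup/ is a hypothetical copy of the old jenkins_home:

```shell
# Sketch: copy credentials.xml together with the two secret files it is
# encrypted against ("backup" is a hypothetical copy of the old jenkins_home).
JENKINS_HOME="${JENKINS_HOME:-/var/jenkins_home}"
mkdir -p "$JENKINS_HOME/secrets"
cp backup/credentials.xml            "$JENKINS_HOME/credentials.xml"
cp backup/secrets/master.key         "$JENKINS_HOME/secrets/master.key"
cp backup/secrets/hudson.util.Secret "$JENKINS_HOME/secrets/hudson.util.Secret"
```

Copying credentials.xml alone (or the secrets alone) leaves the credentials undecryptable, which would match the "works only when I copy the whole jenkins_home" observation above.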
Try pushing the Jenkins config out to the Docker host, or to the OS where the Docker host is installed:
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v /var/jenkins_home jenkins
or
docker run --name myjenkins -p 8080:8080 -p 50000:50000 -v "$(pwd)"/local/conf:/var/jenkins_home jenkins