I'm trying to figure out how and where to set the right configuration to get SSL working between guacd and the Guacamole server (Tomcat web server).
I am using a Docker environment and I am a bit confused about where to put the configuration. Let me explain what I have understood, and I hope someone can clarify it for me.
Do guacamole.properties and guacd.conf have to be in the same $GUACAMOLE_HOME dir (guacamole container)? Or does guacamole.properties go inside the guacamole container and guacd.conf inside the guacd container? (If yes, under which directory in the guacd container?)
Below are the container commands:
docker run --name guacd_ssl --restart=always -v /opt/docker_data/guacd:/opt/local -e GUACD_LOG_LEVEL=debug -p 57822:4822 -d guacamole/guacd
docker run --name guacamole-1.2.0-SSL --restart=always -e MYSQL_DATABASE=guacamole_db -e MYSQL_USER=guacamole_user -e MYSQL_PASSWORD=password --link guacd_ssl:guacd --link db_guacamole:mysql -v /opt/docker_data/guacamole:/opt/local -e GUACAMOLE_HOME=/opt/local -e GUACD_PORT=57822 -e GUACD_SSL=true -d -p 8090:8080 guacamole/guacamole:latest
Now, where do the certificates have to be put? In /opt/docker_data/guacamole (host dir) or in /opt/docker_data/guacd (host dir)?
Configuration files:
guacd.conf
[ssl]
server_certificate = /opt/local/cert.pem
server_key = /opt/local/key.pem
guacamole.properties
guacd-ssl: true
Can you help me understand?
Regards
To enable SSL for guacd in a Docker environment, you will need to get the SSL certificate and key into the guacd container. You can do so by creating a customized image on top of the guacd image or via a volume mount. If you want to take the first option, you can find the guacd Dockerfile here.
guacamole.properties and guacd.conf are two different files.
guacamole.properties is the configuration file for guacamole-client, while guacd.conf is the configuration file for guacamole-server (guacd). Usually you would place both files in /etc/guacamole/. For Docker, the situation is slightly different.
In Docker, the default GUACAMOLE_HOME for the guacamole-client container is /root/.guacamole. You can find the guacamole.properties file there.
For guacd, you can place your guacd.conf in /etc/guacamole/.
As for the certificate and key, you can place them anywhere you like, as long as the paths are referenced in guacd.conf.
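For example, keeping your existing host layout, you could place cert.pem, key.pem and guacd.conf in /opt/docker_data/guacd on the host and mount that directory into the guacd container at /etc/guacamole. A minimal sketch, assuming those file names and that the key is readable by the user guacd runs as:

docker run --name guacd_ssl --restart=always \
  -v /opt/docker_data/guacd:/etc/guacamole:ro \
  -e GUACD_LOG_LEVEL=debug \
  -p 57822:4822 -d guacamole/guacd

In that case guacd.conf would point server_certificate and server_key at /etc/guacamole/cert.pem and /etc/guacamole/key.pem, and on the client side the guacd-ssl: true entry you already have in guacamole.properties tells the webapp to use SSL when talking to guacd.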
Related
I am launching a container for my application, but my app needs a few config files to log in. The files are stored in a host directory. How can I mount the host file path into the container?
Host directory: /opt/myApp/config
Docker command currently used:
sudo docker run -d --name myApp-container -p 8090:8080 myApp-image
Please suggest the changes to the docker command to achieve this.
You need to use the -v/--volume flag like this:
-v <host dir>:<container dir>:ro
In your case it will be:
-v /opt/myApp/config:/opt/myApp/config:ro
You can use this flag multiple times. You can also drop the :ro part if you want the directory to be writable.
See the Docker documentation on volumes.
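Putting it together with the command from your question, a sketch (assuming the image name and port mapping you posted):

sudo docker run -d --name myApp-container -p 8090:8080 -v /opt/myApp/config:/opt/myApp/config:ro myApp-image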
I'm new to Docker and I want to copy files to/from my local machine directly to a Docker container that's on a remote machine, without having to scp files from my local machine to the remote machine and then use docker cp to copy those files to the container. My container does not have an SSH server installed in it, nor do I want to rebuild my image to include one.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding, so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
All in all, I'm not sure if what I'm doing is even remotely correct.
According to your comment in reply to David's answer, here is how to bind-mount the directory for your visualization files into your container:
On the host system create a directory, e.g. mkdir /home/sarah/viz/. Then mount it into your Docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place the files in the directory /data/viz – they will then appear in /home/sarah/viz/ on the host system, from where you can download them to your local computer with scp or rsync or however you connect to the remote machine.
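From your local machine you could then fetch the results with, for example (assuming the user and paths above):

scp -r remote_account_name@remote_ip:/home/sarah/viz/ ./viz/

or, to transfer only changed files:

rsync -av remote_account_name@remote_ip:/home/sarah/viz/ ./viz/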
You can also use docker-compose for a more persistent setup. Write a file docker-compose.yml with the bind mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just run docker-compose up -d and everything acts according to the config in the compose file.
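For example, from the directory containing docker-compose.yml:

docker-compose up -d    # create and start the container in the background
docker-compose down     # stop and remove it again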
I'm transitioning my current Jenkins server to Docker. Following the guide on GitHub, https://github.com/jenkinsci/docker, I was able to successfully launch Jenkins with the command:
docker run -p 8080:8080 -p 50000:50000 -v jenkins_home:/var/jenkins_home jenkins/jenkins:lts
I'm not sure how to view/access the data in my container/volume through a file explorer. Is it only accessible through docker inspect? The guide on GitHub says I should avoid using a bind mount from a folder on the host machine into /var/jenkins_home. Is there another way to view and access my Jenkins jobs?
As you can see in the Jenkins CI Dockerfile source code,
/var/jenkins_home is declared as a VOLUME.
This means that it can be mounted on the host.
Your command mounts a named Docker volume onto it, but you could also mount a path on your host.
For example:
docker run -p 8080:8080 -p 50000:50000 -v ~/jenkins_home:/var/jenkins_home jenkins/jenkins:lts
On Windows hosts, you might have to create the directory first.
You can change ~/jenkins_home to whatever suits your host environment, but that gives you a folder that you can easily navigate and inspect.
You can also still use the web interface, available on the port that you map on the host.
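If you keep the named volume from your original command, you can also find out where Docker stores it on the host (on Linux) with:

docker volume inspect jenkins_home

The Mountpoint field in the output (typically under /var/lib/docker/volumes/) is where the data lives; note that on Docker Desktop this path is inside the Docker VM rather than directly on your Windows/macOS file system.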
If you want to see the data on the local host file system, you can use a bind mount instead of a volume; it will expose all the data from the jenkins_home folder directly on your local host file system. For example:
docker run -p 8080:8080 \
  --name jenkins \
  --mount type=bind,source="$(pwd)"/jenkins_home,target=/var/jenkins_home \
  jenkins/jenkins
For more clarification on bind mounts and volumes, please follow this link:
https://docs.docker.com/storage/bind-mounts/
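With the bind mount in place, the Jenkins jobs are ordinary files under that folder, so you can browse them directly, for example:

ls "$(pwd)"/jenkins_home/jobs/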
I am using Windows 10 and I have installed Docker and pulled nginx:
docker pull nginx
I started nginx with this command:
docker run -dit --rm --name nginx -p 9001:80 nginx
And a simple page is available on localhost:9001.
I would like to pass an nginx.conf file to nginx. I would also like to give it a root folder, so that on localhost:9001 I see the static page D:/nginx/website_files/index.html. In the folder website_files there are also other static pages.
How do I pass nginx.conf and the folder path to nginx in Docker on Windows?
I started using Kitematic and pulled hello-world-nginx. With it I was able to browse files by clicking on Volumes -> /website_files. In the path that opens, other static files can be added. After that, nginx can be restarted; it increments the port by 1. The port number can be seen with docker ps.
To change the nginx config file, after starting nginx I run this command: docker cp D:/nginx/multi.conf b3375f37a95c:/etc/nginx/nginx.conf, where b3375f37a95c is the container ID obtained from the docker ps command. After that, nginx should be restarted from Kitematic.
If you only want to edit nginx.conf instead of completely replacing it, you can first get the current conf file with docker cp b3375f37a95c:/etc/nginx/nginx.conf D:/nginx/multi.conf, edit multi.conf and then copy it back as before.
You can use host volume mapping
-v <host-directory>:<container-path>
for example:
docker run -dit --rm -v d:/data:/data -p 9001:80 nginx
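Applied to the paths from the question, a sketch that mounts both the config file and the static site (assuming your config lives at D:/nginx/nginx.conf, and using /usr/share/nginx/html, the default web root of the official nginx image):

docker run -dit --rm --name nginx -p 9001:80 -v D:/nginx/nginx.conf:/etc/nginx/nginx.conf:ro -v D:/nginx/website_files:/usr/share/nginx/html:ro nginx

On Windows, make sure the D: drive is shared with Docker in the Docker Desktop settings, otherwise the bind mounts will fail.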
Try this in PowerShell:
PS C:\> docker run --name myNGinX -p 80:80 -p 443:443 -v C:\Docker\Volumes\myNGinX\www\:/etc/nginx/www/:ro -v C:\Docker\Volumes\myNGinX\nginx.conf:/etc/nginx/nginx.conf:ro -d nginx
Late to the answer party, and shameless self-promotion, but I created a repo using Docker Compose with an Nginx proxy server and two other websites, all in containers.
Check it out here
I am quite new to the world of Docker and I am trying to set this up:
I am running a SolarWinds WHD container and trying to mount a local volume on the host, using this command:
docker run -d -p 8081:8081 --name=whdinstance -v pwd:/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
This starts the container and the volume is mounted, but as soon as I go to localhost:8081 to log in to the Web Help Desk portal, it asks me to select the database and then says "Connection refused". See screenshot.
Can someone please help? Could this be an issue with the way I am mounting the volume?
Here are examples of how to use volumes:
To use a directory volume:
docker run -itd -p 80:80 --name wordpress -v /path/in/your/host:/path/in/your/container wordpress
You put -v followed by the path of the shared directory on your host, then a colon, then the path inside your container. Once you have done this you can choose your image!
So for you it should be something like:
docker run -itd -p 8081:8081 --name=whdinstance -v /path/in/your/host:/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest
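Note also that in your original command -v pwd:/... creates a named volume called pwd rather than mounting your current directory; to bind-mount the current working directory you would use command substitution, for example:

docker run -d -p 8081:8081 --name=whdinstance -v "$(pwd)":/usr/local/webhelpdesk/bin/pgsql/var/lib/pgsql/9.2/data solarwinds/whd-embedded:latest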