Payara server in docker-compose up

I am pretty new to Docker and trying to understand it. I have a docker-compose.yml file that contains a few things I am unclear about. (I received it from a client and am trying to run and understand it.) Please note that I am using Windows 10 and Docker version 3.0.
1) What does the following piece of docker-compose.yml mean? Will it build the vvv.payara image and then start Payara on port 4848? If yes, should I be able to open the admin page at localhost:4848 after doing docker-compose up?
payara:
  image: vvv.payara:rc1
  build: payara
  ports:
    - 4848:4848
    - 8080:8080
    - 8181:8181
2) What is the point of specifying three ports for Payara: 4848, 8080 and 8181? Does it mean that if the first is occupied, Payara starts on one of the others?
3) What does the line - ./deployments:/opt/payara41/deployments do? Why is an /opt folder specified although I am using Windows 10? I assume the /opt directory exists on Linux machines.
payara:
  image: vvv.payara:rc1
  build: payara
  ports:
    - 4848:4848
    - 8080:8080
    - 8181:8181
  volumes:
    - ./deployments:/opt/payara41/deployments
    - ./logs:/opt/payara41/glassfish/domains/payaradomain/logs
    - ./vvvConfiguration:/opt/vdz/config
  working_dir: /opt/payara41/bin/
  environment:
    - PAYARA_DOMAIN=payaradomain

The build parameter specifies the build context, i.e. the folder Docker uses to build the image (cf. doc); since image is also given, the result is tagged vvv.payara:rc1.
The list of ports publishes container ports on the host system. This way you can reach ports 4848, 8080 and 8181 of the container on localhost.
These three ports are required to access all components of Payara; each one serves a different Payara service, provided the port is free on the host system. (Port 4848 is the HTTPS admin interface, 8080 the HTTP listener, and 8181 the HTTPS listener.)
Those lines declare mount points, which behave like shared folders between the host and the container. The part before the : refers to the folder on the host, and the part after it to the folder inside the container it is linked to. That also answers why /opt appears although you are on Windows 10: the path after the : lives inside the Linux-based container, not on your Windows host.
This means that your deployments folder will be accessible inside the container at /opt/payara41/deployments.
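Putting it together, a minimal sketch of the workflow (service name and ports taken from the compose file above; the --build flag just forces a rebuild of the image):

# build the vvv.payara:rc1 image from the payara build context and start the container
docker-compose up --build payara

# once the container is up, the admin console is reachable on the published port;
# as noted above, the admin listener on 4848 speaks HTTPS
#   https://localhost:4848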

Related

mediasoup v3 with Docker

I'm trying to run two instances of a WebRTC example (using mediasoup) in Docker. I want to run two servers, as I am working on video calling across a set of instances!
My error (have you seen this one before?):
createProducerTransport null Error: port bind failed due to address not available [transport:udp, ip:'172.17.0.1', port:50517, attempt:1/50000]
I think it has something to do with the Docker network settings?
docker-compose.yml
version: "3"
services:
db:
image: mysql
restart: always
app:
image: app
build: .
ports:
- "1440:443"
- "2000-2020"
- "80:8080"
depends_on:
- db
app2:
image: app
build: .
ports:
- "1441:443"
- "2000-2020"
- "81:8080"
depends_on:
- db
Dockerfile
FROM node:12
WORKDIR /app
COPY . .
CMD npm start
It says it couldn't bind the address, so either the IP or the port causes the problem.
The IP looks like the IP of the Docker instance. If the Docker instances are on two different machines, it should be the IP of the server, not the IP of the Docker instance (in the mediasoup settings).
There are also the RTP/RTCP ports that have to be opened in the Docker instance. They are normally also set in the mediasoup config file, usually a range of a few hundred ports.
You should set your RTC min and max ports to 2000 and 2020 for testing purposes. Also, you are not forwarding these ports, I guess: in docker-compose use 2000-2020:2000-2020, as in the sketch below. Also make sure to set your listenIps properly.
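For illustration, a sketch of the app service with the range actually forwarded (the /udp suffix is an assumption here, since mediasoup transports default to UDP; add a matching tcp line if you enable TCP):

app:
  image: app
  build: .
  ports:
    - "1440:443"
    - "2000-2020:2000-2020/udp"   # host range -> container range for the RTC traffic
    - "80:8080"
  depends_on:
    - db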
If you are running mediasoup in Docker, the container where mediasoup is installed should be run in network mode host.
This is explained here:
How to use host network for docker compose?
and official docs
https://docs.docker.com/network/host/
Also, you should pay attention to the mediasoup configuration settings webRtcTransport.listenIps and plainRtpTransport.listenIp: they tell the client on which IP address your mediasoup server is listening.
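For example, a minimal sketch of a host-networked service (note that network_mode: host only works on Linux hosts, and any ports: mappings are ignored in this mode because the container shares the host's network stack):

services:
  mediasoup:
    build: .
    network_mode: host   # no port mapping needed; the container binds host ports directly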

Running an executable inside a docker container from another container

I am trying to run an executable file from another docker container while already inside a docker container. Is this possible?
version: '3.7'
services:
  py:
    build: .
    tty: true
    networks:
      - dataload
    volumes:
      - './src:/app'
      - '~/.ssh:/ssh'
  winexe:
    build:
      context: ./winexe
      dockerfile: Dockerfile
    networks:
      - dataload
    ports:
      - '8001:8001'
    volumes:
      - '~/path/to/winexe:/usr/bin/winexe'
      - '~/.ssh:/ssh'
    depends_on:
      - py
networks:
  dataload:
    driver: bridge
I am trying to access Winexe from 'py'
Assuming you mean running another Docker container from inside a container, this can be done in several ways:
Install the docker command inside your container and either:
1) Contact the hosting Docker instance over TCP/IP. For this you have to expose the Docker host's API on the network, which is neither the default nor recommended.
2) Map the Docker socket (usually /var/run/docker.sock) into your container using a volume. This lets the docker command inside the container contact the host instance directly (see the sketch below).
Be aware this essentially gives the container root-level access to the host! I'm sure there are many more ways to do the same, but approach number 2 is the one I see most often.
If you mean to run another executable inside another, already running, Docker container, you can do that in the above way as well by using docker exec, or by running some kind of daemon in the second container that accepts commands and runs them for you.
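As a minimal sketch of approach number 2, applied to the compose file from the question (py and winexe are the question's service names; /var/run/docker.sock is the default socket path on Linux):

services:
  py:
    build: .
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock   # docker CLI inside py now talks to the host daemon

With the docker CLI installed in the py image, you could then run something like docker exec <winexe-container-name> winexe ... from inside py.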
So you need to think of your containers as if they were two separate computers, or servers, and they can interact accordingly.
Happily, docker-compose gives you a url you can use to communicate between the containers. In the case of your docker-compose file, you could access the winexe container from your py container like so:
http://winexe:8001 // or ws://winexe:8001 or postgres://winexe:8001 (you get the idea)
(I've used port 8001 here because that's the port you've made available for winexe – I have no idea if it could be used for this).
So now what you need is something in your winexe container that listens for that signal and sends a useful reply (like a server answering an ajax call from a browser).
Learn more here:
https://docs.docker.com/compose/networking/
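For a quick sanity check that py can reach winexe over the compose network, something like this should work (curl is an assumption; install it in the py image if it is missing):

docker-compose exec py curl http://winexe:8001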

docker rabbitmq how to expose port and reuse container with a docker file

Hi, I am finding it very confusing how to create a docker file that would run a RabbitMQ container, where I can expose the port so I can navigate to the management console via localhost and a port number.
I see someone has provided this dockerfile example, but I am unsure how to run it:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
I have got RabbitMQ working locally fine, but everyone tells me Docker is the future; at this rate I don't get it.
Does the above look like a valid way to run a RabbitMQ container? Where can I find a full, understandable example?
Do I need a Dockerfile, or am I misunderstanding it?
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
How can I be sure that when I run the container again, say after a machine restart, I get the same container?
Many thanks
Andrew
Docker-compose
What you posted is not a Dockerfile. It is a docker-compose file.
To run that, you need to
1) Create a file called docker-compose.yml and paste the following inside:
version: "3"
services:
rabbitmq:
image: "rabbitmq:3-management"
ports:
- "5672:5672"
- "15672:15672"
volumes:
- "rabbitmq_data:/data"
volumes:
rabbitmq_data:
2) Download docker-compose (https://docs.docker.com/compose/install/)
3) (Re-)start Docker.
4) On a console run:
cd <location of docker-compose.yml>
docker-compose up
Do I need a Dockerfile, or am I misunderstanding it?
You have a docker-compose file. rabbitmq:3-management is the Docker image built from the RabbitMQ Dockerfile (which you don't need). The image will be downloaded the first time you run docker-compose up.
How can I specify the port? In the example above, what are the first numbers in 5672:5672 and what are the last ones?
"5672:5672" specifies the port of the queue (AMQP).
"15672:15672" specifies the port of the management plugin.
The numbers on the left-hand side are the ports you can access from outside the container. So, if you want to work with different ports, change the ones on the left; the ones on the right are defined inside the image.
This means you can access the management plugin at http://localhost:15672 (or, more generically, http://<host-ip>:<host port mapped to 15672>), as in the sketch below.
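For instance, a sketch of remapping only the host side (the host ports 5673 and 15673 are arbitrary choices for illustration):

ports:
  - "5673:5672"     # AMQP is now reachable at localhost:5673
  - "15673:15672"   # the management UI is now at http://localhost:15673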
You can see more info on the RabbitMQ Image on the Docker Hub.
How can I be sure that when I rerun the container, say after a machine restart, I get the same container?
I assume you want the same container because you want to persist the data. You can run docker-compose stop, restart your machine, then run docker-compose start; the same container is then used. However, if the container is ever deleted, you lose the data inside it.
That is why you are using volumes: the data collected in your container is also stored on your host machine. So, if you remove your container and start a new one, the data is still there because it was stored on the host.
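To verify where that data ends up, you can inspect the named volume on the host (standard docker CLI; compose prefixes the volume name with the project name, written here as <project> because it depends on your folder name):

docker volume ls
docker volume inspect <project>_rabbitmq_data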

How to configure a dockerfile and docker-compose for Jenkins

I'm absolutely new to Docker and Jenkins as well. I have a question about the configuration of the Dockerfile and the docker-compose.yml file. I tried to use the simplest configuration to be able to set up these files correctly. Building and pushing work correctly, but the Jenkins application is not running on my localhost (127.0.0.1).
If I understand it correctly, it should by default be running on port 50000 (ARG agent_port=50000 in the Jenkins "official" Dockerfile). I tried to use 50000, 8080 and 80 as well; nothing is working. Do you have any advice, please? I'm using these files: https://github.com/fdolsky321/Jenkins_Docker
The second question is: what's the best way to handle crashes of the container? Let's say that if the container crashes, I want to recreate a new container with the same settings. Is the best way just to create a shell file like "crash.sh" that provides the information to create a new container with the same settings? As mentioned here: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
Thank you for any advice.
docker-compose for Jenkins
docker-compose.yml
version: '2'
services:
  jenkins:
    image: jenkins:latest
    ports:
      - 8080:8080
      - 50000:50000
    # uncomment for docker in docker
    privileged: true
    volumes:
      # enable persistent volume (warning: make sure that the local jenkins_home folder is created)
      - /var/wisestep/data/jenkins_home:/var/jenkins_home
      # mount docker sock and binary for docker in docker (only works on linux)
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
Replace the ports 8080 and 50000 as you need on your host.
To recreate a new container with the same settings:
The mounted volume jenkins_home is the place where you store all your jobs, settings, etc.
Take a backup of the mounted jenkins_home volume whenever you create a job, or on whatever schedule you want.
Whenever there is a crash, run Jenkins with the same docker-compose file and replace the jenkins_home folder with the backup, then rerun/restart Jenkins (see the sketch after the commands below).
List the container
docker ps -a
Restart container
docker restart <Required_Container_ID_To_Restart>
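A sketch of the backup/restore flow described above (the host path comes from the compose file; adjust it to your setup):

# stop Jenkins so the files are consistent, then back up the mounted home
docker-compose stop
tar czf jenkins_home_backup.tar.gz -C /var/wisestep/data jenkins_home

# after a crash: restore the backup, then bring Jenkins back up
tar xzf jenkins_home_backup.tar.gz -C /var/wisestep/data
docker-compose up -d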
I've been using a docker-compose.yml that looks like the following:
version: '3.2'
volumes:
  jenkins-home:
services:
  jenkins:
    image: jenkins-docker
    build: .
    restart: unless-stopped
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    volumes:
      - jenkins-home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: jenkins-docker
My image is a locally built Jenkins image, based off of jenkins/jenkins:lts, that adds in some other components like docker itself, and I'm mounting the docker socket to allow me to run commands on the docker host. This may not be needed for your use case. The important parts for you are the ports being published, which for me is only 8080, and the volume for /var/jenkins_home to preserve the Jenkins configuration between image updates.
To recover from errors, I have restart: unless-stopped inside the docker-compose.yml to configure the container to automatically restart. If you're running this in swarm mode, that would be automatic.
I typically avoid defining a container name, but in this scenario, there will only ever be one jenkins-docker container, and I like to be able to view the logs with docker logs jenkins-docker to gather things like the initial administrator login token.
My Dockerfile and other dependencies for this image are available at: https://github.com/bmitch3020/jenkins-docker
If you use Hyper-V with Docker for Windows, you must be sure to port-forward any published port (like 50000).
Open the Hyper-V manager and right-click on the machine defined there: you will be able to add port-forwarding rules so that localhost:50000 reaches your VM's port 50000.

docker-compose scale with nginx and without environment variable

I use docker-compose to describe the deployment of one of my applications. The application is composed of:
a MongoDB database,
a Node.js application,
an nginx front end serving the static files of the Node.js app.
If I scale the Node.js application, I would like nginx to balance across all the instances automatically.
Until now I have used the following code snippet:
https://gist.github.com/cmoore4/4659db35ec9432a70bca
It is based on the fact that some environment variables are created on link, and that they change when new servers are present.
But now, with version 2 of the docker-compose file and the new link system of Docker, those environment variables don't exist anymore.
How can my nginx now detect the scaling of my application?
version: '2'
services:
  nodejs:
    build:
      context: ./
      dockerfile: Dockerfile.nodejs
    image: docker.shadoware.org/passprotect-server:1.0.0
    expose:
      - 3000
    links:
      - mongodb
    environment:
      - MONGODB_HOST=mongodb://mongodb:27017/passprotect
      - NODE_ENV=production
      - DEBUG=App:*
  nginx:
    image: docker.shadoware.org/nginx:1.2
    links:
      - nodejs
    environment:
      - APPLICATION_HOST=nodejs
      - APPLICATION_PORT=3000
  mongodb:
    image: docker.shadoware.org/database/mongodb:3.2.7
Documentation states here that:
Containers for the linked service will be reachable at a hostname identical to the alias, or the service name if no alias was specified.
So I believe that you could just set your service names in that nginx conf file like:
upstream myservice {
    server yourservice1;
    server yourservice2;
}
as they would be exported as host entries in /etc/hosts for each container.
But if you really want that host:port information as environment variables, you could write a script that parses the docker-compose.yml and defines an .env file, or do it manually.
UPDATE:
You can get that port information from outside the container; this will return the mapped ports:
docker inspect --format='{{range $p, $conf := .NetworkSettings.Ports}} {{$p}} -> {{(index $conf 0).HostPort}} {{end}}' your_container_id
But if you want to do it from inside a container, then what you want is a service discovery system like ZooKeeper.
There's a long feature-request thread in Docker's repo about that.
One workaround solution caught my attention. You could try building your own nginx image based on that.
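As a minimal sketch of that direction (assuming the nginx config refers to the nodejs service name, which Docker's embedded DNS resolves to all scaled containers):

# start three instances of the nodejs service (compose-v2-era syntax)
docker-compose scale nodejs=3

# reload nginx so it re-resolves the nodejs name and picks up the new containers
docker-compose exec nginx nginx -s reload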
