Docker - Run app on swarm manager (can't connect)

TLDR version:
How can I verify/set ports 7946 & 4789 on my swarm node so that I can view my application running from my docker-machine?
Complete question:
I am going through the Docker tutorials and am on step 4:
https://docs.docker.com/get-started/part4/#accessing-your-cluster
When I get to the "accessing your cluster" section, it says I should be able to grab the IP address of one of my nodes displayed by docker-machine ls. I run that command, see the IP, grab it, and put it into my browser (or alternatively use curl), and I receive the error:
This site can’t be reached
192.168.99.100 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Below this step there is a note saying that before you enable swarm mode, which I assume means when you run
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
you should check the following port settings:
Having connectivity trouble?
Keep in mind that in order to use the ingress network in the swarm, you need to have the following ports open between the swarm nodes before you enable swarm mode:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
I've spent the last few days going through the documentation, redoing the steps, and trying everything I can to make this work, but nothing has been successful.
Can anyone explain, or provide documentation that shows, how to view/set these ports, or tell me if I am missing some other important information?
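For reference, one way to open and verify those ports on a node is sketched below (hedged: the ufw commands assume an Ubuntu-style Linux node; the boot2docker VMs that docker-machine creates normally have no firewall blocking these ports, so there the checks alone may be enough):
sudo ufw allow 7946/tcp
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp
# after "docker swarm init", confirm the swarm ports are listening:
sudo ss -tulpn | grep -E '7946|4789'
# and from another node, test reachability:
nc -z -v -w 3 <myvm1 ip> 7946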
UPDATE
I wasn't able to get swarm working, so I decided to just run everything from a docker-compose.yml file. Here is the code I used below:
docker-compose.yml file:
version: '3'
services:
  www:
    build: .
    ports:
      - "80:80"
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/opt/www
  db:
    image: mysql:5.7
    volumes:
      - /var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: supersecure
      MYSQL_DATABASE: test_db
      MYSQL_USER: jake
      MYSQL_PASSWORD: supersecure
and a Dockerfile located in the same directory containing the following:
# A simple Flask app container.
FROM python:2.7
LABEL maintainer="your name here"
# Place app in container.
ADD . /opt/www
WORKDIR /opt/www
# Install dependencies.
RUN pip install -r requirements.txt
EXPOSE 80
ENV FLASK_APP index.py
ENV FLASK_DEBUG 1
CMD python index.py
You'll need to create the other files referenced by these two (e.g. requirements.txt and index.py), but those all live in the same directory as the Dockerfile and docker-compose.yml files. Please comment if anyone has questions.
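To bring the stack up and smoke-test it (a minimal sketch, assuming port 80 is free on the host):
docker-compose up -d --build
curl http://localhost/
# tail the app logs if nothing responds:
docker-compose logs -f www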

Related

Sharing Linux socket between Docker containers

I have two Docker containers—redis (running a Redis database) and node (running a Node.js application). My Node.js application needs to communicate with the Redis database, but I'm not sure how I should arrange this. Here are some ways I've thought of so far:
Put the two containers on one network, expose (but do not publish) port 6379 (or wherever the Redis server is listening) of the redis container, and connect to the exposed port from the node container.
Have the Redis server listen on a UNIX socket that is mounted to some location on the host (i.e., outside of the redis container) that is also mounted into the node container (will that even work?).
Ditch the separate containers idea altogether and put the Redis server and Node app in the same container (I really do not want to do this).
Which option is the best, or is there something else that you would suggest? I want to maximize performance and security, but I also need to use container(s).
P.S. There are some similar questions to this one out there, but none of them seem to answer my question. That being said, if you find an existing answer that might help, please do link to it.
Presumably, the second approach (the UNIX socket) has higher performance. There are some comparisons here and here.
To implement this approach, you can simply create a Docker volume and mount it in both containers. E.g., you can create a Dockerfile based on the redis image:
FROM redis:7-alpine
RUN mkdir -p /run/redis-socket
RUN chmod 777 /run/redis-socket
COPY ./etc/redis.conf /etc/redis.conf
ENTRYPOINT ["docker-entrypoint.sh", "/etc/redis.conf"]
and write such config to ./etc/redis.conf:
unixsocket /run/redis-socket/redis.sock
unixsocketperm 777
port 0
Then mount a Docker volume at /run/redis-socket/ in the redis container. Don't forget to mount that same volume at some arbitrary path in your node container.
If you use docker-compose, a sample docker-compose.yml would be like this:
version: "2.3"
services:
redis:
image: my-altered-redis-image
container_name: my-redis
volumes:
- redis-socket:/run/redis-socket
mynode:
image: my-node-image
container_name: my-node
depends_on:
- "redis"
volumes:
- redis-socket:/run/redis-socket
volumes:
redis-socket:
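Once both services are up, the shared socket can be sanity-checked like this (a hypothetical check; redis-cli ships in the official redis images):
docker-compose exec redis redis-cli -s /run/redis-socket/redis.sock ping
# expected output: PONG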

Docker - all-spark-notebook Communications link failure

I'm new to using Docker and Spark.
My docker-compose.yml file is
volumes:
  shared-workspace:
services:
  notebook:
    image: docker.io/jupyter/all-spark-notebook:latest
    build:
      context: .
      dockerfile: Dockerfile-jupyter-jars
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace
And the Dockerfile-jupyter-jars is:
FROM docker.io/jupyter/all-spark-notebook:latest
USER root
RUN wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar
RUN mv mysql-connector-java-8.0.28.jar /usr/local/spark/jars/
USER jovyan
To start it up, I run
docker-compose up --build
The server is up and running, and I'm interested in using Spark SQL, but it throws an error when trying to connect to the MySQL server:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
I can see mysql-connector-java-8.0.28.jar in the "jars" folder, and I have used the same SQL statement in a non-Docker Apache Spark installation, where it works.
The MySQL DB server is also reachable from the same host I'm running Docker on.
Do I need to enable something to reach external connections? Any ideas?
Reference: https://hub.docker.com/r/jupyter/all-spark-notebook
The docker-compose.yml and Dockerfile-jupyter-jars files were correct. Since I was using mysql-connector-java-8.0.28.jar, the connection requires SSL, or SSL has to be disabled explicitly in the JDBC URL:
jdbc:mysql://user:password@xx.xx.xx.xx:3306/inventory?useSSL=FALSE&nullCatalogMeansCurrent=true
I'm going to leave this example here for: Docker - all-spark-notebook with MySQL dataset
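If a similar failure persists, it can also help to confirm that the MySQL host is reachable from inside the notebook container at all (a hypothetical check using bash's built-in /dev/tcp; the Jupyter images ship with bash):
docker-compose exec notebook bash -c 'timeout 3 bash -c "</dev/tcp/xx.xx.xx.xx/3306" && echo reachable || echo unreachable'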

mediasoup v3 with Docker

I'm trying to run two WebRTC examples (using mediasoup) in Docker. I want to run two servers, as I am working on video calling across a set of instances!
My error, in case you have seen it before:
createProducerTransport null Error: port bind failed due to address not available [transport:udp, ip:'172.17.0.1', port:50517, attempt:1/50000]
I think it's something to do with setting the docker network?
docker-compose.yml
version: "3"
services:
db:
image: mysql
restart: always
app:
image: app
build: .
ports:
- "1440:443"
- "2000-2020"
- "80:8080"
depends_on:
- db
app2:
image: app
build: .
ports:
- "1441:443"
- "2000-2020"
- "81:8080"
depends_on:
- db
Dockerfile
FROM node:12
WORKDIR /app
COPY . .
CMD npm start
It says it couldn't bind the address, so it could be the IP or the port that causes the problem.
The IP looks like the IP of the Docker instance. But since the Docker instances are on two different machines, it should be the IP of the server, not of the Docker instance (in the mediasoup settings).
There are also RTP/RTCP ports that have to be opened in the Docker instance. They are normally also set in the mediasoup config file, usually a range of a few hundred ports that needs to be opened.
You should set your RTC min and max ports to 2000 and 2020 for testing purposes. Also, I'm guessing you are not forwarding these ports; in docker-compose, use 2000-2020:2000-2020, as sketched below. Also make sure to set your listenIps properly.
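In compose syntax, that forwarding might look like this (hypothetical, assuming mediasoup's rtcMinPort/rtcMaxPort are set to 2000/2020; mediasoup media traffic is primarily UDP):
services:
  app:
    image: app
    build: .
    ports:
      - "1440:443"
      - "2000-2020:2000-2020/udp"
      - "80:8080"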
If you are running mediasoup in Docker, the container where mediasoup is installed should be run in host network mode (a compose sketch follows below).
This is explained here:
How to use host network for docker compose?
and in the official docs:
https://docs.docker.com/network/host/
You should also pay attention to the mediasoup configuration settings webRtcTransport.listenIps and plainRtpTransport.listenIp: they tell the client which IP address your mediasoup server is listening on.
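A minimal sketch of that host-network setup (Linux only; note that with network_mode: host the ports: section is ignored, because the container shares the host's network stack directly):
services:
  app:
    image: app
    build: .
    network_mode: host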

Networking in Docker Compose file

I am writing a Docker Compose file for my web app. If I use links: to connect services with each other, do I also need to include ports:? And is depends_on: an alternative to links:? What is the best way to connect services with one another in a Compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example above (shown concretely below). If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start starting before client starts starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
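Concretely, that one-line addition to the minimal example looks like this:
  client:
    image: busybox
    command: wget -O- http://server/
    depends_on:
      - server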
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt for example) but it has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason to not use it.

How to configure a dockerfile and docker-compose for Jenkins

I'm absolutely new to Docker and Jenkins as well. I have a question about the configuration of the Dockerfile and docker-compose.yml file. I tried to use the simplest configuration to be able to set up these files correctly. Building and pushing work correctly, but the Jenkins application is not reachable on my localhost (127.0.0.1).
If I understand it correctly, it should by default be running on port 50000 (ARG agent_port=50000 in the "official" Jenkins Dockerfile). I tried 50000, 8080, and 80 as well; nothing is working. Do you have any advice, please? I'm using these files: https://github.com/fdolsky321/Jenkins_Docker
The second question is: what's the best way to handle crashes of the container? Let's say that if the container crashes, I want to recreate a new container with the same settings. Is the best way just to create a shell file like "crash.sh" that holds the information needed to create a new container with the same settings, as mentioned here: https://blog.codeship.com/ensuring-containers-are-always-running-with-dockers-restart-policy/
Thank you for any advice.
docker-compose for Jenkins
docker-compose.yml
version: '2'
services:
  jenkins:
    image: jenkins:latest
    ports:
      - 8080:8080
      - 50000:50000
    # uncomment for docker in docker
    privileged: true
    volumes:
      # enable persistent volume (warning: make sure that the local jenkins_home folder is created)
      - /var/wisestep/data/jenkins_home:/var/jenkins_home
      # mount docker sock and binary for docker in docker (only works on linux)
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/bin/docker:/usr/bin/docker
Replace ports 8080 and 50000 as needed for your host.
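A typical startup sequence might then be (hypothetical commands, assuming the compose file above):
docker-compose up -d
# follow the startup log, e.g. to grab the initial admin password:
docker-compose logs -f jenkins
After that, Jenkins should answer on http://localhost:8080.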
To recreate a new container with the same settings:
The volume-mounted jenkins_home is where all your jobs, settings, etc. are stored.
Take a backup of the mounted jenkins_home volume whenever you create a job, or as often as you want (a backup sketch follows the commands below).
Whenever there is a crash, run Jenkins with the same docker-compose file and replace the jenkins_home folder with the backup.
Rerun/restart Jenkins again:
List the container
docker ps -a
Restart container
docker restart <Required_Container_ID_To_Restart>
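For the backup itself, since jenkins_home is a bind mount in the compose file above, archiving the host directory is enough (a hypothetical example using the path from that file):
tar -czf jenkins_home_backup.tar.gz -C /var/wisestep/data jenkins_home
# restore by unpacking it back before restarting:
tar -xzf jenkins_home_backup.tar.gz -C /var/wisestep/data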
I've been using a docker-compose.yml that looks like the following:
version: '3.2'
volumes:
  jenkins-home:
services:
  jenkins:
    image: jenkins-docker
    build: .
    restart: unless-stopped
    ports:
      - target: 8080
        published: 8080
        protocol: tcp
        mode: host
    volumes:
      - jenkins-home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: jenkins-docker
My image is a locally built Jenkins image, based off of jenkins/jenkins:lts, that adds in some other components like docker itself, and I'm mounting the docker socket to allow me to run commands on the docker host. This may not be needed for your use case. The important parts for you are the ports being published, which for me is only 8080, and the volume for /var/jenkins_home to preserve the Jenkins configuration between image updates.
To recover from errors, I have restart: unless-stopped inside the docker-compose.yml to configure the container to automatically restart. If you're running this in swarm mode, that would be automatic.
I typically avoid defining a container name, but in this scenario, there will only ever be one jenkins-docker container, and I like to be able to view the logs with docker logs jenkins-docker to gather things like the initial administrator login token.
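If that token has already scrolled out of the log, it can also be read from the standard secrets file (using the container name above):
docker exec jenkins-docker cat /var/jenkins_home/secrets/initialAdminPassword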
My Dockerfile and other dependencies for this image are available at: https://github.com/bmitch3020/jenkins-docker
Hyper-V with Docker for Windows:
In that case, you must be sure to port-forward any published port (like 5000).
Open Hyper-V Manager and right-click on the machine defined there: you will be able to add port-forwarding rules so that localhost:5000 reaches your VM:5000.
