mediasoup v3 with Docker

I'm trying to run a WebRTC example (using mediasoup) in two Docker containers. I want to run two servers because I am working on video calling across a set of instances!
My error (has anyone seen this before?):
createProducerTransport null Error: port bind failed due to address not available [transport:udp, ip:'172.17.0.1', port:50517, attempt:1/50000]
I think it has something to do with how the Docker network is set up.
docker-compose.yml
version: "3"
services:
  db:
    image: mysql
    restart: always
  app:
    image: app
    build: .
    ports:
      - "1440:443"
      - "2000-2020"
      - "80:8080"
    depends_on:
      - db
  app2:
    image: app
    build: .
    ports:
      - "1441:443"
      - "2000-2020"
      - "81:8080"
    depends_on:
      - db
Dockerfile
FROM node:12
WORKDIR /app
COPY . .
CMD npm start

It says it couldn't bind the address, so either the IP or the port is causing the problem.
The IP looks like the IP of the Docker instance, but if the Docker instances are on two different machines, the mediasoup settings should use the IP of the server, not of the Docker instance.
There are also the RTC ports that have to be opened in the Docker instance. They are normally set in the mediasoup config file as well, usually a range of a few hundred ports.

You should set your RTC min and max port to 2000 and 2020 for testing purposes. Also, I suspect you are not forwarding these ports. In docker-compose, use 2000-2020:2000-2020, as sketched below. Also make sure to set your listenIps properly.
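As a sketch (assuming the mediasoup worker is configured with rtcMinPort: 2000 and rtcMaxPort: 2020), the app service's ports section would forward the RTC range like this; mediasoup media traffic is UDP by default, hence the /udp suffix:
services:
  app:
    image: app
    build: .
    ports:
      - "1440:443"
      - "2000-2020:2000-2020/udp"  # host and container RTC ranges must match
      - "80:8080"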

If you are running mediasoup in Docker, the container where mediasoup is installed should run in network mode host.
This is explained here:
How to use host network for docker compose?
and in the official docs:
https://docs.docker.com/network/host/
Also pay attention to the mediasoup configuration settings webRtcTransport.listenIps and plainRtpTransport.listenIp: they tell the client on which IP address your mediasoup server is listening.
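For illustration, the relevant transport options in a mediasoup v3 server could look like the following sketch, where YOUR_PUBLIC_IP is a placeholder for the address clients should actually reach:
// sketch of mediasoup v3 transport options; adjust to your config file's layout
webRtcTransport: {
  listenIps: [
    { ip: '0.0.0.0', announcedIp: 'YOUR_PUBLIC_IP' }  // placeholder IP
  ]
}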

Related

Docker - all-spark-notebook Communications link failure

I'm new to using Docker and Spark.
My docker-compose.yml file is:
volumes:
  shared-workspace:
services:
  notebook:
    image: docker.io/jupyter/all-spark-notebook:latest
    build:
      context: .
      dockerfile: Dockerfile-jupyter-jars
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace
And the Dockerfile-jupyter-jars is:
FROM docker.io/jupyter/all-spark-notebook:latest
USER root
RUN wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar
RUN mv mysql-connector-java-8.0.28.jar /usr/local/spark/jars/
USER jovyan
To start it up, I run:
docker-compose up --build
The server is up and running and I'm interested in using spark-sql, but it throws an error trying to connect to the MySQL server:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
I can see mysql-connector-java-8.0.28.jar in the "jars" folder, and I have used the same SQL instruction in a non-Docker Apache Spark installation, where it works.
The MySQL DB server is also reachable from the same machine where I'm running Docker.
Do I need to enable something to reach external connections? Any ideas?
Reference: https://hub.docker.com/r/jupyter/all-spark-notebook
The docker-compose.yml and Dockerfile-jupyter-jars files were correct. Since I was using mysql-connector-java-8.0.28.jar, it requires SSL, or SSL has to be disabled explicitly:
jdbc:mysql://user:password@xx.xx.xx.xx:3306/inventory?useSSL=FALSE&nullCatalogMeansCurrent=true
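For reference, a minimal PySpark read against a URL of that shape could look like this sketch (host, database, table name, and credentials are placeholders):
df = spark.read \
    .format("jdbc") \
    .option("url", "jdbc:mysql://xx.xx.xx.xx:3306/inventory?useSSL=false&nullCatalogMeansCurrent=true") \
    .option("driver", "com.mysql.cj.jdbc.Driver") \
    .option("dbtable", "some_table") \
    .option("user", "user") \
    .option("password", "password") \
    .load()
df.show()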
I'm going to leave this example here for: Docker - all-spark-notebook with MySQL dataset

How to access a website running in a container when you're using network_mode: host

I have a very tricky topic because I need to access a private DB in AWS. In order to connect to this DB, first I need to create a tunnel like this:
ssh -L 127.0.0.1:LOCAL_PORT:DB_URL:PORT -N -J ACCOUNT@EMAIL.DOMAIN -i ~/KEY_LOCATION/KEY_NAME.pem PC_USER@PC_ADDRESS
Via 127.0.0.1:LOCAL_PORT I can then connect to the DB in my Java app. Let's say the port is 9991 in this case.
My Docker files look more or less like this:
docker-compose.yml
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Dockerfile
FROM openjdk:11
RUN mkdir /home/app/
WORKDIR /home/app/
RUN mkdir logs
COPY ./target/MY_JAVA_APP.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "MY_JAVA_APP.jar"]
The image runs properly. However, if I try:
using localhost:8080/MY_APP, it fails
using 127.0.0.1/MY_APP, it fails
getting the container's IP and using it later, it fails
using host.docker.internal/MY_APP, it fails
I'm wondering how I can test my app. I know it's running because I get a successful message in the console and the new data was added to the DB, but I don't know how I can test it or access it. Any idea of the proper way to do it? Thanks.
P.S.:
I'm running my images in Docker Desktop for Windows.
I have another case using Tomcat 9 and running CMD ["catalina.sh", "run"], and I know it's working because I get this message in the console:
INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [9905] milliseconds
But again, I cannot access it.
I'm not really sure what the issue is based on the above information, since I cannot replicate the system on my own machine.
However, these are some places to look:
you might be running into an issue similar to https://github.com/docker/for-mac/issues/1031 because of the networking magic you are doing with ssh and the AWS DB
you should try specifying either a build/Dockerfile or an image, and avoid specifying both:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01   # choose using an image
    build:                          # or building from a Dockerfile
      context: .                    # but not both
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Hope that helps 🤞🏻 and good luck 🍀
I guess you need to bind the port of your container.
Try adding the 'ports' property to your docker-compose file:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    ports:
      - 8080:8080
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Have a look at https://docs.docker.com/compose/compose-file/compose-file-v3/#endpoint_mode

Networking in Docker Compose file

I am writing a Docker Compose file for my web app. If I use 'links' to connect services with each other, do I also need to include 'ports'? And is 'depends_on' an alternative to 'links'? What is the best way to connect services to one another in a Compose file?
The core setup for this is described in Networking in Compose. If you do absolutely nothing, then one service can call another using its name in the docker-compose.yml file as a host name, using the port the process inside the container is listening on.
Up to startup-order issues, here's a minimal docker-compose.yml that demonstrates this:
version: '3'
services:
  server:
    image: nginx
  client:
    image: busybox
    command: wget -O- http://server/
    # Hack to make the example actually work:
    # command: sh -c 'sleep 1; wget -O- http://server/'
You shouldn't use links: at all. It was an important part of first-generation Docker networking, but it's not useful on modern Docker. (Similarly, there's no reason to put expose: in a Docker Compose file.)
You always connect to the port the process inside the container is running on. ports: are optional; if you have ports:, cross-container calls always connect to the second port number and the remapping doesn't have any effect. In the example above, the client container always connects to the default HTTP port 80, even if you add ports: ['12345:80'] to the server container to make it externally accessible on a different port.
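For instance, if you add ports: ['12345:80'] to the server container in that example, the two views would look like this sketch:
# from the host: use the published (first) port
curl http://localhost:12345/
# from inside the client container: still the container (second) port
wget -O- http://server/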
depends_on: affects two things. Try adding depends_on: [server] to the client container in the example. If you look at the "Starting..." messages that Compose prints out when it starts, this will force server to start starting before client starts starting, but this is not a guarantee that server is up and running and ready to serve requests (this is a very common problem with database containers). If you start only part of the stack with docker-compose up client, this also causes server to start with it.
A more complete typical example might look like:
version: '3'
services:
  server:
    # The Dockerfile COPYs static content into the image
    build: ./server-based-on-nginx
    ports:
      - '12345:80'
  client:
    # The Dockerfile installs
    # https://github.com/vishnubob/wait-for-it
    build: ./client-based-on-busybox
    # ENTRYPOINT and CMD will usually be in the Dockerfile
    entrypoint: wait-for-it.sh server:80 --
    command: wget -O- http://server/
    depends_on:
      - server
SO questions in this space seem to have a number of other unnecessary options. container_name: explicitly sets the name of the container for non-Compose docker commands, rather than letting Compose choose it, and it provides an alternate name for networking purposes, but you don't really need it. hostname: affects the container's internal host name (what you might see in a shell prompt, for example) but has no effect on other containers. You can manually create networks:, but Compose provides a default network for you and there's no reason not to use it.
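To make those three options concrete, here is a sketch of what they look like in a Compose file (the names are made up; all three are usually unnecessary):
services:
  server:
    image: nginx
    container_name: my-server   # fixed name for plain `docker` commands and an extra network alias
    hostname: server-inside     # changes only the container's own idea of its host name
networks:
  default: {}                   # Compose would create an equivalent default network anyway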

Docker - Run app on swarm manager (cant connect)

TLDR version:
How can I verify/set ports 7946 & 4789 on my swarm node so that I can view my application running from my docker-machine?
Complete question:
I am going through the Docker tutorials and am on step 4:
https://docs.docker.com/get-started/part4/#accessing-your-cluster
When I get to the "accessing your cluster" section, it says that I should just be able to grab the IP address from one of my nodes displayed using docker-machine ls. I run that command, see the IP, grab it, and put it into my browser (or alternatively use curl), and I receive the error:
This site can’t be reached
192.168.99.100 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Below this step it has a note saying that before you enable swarm mode, which I assume means when you run:
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
you should check the following port settings:
Having connectivity trouble?
Keep in mind that in order to use the ingress network in the swarm, you need to have the following ports open between the swarm nodes before you enable swarm mode:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
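For context, on a plain Linux host with ufw these ports could be opened with something like the following sketch (docker-machine's VirtualBox VMs and cloud security groups each have their own mechanism instead; the docs also list 2377/tcp for cluster management traffic):
sudo ufw allow 2377/tcp   # cluster management (swarm init/join)
sudo ufw allow 7946/tcp   # container network discovery
sudo ufw allow 7946/udp
sudo ufw allow 4789/udp   # ingress overlay network (VXLAN)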
I've spent the last few days going through documentation, redoing the steps and trying everything I can to have this work but nothing has been successful.
Can anyone explain/provide documentation to show me how to view/set these ports, or explain if I am missing some other important information?
UPDATE
I wasn't able to get swarm working, so I decided to just run everything from a docker-compose.yml file. Here is the code I used:
docker-compose.yml file:
version: '3'
services:
  www:
    build: .
    ports:
      - "80:80"
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/opt/www
  db:
    image: mysql:5.7
    volumes:
      - /var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: supersecure
      MYSQL_DATABASE: test_db
      MYSQL_USER: jake
      MYSQL_PASSWORD: supersecure
and a Dockerfile located in the same directory containing the following:
# A simple Flask app container.
FROM python:2.7
LABEL maintainer="your name here"
# Place app in container.
ADD . /opt/www
WORKDIR /opt/www
# Install dependencies.
RUN pip install -r requirements.txt
EXPOSE 80
ENV FLASK_APP index.py
ENV FLASK_DEBUG 1
CMD python index.py
You'll need to create any other files which are referenced in these two files (for example requirements.txt and index.py); those all live in the same directory as the Dockerfile and docker-compose.yml files, as sketched below. Please comment if anyone has questions.
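For completeness, a minimal index.py that would satisfy this Dockerfile could look like the sketch below (requirements.txt would then contain at least flask; the tutorial's real app differs):
# index.py - minimal Flask app serving on the EXPOSEd port 80
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from the compose stack!'

if __name__ == '__main__':
    # the Dockerfile's CMD runs this file directly
    app.run(host='0.0.0.0', port=80)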

Access host machine dns from a docker container

I'm using Docker beta on a Mac and have some services set up in service-a/docker-compose.yml:
version: '2'
services:
  service-a:
    # ...
    ports:
      - '4000:80'
I then set up the following in /etc/hosts:
::1 service-a.here
127.0.0.1 service-a.here
and I've got an nginx server running that proxies service-a.here to localhost:4000.
So on my mac I can just run: curl http://service-a.here. This all works nicely.
Now, I'm building another service, service-b/docker-compose.yml:
version: '2'
services:
  service-b:
    # ...
    ports:
      - '4001:80'
    environment:
      SERVICE_A_URL: service-a.here
service-b needs service-a for a couple of things:
It needs to redirect the user in the browser to the $SERVICE_A_URL
It needs to perform HTTP requests to service-a, also using the $SERVICE_A_URL
With this setup, only the redirection (1.) works. HTTP requests (2.) do not work because the service-b container has no notion of service-a.here in its DNS.
I tried adding service-a.here using the extra_hosts configuration option, but I'm not sure what to set it to. localhost will not work, of course.
Note that I really want to keep the docker-compose files separate (joining them would not fix my problem, by the way) because they both already have a lot of services running inside them.
Is there a way to have access to the DNS resolution on localhost from inside a docker container, so that for instance curl service-a.here will work from inside a container?
You can use the 'links' instruction in your docker-compose.yml file to automatically resolve the address from your service-b container:
service-b:
  image: blabla
  links:
    - service-a:service-a
service-a:
  image: blablabla
You will now have a line in the /etc/hosts of your service-b container saying:
172.17.0.X service-a
Note also that service-a will be created before service-b when composing your app. I'm not sure how you can specify a particular IP after that, but Docker's documentation is pretty well done. Hope that's what you were looking for.
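If the goal is specifically to reach the host's nginx (so that curl http://service-a.here works from inside service-b), another option is Docker's host-gateway alias; this is a sketch assuming Docker 20.10 or newer, not something from the original setup:
services:
  service-b:
    # ...
    extra_hosts:
      - "service-a.here:host-gateway"  # resolves to the Docker host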
