Docker - all-spark-notebook Communications link failure

I'm new to using Docker and Spark.
My docker-compose.yml file is:
volumes:
  shared-workspace:
services:
  notebook:
    image: docker.io/jupyter/all-spark-notebook:latest
    build:
      context: .
      dockerfile: Dockerfile-jupyter-jars
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace
And the Dockerfile-jupyter-jars is:
FROM docker.io/jupyter/all-spark-notebook:latest
USER root
RUN wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar
RUN mv mysql-connector-java-8.0.28.jar /usr/local/spark/jars/
USER jovyan
To start it up I run:
docker-compose up --build
The server is up and running and I want to use spark-sql, but it throws an error when trying to connect to the MySQL server:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
I can see mysql-connector-java-8.0.28.jar in the "jars" folder, and the same SQL statement works in a non-Docker Apache Spark installation.
The MySQL server is also reachable from the host where I'm running Docker.
Do I need to enable something to allow external connections? Any ideas?
Reference: https://hub.docker.com/r/jupyter/all-spark-notebook

The docker-compose.yml and Dockerfile-jupyter-jars files were correct. Since I was using mysql-connector-java-8.0.28.jar, the connection requires SSL unless it is explicitly disabled, so I added useSSL=FALSE to the JDBC URL:
jdbc:mysql://user:password#xx.xx.xx.xx:3306/inventory?useSSL=FALSE&nullCatalogMeansCurrent=true
I'm going to leave this example here: Docker - all-spark-notebook with MySQL dataset.
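For reference, a minimal PySpark sketch of the working connection (run from the notebook) could look like the following. The host, credentials, and the table name "products" are placeholders, not values from my actual setup:

# Minimal PySpark sketch: read a MySQL table over JDBC with SSL explicitly disabled.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("mysql-test").getOrCreate()

df = (spark.read.format("jdbc")
      .option("url", "jdbc:mysql://xx.xx.xx.xx:3306/inventory?useSSL=false&nullCatalogMeansCurrent=true")
      .option("driver", "com.mysql.cj.jdbc.Driver")  # Connector/J 8 driver class
      .option("dbtable", "products")                 # placeholder table name
      .option("user", "user")
      .option("password", "password")
      .load())

df.show()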

Related

Docker file shares on Ubuntu host

Good morning,
I am currently trying to figure out how to create file shares on Ubuntu as the host OS for Docker. On Windows and OSX you can set up file sharing in Docker Desktop's settings.
I need access to the file share in my docker-compose; as an example, see below:
version: '3.9'
services:
  node_gauc:
    image: node-g:v1
    ports:
      - "444:444" # https test port
    volumes:
      - ./NodeServer/cert/https.crt:/usr/share/node/cert/https.crt
      - ./NodeServer/cert/key.pem:/usr/share/node/cert/key.pem
    build:
      context: .
      dockerfile: ./NodeServer/dockerfile
    restart: unless-stopped
    container_name: node-g
If I don't have access, I get the following errors when I build and start the container:
ERROR: for node-g Cannot start service node_g: error while creating mount source path '/usr/share/t/work/6b37be0079afed03/NodeServer/cert/https.crt': mkdir /usr/share/t: read-only file system
ERROR: for node_g Cannot start service node_g: error while creating mount source path '/usr/share/t/work/6b37be0079afed03/NodeServer/cert/https.crt': mkdir /usr/share/t: read-only file system
ERROR: Encountered errors while bringing up the project.
I am still unsure why it's trying to create a directory, but I suppose that is another matter.
Is it possible to create a file share on an Ubuntu host server similar to what you can do on OSX (Mac) or Windows?
Many thanks for your help

How to access a website running in a container when you're using network_mode: host

I have a tricky problem because I need to access a private DB in AWS. In order to connect to this DB, I first need to create a tunnel like this:
ssh -L 127.0.0.1:LOCAL_PORT:DB_URL:PORT -N -J ACCOUNT#EMAIL.DOMAIN -i ~/KEY_LOCATION/KEY_NAME.pem PC_USER#PC_ADDRESS
Via 127.0.0.1:LOCAL_PORT I can connect to the DB in my Java app. Let's say the port is 9991 in this case.
My Docker files more or less look like this:
docker-compose.yml
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Dockerfile
FROM openjdk:11
RUN mkdir /home/app/
WORKDIR /home/app/
RUN mkdir logs
COPY ./target/MY_JAVA_APP.jar .
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "MY_JAVA_APP.jar"]
The image runs properly. However, if I try:
using localhost:8080/MY_APP fails
using 127.0.0.1/MY_APP fails
getting the container's IP and using it later fails
using host.docker.internal/MY_APP fails
I'm wondering how I can test my app. I know it's running because I get a successful message in the console and the new data was added to the DB, but I don't know how I can test it or access it. Any idea of the proper way to do it? Thanks.
P.S.:
I'm running my images in Docker Desktop for Windows.
I have another case using Tomcat 9 and running CMD ["catalina.sh", "run"], and I know it's working because I get this message in the console:
INFO [main] org.apache.catalina.startup.Catalina.start Server startup in [9905] milliseconds
But I cannot access it either.
I'm not really sure what the issue is based on the above information since I cannot replicate the system on my own machine.
However, these are some places to look:
You might be running into an issue similar to this one: https://github.com/docker/for-mac/issues/1031, because of the networking magic you are doing with SSH and the AWS DB.
You should try specifying either a build/Dockerfile or an image, and avoid specifying both:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01  # choose using an image
    build:                         # or building from a Dockerfile
      context: .                   # but not both
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Hope that helps 🤞🏻 and good luck 🍀
I guess you need to bind the port of your container.
Try adding the 'ports' property to your docker-compose file:
version: '3.4'
services:
  api:
    image: fanmixco/example:v0.01
    build:
      context: .
    ports:
      - 8080:8080
    network_mode: host
    environment:
      - SPRING_DATASOURCE_URL=jdbc:postgresql://host.docker.internal:9991/MY_DB
Have a look at https://docs.docker.com/compose/compose-file/compose-file-v3/#endpoint_mode

mediasoup v3 with Docker

I'm trying to run a WebRTC example (using mediasoup) in two Docker containers.
I want to run two servers as I am working on video calling across a set of instances!
My error:
Have you seen this error?
createProducerTransport null Error: port bind failed due to address not available [transport:udp, ip:'172.17.0.1', port:50517, attempt:1/50000]
I think it's something to do with the Docker network settings?
docker-compose.yml
version: "3"
services:
  db:
    image: mysql
    restart: always
  app:
    image: app
    build: .
    ports:
      - "1440:443"
      - "2000-2020"
      - "80:8080"
    depends_on:
      - db
  app2:
    image: app
    build: .
    ports:
      - "1441:443"
      - "2000-2020"
      - "81:8080"
    depends_on:
      - db
Dockerfile
FROM node:12
WORKDIR /app
COPY . .
CMD npm start
It says it couldn't bind the address, so it could be the IP or the port that causes the problem.
The IP seems to be the IP of the Docker instance. Although the Docker instances are on two different machines, it should be the IP of the server and not the Docker instance (in the mediasoup settings).
There are also the ports for the RTP/RTCP connections that have to be opened in the Docker instance. They are normally set in the mediasoup config file, usually a range of a few hundred ports that need to be opened.
You should set your RTC min and max ports to 2000 and 2020 for testing purposes. Also, I guess you are not forwarding these ports; in docker-compose use 2000-2020:2000-2020. Also make sure to set your listenIps properly.
If you are running mediasoup in Docker, the container where mediasoup is installed should be run in host network mode.
This is explained here:
How to use host network for docker compose?
and official docs
https://docs.docker.com/network/host/
Also, you should pay attention to the mediasoup configuration settings webRtcTransport.listenIps and plainRtpTransport.listenIp; they tell the client on which IP address your mediasoup server is listening.

Start Docker Container with docker-compose

I am trying to start a docker image (https://hub.docker.com/r/parrotstream/hbase/)
on Windows 10 with
docker-compose -p parrot up
but I get this error:
ERROR:
Can't find a suitable configuration file in this directory or any parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
Executing the command in the directory with the docker image in it does not work either.
I am new to using Docker and I am unsure how to start the container. According to the Docker Hub page of the image, this is all I have to do. Am I missing something?
Thanks
Edit:
As pointed out by the replies, I've downloaded the folder from GitHub, including the docker-compose.yml. I am now getting an error because of permissions:
ERROR: for hbase Cannot start service hbase: driver failed programming external connectivity on endpoint hbase (5fb66c3b2b0d3092edce09f03cc803cc3ea447c07a1a2135271238de626458c6): Error starting userland proxy: Bind for 0.0.0.0:8080: unexpected error Permission denied
ERROR: for hbase Cannot start service hbase: driver failed programming external connectivity on endpoint hbase (5fb66c3b2b0d3092edce09f03cc803cc3ea447c07a1a2135271238de626458c6): Error starting userland proxy: Bind for 0.0.0.0:8080: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.
Do I have a wrong configuration in Docker?
The actual docker-compose.yml that you are looking for may be the one hosted in their GitHub repo, found here.
version: '3'
services:
  hbase:
    container_name: hbase
    build:
      context: .
      dockerfile: Dockerfile
    image: parrotstream/hbase:latest
    external_links:
      - hadoop
      - zookeeper
    ports:
      - 8080:8080
      - 8085:8085
      - 9090:9090
      - 9095:9095
      - 60000:60000
      - 60010:60010
      - 60020:60020
      - 60030:60030
networks:
  default:
    external:
      name: parrot_default
By default, docker-compose tries to read the configuration from a file named docker-compose.yml within your current working directory. You can override this behavior with docker-compose -f <anotherfile.yml>.
Options:
  -f, --file FILE     Specify an alternate compose file
                      (default: docker-compose.yml)
Yes, the command needs a compose file, and the readme assumes that you have a docker-compose.yml in the directory where you execute the command.
You can find one in the repository linked from Docker Hub: parrot-stream/docker-hbase.
You need to create a docker-compose file as follows:
# docker-compose.yml
version: '2'
services:
  parrot:
    image: parrotstream/hbase
Then you can build and run it using:
docker-compose build parrot # build image
docker-compose up parrot # run

Docker - Run app on swarm manager (can't connect)

TLDR version:
How can I verify/set ports 7946 & 4789 on my swarm node so that I can view my application running from my docker-machine?
Complete question:
I am going through the Docker tutorials and am on part 4:
https://docs.docker.com/get-started/part4/#accessing-your-cluster
When I get to the "Accessing your cluster" section, it says that I should just be able to grab the IP address of one of my nodes displayed by docker-machine ls. I run that command, see the IP, put it into my browser (or alternatively use curl), and I receive the error:
This site can’t be reached
192.168.99.100 refused to connect.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_REFUSED
Below this step there is a note saying that before you enable swarm mode, which I assume means before you run:
docker-machine ssh myvm1 "docker swarm init --advertise-addr <myvm1 ip>"
you should check the following port settings:
Having connectivity trouble?
Keep in mind that in order to use the ingress network in the swarm, you need to have the following ports open between the swarm nodes before you enable swarm mode:
Port 7946 TCP/UDP for container network discovery.
Port 4789 UDP for the container ingress network.
I've spent the last few days going through documentation, redoing the steps, and trying everything I can to make this work, but nothing has been successful.
Can anyone explain/provide documentation to show me how to view/set these ports, or explain if I am missing some other important information?
UPDATE
I wasn't able to get swarm working, so I decided to just run everything from a docker-compose.yml file. Here is the code I used:
docker-compose.yml file:
version: '3'
services:
  www:
    build: .
    ports:
      - "80:80"
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/opt/www
  db:
    image: mysql:5.7
    volumes:
      - /var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: supersecure
      MYSQL_DATABASE: test_db
      MYSQL_USER: jake
      MYSQL_PASSWORD: supersecure
and a Dockerfile located in the same directory containing the following:
# A simple Flask app container.
FROM python:2.7
LABEL maintainer="your name here"
# Place app in container.
ADD . /opt/www
WORKDIR /opt/www
# Install dependencies.
RUN pip install -r requirements.txt
EXPOSE 80
ENV FLASK_APP index.py
ENV FLASK_DEBUG 1
CMD python index.py
You'll need to create any other files that are referenced in these two files (for example requirements.txt and index.py); those all live in the same directory as the Dockerfile and docker-compose.yml files. A minimal sketch of index.py is shown below. Please comment if anyone has questions.
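As an illustration only (this is an assumption, not the actual app used above), an index.py compatible with the Dockerfile could look like this; requirements.txt would then need at least Flask:

# index.py - hypothetical minimal Flask app matching the Dockerfile above
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from the www service!"

if __name__ == "__main__":
    # The Dockerfile exposes port 80 and runs "python index.py",
    # so bind to 0.0.0.0:80 to be reachable through the "80:80" mapping.
    app.run(host="0.0.0.0", port=80, debug=True)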

Resources