Docker file shares on Ubuntu host - docker

Good morning,
I am currently trying to figure out how to create file shares on Ubuntu as the host OS for Docker. On Windows and macOS you can set up file sharing through the Docker Desktop settings.
I require access to the file share in my docker-compose; see the example below:
version: '3.9'
services:
  node_gauc:
    image: node-g:v1
    ports:
      - "444:444" # https test port
    volumes:
      - ./NodeServer/cert/https.crt:/usr/share/node/cert/https.crt
      - ./NodeServer/cert/key.pem:/usr/share/node/cert/key.pem
    build:
      context: .
      dockerfile: ./NodeServer/dockerfile
    restart: unless-stopped
    container_name: node-g
If I don't have access when I build and start the container, I get the following errors:
ERROR: for node-g Cannot start service node_g: error while creating mount source path '/usr/share/t/work/6b37be0079afed03/NodeServer/cert/https.crt': mkdir /usr/share/t: read-only file system
ERROR: for node_g Cannot start service node_g: error while creating mount source path '/usr/share/t/work/6b37be0079afed03/NodeServer/cert/https.crt': mkdir /usr/share/t: read-only file system
ERROR: Encountered errors while bringing up the project.
I am still unsure why it's trying to create a directory, but I suppose that is another matter.
Is it possible to create a file share on an Ubuntu host server, similar to what you can do on macOS or Windows?
Many thanks for your help
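(A note on the mkdir in the error: on a Linux host the Docker daemon bind-mounts host paths directly, so there is no file-sharing dialog to configure as there is in Docker Desktop, and the daemon will create a missing bind-mount source as an empty directory. A quick sanity check, assuming the compose file sits next to the NodeServer directory, is to confirm the relative paths resolve to existing files:

ls -l ./NodeServer/cert/https.crt ./NodeServer/cert/key.pem   # both files must already exist on the host
docker-compose config | grep -A 3 volumes                     # prints the absolute source paths Docker will mount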

Related

Docker - all-spark-notebook Communications link failure

I'm new to using Docker and Spark.
My docker-compose.yml file is
volumes:
  shared-workspace:
services:
  notebook:
    image: docker.io/jupyter/all-spark-notebook:latest
    build:
      context: .
      dockerfile: Dockerfile-jupyter-jars
    ports:
      - 8888:8888
    volumes:
      - shared-workspace:/opt/workspace
And the Dockerfile-jupyter-jars is:
FROM docker.io/jupyter/all-spark-notebook:latest
USER root
RUN wget https://repo1.maven.org/maven2/mysql/mysql-connector-java/8.0.28/mysql-connector-java-8.0.28.jar
RUN mv mysql-connector-java-8.0.28.jar /usr/local/spark/jars/
USER jovyan
To start it up I run
docker-compose up --build
The server is up and running and I want to use spark-sql, but it is throwing an error when trying to connect to the MySQL server:
com.mysql.cj.jdbc.exceptions.CommunicationsException: Communications link failure
I can see mysql-connector-java-8.0.28.jar in the "jars" folder, and I have used the same SQL statement in a non-Docker Apache Spark installation, where it works.
The MySQL server is also reachable from the host where I'm running Docker.
Do I need to enable something to reach external connections? Any idea?
Reference: https://hub.docker.com/r/jupyter/all-spark-notebook
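One quick way to rule out a basic network problem is to probe the MySQL port from inside the notebook container (a rough sketch; the xx.xx.xx.xx placeholder matches the URL below and must be replaced with the real address):

docker-compose exec notebook bash -c 'cat < /dev/null > /dev/tcp/xx.xx.xx.xx/3306 && echo reachable'   # bash-only TCP probe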
The docker-compose.yml and Dockerfile-jupyter-jars files were correct. Since I was using mysql-connector-java-8.0.28.jar, the connection requires SSL unless it is explicitly disabled in the JDBC URL:
jdbc:mysql://user:password@xx.xx.xx.xx:3306/inventory?useSSL=FALSE&nullCatalogMeansCurrent=true
I'm going to leave this example here: Docker - all-spark-notebook with MySQL dataset

When running docker-compose remotely, an error occurs with mounting volumes

I am trying to run a project with docker-compose on a remote server. Everything works, but as soon as I add the volume mount, it gives an error:
Error response from daemon: invalid mount config for type "bind": invalid mount path: 'C:/Users/user/Projects/my-raspberry-test' mount path must be absolute
To run it I use the tools from PhpStorm.
The docker-compose.yml file itself looks like this:
version: "3"
services:
php:
image: php:cli
volumes:
- ./:/var/www/html/
working_dir: /var/www/html/
ports:
- 80:80
command: php -S 0.0.0.0:80
I checked over SSH that:
the daemon is running,
Docker works (with a similar Dockerfile doing the same tasks),
docker-compose works (with the same file).
I also checked a remote Docker run from PhpStorm with this Dockerfile:
FROM php:cli
COPY . /var/www/html/
WORKDIR /var/www/html/
CMD php -S 0.0.0.0:80
It didn’t give an error and it worked.
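That plain-Dockerfile run works against a remote daemon because COPY ships the project files inside the build context, whereas a bind mount has to reference a path that already exists on the remote host's filesystem. Roughly the equivalent commands (the image tag here is made up):

docker build -t my-raspberry-test .          # the build context, including the project files, is sent to the remote daemon
docker run --rm -p 80:80 my-raspberry-test   # runs the CMD baked into the image, no host path needed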
OS on devices:
PC: Windows 10
Server: Fedora Server
Without mounting the volume in docker-compose, everything starts. Has anyone faced a similar problem?
php is just used as an example.
The path must be absolute on the remote host, and the project data itself must be present there. That is, you need to upload the project to the remote host.
I corrected everything like this:
volumes:
  - /home/peter-alexeev/my-test:/var/www/html/
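A sketch of the full workflow, assuming the server is reachable as "fedora-server" over SSH (the hostname is hypothetical) and that your docker-compose version accepts an ssh:// DOCKER_HOST:

rsync -av ./ peter-alexeev@fedora-server:/home/peter-alexeev/my-test/   # upload the project to the remote host
DOCKER_HOST=ssh://peter-alexeev@fedora-server docker-compose up -d      # run compose against the remote daemon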

How to mount large volume in Docker Desktop (WSL2)?

I am new to Docker and tried to set up Plex Media Server (https://docs.linuxserver.io/images/docker-plex) on Windows, using Docker Desktop and WSL2. I used docker-compose with the yml file shown below. I need to mount my media folders (on Windows drive F:) into the Docker container, for obvious reasons.
I read that it is recommended to load the data into the WSL file system (https://docs.docker.com/docker-for-windows/wsl/#best-practices and https://stackoverflow.com/a/64238886/6149322); however, since the media library is very large (several hundred GB), it needs to be kept on a different hard drive. Hence, I think I cannot just move it to the WSL file system.
When I try to run docker-compose -f "docker-compose.yml" up -d --build, I get the following error message and a popup from Docker Desktop ("Zugriff verweigert" is German for "access denied"):
Creating plex ... error
ERROR: for plex Cannot create container for service plex: Zugriff verweigert
ERROR: for plex Cannot create container for service plex: Zugriff verweigert
ERROR: Encountered errors while bringing up the project.
Do you have any ideas or hints for me? Thank you very much!
docker-compose.yml file:
version: "2.1"
services:
plex:
image: ghcr.io/linuxserver/plex
container_name: plex
network_mode: bridge
environment:
- PUID=1000
- PGID=1000
- VERSION=docker
volumes:
- /f/Plex:/config
- /f/Serien:/tv
- /f/Filme:/movies
ports:
- 32400:32400
restart: unless-stopped
I think I have resolved this issue as follows. I am not entirely sure which step was the key...
Set "Ubuntu" as my default wsl system (instead of docker-desktop-data) using the command wsl --set-default Ubuntu in PowerShell
Copied my docker-compose.yml into the WSL file system. You can mount the WSL system in your Windows Explorer by typing \\WLS$ in the path file of Windows Explorer.
Opened the Ubuntu shell and run the docker-compose command from there. Now it works fine with the following volume definition
volumes:
  - /mnt/f/Plex:/config
  - /mnt/f/Serien:/tv
  - /mnt/f/Filme:/movies
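For reference, WSL2 normally auto-mounts fixed Windows drives under /mnt/<letter>, so you can verify the paths and start the stack from the Ubuntu shell (a quick check, assuming F: is a fixed drive that WSL has mounted):

ls /mnt/f/Plex /mnt/f/Serien /mnt/f/Filme   # the Windows folders should be visible here
docker-compose up -d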
Hope this helps anyone with a similar problem :)

Start Docker Container with docker-compose

I am trying to start a docker image (https://hub.docker.com/r/parrotstream/hbase/)
on Windows 10 with
docker-compose -p parrot up
but I get this error:
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
Executing the command in the directory with the docker image in it does not work either.
I am new to using Docker and I am unsure how to start the container. According to the Docker Hub page of the image, this is all I have to do. Am I missing something?
Thanks
Edit:
As pointed out in the replies, I've downloaded the folder from GitHub, including the docker-compose.yml. I am now getting a permission error:
ERROR: for hbase Cannot start service hbase: driver failed programming external connectivity on endpoint hbase (5fb66c3b2b0d3092edce09f03cc803cc3ea447c07a1a2135271238de626458c6): Error starting userland proxy: Bind for 0.0.0.0:8080: unexpected error Permission denied
ERROR: for hbase Cannot start service hbase: driver failed programming external connectivity on endpoint hbase (5fb66c3b2b0d3092edce09f03cc803cc3ea447c07a1a2135271238de626458c6): Error starting userland proxy: Bind for 0.0.0.0:8080: unexpected error Permission denied
ERROR: Encountered errors while bringing up the project.
Do I have a wrong configuration in docker?
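Regarding the "Bind for 0.0.0.0:8080 ... Permission denied" part, one thing worth checking on Windows is whether the port is already in use or sits in a reserved port range (purely a diagnostic suggestion, run in PowerShell or cmd):

netstat -ano | findstr :8080                                      # anything already listening on 8080?
netsh interface ipv4 show excludedportrange protocol=tcp         # port ranges reserved by Windows/Hyper-V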
The actual docker-compose.yml that you are looking for may be the one hosted in their GitHub repo.
version: '3'
services:
  hbase:
    container_name: hbase
    build:
      context: .
      dockerfile: Dockerfile
    image: parrotstream/hbase:latest
    external_links:
      - hadoop
      - zookeeper
    ports:
      - 8080:8080
      - 8085:8085
      - 9090:9090
      - 9095:9095
      - 60000:60000
      - 60010:60010
      - 60020:60020
      - 60030:60030
networks:
  default:
    external:
      name: parrot_default
By default, docker-compose tries to read the configuration from a file named docker-compose.yml within your current working directory. You can override this behavior with docker-compose -f <anotherfile.yml>.
Options:
-f, --file FILE Specify an alternate compose file
(default: docker-compose.yml)
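For example (the file name here is only a placeholder):

docker-compose -f hbase-compose.yml -p parrot up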
Yes, the command needs a compose file, and the readme assumes that you have a docker-compose.yml in the directory where you execute the command.
You can find one in the linked repository from DockerHub parrot-stream/docker-hbase
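A rough sketch of that route, assuming the repository lives at the GitHub path named above:

git clone https://github.com/parrot-stream/docker-hbase.git
cd docker-hbase
docker-compose -p parrot up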
You need to create a docker-compose file as follows
# docker-compose.yml
version: '2'
services:
  parrot:
    image: parrotstream/hbase
Then you can build and run it using:
docker-compose build parrot # build image
docker-compose up parrot # run

Dockerfiles not found when running docker-compose on Windows (boot2docker)

I'm hoping that I've just missed something terribly obvious, but here's the situation I'm faced with.
Problem
Running docker-compose on Windows after following the docker-compose install steps from the website.
The docker-compose.yml file works fine on Unix systems (tested on Mac).
It currently fails immediately on Windows because it cannot locate any Dockerfiles for the services defined in the yml file (the error output was captured in a screenshot).
NOTE: The screenshot might be a bit confusing. The environment variable below is called GOPATH, but the folder on my colleague's computer is also called GOPATH. This gives the impression that the env var isn't set correctly, but it is indeed set.
version: '3'
services:
  renopost:
    depends_on:
      - reno-cassandra
      - reno-kafka
      - reno-consul
    build:
      context: ${GOPATH}/src/renopost
      dockerfile: ${GOPATH}/src/renopost/docker/dev/Dockerfile
    container_name: renopost
    image: renopost
    ports:
      - "4000:4000"
    volumes:
      - ${GOPATH}/src/renopost:/go/src/renopost
Above is a snippet of the docker-compose.yml file that is being run. The GOPATH env variable is indeed set, and when following the directory path listed, I can confirm the file exists at that location.
Is there some interaction here with the Oracle VirtualBox VM that boot2docker uses where it isn't actually finding that file?
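One thing worth checking: boot2docker's VirtualBox VM only shares C:\Users with the VM by default, so a GOPATH outside that tree may simply not be visible to the daemon. You can inspect what the VM actually sees (the machine name "default" is an assumption):

docker-machine ssh default "ls /c/Users"   # lists the Windows folders the VM can reach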
