Docker-compose cannot find existing SMB volume

I am attempting to run a Jellyfin server instance in a Docker container. I am trying to run the server on one machine, but my videos reside on a Linux- based NAS that is offering them as an SMB service.
Using docker-compose; my docker-compose yaml file is shown below:
version: "3.5"
services:
jellyfin:
image: jellyfin/jellyfin
container_name: jellyfin
user: <REDACTED>
network_mode: "host"
volumes:
- /path/to/config:/config
- /path/to/cache:/cache
- myservice:/media
restart: "unless-stopped"
volumes:
myservice:
driver: host
driver_opts:
type: cifs
o: "username=<REDACTED>,password=<REDACTED>"
device: "//<Service name or IP address>"
Note that I have tried using both the actual name of the SMB service and the service's IP address; it makes no difference. While my host machine is able to access the SMB service, when running
docker-compose up
I get the following message:
ERROR: Volume myservice specifies nonexistent driver host
I am clearly doing something wrong, but I have found very little useful documentation on how to mount network volumes with docker or docker-compose (for example, the documentation does not seem to enumerate the available volume drivers or driver types).
Can someone please tell me what I am doing wrong here? How can I successfully mount the SMB service?
If using SMB is not possible, is it possible to use another network service type? My NAS also offers my videos as NFS services.
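For reference, Docker has no "host" volume driver (the error above says as much); the built-in driver is "local", whose driver_opts mirror mount(8) options, and the device for a CIFS mount needs a full //server/share path. A hedged sketch of that shape (server, share name and options are placeholders, not a verified fix):

volumes:
  myservice:
    driver: local
    driver_opts:
      type: cifs
      o: "username=<REDACTED>,password=<REDACTED>"
      device: "//<server IP>/<share name>"

The NFS alternative follows the same pattern with type: nfs.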

Related

Connecting dockerized apps network to do api call

I have a bit of a problem with connecting the dots.
I managed to dockerize our legacy app and our newer app, but now I need to make them talk to one another via API calls.
Projects:
Project1 = using project1_appnet (bridge driver)
Project2 = using project2_appnet (bridge driver)
Project3 = using project3_appnet (bridge driver)
On my local machine, I have these 3 projects in 3 separate folders. Each project has its own app, db and cache services.
This is the docker-compose.yml for one of the projects. (They all have nearly the same docker-compose.yml, differing only in image and volume paths.)
version: '3'
services:
  app:
    build: ./docker/app
    image: 'cms/app:latest'
    networks:
      - appnet
    volumes:
      - './:/var/www/html:cached'
    ports:
      - '${APP_PORT}:80'
    working_dir: /var/www/html
  cache:
    image: 'redis:alpine'
    networks:
      - appnet
    volumes:
      - 'cachedata:/data'
  db:
    image: 'mysql:5.7'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_ROOT_PASSWORD}'
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USER}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '${DB_PORT}:3306'
    networks:
      - appnet
    volumes:
      - 'dbdata:/var/lib/mysql'
networks:
  appnet:
    driver: bridge
volumes:
  dbdata:
    driver: local
  cachedata:
    driver: local
Question:
How can I make them talk to one another via API calls? (Both on my local machine for development and in a prod environment.)
In production the setup will be a bit different: they will be on different machines, but still in the same VPC, or possibly even communicating over the public network. What is the setting for that?
Note:
I have been looking at links, but apparently it is deprecated for v3 and not really recommended.
I tried curl from the project1 container to the project2 container:
root@bc3afb31a5f1:/var/www/html# curl localhost:8050/login
curl: (7) Failed to connect to localhost port 8050: Connection refused
If your final setup will have each service running on a physically different system, there aren't really any choices. One system can't directly access the Docker network on another system; the only way service 1 will be able to reach service 2 is via its host's DNS name (or IP address) and the published port. Since this will differ between environments, I'd suggest making that value a configurable environment variable:
environment:
  SERVICE_2_URL: 'http://service-2-host.example.com/' # default port 80
Once you've settled on that, you can mostly use the same setup for a single-host deployment. If your developer systems use Docker for Mac or Docker for Windows, you should be able to use a special Docker hostname to reach the other service:
environment:
  SERVICE_2_URL: 'http://host.docker.internal:8082/'
(If you use Linux on the desktop you will have to know some IP address for the host; not localhost because that means "this container", and not the docker0 interface address because that will be on a specific network, but something like the host's eth0 address.)
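(On Docker 20.10 and later, which postdates much of this thread, a Linux host can expose the same host.docker.internal alias through the special host-gateway value; a minimal sketch:)

services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"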
Your other option is to "borrow" the other Docker Compose network as an external network. There is some trickiness if all of your Docker Compose setups have the same names; from some experimentation it seems like the Docker-internal DNS will always resolve to your own Docker Compose file first, and you have to know something like the Compose-assigned container name (which isn't hard to reconstruct and is stable) to reach the other service.
version: '3'
networks:
  app2:
    external:
      name: app2_appnet
services:
  app:
    networks:
      - appnet
      - app2 # the network key declared above
    environment:
      SERVICE_2_URL: 'http://app2_app_1/' # using the service-internal port
      MYSQL_HOST: db # in this docker-compose.yml
(I would suggest using the Docker Compose default network over declaring your own; that will mostly let you delete all of the networks: blocks in the file without any ill effect, but in this specific case you will need to declare networks: [default, app2_default] to connect to both.)
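A minimal sketch of that default-network variant, assuming the second project's Compose-created network is named app2_default:

version: '3'
services:
  app:
    networks:
      - default
      - app2_default
    environment:
      SERVICE_2_URL: 'http://app2_app_1/'
networks:
  app2_default:
    external: true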
You may also consider a multi-host container solution when you're starting to look at this. Kubernetes is kind of heavy-weight, but it will run containers on any node in the cluster (you don't specifically have to worry about placement) and it provides both namespaces and automatic DNS resolution for you; you can just set SERVICE_2_URL: 'http://app.app2/' to point at the other namespace without worrying about these networking details.
If you run this docker-compose locally, then since app and db are on the same network (appnet), app should be able to talk to db using the service name and container port: db:3306.
In production, if app and db are on different machines, app would probably need to talk to the database using an IP or domain name.
Considering that you are using different machines for the different docker deployments, you could put them behind a regular webserver (Apache2, Nginx) and route the traffic from the specific domain to $APP_PORT using a simple vhost. I prefer doing that over directly exposing the container to the network; this way you can also host multiple applications on the same machine if you like. So I suggest you should not try to connect docker networks, but "regular" ones.
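A containerized variant of the same idea is the nginx-proxy image, which generates vhosts from container metadata. This sketch assumes the jwilder/nginx-proxy image and a placeholder domain; a host-installed Apache2/Nginx vhost achieves the same thing:

version: '3'
services:
  proxy:
    image: jwilder/nginx-proxy
    ports:
      - '80:80'
    volumes:
      # nginx-proxy watches the Docker API to generate vhost configs
      - /var/run/docker.sock:/tmp/docker.sock:ro
  app:
    image: 'cms/app:latest'
    environment:
      VIRTUAL_HOST: project1.example.com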
I was playing around with inspect and cURL, and I think I found the solution.
Locally:
I inspected the container and looked at NetworkSettings.Networks.<network name>.Gateway, which is 172.25.0.1.
Then I got the exposed port, which is 8050.
Then I ran curl inside the app1 container, curl 172.25.0.1:8050/login, to check whether app1 can make an HTTP request to the app2 container. Or: docker exec -it project1_app_1 curl 172.25.0.1:8050/login
Vice versa, I ran curl 172.25.0.1:80 for app2 -> app1. Or: docker exec -it project2_app_1 curl 172.25.0.1:80
The only issue is that the Gateway value changes when we restart via docker-compose up -d.
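One way to keep that gateway stable is to pin the subnet in the network definition, so the gateway stays at its .1 address across restarts. A sketch (the subnet value is arbitrary; ipam support varies by compose file version):

networks:
  appnet:
    driver: bridge
    ipam:
      config:
        - subnet: 172.25.0.0/16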
In production, likewise:
I am not that pro with networking and stuff. My estimate for production would be:
Do curl app2-domain.com, which is pointed at the app by the webserver, as they are on their own machines (even with a load balancer).

Cannot use user-defined bridge in swarm compose yaml file

I learned from the docker documentation that I cannot use docker DNS to find containers by their hostnames without utilizing a user-defined bridge network. I created one using the command:
docker network create --driver=overlay --subnet=172.22.0.0/16 --gateway=172.22.0.1 user_defined_overlay
and tried to deploy containers that use it. The compose file looks like:
version: "3.0"
services:
web1:
image: "test"
ports:
- "12023:22"
hostname: "mytest-web1"
networks:
- test
web2:
image: "test"
ports:
- "12024:22"
hostname: "mytest-web2"
networks:
- test
networks:
test:
external:
name: user_defined_overlay
My docker version is: Docker version 17.06.2-ce, build cec0b72.
I got the following error when I tried deploying the stack with a bridge network:
network "user_defined_bridge" is declared as external, but it is not in the right scope: "local" instead of "swarm"
I was able to create an overlay network and define it in the compose file; that worked fine, but it did not work for a bridge network.
Result of docker network ls:
NETWORK ID      NAME                   DRIVER    SCOPE
cd6c1e05fca1    bridge                 bridge    local
f0df22fb157a    docker_gwbridge        bridge    local
786416ba8d7f    host                   host      local
cuhjxyi98x15    ingress                overlay   swarm
531b858419ba    none                   null      local
15f7e38081eb    user_defined_overlay   overlay   swarm
UPDATE
I tried creating two containers running on two different swarm nodes (the first container runs on the manager while the second runs on a worker node), and I specified the user-defined overlay network as shown in the stack above. I tried pinging the mytest-web2 container from within mytest-web1 using its hostname, but I got unknown host mytest-web2.
As of 17.06, you can create node-local networks with a swarm scope. Do so with the --scope=swarm option, e.g.:
docker network create --scope=swarm --driver=bridge \
--subnet=172.22.0.0/16 --gateway=172.22.0.1 user_defined_bridge
Then you can use this network with services and stacks defined in swarm mode. For more details, you can see PR #32981.
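A stack file can then reference that network as external. A minimal sketch, assuming the network was created with the command above (note the network is still node-local, so this only helps containers scheduled on the same node):

version: "3.0"
services:
  web1:
    image: "test"
    networks:
      - userbridge
networks:
  userbridge:
    external:
      name: user_defined_bridge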
Edit: you appear to have significantly overcomplicated your problem. As long as everything is being done in a single compose file, there's no need to define the network as external.
There is a requirement to use an overlay network if you want to communicate container-to-container. DNS discovery is included on bridge and overlay networks, with the exception of the default "bridge" network that docker creates; with a compose file, you would never use that network without explicitly configuring it as an external network with that name.
So to get container-to-container networking to work, you can let docker-compose or docker stack deploy create the network for your project/stack automatically with:
version: "3.0"
services:
web1:
image: "test"
ports:
- "12023:22"
web2:
image: "test"
ports:
- "12024:22"
Note that I have also removed the "hostname" setting; it's not needed for DNS resolution. You can communicate directly with the service VIP using the name "web1" or "web2" from either of these containers.
With docker-compose, this will create a default bridge network; swarm mode will create an overlay network. These defaults are ideal for DNS discovery and container-to-container communication in each scenario.
The overlay network is the network to be used in swarm. Swarm is meant to manage containers on multiple hosts, and overlay networks are docker's multi-host networks: https://docs.docker.com/engine/userguide/networking/get-started-overlay/

How to get docker oracle container ip into the java application using docker-compose?

Below is the code I am using in docker-compose:
integration_test:
  image: service:1.0.0
  volumes:
    - .:/service
  links:
    - oracle_container
  # used volumes_from as a workaround to wait until the following container starts
  volumes_from:
    - oracle_container
  container_name: integration_test
  tty: true
  environment:
    USER: go
  command: ["mvn", "clean", "install", "-DskipTests"]
oracle_container:
  image: image_name:1.0.0
  container_name: oracle_container
  ports:
    - "49161:1521"
I want to make both containers talk: application --> oracle.
Both containers are running on the same machine, and I used the below JDBC string to connect to oracle from the application:
jdbc:oracle:thin:@localhost:49161/xe
But I am not able to connect to oracle, and it throws a SQLRecoverableException.
As per my understanding, this falls under Docker networking. I have used links to connect the two containers, but the issue is with the connection string, and more specifically the IP of the oracle container.
Can someone help with this issue?
You need to use
jdbc:oracle:thin:@oracle_container:1521/xe
In docker-compose, each container can reach the others by service name or container name. You should not use the host port, only the container port.
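As for the volumes_from startup-ordering workaround in the question, a healthcheck plus a depends_on condition is usually less fragile. A sketch in compose file v2.1 syntax (the probe assumes bash is available inside the Oracle image):

version: "2.1"
services:
  integration_test:
    image: service:1.0.0
    depends_on:
      oracle_container:
        condition: service_healthy
    command: ["mvn", "clean", "install", "-DskipTests"]
  oracle_container:
    image: image_name:1.0.0
    ports:
      - "49161:1521"
    healthcheck:
      # succeeds once the listener accepts TCP connections on 1521
      test: ["CMD", "bash", "-c", "</dev/tcp/localhost/1521"]
      interval: 10s
      retries: 10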

Cannot connect with Docker container from inside a swarm and the host machine

I'm composing two containers, one with the web services and one with the database.
Here's my compose file:
version: '3.3'
services:
  web:
    image: microsoft/aspnetcore:1.1
    container_name: web
    ports:
      - 5555:80
    links:
      - db
  db:
    image: microsoft/mssql-server-linux:rc1
    container_name: db
    ports:
      - 1533:1433
    environment:
      - "ACCEPT_EULA=Y"
      - "MSSQL_SA_PASSWORD=MyAdminPwd2017"
      - "MSSQL_PID=Developer"
So, from my asp.net core app running in the web service, I can access the database just by using db as the hostname. But db is not visible from the host (I have a default bridge network). I can access my database from the host if I inspect the running db container, find its IP address, and connect to <ip>,1533.
The thing is, the file storing the credentials to access the database is used both by the web container and the host machine. So I need a way to name db such that I can access it from both worlds (inside the swarm and outside, from my host machine).
Is there a way to achieve that? I tried defining a host network in my docker-compose file and having both services use that network, but I got a message saying only one host network can be defined.
EDIT: tried to improve the question title but I'm still not convinced, improvements are welcome
OK, so, to answer my own question: what I want is not possible, at least as of today.
Somebody wrote a nice article on why I can't achieve what I want. Having said that, it is doable with Windows containers, but that might be a temporary "limitation".
Link to the post: https://derickbailey.com/2016/08/29/so-youre-saying-docker-isnt-a-virtual-machine/
and link to the Docker Forum: https://forums.docker.com/t/access-dockerized-services-via-the-containers-ip-address/21151

How to join the default bridge network with docker-compose v2?

I tried to set up an nginx-proxy container to access my other containers via subdomains on port 80 instead of special ports. As you can guess, I could not get it to work.
I'm kind of new to docker itself and find it more comfortable to write docker-compose.yml files so I don't have to constantly write long docker run ... commands. I thought there was no difference in how the containers are started, whether with docker or docker-compose. However, one difference I noticed is that starting a container with docker does not create any new networks, while with docker-compose there will be an xxx_default network afterwards.
I read that containers on different networks cannot access each other, and that might be the reason why nginx-proxy is not forwarding the requests to the other containers. However, I was unable to find a way to configure my docker-compose.yml file to not create any new networks and instead join the default bridge network, like docker run does.
I tried the following, but it resulted in an error saying that I cannot join system networks like this:
networks:
  default:
    external:
      name: bridge
I also tried network_mode: bridge, but that didn't seem to make any difference.
How do I have to write the docker-compose.yml file to not create a new network, or is that not possible at all?
Bonus question: Are there any other differences between docker and docker-compose that I should know of?
Adding network_mode: bridge to each service in your docker-compose.yml will stop compose from creating a network.
If any service is not configured with bridge (or host) network mode, a network will still be created.
Tested and confirmed with:
version: "2.1"
services:
app:
image: ubuntu:latest
network_mode: bridge
