Docker Networking Stack

I am trying to make my Docker container's network stack the same as my machine's in my docker-compose.yml. I saw in the Docker docs that you can use "hostnet" to use your own network stack. I am using this, but I keep getting an error saying...
services:
  xxxxx:
    xxxxx:
    networks:
      hostnet: {}

networks:
  hostnet:
    external: true
    name: host

networks.hostnet value Additional properties are not allowed ('name' was unexpected)
What is wrong, and is there also a way to configure a docker compose file so that my container will have the same Network ID?

This can be achieved using:
network_mode: "host"
in the definition for your service. See the Compose file documentation for more details: https://docs.docker.com/compose/compose-file/#network_mode
Note that I personally consider this the option of last resort. If you can achieve your goals by publishing a single port instead of disabling the network namespacing features of docker, your solution will be more secure and portable.
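As a minimal sketch, applied to a compose file like the one in the question (service name kept as a placeholder), the change looks like this:

```yaml
services:
  xxxxx:
    network_mode: "host"
    # no ports: or networks: keys needed; the container
    # shares the host's network stack directly
```

With host networking there is no isolated namespace, so any ports: mappings for that service are ignored.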

Related

Using two host IP's with docker-compose

I've googled this to death.
I want one container bound to one host IP/port, the other container on a different one, for various reasons.
I have (snipped out some info)
container1:
  network_mode: host
  ports:
    - 192.168.1.224:80:80
container2:
  network_mode: host
  ports:
    - 192.168.1.225:80:80
Docker actually starts up, but when I visit each IP in the browser, both URLs return container1's content.
Has anyone done this? All I can find online is mostly related to docker and not docker-compose (starting docker with some arguments), or people arguing it should be done another way.
Delete the network_mode: host setting: it's getting in your way here.
Specifying network_mode: host bypasses all of Docker's normal networking setup. The ports: setting has no effect. Each process here sees both of your host interfaces, and presumably tries to bind to both of them. If you use the default network_mode: bridge, each container gets an isolated network stack, and you can use ports: as you've done to selectively expose containers to specific interfaces.
network_mode: host is really only appropriate in a few specific cases: if your server process listens on thousands of ports, if its port is unpredictable, or if you genuinely need to inspect the host's network setup but can't run your process directly on the host.
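A sketch of the corrected file, keeping the original IPs and ports from the question (service names and images are assumptions):

```yaml
services:
  container1:
    image: nginx  # placeholder image
    # default bridge networking: each container gets its own
    # isolated network stack, so per-interface bindings work
    ports:
      - "192.168.1.224:80:80"
  container2:
    image: nginx  # placeholder image
    ports:
      - "192.168.1.225:80:80"
```

With no network_mode: set, each published port binds only to the interface named on its left-hand side.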

Couldn't connect containers using docker-compose.yaml file

I created two Dockerfiles to run the frontend and backend of a web application. When I run the docker-compose.yaml file, the web application's front-end opens in the web browser, but I cannot log in to the system. I think there is a problem with connecting those containers. Following is my docker-compose.yaml file. What can I do to resolve this problem?
version: '2'
services:
  gulp:
    build: './ui'
    ports:
      - "3000:4000"
  python:
    build: '.'
    ports:
      - "5000:5000"
You need to use links to enable communication between containers, and you should use their DNS network alias, like http://python:5000.
Containers within a docker-compose file are part of one network by default, and one container can access another using its hostname.
The hostname can be defined in the docker-compose file using hostname:. If hostname is not defined, the service name is used as the hostname.
Internally, Docker containers can talk to each other by referring to each other by hostname. In your case, gulp can access python at http://python:5000, and that would be possible even if you did not declare ports:. This all works because it is internal to the Docker network.
From outside, if you want to connect to any of the services, then you can define ports, as you did and then access those services at the defined port number.
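As a concrete sketch, the front-end can be pointed at the backend by its service name through an environment variable (API_URL here is a hypothetical variable name that the UI code would have to read):

```yaml
version: '2'
services:
  gulp:
    build: './ui'
    ports:
      - "3000:4000"
    environment:
      # inside the compose network, the backend is reachable
      # by its service name, regardless of published ports
      - API_URL=http://python:5000
  python:
    build: '.'
    ports:
      - "5000:5000"
```

From the browser on the host, the front-end is still reached at localhost:3000 and the backend at localhost:5000, as published.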

Cannot connect with Docker container from inside a swarm and the host machine

I'm composing two containers, one with the web services and one with the database.
Here's my compose file:
version: '3.3'
services:
  web:
    image: microsoft/aspnetcore:1.1
    container_name: web
    ports:
      - 5555:80
    links:
      - db
  db:
    image: microsoft/mssql-server-linux:rc1
    container_name: db
    ports:
      - 1533:1433
    environment:
      - "ACCEPT_EULA=Y"
      - "MSSQL_SA_PASSWORD=MyAdminPwd2017"
      - "MSSQL_PID=Developer"
So, from my ASP.NET Core app running in the web service I can access the database just using db as the hostname. But db is not visible from the host (I have a default bridge network). I can access my database from the host if I inspect the running db container, find its IP address, and then connect to <ip>,1533.
The thing is, the file storing the credentials to access the database is used both by the web container and the host machine. So I need a way to name db so I can access it from both worlds (inside the swarm and outside, from my host machine).
Is there a way to achieve that? I tried defining a host network in my docker-compose file and having both services use that network, but I got a message saying only one host network can be defined.
EDIT: tried to improve the question title but I'm still not convinced, improvements are welcome
Ok, so, to answer my own question: what I want is not possible, at least as of today.
Somebody wrote a nice article on why I can't achieve what I want. Having said that, it is doable in Windows Containers, but it might be a temporary "limitation".
Link to the post: https://derickbailey.com/2016/08/29/so-youre-saying-docker-isnt-a-virtual-machine/
and link to the Docker Forum: https://forums.docker.com/t/access-dockerized-services-via-the-containers-ip-address/21151

How to join the default bridge network with docker-compose v2?

I tried to setup an nginx-proxy container to access my other containers via subdomains on port 80 instead of special ports. As you can guess, I could not get it to work.
I'm kind of new to docker itself and found that it's more comfortable for me to write docker-compose.yml files so I don't have to constantly write long docker run ... commands. I thought there's no difference in how you start the containers, either with docker or docker-compose. However, one difference I noticed is that starting the container with docker does not create any new networks, but with docker-compose there will be a xxx_default network afterwards.
I read that containers on different networks cannot access each other and maybe that might be the reason why the nginx-proxy is not forwarding the requests to the other containers. However, I was unable to find a way to configure my docker-compose.yml file to not create any new networks, but instead join the default bridge network like docker run does.
I tried the following, but it resulted in an error saying that I cannot join system networks like this:
networks:
  default:
    external:
      name: bridge
I also tried network_mode: bridge, but that didn't seem to make any difference.
How do I have to write the docker-compose.yml file to not create a new network, or is that not possible at all?
Bonus question: Are there any other differences between docker and docker-compose that I should know of?
Adding network_mode: bridge to each service in your docker-compose.yml will stop compose from creating a network.
If any service is not configured with bridge (or host) mode, a network will still be created for it.
Tested and confirmed with:
version: "2.1"
services:
  app:
    image: ubuntu:latest
    network_mode: bridge
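To illustrate the caveat above, a sketch with two services (names and images are placeholders): both must set network_mode: bridge, otherwise compose still creates a project network for the one that doesn't:

```yaml
version: "2.1"
services:
  app:
    image: ubuntu:latest
    network_mode: bridge
  db:
    image: ubuntu:latest   # placeholder image
    network_mode: bridge   # omit this line and compose creates a default network
```

Note that on the default bridge network, containers can only reach each other by IP address, not by name; automatic DNS resolution is a feature of user-defined networks.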

Link Running External Docker to docker-compose services

I assume that there is a way to link via one or a combination of the following: links, external_links and networking.
Any ideas? I have come up empty handed so far.
Here is an example snippet of a docker-compose file which is started from within a separate Ubuntu container:
version: '2'
services:
  web:
    build: .
    depends_on:
      - redis
  redis:
    image: redis
I want to be able to connect to the redis port from the Docker that launched the docker-compose.
I do not want to bind the ports on the host as it means I won't be able to start multiple docker-compose from the same model.
-- context --
I am attempting to run a docker-compose setup from within a Jenkins Maven build container so that I can run tests, but I cannot for the life of me get the original container to access the exposed ports of the docker-compose services.
Reference the machines by hostname; compose v2 automatically connects the services on a private network by default. You'll be able to ping "web" and "redis" from within each container. If you want to access the machines from your host, include a ports: definition for each service in your yml.
The v1 links were removed from the v2 compose syntax since they are now implicit. From the Docker Compose file documentation:

"links with environment variables: As documented in the environment variables reference, environment variables created by links have been deprecated for some time. In the new Docker network system, they have been removed. You should either connect directly to the appropriate hostname or set the relevant environment variable yourself..."
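One way to sketch the cross-container case from the question (the network name ci is an assumption): declare an external network in the compose file, create it beforehand with docker network create, and attach the separately started container to it with docker network connect.

```yaml
version: '2'
services:
  web:
    build: .
    depends_on:
      - redis
    networks:
      - ci
  redis:
    image: redis
    networks:
      - ci

networks:
  ci:
    external: true  # created beforehand with: docker network create ci
```

The Jenkins build container then joins the same network via docker network connect ci <container-name>, after which it can reach the service at redis:6379 without publishing any host ports, so multiple compose projects can run side by side.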
