Docker - connect from HOST to container

I am using Docker for Windows with a custom bridge network:
"bridge":"none" (daemon.json)
docker network create --subnet 192.168.23.1/24 --gateway 192.168.23.1 --driver bridge my-network
... and a container running the Jenkins image.
When I configure the connection from Jenkins (container) to GitLab ("internet"), everything works fine. But when I create a webhook in GitLab, I have to enter the URL of Jenkins. I tried localhost and the IP obtained from the IPAddress property:
"Networks": {
"my-network": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"jenkins",
"dff5dcb7c95a"
],
"NetworkID": "xxx",
"EndpointID": "yyy",
"Gateway": "192.168.23.1",
"IPAddress": "192.168.23.2",
"IPPrefixLen": 24,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "zzz",
"DriverOpts": null
}
}
... but neither option worked.
Question: How do I determine the correct URL?
How do I connect from the host to my container? Is this the correct approach? What should I know to avoid similar problems in the future?
Thanks for your help :)

If you are running your GitLab instance also in a Docker container, you just need to add the GitLab container to the same Docker network.
If your GitLab instance really is on the internet, you cannot solve this with localhost or any local IP address. You need to:
find out your public IP address; maybe use DynDNS to get a fixed domain if you have a dynamic IP
open a port on your router and configure your firewall
open a port on your local Windows firewall
find out on which port Jenkins is waiting for the webhooks from GitLab
map this port to the Docker container by using
-p <host-port>:<container-port> (see the example below)
If you provide some more information about your network infrastructure, the answer could be clearer.
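For illustration, a minimal sketch of the last two steps, assuming Jenkins serves its web UI and webhook endpoints on port 8080 inside the container (the image tag here is an assumption; use the one you already run):
docker run -d --name jenkins --network my-network -p 8080:8080 jenkins/jenkins:lts
# Host port 8080 is now forwarded into the container, so the GitLab webhook URL
# would be http://<your-public-IP-or-DynDNS-name>:8080/<jenkins-webhook-path>
On Docker for Windows the container-internal IP (192.168.23.2) is only reachable from other containers on my-network, not from the host or the internet, which is why it did not work as a webhook target.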

Related

How to connect to HDFS deployed in Docker?

I deployed Hadoop and Spark from this project: https://github.com/Marcel-Jan/docker-hadoop-spark
I've completed the "Quick Start Spark (PySpark)" instructions in the readme and got a dataframe from HDFS.
Now I need to do the same thing through Airflow. I successfully connect to the Spark master:
from pyspark.sql import SparkSession

spark = (SparkSession
         .builder
         # .master('local')
         .master("spark://127.0.0.1:7077")
         .appName("Test")
         .getOrCreate())
But when I tried to reach the Hadoop cluster, I got this error:
Call From SPB-379/127.0.1.1 to namenode:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
I understand that Airflow doesn't resolve the address "namenode", so I ran
docker inspect <hadoop container id> and got:
"Networks": {
"docker-hadoop-spark_default": {
"IPAMConfig": null,
"Links": null,
"Aliases": [
"namenode",
"namenode",
"8a06dd5cca57"
],
"NetworkID": "f57164de9f26ef3a1a33c4ee46b24903c0824009ecfeda06f7f45ba9206f6a0a",
"EndpointID": "6bf662634c9ef925dc435d8a28364b045ef8b572e22f5c5316257ada2a52cc0d",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.9",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:13:00:09",
"DriverOpts": null
}
But this IP address also gave an error. I am not confident that this approach is correct and would appreciate any feedback :)
For a better understanding, the original post includes screenshots of the working case, the failing case, and the network config of the docker-compose setup.
Error when allocating a worker to Airflow:
22/12/26 19:54:02 ERROR TransportRequestHandler: Error while invoking RpcHandler#receive() for one-way message.
java.io.InvalidClassException: org.apache.spark.deploy.DeployMessages$ExecutorUpdated; local class incompatible: stream classdesc serialVersionUID = 1654279024112373855, local class serialVersionUID = -1971851081955655249
The error says namenode:9000 could not be resolved from your host. Regarding Airflow, you need to ensure it is also running as a container, not on your host; then the Spark code will execute within the Docker network as well.
I have a Docker example here that works fine from a Spark notebook.
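A minimal sketch of that idea, assuming your Airflow container is named airflow (an assumed name) and the compose project created the docker-hadoop-spark_default network shown in the inspect output above:
docker network connect docker-hadoop-spark_default airflow
# Docker's embedded DNS should now resolve the service name from inside the container:
docker exec airflow python -c "import socket; print(socket.gethostbyname('namenode'))"
With the container on the same network, the HDFS and Spark URLs can use the compose service names directly (e.g. hdfs://namenode:9000 and the Spark master's service name instead of 127.0.0.1).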

Docker jenkins access port

I am running a Jenkins image with Docker Compose (docker-compose up). My config files:
Dockerfile:
FROM java:openjdk-8-jre
EXPOSE 50000
docker-compose.yml
version: '3'
services:
  jenkins:
    image: jenkins:2.60.3-alpine
    ports:
      - 50000:50000
My container is running; docker ps result:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7dbe18cdbb53 jenkins:2.60.3-alpine "/bin/tini -- /usr/l…" 22 minutes ago Up 22 minutes 8080/tcp, 0.0.0.0:50000->50000/tcp docker_jenkins_1
I inspected the container with docker inspect 7dbe18cdbb53
Result:
        ],
        "NetworkID": "e3a5461960939397615620f051696f8b78fde9352d0c8b42b4ed679a1e847b9b",
        "EndpointID": "999b5d3525b2fe823c5ed0033bb27e85b3ca26356b4bd9f1525de005739fecde",
        "Gateway": "172.18.0.1",
        "IPAddress": "172.18.0.2",
        "IPPrefixLen": 16,
        "IPv6Gateway": "",
        "GlobalIPv6Address": "",
        "GlobalIPv6PrefixLen": 0,
        "MacAddress": "02:42:ac:12:00:02",
        "DriverOpts": null
When I try to access it from a browser at 172.18.0.2:50000, it doesn't work.
The Jenkins image that you have is configured to listen for HTTP traffic on port 8080.
Have a look at this: https://github.com/jenkinsci/docker/blob/c2d6f2122fa03c437e139a317b7fe5b9547fe49e/Dockerfile
This is the part where the HTTP port's default value is defined:
ARG http_port=8080
and this is where it is exposed:
# for main web interface:
EXPOSE ${http_port}
Similarly, you can find that the (slave) agent port is 50000.
In your docker ps output you can see that the Jenkins container is listening on 8080, but this port is not published on the host.
So basically you are trying to connect via HTTP to the agent port, which is why it is not working as expected.
Change the docker-compose file to also publish port 8080 and use that instead, as in the sketch below.
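For example, the ports: section of the compose file above would gain an extra 8080 mapping; shown here as the equivalent docker run command, a sketch using the image from the question:
docker run -d -p 8080:8080 -p 50000:50000 jenkins:2.60.3-alpine
# Jenkins web UI: http://localhost:8080, agent port: 50000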

Docker Toolbox for Windows, Container is not accessible on the host

I am new to Docker and working on my Windows machine. I installed Docker Toolbox and ran a container, see below:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
cea8e6cf92b5 seqvence/static-site "/bin/sh -c 'cd /usr…" 21 minutes ago Up 21 minutes 0.0.0.0:32769->80/tcp, 0.0.0.0:32768->443/tcp competent_goodall
Now, this is a Linux container running in an Oracle VM on my Windows machine. After this, I expect to open http://172.17.0.2:32769 on my Windows machine and get a web page served by Nginx.
Here is the container inspect:
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "81d64a885b80a000f3b91e9959acf125b170b7acb11a918bf77bf7fa3fea3ae1",
"EndpointID": "6cf13c7007539f0b31c6d8da52844477f13e1debd84a8f3e2ec63ee140e90014",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2",
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02",
"DriverOpts": null
}
I am not sure if any more details would be needed to understand the problem, so please feel free to let me know.
I think you should try either of the following:
http://localhost:32769
http://(ipv4 of your windows machine):32769
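A quick way to check both, assuming curl is available on your Windows machine (ipconfig shows your machine's IPv4 address; the port is the published one from docker ps above):
ipconfig
curl http://localhost:32769
curl http://<your-ipv4-address>:32769
With Docker Toolbox the container actually runs inside a VirtualBox VM, so if neither URL responds, the VM's address reported by docker-machine ip is another candidate to try with the same port.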

cannot reach web app from docker host (not docker-machine)

I have a simple web app container running on docker engine for mac (v.1.12.5) using the following:
docker run --rm -p 80:8089 test-app
I've checked my container's IP under Networks > bridge from the following:
docker inspect $(docker ps -l --format "{{.ID}}")
"Networks": {
"bridge": {
"IPAMConfig": null,
"Links": null,
"Aliases": null,
"NetworkID": "f53f1b93aa0f2fda186498d30e7f6e5b97ba952d1b6fe442663ac6025fd74ce3",
"EndpointID": "178937cf211c2360d9f9c594891985637d1d82a334a40b1b46d3acb2ea8aaf20",
"Gateway": "172.17.0.1",
"IPAddress": "172.17.0.2", // <- use this?
"IPPrefixLen": 16,
"IPv6Gateway": "",
"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"MacAddress": "02:42:ac:11:00:02"
}
}
As far as I understand, I am running my web app container on Docker Engine directly on my laptop (not via docker-machine). At this point, I'm not so much concerned with making it work as with understanding why it doesn't.
My container has an assigned IP of 172.17.0.2, which I've pasted above, and I've mapped my web app container (with an EXPOSE 80) to port 8089 via the docker run -p flag.
I'm under the impression that I should be able to reach my web app at http://172.17.0.2:8089, but I just get no response. Why?
If the process in the container listens on port 80, your -p flag should be the other way round: -p 8089:80. The service can then be reached at localhost:8089.
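A minimal sketch of the corrected command (image name taken from the question):
docker run --rm -p 8089:80 test-app
# -p <host-port>:<container-port>: host port 8089 now forwards to container port 80
curl http://localhost:8089
Note that on Docker for Mac the container IP (172.17.0.2) is not reachable from the macOS host at all, which is why the direct-IP attempt got no response.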

Binding a port to a host interface using the REST API

The documentation for the commandline interface says the following:
To bind a port of the container to a specific interface of the host
system, use the -p parameter of the docker run command:
General syntax
docker run -p [([<host_interface>:[host_port]])|(<host_port>):]<container_port>[/udp] <image>
When no host interface is provided, the port is bound to all available interfaces of the host machine (aka INADDR_ANY, or 0.0.0.0). When no host port is provided, one is dynamically allocated. The possible combinations of options for TCP port are the following:
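For illustration, the documented forms correspond to commands like these (the image name is just a placeholder; 5432 is the port used later in this question):
docker run -p 127.0.0.1:5432:5432 postgres   # bind container port 5432 to one host interface
docker run -p 5432:5432 postgres             # bind to all host interfaces (0.0.0.0)
docker run -p 5432 postgres                  # container port 5432, host port chosen dynamically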
So I was wondering how I do the same but with the REST API?
With POST /container/create I tried:
"PortSpecs": ["5432:5432"] this seems to expose the port but not bind it to the host interface.
"PortSpecs": ["5432"] gives me the same result as the previous one.
"PortSpecs": ["0.0.0.0:5432:5432"] this returns the error Invalid hostPort: 0.0.0.0 which makes sense.
When I do sudo docker ps the container shows 5432/tcp which should be 0.0.0.0:5432/tcp.
Inspecting the container gives me the following:
"NetworkSettings": {
"IPAddress": "172.17.0.25",
"IPPrefixLen": 16,
"Gateway": "172.17.42.1",
"Bridge": "docker0",
"PortMapping": null,
"Ports": {
"5432/tcp": null
}
}
Full inspect can be found here.
This is an undocumented feature. I found my answer on the mailing list:
When creating the container you have to set ExposedPorts:
"ExposedPorts": { "22/tcp": {} }
When starting your container you need to set PortBindings:
"PortBindings": { "22/tcp": [{ "HostPort": "11022" }] }
There already is an issue on github about this.
Starting containers with PortBindings in the HostConfig was deprecated in v1.10 and removed in v1.12.
Both these configuration parameters should now be included when creating the container.
POST /containers/create
{
    "Image": image_id,
    "ExposedPorts": {
        "22/tcp": {}
    },
    "HostConfig": {
        "PortBindings": { "22/tcp": [{ "HostPort": "" }] }
    }
}
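A sketch of the same request sent with curl over the local Docker socket (assuming curl ≥ 7.40 and the default socket path; the image name and host port are illustrative):
curl --unix-socket /var/run/docker.sock \
  -H "Content-Type: application/json" \
  -d '{"Image": "alpine", "ExposedPorts": {"22/tcp": {}}, "HostConfig": {"PortBindings": {"22/tcp": [{"HostPort": "11022"}]}}}' \
  http://localhost/containers/create
The response contains the new container's Id, which you then pass to POST /containers/<id>/start.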
I know this question has already been answered; I am using the above solution, and here is how I did it in Java using the Docker Java Client v3.2.5:
PortBinding portBinding = PortBinding.parse(hostPort + ":" + containerPort);
HostConfig hostConfig = HostConfig.newHostConfig()
        .withPortBindings(portBinding);
CreateContainerResponse container = dockerClient.createContainerCmd(imageName)
        .withHostConfig(hostConfig)
        .withExposedPorts(ExposedPort.parse(containerPort + "/tcp"))
        .exec();
