How can I connect UaExpert with a docker container?

I have successfully started an OPC UA server in a docker container. Now I want to connect to this container via UaExpert.
But this is not working:
Message: Could not connect to server: BadTimeout
I created my own subnet (pa1net) and want to use the IP 192.123.0.32.
My command: docker run --net pa1net --ip 192.123.0.32 -it pa1
Output:
**********************************************
Server opened endpoints for following URLs:
opc.tcp://2e54ds688fd4:48010
**********************************************
But this URL opc.tcp://2e5... contains the docker container ID (2e54ds688fd4), and that can't be right, can it?
When I ping the IP 192.123.0.32 I get packets back.
I do not know which URL / IP is the correct one to connect to the container...
Or is my command for connecting to the container wrong?

This container is located behind NAT. You need to publish the port, be on the same docker network, or correct your route.
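For example, a minimal sketch of publishing the port to the host (48010 is taken from your server output; adjust if your server listens elsewhere):

# publish container port 48010 on the host, so UaExpert can connect
# to opc.tcp://<docker-host-ip>:48010
docker run --net pa1net --ip 192.123.0.32 -p 48010:48010 -it pa1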

Related

Can I open a port on the docker container in “overlay” network to communicate with the server on the local host?

I have a container running with --network=my-overlay-network so that I can prevent API calls from within the container to services on the internet. However, I do need to make API calls from within the container to the localhost.
I used -p dockerport:localhostport in the docker run command to publish/map the port of the container to the localhost. However, it always shows "Connection refused".
I also tried adding --add-host host.docker.internal:$(ip addr show docker0 | grep -Po 'inet \K[\d.]+') to docker run. I still cannot connect to the server on that port; I get "Couldn't connect to server" for host.docker.internal:port.
Can I open a port when the container is under the overlay network?
It sounds like you got the ports backwards: instead of -p dockerport:localhostport it should be -p localhostport:containerport, where localhostport is the port you want to open on the local host, and containerport is the port the container exposes in its Dockerfile.
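As a hedged sketch of the corrected ordering (the image name and port numbers here are placeholders, not taken from your setup):

# -p <localhostport>:<containerport> — host port first, container port second
docker run --network=my-overlay-network -p 8080:80 my-image
# then, from the host:
curl localhost:8080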

Stuck exposing a port in Docker

I'm stuck on port mapping in Docker.
I want to map port 8090 on the outside of a container to port 80 on the inside of the container.
Here is the container running:
ea41c430105d tag-xx "/usr/local/openrest…" 4 minutes ago Up 4 minutes 8090/tcp, 0.0.0.0:8090->80/tcp web
Notice that it says that port 8090 is mapped to port 80.
Now inside another container I do
curl web
I get a 401 response. Which means that the container responds. So far so good.
But when I do curl web:8090 I get:
curl: (7) Failed to connect to web port 8090: Connection refused
Why is port mapping not working for me?
Thanks
P.S. I know that it is specifically my container that responds to curl web with a 401 because when I stop it (docker stop web) and do curl web again, I get could not resolve host: web.
You cannot connect to a published port from inside another container, because published ports are only available on the host. In your case:
From the host:
curl localhost:8090 will connect to your container
curl localhost:80 won't connect to your container because port 80 isn't published
From another container in the same network:
curl web will work
curl web:8090 won't work because the only port exposed and listening for the web service is port 80.
Docker containers, unless specified otherwise, connect to the default bridge network. The default bridge network does not support automatic DNS resolution between containers, and it looks like you are most likely on it. However, on the default bridge network you can connect using the container's IP address, which you can find with the following command:
docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' <container name>
So curl <IP address of web container>:80 should work (the service listens on port 80 inside the container; 8090 is only the host-published port).
It is always better to create a user-defined bridge network and attach the containers to it. On a user-defined bridge network, the connected containers have their ports exposed to each other but not to the outside world. A user-defined bridge network also supports automatic DNS resolution, so you can refer to a container by name instead of IP address. You can try the following commands to create a user-defined bridge network and attach your containers to it:
docker network create --driver bridge my-net
docker network connect my-net web
docker network connect my-net <other container name>
Now, from the other container you should be able to run curl on the 'web' container.
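For example, a quick check (assuming curl is available in the other container):

# DNS resolution by container name works on the user-defined network
docker exec -it <other container name> curl web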
You can create a network to connect the containers.
Or you can use --link (note that --link is a legacy feature):
docker run --name container1 -p 80:???? -d image (expose on port 80)
docker run --name container2 --link container1:lcontainer1 image2
and inside container2 you can use:
curl lcontainer1
Hope it helps

Map container to hostname other than localhost in Docker for Mac

I am creating an Nginx container that I would like to access locally at http://api. Using Docker Machine, I assumed I could run docker-machine create default and docker-machine ip default to get the IP, then edit my hosts file to something like this:
# docker-machine ip default --> 192.168.99.100
192.168.99.100 api
and requests to api would map to the Docker Machine IP and serve my content.
Two things are confusing me:
I launch Docker through the Mac app and can create Nginx containers and access content at http://localhost. However, running docker-machine ls returns no machines. This is confusing because I thought Docker had to run in a VM.
Starting from scratch and starting Docker Machine, then spinning up containers, seems to have no effect. In other words, I can still access content at http://localhost but not at http://api.
Instead of accessing my container at http://localhost I want to access it at http://api. How do I do this?
I'm using Docker for Mac 17.12 and Docker Machine 0.14.
Based on this part of your question:
Instead of accessing my container at http://localhost I want to access
it at http://api. How do I do this?
Your docker run command:
docker run -it --rm --name test --add-host api:192.168.43.8 -p 80:80 apachehttpd
1st thing: the --add-host flag adds an entry to /etc/hosts inside your container, so http://api will also resolve inside the container (for example, if you ping it from within the container).
(Screenshot: ping responding inside the container.)
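You can verify this from the host, as a sketch (the container name test is from the command above; the ping assumes the image ships it):

# the --add-host entry should appear in the container's hosts file
docker exec test cat /etc/hosts
docker exec test ping -c 1 api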
2nd thing: edit your host's /etc/hosts file and add
192.168.43.8 api
(replace 192.168.43.8 with your own IP).
(Screenshot: http://api loading in the browser.)
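As a quick check from the host once the entry is in place:

# the name should now resolve to your IP and the web server should answer
ping -c 1 api
curl http://api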

Docker: Refer to registry by IP address

I have a Docker image I want to push to my registry (hosted on localhost). I do:
docker push localhost:5000/my_image
and works properly. However, if I tag the image and push it by:
docker push 172.20.20.20:5000/my_image
I get an error.
The push refers to a repository [172.20.20.20:5000/my_tomcat] (len: 1)
unable to ping registry endpoint https://172.20.20.20:5000/v0/
v2 ping attempt failed with error:
Get https://172.20.20.20:5000/v2/: Gateway Time-out
Can't I refer to the registry by IP? If so, how can I push an image from another host that is not localhost?
EDIT
I'm running the registry this way:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
As mentioned in "IPs for all the Things" (by Jess Frazelle), you should be able, with docker 1.10, to run your registry with a fixed IP address.
It uses the --net and --ip options of docker run.
# create a new bridge network with your subnet and gateway for your ip block
$ docker network create --subnet 203.0.113.0/24 --gateway 203.0.113.254 iptastic
# run a nginx container with a specific ip in that block
$ docker run --rm -it --net iptastic --ip 203.0.113.2 nginx
# curl the ip from any other place (assuming this is a public ip block duh)
$ curl 203.0.113.2
You can adapt this example to your registry docker run parameters.
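For instance, a sketch adapted to the IP from your question (the network name regnet and the subnet are assumptions; this only works if 172.20.20.20 falls inside a subnet you can dedicate to containers):

$ docker network create --subnet 172.20.20.0/24 --gateway 172.20.20.254 regnet
$ docker run -d --net regnet --ip 172.20.20.20 --restart=always --name registry registry:2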
First of all, please check whether you are able to connect to the registry on port 5000. On Linux/Windows you can do this using telnet. Below is the command:
$ telnet 172.20.20.20 5000
If the connectivity check fails, please check your firewall settings.
I am not sure whether you are running your registry with login functionality, but from the question I assume you are not. In this case, please add your registry as an insecure registry to the docker daemon from which you are trying to access it. The process is described here - https://docs.docker.com/registry/insecure/
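For reference, the linked page boils down to adding an entry like this to /etc/docker/daemon.json on the client host and then restarting the docker daemon:

{
  "insecure-registries": ["172.20.20.20:5000"]
}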
Please let me know if you are successful.

How to get the mapped port on host from a docker container?

I want to run a task in some docker containers on different hosts, and I have written a manager app to manage the containers (start task, stop task, get status, etc.). Once a container is started, it sends an HTTP request to the manager with its address and port, so the manager knows how to manage the container.
Since there may be more than one container running on the same host, they would be mapped to different ports. To register a container with my manager, I have to know which port each container is mapped to.
How can I get the mapped port inside a docker container?
There's a solution here: How do I know mapped port of host from docker container?. But it's not applicable if I run the container with -P. Since that question was asked more than a year ago, I'm wondering whether a new feature has been added to docker to solve this problem.
You can also use docker port container_id
The docs:
https://docs.docker.com/engine/reference/commandline/port/
Examples from the docs:
$ docker port test
7890/tcp -> 0.0.0.0:4321
9876/tcp -> 0.0.0.0:1234
$ docker port test 7890/tcp
0.0.0.0:4321
$ docker port test 7890/udp
2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
$ docker port test 7890
0.0.0.0:4321
I share /var/run/docker.sock with the container and query the container's own info:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine:latest sh
In the container shell:
env    # HOSTNAME holds the container's short ID
curl --unix-socket /var/run/docker.sock http://localhost/containers/3c6b9e44a622/json
where 3c6b9e44a622 is your HOSTNAME. (Note: the alpine image does not ship curl; install it first with apk add --no-cache curl.)
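From there, a hedged sketch for extracting just the port mappings (assuming you also install jq: apk add --no-cache curl jq):

# $HOSTNAME defaults to the container's short ID
curl -s --unix-socket /var/run/docker.sock \
  "http://localhost/containers/$HOSTNAME/json" | jq '.NetworkSettings.Ports'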
Once a container is started, it will send an http request to the manager with its address and port
This isn't going to work. From inside a container, you cannot figure out which docker host port a container port is mapped to.
What I can think of that would work and be closest to what you describe is making the container open a websocket connection to the manager. Such a connection allows two-way communication between your manager and the container while still being over HTTP.
What you are trying to achieve is called service discovery. There are already tools for service discovery that work with Docker; you should pick one of them instead of trying to build your own.
See for instance:
etcd
consul
zookeeper
If you really want to implement your own service discovery system, one way to go is to have your manager use the docker events command (or one of the docker client libraries). This would enable your manager to get notified of container creations/deletions with nothing to do on the container side.
Then query the docker host to figure out the ports that are mapped to your containers with docker port.
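A minimal sketch of that loop (the event filter and format strings are standard docker events options; the loop itself is illustrative):

# watch for container starts and print each new container's port mappings
docker events --filter event=start --format '{{.ID}}' | while read id; do
  docker port "$id"
done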
