I'm trying to connect to a Manager with swarm version 1.12.1 from the docker client:
$ docker -H tcp://MY_MANAGER_1_IP:2377 info
I got the following error message:
Are you trying to connect to a TLS-enabled daemon without TLS?
Does anyone have an idea? Thanks in advance.
The integrated docker swarm in 1.12 is managed via the docker host, not via the swarm port as you would have done before in the standalone swarm product (which you can still install in a 1.12 environment if you wish). Connect to the docker host as you always have, and manage it via docker swarm, docker service, and docker node commands.
The port you open for the integrated swarm isn't for the Docker API; it's for traffic between swarm managers and workers. To see information about the swarm, docker info on the swarm manager includes some details, and docker node gives the status of managers and workers. Note that this also means you cannot submit jobs to the integrated swarm with a docker -H ... run ... command; you must use the new docker service commands to manage containers in the new swarm.
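For example, a minimal sketch of that workflow, run on the manager host (the service name web and the nginx image are just illustrative):

```shell
# All of this runs against the local Docker host, not port 2377.
docker swarm init                                    # make this node a manager
docker node ls                                       # status of managers and workers
docker service create --name web --replicas 3 nginx  # instead of docker -H ... run
docker service ls                                    # list services in the swarm
docker info                                          # includes a Swarm: section
```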
For remote access to any docker host, which would let you run API commands from another machine, see the docs on securing the Docker API which is a procedure to enable TLS and setup the daemon to listen for external traffic instead of using the docker.sock socket.
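A hedged sketch of what that setup ends up looking like (the certificate paths are placeholders; generating the CA and certificates is covered in the linked docs):

```shell
# On the Docker host: listen on TLS-protected TCP (2376 by convention)
# in addition to the local socket.
dockerd --tlsverify \
    --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem \
    -H unix:///var/run/docker.sock -H tcp://0.0.0.0:2376

# From the remote client:
docker --tlsverify \
    --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
    -H tcp://MY_MANAGER_1_IP:2376 info
```

Note that 2376 (the conventional TLS API port) is distinct from 2377, which the integrated swarm uses for manager/worker traffic.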
Related
I am trying to find a list of all running Docker Swarm Mode services that have a specific tag - but from inside a container.
I know about the hack to mount docker.sock into the container, but I'm looking for a more elegant/secure way to retrieve this information.
Essentially, I want my app to "auto-discover" other available services. Since the docker swarm manager node already has this information, this would eliminate the need for a dedicated service registry like Consul.
You can query the Docker REST API from within the container.
For example, on macOS, run this on the host to list Docker images:
curl --unix-socket /var/run/docker.sock http://localhost/v1.40/images/json
To run the same inside the container, first install socat on the host.
Then establish a relay between the host's unix socket /var/run/docker.sock and the host's port 2375 using socat:
socat TCP-LISTEN:2375,reuseaddr,fork UNIX-CONNECT:/var/run/docker.sock
Then query the host's port 2375 from within a container:
curl http://host.docker.internal:2375/v1.40/images/json
You should see the same result.
Notes:
I don't have a Docker swarm initialized, so the examples list Docker images. Refer to the Docker docs for the service-listing API.
You can find the API version in the output of docker version
Refer to What is the Linux equivalent of "host.docker.internal" if you don't use macOS. Recent Docker versions on Linux should support host.docker.internal.
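Putting this together for the original question (finding services by tag): a sketch assuming the swarm is initialized and the socat relay above is running; the label com.example.tag is a made-up example. The filters parameter of the /services endpoint is URL-encoded JSON:

```shell
# Build the URL-encoded filters parameter for GET /services
FILTERS='{"label":["com.example.tag=mytag"]}'
ENCODED=$(python3 -c 'import sys, urllib.parse; print(urllib.parse.quote(sys.argv[1]))' "$FILTERS")
echo "/v1.40/services?filters=$ENCODED"

# From inside the container, through the relay (needs a running swarm):
# curl "http://host.docker.internal:2375/v1.40/services?filters=$ENCODED"
```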
This link says:
In Docker 17.06 and higher, you can also use a host network for a swarm service, by passing --network host to the docker container create command.
But I'm using docker version 17.03 which I cannot upgrade. Is there a workaround?
I want the containers created using docker stack to have access to the host's network.
Unable to log in with the iSCSI initiator in a Docker container running inside a Kubernetes cluster
I have installed open-iscsi package in a docker ubuntu container with privileged mode inside a kubeminion. The iscsi target is running and the iscsi initiator discovery returns the correct initiator name iqn. When I try to login, I get this:
ERROR:
iscsiadm: got read error (0/111), daemon died?
iscsiadm: Could not login to [iface: default, target: iqn.2016-09.com.abcdefg.xyza:name, portal: 10.102.83.21,3260].
iscsiadm: initiator reported error (18 - could not communicate to iscsid)
iscsiadm: Could not log into all portals
I tried service iscsid restart and debugging with the iscsid -d 8 -f command, but the login is still not successful.
Adding the --net=host and --privileged flags to docker run within the cluster makes both iSCSI discovery and login succeed. iSCSI expects the host's networking services and privileged access. The command should be:
docker run -it --privileged --net=host name:tag
With the network set to host a container will share the host’s network stack and all interfaces from the host will be available to the container. The container’s hostname will match the hostname on the host system.
For more details, refer to the documentation:
https://docs.docker.com/engine/reference/run/#network-settings
Note: the --net flag works on both older and newer versions of Docker; --network works only on newer versions.
For Docker Swarm, the Swarm manager runs on master node while swarm agent runs on slave node. I’m interested in the steps of starting a container. There are two options:
Swarm manager starts containers directly through Docker remote API.
Swarm manager asks the Swarm agent to start the container, then the Swarm agent asks the local Docker daemon to start it.
Personally, I think the first one is right. But I’m not sure...
Swarm agents don't proxy access to the Docker daemon; they are only there to communicate with the manager via etcd, Consul, or ZooKeeper. So the first one is correct. The agent registers the host with the discovery service, and from then on the manager can access it via the daemon listening on a TCP port.
I want to run a task in some Docker containers on different hosts, and I have written a manager app to manage the containers (start task, stop task, get status, etc.). Once a container is started, it will send an HTTP request to the manager with its address and port, so the manager will know how to manage the container.
Since there may be more than one container running on the same host, they would be mapped to different ports. To register a container with my manager, I have to know which port each container is mapped to.
How can I get the mapped port inside a docker container?
There's a solution here: How do I know mapped port of host from docker container? But it's not applicable if I run the container with -P. Since that question was asked more than a year ago, I'm wondering whether a new feature has been added to Docker to solve this problem.
You can also use docker port container_id
The docs:
https://docs.docker.com/engine/reference/commandline/port/
Examples from the docs:
$ docker port test
7890/tcp -> 0.0.0.0:4321
9876/tcp -> 0.0.0.0:1234
$ docker port test 7890/tcp
0.0.0.0:4321
$ docker port test 7890/udp
2014/06/24 11:53:36 Error: No public port '7890/udp' published for test
$ docker port test 7890
0.0.0.0:4321
I share /var/run/docker.sock into the container and get the container's own info:
docker run --rm -it -v /var/run/docker.sock:/var/run/docker.sock alpine:latest sh
In the container shell:
env    # shows HOSTNAME
curl --unix-socket /var/run/docker.sock http://localhost/containers/3c6b9e44a622/json
Here 3c6b9e44a622 is your HOSTNAME.
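From that JSON you can then read the mapped port under NetworkSettings.Ports. A minimal sketch using a hard-coded sample payload (the real call would pipe the curl output above; container port 80/tcp and host port 32768 are made-up values):

```shell
# Sample of the relevant fragment of a /containers/<id>/json response
SAMPLE='{"NetworkSettings":{"Ports":{"80/tcp":[{"HostIp":"0.0.0.0","HostPort":"32768"}]}}}'

# Extract the host port that container port 80/tcp is mapped to
echo "$SAMPLE" | python3 -c 'import sys, json
ports = json.load(sys.stdin)["NetworkSettings"]["Ports"]
print(ports["80/tcp"][0]["HostPort"])'
```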
Once a container is started, it will send an http request to the manager with its address and port
This isn't going to work. From inside a container you cannot figure out which Docker host port a container port is mapped to.
What I can think of that would work and is closest to what you describe is making the container open a websocket connection to the manager. Such a connection would allow two-way communication between your manager and container while still being over HTTP.
What you are trying to achieve is called service discovery. There are already tools for service discovery that work with Docker. You should pick one of them instead of trying to make your own.
See for instance:
etcd
consul
zookeeper
If you really want to implement your own service discovery system, one way to go is to have your manager use the docker events command (or one of the Docker client libraries). This would enable your manager to get notified of container creations and deletions with nothing to do on the container side.
Then query the docker host to figure out the ports that are mapped to your containers with docker port.
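A hedged sketch of that manager-side loop (assumes the manager runs on the Docker host itself; docker events --format is available in Docker 1.10 and later):

```shell
# Watch for container starts and print each new container's published ports
docker events --filter 'event=start' --format '{{.ID}}' |
while read -r id; do
    echo "container started: $id"
    docker port "$id"   # e.g. 80/tcp -> 0.0.0.0:32768
done
```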