Previously I had a Windows Server 2019 and I could start hundreds of containers without any issue.
Now I'm trying to renew my environment with a Windows Server 2022 Datacenter.
I have three copies of docker-compose.yml. Each has an exposed port, as follows:
version: '3.8'
services:
  web:
    image: "mcr.microsoft.com/windows/nanoserver:ltsc2022"
    ports:
      - "4604:80"
    command: "ping 4.2.2.4 -t"
The exposed ports are different on each yml so there is no conflict.
The first container starts, the second container starts, but when starting the THIRD container, the Host Network Service (HNS) comes up in Task Manager taking 20-30% of CPU, the container gets stuck on starting, and nothing happens after this.
I tried several scenarios to find the simplest way to reproduce this error. The key point is exposing ports on the containers: without exposing ports I can start multiple containers.
What is the difference between networking defaults in Windows Server 2019 and 2022?
Should I start some Windows services on 2022?
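One thing that may be worth trying (a sketch, not a confirmed fix): each compose project creates its own NAT network, so HNS has to build a new network per project. Pre-creating a single NAT network on the host and pointing every compose file at it reduces that per-project work. The network name shared_nat is an assumption for illustration:

```yaml
# Sketch: reuse one pre-created NAT network across all three projects.
# Run once on the host first:  docker network create -d nat shared_nat
version: '3.8'
services:
  web:
    image: "mcr.microsoft.com/windows/nanoserver:ltsc2022"
    ports:
      - "4604:80"
    command: "ping 4.2.2.4 -t"
    networks:
      - shared
networks:
  shared:
    external: true
    name: shared_nat   # hypothetical name, created out-of-band
```

With `external: true`, compose attaches to the existing network instead of asking HNS to create a new one for every project.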
My app in a container wants to access MySQL on the host machine, but it is not able to connect. I googled a lot and tried many solutions but could not figure out the error; could you please help me with this?
It is a Windows image
The IIS website works
Website pages that use a DB connection do not work
MySQL is installed on the local machine (the same PC where Docker Desktop is installed)
The connection string in the app uses 'host.docker.internal' with port 3306.
Tried Docker uninstall and reinstall, image prune, container prune, WSL stop and start, and commenting out the lines below in the hosts file:
192.168.1.8 host.docker.internal
192.168.1.8 gateway.docker.internal
Below is the ipconfig output from the container
nslookup and ping commands:
docker network ls:
Docker Compose:
version: "3.9"
services:
  web:
    container_name: dinesh_server_container
    image: dinesh_server:1
    build: .
    ports:
      - "8000:80"
      - "8001:81"
    volumes:
      - .\rowowcf:c:\rowowcf
      - .\rowowcf_supportfiles:c:\rowowcf_supportfiles
      - .\rowocollectionsite:c:\rowocollectionsite
    environment:
      TZ: Asia/Calcutta
Build image uses: FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
Host OS: Win 10 Pro 10.0.19043 Build 19043
HyperV is enabled too.
Tried the below too:
extra_hosts:
  - "host.docker.internal:host-gateway"
Since it is on a Windows OS, host network mode is not supported (per my research).
EDIT:
MYSQL Bind Address is 0.0.0.0:
Maybe the problem is not directly related to docker but to mysql.
Please try making your local MySQL database listen on all the network interfaces of your host: by default it only listens on 127.0.0.1, and for this reason Docker is perhaps unable to connect to it.
I am not sure how to do it on Windows, but typically you need to set the value 0.0.0.0 for the bind-address configuration option in mysqld.cnf:
bind-address = 0.0.0.0
Please, consider review as well this article or this related serverfault question.
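For reference, a minimal sketch of what that could look like in the MySQL options file on Windows (usually my.ini; the exact path and section layout vary by install, so treat this as an assumption). Note also that MySQL accounts are host-based, so a user defined as 'user'@'localhost' cannot log in from the container's IP:

```ini
# my.ini sketch (path and surrounding options are assumptions)
[mysqld]
bind-address = 0.0.0.0   # listen on all interfaces, not just 127.0.0.1
port = 3306
# Remember: an account 'user'@'localhost' will NOT match connections coming
# from the container's IP; a 'user'@'%' (or specific-subnet) grant may be
# required, plus a Windows Firewall rule allowing inbound TCP 3306.
```

After changing this, the MySQL Windows service has to be restarted for the new bind address to take effect.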
From a totally different point of view, I found this and this other related open issue in the Docker for Windows repository that somewhat resemble your problem: the first one in particular provides some workarounds, like shutting down the WSL backend and restarting the Docker daemon. It doesn't look like a solution to me, but perhaps it could be of help.
I have been able to set up a containerised RabbitMQ server and reach into it with basic .NET Core clients, checking that message send and receive work using the management portal on http://localhost:15672/.
But I am having real frustrations when I also containerise my Sender/Receiver .NET Core clients: they cannot establish a connection. I have set up an explicit "shipnetwork", so all containers in the following docker-compose deployment should see each other.
This is the Error I get in the sender attempting the connection:
My SendRabbit .NET Core app is as follows. This code was working on my local Windows 10 development machine, with a host of 'localhost', against the RabbitMQ server running as a container. But when I change this to a [linux] Docker project and set the host to "rabbitmq", to correspond to the service name in the docker-compose, I just get endpoint connection exceptions within my Sender container.
I have also attempted the same RabbitMQ server and Sender Image with the same docker-compose on a Google Cloud Linux Virtual Machine, and get the same errors. So I do not think it is the Windows 10 docker hosting VM environment hassles.
I thought Docker was going to make development and deployment of microservices easier, but setting up a basic RabbitMQ connection is proving to be a real pain.
I thought that maybe the RabbitMQ server was not yet up and running, so perhaps it was ambitious to put everything in the same docker-compose. But I have checked by running my SendRabbit container
$ docker run --network shipnetwork sendrabbit
some minutes later, and I still get the same connection error.
docker networks **** networks !
When I checked the actual docker networks, I had:
bridge
host
shipnetwork
rabbitship_shipnetwork
The docker-compose was actually creating a 'new' network, rabbitship_shipnetwork, every time it was spun up, and placing the rabbitmq server on that network. The network is named by prepending the directory name to the name in the compose yaml. So I was using the wrong network in my senders. I should have been using
$docker run --network rabbitship_shipnetwork sendrabbit
This works fine, and creates messages into the rabbitmq server
So I don't feel that docker-compose is very helpful in creating networks, since it is sensitive to the directory name it is run in! It's unlikely that I can build app Dockerfiles and deploy all apps from a single directory, especially when rabbitmq has to be started separately, before senders and receivers can use it.
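For what it's worth, the project-name prefix can be avoided: Compose accepts an explicit project name (`docker-compose -p rabbitship up`), or the network name can be pinned in the yaml so it is created verbatim. A sketch (the `name:` key requires compose file format 3.5 or later):

```yaml
# Sketch: give the compose network a fixed name so other containers can
# join it with `docker run --network shipnetwork ...` regardless of the
# directory the compose file lives in.
version: "3.5"
services:
  rabbitmq:
    image: rabbitmq:3-management
    networks:
      - shipnetwork
networks:
  shipnetwork:
    name: shipnetwork   # no project-directory prefix
```

With this, separately-started sender and receiver containers can always target the same network name.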
I am new to Docker-compose and I am struggling to run my Tomcat+Postgres stack.
I have "successfully" launched the stack, in the sense that my Java Web Application successfully connects to Postgresql and deploys into Tomcat.
But no ports are mapped and the containers are not reachable from the host, although the containers can reach each other.
The following is my project layout (I use Palantir's Gradle Docker plugin)
edcom3-docker/
  edcom3-tomcat/
    build.gradle
    src/main/
      docker/Dockerfile
      resources
      webapps/edcom3.war
      (Other stuff I am too lazy to list)
  edcom3-postgres/
    build.gradle
    src/main/
      docker/Dockerfile
  src/main/docker/
    docker-compose.yml
    .env
Thanks to Gradle Docker plugin, the context is built into $baseDir/build/docker
The following is my current docker-compose.yml. I needed to expand the directory structure above to explain the relative build paths.
version: '3'
services:
  edcom3-postgres:
    build: ../../../edcom3-postgres/build/docker
    image: edcom3-postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: postgres
#    networks:
#      - edcom3-net
    expose:
      - "5432/tcp"
    ports:
      - "${EDCOM3_SQL_PORT}:5432"
    volumes:
      - "edcom3-postgres-data:/var/lib/postgresql/data"
  edcom3-tomcat:
    depends_on:
      - edcom3-postgres
    build: ../../../edcom3-tomcat/build/docker
    image: edcom3-tomcat
    expose:
      - "8009/tcp"
      - "8080/tcp"
    ports:
      - "${EDCOM3_AJP_PORT}:8009"
    volumes:
      - "edcom3-config-location:/home/tomcat"
      - "edcom3-file-repository:/mnt/fileRepository"
      - "edcom3-logs:/mnt/phoenix-logs"
      - "edcom3-tomcat-logs:/usr/local/tomcat/logs"
    restart: always
#    networks:
#      - edcom3-net
    links:
      - edcom3-postgres
#networks:
#  edcom3-net:
#    driver: bridge
#    internal: true
volumes:
  edcom3-config-location:
  edcom3-file-repository:
  edcom3-logs:
  edcom3-tomcat-logs:
  edcom3-postgres-data:
What I have tried
I first run gradle :edcom3-tomcat:docker and :edcom3-postgres:docker to build the contexts.
Then I cd into src/main/docker of the main project, where the above docker-compose is located, and launch the stack.
edcom3-tomcat_1 | 06-Feb-2020 15:51:12.943 INFO [localhost-startStop-1] org.apache.catalina.startup.HostConfig.deployWAR Deployment of web application archive [/usr/local/tomcat/webapps/edcom3.war] has finished in [66,278] ms
The stack starts and the application is deployed. As you can see, I have instructed docker-compose to publish the AJP port (the variables are bound to ports 50000 and 50001) so that Apache can reverse-proxy into Tomcat. Apache is a stand-alone container.
But I can't find the port bindings in docker ps
[docker#DOCKER01 ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
78acb0e5ff5d edcom3-tomcat "catalina.sh run" 11 minutes ago Up 11 minutes (unhealthy) edcom3_edcom3-tomcat_1
60bbed143adf edcom3-postgres "docker-entrypoint.s…" 16 minutes ago Up 16 minutes (unhealthy) edcom3_edcom3-postgres_1
23265ae20793 postgres:11.6-alpine "docker-entrypoint.s…" 7 weeks ago Up 2 days 192.168.0.72:5432->5432/tcp postgres11
9c8b0eda42e9 portainer/portainer:1.23.0 "/portainer --ssl --…" 7 weeks ago Up 2 days 192.168.0.72:8000->8000/tcp, 192.168.0.72:9000->9000/tcp keen_grothendieck
63985a2c656f initech/sqlserver2017:20191204 "/opt/mssql/bin/nonr…" 2 months ago Up 2 days (healthy) 192.168.0.72:1433->1433/tcp sqlserver2017
09589b076513 oracle/database:12.2.0.1-SE2 "/bin/sh -c 'exec $O…" 2 months ago Up 2 days (healthy) 192.168.0.72:1521->1521/tcp, 192.168.0.72:5500->5500/tcp oracle12c
Considerations: (un)commenting the network in the compose file has no effect.
I can clearly see that the containers are reported unhealthy. I tried to remove the health check from their Dockerfiles, but it had no effect: the containers' health is no longer reported, but still no ports are published.
Then I tried to reach the containers within their network (with the network block in docker-compose commented out), from my Windows workstation:
> docker inspect 4ce2be94fbe8 (tomcat)
....
"NetworkID": "8196b4a9dab76b899494f427286c0a9250ba4b74f8e4c6dbb8cd4459243509ac",
"EndpointID": "17d969ad49fe127870f73e63211e309f23d37a23d2918edb191381ffd7b2aaff",
"Gateway": "172.25.0.1",
"IPAddress": "172.25.0.3",
....
(Oh, cool, the server is listening on port 8080 on that)
> telnet 172.25.0.1 8009
(connection failed)
> tracert 172.25.0.1
(a number of nodes)
It is interesting to see the tracert result (which I have omitted). Basically Windows 10 tries to reach 172.25.x.x, which is notably a private (RFC 1918) address, through the main gateway, only to be dropped by our external ISP (4 external hops appear in the trace).
Okay, Windows has not configured routing tables.
Let's try on our docker server running CentOS
$ docker inspect 60bbed143adf
.....
"NetworkID": "10a52bc3f822f756f5b76c300787be5af255afd061453add0c70664f69ee06c8",
"EndpointID": "f054747f6a5d0370916caa74b8c01c3e7b30d255e06ebb9d0c450bf1db38efb1",
"Gateway": "172.19.0.1",
"IPAddress": "172.19.0.2",
"IPPrefixLen": 16,
.....
[docker#DOCKER01 ssl]$ telnet 172.19.0.3 8009
Trying 172.19.0.3...
Connected to 172.19.0.3.
Escape character is '^]'.
It's interesting that I can finally access the network
Conclusion: question
Can somebody help me understand why can't I map port 8009 (AJP) from the web container to the host machine? If I can achieve that, the web application will be available to Apache load balancer via AJP protocol
In the compose file, the container port 8009 is exposed on the host port ${EDCOM3_AJP_PORT}.
So you should be able to access your tomcat AJP with <IP-OF-DOCKER-SERVER>:${EDCOM3_AJP_PORT}.
Port publication is done with the ports section, expose only "expose ports without publishing them to the host machine - they’ll only be accessible to linked services"
But we can see in the docker ps output that the PORTS section is empty for the edcom3-tomcat container, so I'd suggest that EDCOM3_AJP_PORT is not well defined (but then it should fail...)
On using 172.25.0.1:8009: the container IP is private to the Docker host (the CentOS machine), so it can be accessed (on any listening port of the container) from that machine, but not from any other machine.
See Container networking - Published ports:
By default, when you create a container, it does not publish any of its ports to the outside world.
To make a port available to services outside of Docker, or to Docker containers which are not connected to the container’s network, use the --publish or -p flag.
This creates a firewall rule which maps a container port to a port on the Docker host.
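To illustrate the distinction in compose terms (the ports and variable are from the question; the `:-50000` fallback syntax is an assumption added to guard against an unset .env variable):

```yaml
services:
  edcom3-tomcat:
    expose:
      - "8009/tcp"                        # visible to other containers only
    ports:
      - "${EDCOM3_AJP_PORT:-50000}:8009"  # published on the Docker host;
                                          # defaults to 50000 if the .env
                                          # variable is missing
```

If EDCOM3_AJP_PORT were empty, the mapping would be invalid, which is consistent with the PORTS column being empty in docker ps.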
I have found that edcom3-net was the main culprit.
My issue was caused by a number of unfortunate coincidences, so let me put order on that.
Ports not exposed with network
Simply put, I have found that if you connect a container to an internal network (internal: true), Docker won't publish its ports on the host. If you comment out the network part of the docker-compose file, so the containers land on the default bridge network, the ports are published successfully.
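As a sketch, the difference comes down to this one flag in the network definition:

```yaml
networks:
  edcom3-net:
    driver: bridge
    internal: true    # isolates the network: no external access and
                      # `ports:` mappings are not published on the host
    # internal: false (the default) keeps the bridge connected to the
    # host, so `ports:` mappings are published as usual
```

So an internal network is meant for backend-only traffic (e.g. Tomcat to Postgres), not for services that must be reachable by a host-side reverse proxy.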
Host up but not reachable when internal network is in use
I found this to be another aspect, which requires me additional investigation and rather a separate Q&A.
What I have discovered is that when you attach a container to an internal network, it obtains its own private IP address but does not publish its ports. Still, the container has an IP address and its ports are reachable inside that network. There are two noticeable things here:
1. If you run a load balancer on another Docker container (this time on the host network)
You will have a hard time reaching the target container. You must configure routing tables to link multiple networks.
2. If the load balancer runs on the Docker host, instead
It should be easier to set up a reverse proxy. However, I had the unfortunate coincidence of running this scenario on a Windows host, and Docker Desktop, in my case, did not set up the routing tables. Even after docker network inspecting edcom3-net several times, the 172.16.x.x address of Tomcat was routed via the main gateway.
I would appreciate expert advice. We have a Docker EE setup on the RH Linux platform.
Given that we have setup Docker EE as:
2 manager nodes (linux)
2 worker nodes (linux)
2 worker node (windows server)
UCP
Docker Swarm
When I build a Windows container to run a .NET console service built on .NET 4.6.2, how does this container get allocated in the swarm?
Questions:
How will this be able to join the swarm?
Will my container be able to run on the worker nodes running Linux host OS?
How does Docker Swarm manage fail-over of the nodes? Will the replicas only get distributed on the Windows worker nodes? Does this setup of ours make sense?
I have read that Windows containers only run on Windows hosts, but Linux containers can run on both Linux and Windows host nodes. I will be testing this this week but it would be great to hear your experiences. //TIA
You join your Windows container hosts to the swarm the same way you join UNIX ones (docker swarm join). You assign a label to those nodes to identify them as Windows nodes, and when you deploy a service you specify a constraint for Windows containers.
It will work as you would expect with UNIX services. The current limitation is that you can only deploy in global mode, that is, with a task running on each Windows node, since the swarm routing mesh is not fully supported yet.
https://learn.microsoft.com/en-us/virtualization/windowscontainers/manage-containers/swarm-mode
You no longer need to create OS labels for each node. Docker Swarm recognizes worker node OS automatically. Just specify the desired OS for each service in your compose file:
version: '3'
services:
  service_1:
    restart: on-failure
    image: 'service_1'
    deploy:
      placement:
        constraints:
          - node.platform.os == windows
  junittestsuite:
    restart: on-failure
    image: 'junit_test_suite:1.0'
    command: ant test ...
    deploy:
      placement:
        constraints:
          - node.platform.os == linux
To start, I am more familiar running Docker through Portainer than I am with doing it through the console.
What I'm Doing:
Currently, I'm running Mopidy through a container, which is being accessed by other machines through the default Mopidy port. In another container, I am running a Slack bot using the Limbo repo as a base. Both of them are running on Alpine Linux.
What I Need:
What I want to do is for my Slack bot to be able to call mpc commands, such as muting the volume, etc. This is where I am stuck. What is the best way to make this work?
What I've tried:
I could SSH into the other container to send a command, but it doesn't make sense to do this since they're both running on the same server machine.
The best way to connect a bunch of containers is to define a service stack in a docker-compose.yml file and launch all of them using docker-compose up. This way all the containers are connected via a single user-defined bridge network, which makes all their ports accessible to each other without you explicitly publishing them. It also allows the containers to discover each other by service name via DNS resolution.
Example of docker-compose.yml:
version: "3"
services:
  service1:
    image: image1
    ports:
      # the following is only necessary to access the port from the host machine
      - "host_port:container_port"
  service2:
    image: image2
In the above example, any application in the service2 container can reach a port on service1 just by using the service1:port address.
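Applied to the question's setup, a sketch might look like the following. The image names are placeholders, and exposing Mopidy's web port with `ports:` is only needed because other machines access it; the bot reaches Mopidy's MPD port over the internal network:

```yaml
version: "3"
services:
  mopidy:
    image: my-mopidy-image        # placeholder image name
    ports:
      - "6680:6680"               # Mopidy HTTP frontend, for other machines
  slackbot:
    image: my-limbo-bot           # placeholder image name
    # inside this container the bot can run, e.g.:
    #   mpc -h mopidy -p 6600 volume 0
    # "mopidy" resolves to the other container via the compose network's DNS
```

Here 6600 is Mopidy's default MPD port; since both services sit on the same compose network, it never needs to be published on the host for the bot to use it.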