Our project is a microservice application. We run 5 to 6 Docker containers using docker-compose, and this works fine on Ubuntu, CentOS, and Red Hat. I am not able to run the same setup on the Wind River operating system. All the containers share information over the Docker network. When I start the services using docker-compose, I get the following error:
ERROR: for my-service Cannot start service my-service: failed to create endpoint my-service on network my-net: failed to add the host (veth78f811b) <=> sandbox (vethdd9d629) pair interfaces: operation not supported
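The "operation not supported" error when creating the veth pair usually points at missing veth or bridge support in the host kernel, which is common on trimmed-down embedded kernels. A rough check, assuming /proc/config.gz and modprobe are available on the Wind River host:

zcat /proc/config.gz | grep -E 'CONFIG_VETH|CONFIG_BRIDGE'   # both should be =y or =m
lsmod | grep veth                                            # loaded as a module? (empty is fine if built in)
modprobe veth                                                # try loading it if it is available as a module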
Disclaimer
This is only happening on my machine. I tested the exact same code and procedure on my colleague's machine and it's working fine.
Problem
Hello, I have a fairly weird problem at hand.
I am running two Docker containers: One is a crossbar server instance, and the other is an application that uses WAMP (Web Application Messaging Protocol) and registers to the running crossbar server.
Nothing crazy: I run these two applications in two separate Docker containers that share the same network.
docker network create poc-bridge
docker run --net=poc-bridge -d --name cross my-crossbar-image
docker run --net=poc-bridge --name app my-app-image
Here is the Dockerfile I used to build the image my-crossbar-image:
FROM crossbario/crossbar
EXPOSE 8080
USER root
COPY deployment/crossbar/.crossbar /node/.crossbar
It simply exposes the port and copies some config files.
The other image for the app that needs to register to the crossbar server is not relevant.
Once I run my app in its container and it tries to register something with the crossbar server using the WebSocket address ws://cross:8080/ws, I get: OSError: [Errno 113] Connect call failed ('172.24.0.2', 8080)
What I tried
I checked that the two containers are actually on the same network (they are)
I could ping the cross container from the app container with docker exec app ping cross -c2 (weird)
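Roughly, those checks look like this; the docker logs line is an extra, hypothetical step to see whether crossbar actually came up and bound port 8080, which ping alone cannot tell you:

# List the containers attached to poc-bridge
docker network inspect poc-bridge --format '{{range .Containers}}{{.Name}} {{end}}'
# Reachability check from app to cross
docker exec app ping cross -c2
# Did crossbar start and open its transport on 8080?
docker logs cross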
What can it be???
The reason for the problem was never clear. However, it disappeared. All I had to do was:
Stop/remove all the created containers
Remove all the created images
Remove all the created networks
Rebuild everything
Now the services can communicate with each other.
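Roughly, the cleanup amounted to something like this (a sketch using the names from above; the build context for the crossbar image is an assumption, and my-app-image is rebuilt the same way from its own context):

# Stop and remove the containers
docker rm -f app cross
# Remove the images
docker rmi my-crossbar-image my-app-image
# Remove the network
docker network rm poc-bridge
# Rebuild and recreate everything
docker build -t my-crossbar-image .
docker network create poc-bridge
docker run --net=poc-bridge -d --name cross my-crossbar-image
docker run --net=poc-bridge --name app my-app-image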
I have a .NET application running in a Docker container via Docker Compose. I'm using a Windows machine with Docker Desktop running Linux containers.
The application connects to a Cosmos instance. The account key is set to the emulator key by default.
Here is the section from docker-compose.yml
customerapi:
  container_name: mycustomerapi
  image: acr.io/customer-api:master
  ports:
    - "20102:80"
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - CosmosOptions__Endpoint=${endpoint}
    - CosmosOptions__DisableSsl=true
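One way to supply ${endpoint} (an assumption; any variable-substitution mechanism Compose supports would do) is an .env file next to docker-compose.yml:

# .env -- hypothetical value
endpoint=https://host.docker.internal:8081/

It can also be overridden per run, e.g. endpoint=https://192.168.10.110:8081/ docker compose up.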
If I override the account key and endpoint, I can get the application to connect using a real instance hosted in Azure, but I can't get it to connect to the emulator running on the host machine.
I've tried setting ${endpoint} to the following values, with no luck:
https://host.docker.internal:8081/ Fails after about 5 mins with the error System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:8081).
https://192.168.10.110:8081/ This is my local IP address. It fails much faster (around 10 seconds) with the same error as above.
I've also tried using network_mode: host with both endpoints.
https://host.docker.internal:8081/ Fails with the same error as above.
https://192.168.10.110:8081/ Fails after about 10 seconds with the error System.Net.Http.HttpRequestException: No route to host (192.168.10.110:8081)
I needed to run the Cosmos emulator with /AllowNetworkAccess.
This answer shows how to start the emulator with the /AllowNetworkAccess argument:
Azure Cosmos DB Emulator on a Local Area Network
Once that was running, I was able to use https://host.docker.internal:8081/ and the container sprang to life!
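For reference, starting the emulator with network access enabled looks roughly like this (default install path assumed; the key is a placeholder, see the linked answer for the full set of options):

"C:\Program Files\Azure Cosmos DB Emulator\Microsoft.Azure.Cosmos.Emulator.exe" /AllowNetworkAccess /Key=<emulator key>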
I am running Dask Gateway in a Kubernetes namespace. I am able to connect to the Gateway using the following code, while not running in a Docker container.
from dask.distributed import Client
from dask_gateway import Gateway
gateway = Gateway('http://[redacted traefik ip]')
cluster = gateway.new_cluster()
However, when I run the same code from a Docker container, I get this warning after gateway.new_cluster().
distributed.comm.tcp - WARNING - Closing dangling stream in <TLS local=tls://[local ip]:51060 remote=gateway://[redacted ip]:80/dask-gateway.e71c345decde470e8f9a23c3d5a64956>
What is the cause of this? I have also tried running this with --net=host on the Docker container; that resulted in the same error.
Additional Info: This doesn't appear to be a Docker networking issue... I am able to use the Coiled clusters from within a Docker container, but not the Dask-Gateway clusters...
It appears that the initial outgoing connection from the Docker container to the traefik pod succeeds: a dask-scheduler is successfully spun up in the cluster. However, the connection then drops (a timeout?), which prevents further interaction.
I'm trying to run my docker-compose setup on the local Kubernetes cluster that comes by default with Docker Desktop.
I run the following command and it just ... hangs:
> docker stack deploy --orchestrator=kubernetes -c docker-compose.yml hornet
Ignoring unsupported options: build
Ignoring deprecated options:
container_name: Setting the container name is not supported.
expose: Exposing ports is unnecessary - services on the same network can access each other's containers on any port.
top-level network "backend" is ignored
top-level network "frontend" is ignored
service "website.public": network "frontend" is ignored
service "website.public": container_name is deprecated
service "website.public": build is ignored
service "website.public": depends_on are ignored
....
<snip> heaps of services 'ignored'
....
Waiting for the stack to be stable and running...
The docker-compose up command works great when I run it locally.
Is there any way I can see what's going on under the hood and find out why this hangs?
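For what it's worth, the deploy can be watched from the Kubernetes side while it hangs; a rough sketch, assuming kubectl is pointed at the Docker Desktop cluster and the compose controller's Stack CRD is installed:

# The stack deployed by docker stack deploy shows up as a custom resource
kubectl get stacks
# See whether any pods get created, scheduled, or stuck pulling images
kubectl get pods --all-namespaces
# Inspect a stuck pod for events (name is a placeholder)
kubectl describe pod <pod-name>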
I am having a problem with all my containers running in the IBM Containers service. The application (the container itself, I mean) is started while the service has still not configured the network for that container. After some seconds (maybe 20 or 30) the container has full network connectivity. This is generating a lot of problems, as it takes about that long for both the internal and external IP interfaces to be correctly configured by the system.
Currently I am inserting a sleep in all my containerized applications so they wait that long before starting to work, but I wonder if there is a way to instruct the host not to start the container until the network is ready.
Thanks
Note: This question is about the IBM Containers service, not generic Docker. That is why I don't specify a Docker version, as it is a CaaS service. To be precise, we run the container service using the Cloud Foundry extensions, not the docker command:
cf ic run --name CONTAINER_NAME -m 512 registry.ng.bluemix.net/MY_ZONE/MY_DOCKER_IMAGE
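For illustration, the fixed sleep can be replaced by polling until the network is actually up; a minimal entrypoint sketch, assuming a POSIX shell and the ip tool are available inside the image:

#!/bin/sh
# wait-for-net.sh -- hypothetical wrapper: wait for a default route, then start the real command
until ip route | grep -q '^default'; do
    echo "waiting for network..." >&2
    sleep 2
done
exec "$@"

The image would then use this script as its ENTRYPOINT, with the original start command as CMD.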