I have a .net application running in a docker container via docker compose. I'm using a Windows machine with Docker Desktop running Linux containers.
The application connects to a Cosmos instance. The account key is set to the emulator key by default.
Here is the section from docker-compose.yml
customerapi:
  container_name: mycustomerapi
  image: acr.io/customer-api:master
  ports:
    - "20102:80"
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - CosmosOptions__Endpoint=${endpoint}
    - CosmosOptions__DisableSsl=true
If I override the account key and endpoint, I can get the application to connect using a real instance hosted in Azure, but I can't get it to connect to the emulator running on the host machine.
I've tried setting ${endpoint} to the following values with no luck:
https://host.docker.internal:8081/ fails after about 5 minutes with the error System.Net.Http.HttpRequestException: Connection refused (127.0.0.1:8081).
https://192.168.10.110:8081/ (my local IP address) fails much faster (around 10 seconds) with the same error as above.
I've also tried using network_mode: host with both endpoints:
https://host.docker.internal:8081/ fails with the same error as above.
https://192.168.10.110:8081/ fails after about 10 seconds with the error System.Net.Http.HttpRequestException: No route to host (192.168.10.110:8081).
I needed to run the Cosmos emulator with /AllowNetworkAccess.
This answer shows how to start the emulator with the /AllowNetworkAccess argument:
Azure Cosmos DB Emulator on a Local Area Network
Once that was running I was able to use https://host.docker.internal:8081/ and the container sprung to life!
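For reference, a sketch of the host-side command (assumptions: the default install path, and the well-known emulator key shown elsewhere in this thread; the emulator requires /Key or /KeyFile whenever /AllowNetworkAccess is used):

```shell
# Run on the Windows host (not inside the container) so the emulator
# listens on the network instead of loopback only.
"C:\Program Files\Azure Cosmos DB Emulator\Microsoft.Azure.Cosmos.Emulator.exe" /AllowNetworkAccess /Key=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==

# Then point the compose variable at the host:
# endpoint=https://host.docker.internal:8081/
```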
Our project is a microservice application. We run 5 to 6 docker containers using docker-compose, and this works fine on Ubuntu, CentOS, and Red Hat. I am not able to run the same on the Wind River operating system. All the containers share information using the docker network. When I start the service using docker-compose, I get the following error:
ERROR: for my-service Cannot start service my-service: failed to create endpoint my-service on network my-net: failed to add the host (veth78f811b) <=> sandbox (vethdd9d629) pair interfaces: operation not supported
I have a GRPC service that I've implemented with TypeScript, and I'm interactively testing with BloomRPC. I've set it up (both client and server) with insecure connections to get things up and running. When I run the service locally (on port 3333), I'm able to interact with the service perfectly using BloomRPC - make requests, get responses.
However, when I include the service into a Docker container, and expose the same ports to the local machine, BloomRPC returns an error:
{
"error": "2 UNKNOWN: Stream removed"
}
I've double checked the ports, and they're open. I've enabled the additional GRPC debugging output logging, and tried tracing the network connections. I see a network connection through to the service on Docker, but then it terminates immediately. When I looked at tcpdump traces, I could see the connection coming in, but no response is provided from my service back out.
I've found other references to 2 UNKNOWN: Stream removed which appear to primarily be related to SSL/TLS setup, but as I'm trying to connect this in an insecure fashion, I'm uncertain what's happening in the course of this failure. I have also verified the service is actively running and logging in the docker container, and it responds perfectly well to HTTP requests on another port from the same process.
I'm at a loss as to what's causing the error and how to further debug it. I'm running the container using docker-compose, alongside a Postgres database.
My docker-compose.yaml looks akin to:
services:
  sampleservice:
    image: myserviceimage
    environment:
      NODE_ENV: development
      GRPC_PORT: 3333
      HTTP_PORT: 8080
      GRPC_VERBOSITY: DEBUG
      GRPC_TRACE: all
    ports:
      - 8080:8080
      - 3333:3333
  db:
    image: postgres
    ports:
      - 5432:5432
Any suggestions on how I could further debug this, or that might explain what's happening so that I can run this service reliably within a container and interact with it from outside the container?
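Two hedged debugging steps that often narrow this class of failure down (grpcurl is a third-party CLI and must be installed; the service name is the one from the compose file above):

```shell
# 1) Inside the container, confirm the gRPC server is bound to 0.0.0.0:3333
#    rather than 127.0.0.1:3333. A loopback-only bind is unreachable through
#    the published port, which can look from outside like a connection that
#    opens and then immediately drops.
docker-compose exec sampleservice sh -c 'netstat -tln || ss -tln'

# 2) From the host, probe the endpoint without TLS (assumes server
#    reflection is enabled; otherwise pass the .proto file with -proto).
grpcurl -plaintext localhost:3333 list
```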
I have been able to set up a containerised RabbitMQ server, reach into it with basic .NET Core clients, and check that message send and receive work, using the management portal on http://localhost:15672/.
But I am having real frustrations when I also containerise my Sender/Receiver .NET Core clients: they cannot establish a connection. I have set up an explicit "shipnetwork", so all containers in the following docker-compose deployment should be able to see each other.
This is the error I get in the sender when attempting the connection:
My SendRabbit .NET Core app is as follows. This code was working on my local Windows 10 development machine, with a host of 'localhost', against the RabbitMQ server running as a container. But when I change this to a [linux] docker project and set the host to "rabbitmq", to correspond to the service name in the docker-compose, I just get endpoint connection exceptions within my Sender container.
I have also attempted the same RabbitMQ server and Sender image with the same docker-compose on a Google Cloud Linux virtual machine, and get the same errors. So I do not think it is down to the Windows 10 docker hosting VM environment.
I thought docker was going to make development and deployment of microservices easier, but setting up a basic RabbitMQ connection is proving to be a real pain.
I thought that maybe the rabbitmq server was not yet up and running, so perhaps it was ambitious to put it in the same docker-compose. But I have checked, running my SendRabbit container
$ docker run --network shipnetwork sendrabbit
some minutes later, and I still get the same connection error.
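For anyone hitting the same wall, a hedged sketch of how to see where the containers actually live (the container name rabbitmq is an assumption based on the service name above):

```shell
# List every network docker knows about.
docker network ls

# Print the names of the networks a given container is attached to.
docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}} {{end}}' rabbitmq
```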
It was the docker networks!
When I checked the actual docker networks, I had:
bridge
host
shipnetwork
rabbitship_shipnetwork
The docker compose was actually creating a 'new' network, rabbitship_shipnetwork, every time it was spun up, and placing the rabbitmq server on that network. The network is named by prefixing the name in the compose yaml with the directory name. So I was using the wrong network in my senders. I should have been using
$ docker run --network rabbitship_shipnetwork sendrabbit
This works fine, and messages arrive in the rabbitmq server.
So I don't feel that docker-compose is very helpful in creating networks, since it is sensitive to the directory it is run from! It's unlikely that I can keep the app Dockerfiles and deploy all apps from a single directory, especially when rabbitmq has to be started separately, before senders and receivers can use it.
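The naming rule is at least predictable: compose builds the default network name as project name plus the network key, where the project name defaults to the directory holding the yaml (it can be overridden with -p or COMPOSE_PROJECT_NAME, and newer compose versions also accept an explicit name: on the network). A quick sketch, using the directory name from this question:

```shell
# How compose derives the default network name.
project="rabbitship"       # defaults to the compose file's directory name
network="shipnetwork"      # the key under `networks:` in the yaml
echo "${project}_${network}"   # prints rabbitship_shipnetwork
```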
My dotnet core app running in a docker container on my machine needs to connect to some external services via their IPs, one of which is a SQL database running separately on a remote server hosted on Google Cloud. The app runs without issue when not running with docker; however, with docker it fails with
An error occurred using the connection to database 'PartnersDb' on server '30.xx.xx.xx,39876'.
fail: Microsoft.EntityFrameworkCore.Update[10000]
An exception occurred in the database while saving changes for context type 'Partners.Api.Infrastructure.Persistence.MoneyTransferDbContext'.
System.InvalidOperationException: Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached.
My docker-compose file looks like this:
version: "3.5"
networks:
  my-network:
    name: my_network
services:
  partners:
    image: partners.api:latest
    container_name: partners
    build:
      context: ./
    restart: always
    ports:
      - "8081:80"
    environment:
      - ASPNETCORE_ENVIRONMENT=Docker
      - ConnectionStrings:DefaultConnection=Server=30.xx.xx.xx,39876;Database=PartnersDb;User Id=don;Password=Passwor123$$
    networks:
      - my-network
    volumes:
      - /Users/mine/Desktop/logs:/logs
I have bash-ed into the running container and I'm able to ping the remote sql database server. I've also been able to telnet to the remote sql database server on the database port.
However, the problem arises when I do docker-compose up; then I get the error above.
Docker version 19.03.5, build 633a0ea, running on macOS Mojave 10.14.6.
I really do not know what to do at this stage.
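One extra hedged check: run the reachability test from inside the running container itself, since that is the exact path the app uses (container name and IP/port placeholders are the ones from this question):

```shell
# Bash's /dev/tcp pseudo-device opens a raw TCP connection; if this prints
# "reachable", the container itself can reach the SQL port.
docker exec partners bash -c 'cat < /dev/null > /dev/tcp/30.xx.xx.xx/39876 && echo reachable'
```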
There are two separate networks: inside docker is one network, and outside docker, on the host machine, is a different network. If you access the service by localhost or an IP address across that boundary, it won't work the way you expect.
docker network ls
will show you output similar to the below:
NETWORK ID     NAME          DRIVER    SCOPE
58a4dd9893e9   133_default   bridge    local
424817227b42   bridge        bridge    local
739297b8107e   host          host      local
b9c4fb3ed4ba   none          null      local
You need to add a host entry for the Java service locally. Try running a command like the one below:
For Service:
docker run --add-host remoteservice:<ip address of java service> <your image>
Hopefully, this will fix it.
More here: https://docs.docker.com/engine/reference/run/#managing-etchosts
For PartnersDb Database:
If PartnersDb is a SQL database you'll have to configure SQL Server to listen to specific ports. Through SQL Server Configuration Manager > SQL Server Network configuration > TCP/IP Properties.
More here: https://learn.microsoft.com/en-us/sql/database-engine/configure-windows/configure-a-server-to-listen-on-a-specific-tcp-port?view=sql-server-ver15
There are similar settings for MySQL as well.
After which you'll have to run the --add-host switch:
docker run --add-host PartnersDb:<ip address of PartnersDb database> <your image>
You can update the hosts file with these settings as well, but I'd prefer doing it through the command line instead.
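If you'd rather keep this in docker-compose than remember a docker run flag, the equivalent of --add-host is the extra_hosts key, whose entries end up in the container's /etc/hosts. A sketch, reusing the PartnersDb name and the redacted IP from above:

```yaml
services:
  partners:
    extra_hosts:
      - "PartnersDb:30.xx.xx.xx"
```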
I'm a little confused by Linux docker and the cosmos db emulator. I have the emulator installed on my local machine. On my Windows 10 machine I have a Linux docker container with an ASP.NET Core Web API application. When I try to access cosmos db from the container, I get an exception -> HttpRequestException: Connection refused.
In the C# code I read the needed options, like the AuthKey and the Uri to the database, from environment variables. It looks like I have a network issue between the container and localhost, but I cannot understand how to connect the two.
The docker-compose.yml and docker-compose.override.yml files are provided below.
event.webapi:
  container_name: event.webapi
  image: '${DOCKER_REGISTRY-}eventwebapi'
  environment:
    - AzureCollectionName=Events
    - AzureDatabaseName=EventsDatabase
  build:
    context: .
    dockerfile: src/Services/Event/Event.WebApi/Dockerfile
docker-compose.override.yml
event.webapi:
  environment:
    - ASPNETCORE_ENVIRONMENT=Development
    - ASPNETCORE_URLS=https://+:443;http://+:80
    - ASPNETCORE_HTTPS_PORT=44378
    - AzureEndpointUri=https://127.0.0.1:8081
    - AzurePrimaryKey=C2y6yDjf5/R+ob0N8A7Cgv30VRDJIWEHLM+4QDU5DE2nQ9nDuVTqobD4b8mGGyPMbIZnqyMsEcaGQy67XIw/Jw==
  ports:
    - "53753:80"
    - "44378:443"
  volumes:
    - ${APPDATA}/Microsoft/UserSecrets:/root/.microsoft/usersecrets:ro
    - ${APPDATA}/ASP.NET/Https:/root/.aspnet/https:ro
You can't reach the host machine from inside the docker container by setting https://127.0.0.1:8081 directly.
Please refer to this document, and try setting host.docker.internal:8081 to access the host:
The host has a changing IP address (or none if you have no network access). From 18.03 onwards our recommendation is to connect to the special DNS name host.docker.internal, which resolves to the internal IP address used by the host. This is for development purpose and will not work in a production environment outside of Docker Desktop for Windows.
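Applied to the override file above, that just means swapping the loopback endpoint for the special DNS name (a sketch; the variable name is the one from the question):

```yaml
event.webapi:
  environment:
    - AzureEndpointUri=https://host.docker.internal:8081
```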
The Cosmos DB emulator needs its SSL certificate installed, according to this link. On Windows, the .net runtime can access the certificate directly from the Windows Certificate Store.
However, you run the .net code in a linux docker image. So my idea is to export the SSL certificate following these steps, save it to a specific path on the host, and mount that host directory into the container. Please refer to this guide.
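A sketch of that export-and-trust step (assumptions: the emulator exposes its certificate at the /_explorer/emulator.pem download path, and the container uses a Debian/Ubuntu-based image where update-ca-certificates exists):

```shell
# On the host: download the emulator's self-signed TLS certificate
# (-k skips verification, since the cert is not yet trusted).
curl -k https://localhost:8081/_explorer/emulator.pem > emulatorcert.crt

# In the container (Dockerfile or entrypoint), after mounting/copying the
# file in: add it to the trusted CA store so the .net client accepts it.
cp emulatorcert.crt /usr/local/share/ca-certificates/
update-ca-certificates
```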