I am debugging a solution that uses Docker containers. It's a Visual Studio solution making use of docker-compose to spin up all the required containers.
When spinning up one instance of each container it works 100%, but the solution is load balanced, and I am trying to debug some functionality while more than one instance is running.
Using docker-compose you can specify replicas, and it will start up more than one instance of that container:
consoledemo:
  image: index.docker.io/dustyroberts/consoledemo:latest
  environment:
    - ASPNETCORE_ENVIRONMENT=Local
  networks:
    - private_network
  deploy:
    replicas: 10
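An aside, since the behaviour differs between versions: a plain docker-compose up may ignore the deploy: section entirely unless you pass --compatibility. Outside swarm mode, the usual way to run several instances of a service is the --scale flag:

$ docker-compose up --scale consoledemo=10

Note that this only works when the service does not pin a container_name or a fixed host port, since every instance needs a unique name.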
However, while debugging with Visual Studio Community, the replicas setting causes an error:
One or more container names used by this project is already in use. Retrying 'docker-compose up' with non-conflicting container names.
Here is the docker-compose section when debugging the app:
consoledemo:
  image: ${DOCKER_REGISTRY-}consoledemo
  environment:
    - ASPNETCORE_ENVIRONMENT=Local
  networks:
    - private_network
  deploy:
    replicas: 10
Is it even possible to debug multiple instances of a containerized project using Visual Studio? If so, could someone please point me in the right direction?
I can successfully bring up a Cosmos DB Emulator instance within docker-compose, but the data I am trying to seed has more than 25 static containers, which is more than the default emulator allows. Per https://learn.microsoft.com/en-us/azure/cosmos-db/emulator-command-line-parameters#set-partitioncount you can set this partition count higher with a parameter, but I am unable to find a proper entry point in the compose file that accepts that parameter.
I have found nothing in my searches that affords any insight into this, as most people have either not used compose or not used Docker at all for their Cosmos Emulator instance. Any insight would be appreciated.
Here is my docker-compose.yml for CosmosDb:
services:
  cosmosdb:
    container_name: "azurecosmosemulator"
    hostname: "azurecosmosemulator"
    image: 'mcr.microsoft.com/cosmosdb/windows/azure-cosmos-emulator'
    platform: windows
    tty: true
    mem_limit: 2GB
    ports:
      - '8081:8081'
      - '8900:8900'
      - '8901:8901'
      - '8902:8902'
      - '10250:10250'
      - '10251:10251'
      - '10252:10252'
      - '10253:10253'
      - '10254:10254'
      - '10255:10255'
      - '10256:10256'
      - '10350:10350'
    networks:
      default:
        ipv4_address: 172.16.238.246
    volumes:
      - '${hostDirectory}:C:\CosmosDB.Emulator\bind-mount'
I have attempted to add a command in there for starting the container, but it does not accept any arguments I have tried.
My answer for this was a workaround. Ultimately, running Windows and Linux containers side by side was a sizeable pain. Recently, Microsoft put out a Linux container version of the emulator, which allowed me to provide an environment variable for partition counts and run the process far more efficiently.
Reference here: https://learn.microsoft.com/en-us/azure/cosmos-db/linux-emulator?tabs=ssl-netstd21
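For anyone following the same path, here is a minimal sketch of what that looks like in compose, based on the linked docs (the image name and the AZURE_COSMOS_EMULATOR_* variables come from that page; the partition count of 50 is just an example):

services:
  cosmosdb:
    container_name: "azurecosmosemulator-linux"
    image: 'mcr.microsoft.com/cosmosdb/linux/azure-cosmos-emulator'
    mem_limit: 3GB
    ports:
      - '8081:8081'
      - '10251:10251'
      - '10252:10252'
      - '10253:10253'
      - '10254:10254'
    environment:
      # raise the partition count so the emulator can hold more containers
      - AZURE_COSMOS_EMULATOR_PARTITION_COUNT=50
      - AZURE_COSMOS_EMULATOR_ENABLE_DATA_PERSISTENCE=true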
After fiddling around for a couple of days with what was completely new to me a week ago, I'm kind of stuck and would like your help. I've created a docker swarm with some Pis running Ubuntu Server 20.04 LTS, and when I use the command:
$ docker stack deploy --compose-file docker-compose.visualizer.yml visualizer
The terminal feedback is:
Creating network visualizer_default
Creating service visualizer_visualizersvc
Practically the same output when I run:
$ docker stack deploy --compose-file docker-compose.home-assistant.yml home-assistant
Checking the stacks:
$ docker stack ls
NAME             SERVICES   ORCHESTRATOR
home-assistant   1          Swarm
visualizer       1          Swarm
Checking services in stacks:
$ docker stack services visualizer
ID             NAME                       MODE         REPLICAS   IMAGE                             PORTS
t5nz28hzbzma   visualizer_visualizersvc   replicated   0/1        dockersamples/visualizer:latest   *:8000->8080/tcp
$ docker stack services home-assistant
ID             NAME                           MODE         REPLICAS   IMAGE                                 PORTS
olj1nbx5vj40   home-assistant_homeassistant   replicated   0/1        homeassistant/home-assistant:stable   *:8123->8123/tcp
When I then browse to the ports specified in docker-compose.visualizer.yml or docker-compose.home-assistant.yml, there is no response on the server side ("can't connect"). This is identical for both the manager and worker IPs. This is inside a home network, on a single subnet with no traffic rules set for LAN traffic.
EDIT: a port scan reveals no open ports in the specified range on either host.
Any comments on my work are welcome, as I'm learning, but I would very much like to see some containers 'operational'.
As a reference I included the docker-compose files:
docker-compose.home-assistant.yml
version: "3"
services:
homeassistant:
image: homeassistant/home-assistant:stable
ports:
- "8123:8123"
volumes:
- './home-assistant:/config'
environment:
TZ: 'Madrid'
restart: unless-stopped
network_mode: host
docker-compose.visualizer.yml
version: "3"
services:
visualizersvc:
image: alexellis2/visualizer-arm:latest
deploy:
placement:
constraints:
- 'node.role==manager'
ports:
- '8000:8080'
volumes:
- '/var/run/docker.sock:/var/run/docker.sock'
Bonus points for telling me whether I should always approach the manager through the specified ports or whether I have to approach the machine running the service (or for any good documentation on the subject).
Of course, not long after you post a question you happen to find the answer yourself:
I never scaled the services (to 1, in my case):
docker service scale [SERVICE_ID]=1
EDIT: The services were not scaling to 1 because of another error, I think in the visualizer, but this brought me to the final answer.
Now I'm getting a mountain of new error messages, but at least those are verbose :)
Any feedback is still welcome.
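For anyone else staring at a 0/1 REPLICAS column: the task history usually says why a task never started, e.g.:

$ docker service ps visualizer_visualizersvc --no-trunc
$ docker service logs visualizer_visualizersvc

And regarding the bonus question: with the default ingress routing mesh, a port published by a swarm service is reachable on every node in the swarm, not just the node running the task, so approaching the manager is fine (see the 'Use swarm mode routing mesh' page in the Docker docs).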
I want to run a web app and a DB using Docker. Is there any way to connect two Docker containers (the web app container on one machine and the DB container on another machine) using a docker-compose file, without swarm mode?
I mean two separate servers.
This is my MongoDB docker-compose file:
version: '2'
services:
  mongodb_container:
    image: mongo:latest
    restart: unless-stopped
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
volumes:
  # top-level declaration required for the named volume used above
  mongodb_data_container:
Here is my demowebapp docker-compose file:
version: '2'
services:
  demowebapp:
    image: demoapp:latest
    restart: unless-stopped
    volumes:
      - ./uploads:/app/uploads
    environment:
      - PORT=3000
      - ROOT_URL=http://localhost
      - MONGO_URL=mongodb://35.168.21.133/demodb
    ports:
      - 3000:3000
Can anyone suggest how to do this?
Using only one docker-compose.yml with compose version: 2 there is no way to deploy two services on two different machines. That's what version: 3, a stack.yml, and swarm mode are for.
You can, however, deploy to two different machines using two version-2 docker-compose.yml files, but you will have to connect the services using hostnames/IPs rather than the service names from the compose files.
You shouldn't need to change anything in the sample files you show: you have to connect to the other host's IP address (or DNS name) and the published ports:.
Once you're on a different machine (or in a different VM) none of the details around Docker are visible any more. From the point of view of the system running the Web application, the first system is running MongoDB on port 27017; it might be running on bare metal, or in a container, or port-forwarded from a VM, or using something like HAProxy to pass through from another system; there's literally no way to tell.
The configuration you have to connect to the first server's IP address will work. I'd set up a DNS system if you don't already have one (BIND, AWS Route 53, ...) to avoid needing to hard-code the IP address. You also might look at a service-discovery system (I have had good luck with Hashicorp's Consul in the past) which can send you to "the host system running MongoDB" without needing to know which one that is.
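As a concrete sketch of that last point: the only line in the web app's compose file that would change is the connection string. Here db01.example.internal is a hypothetical DNS name for the machine running MongoDB:

environment:
  - PORT=3000
  - ROOT_URL=http://localhost
  - MONGO_URL=mongodb://db01.example.internal:27017/demodb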
I have been searching Google for a solution to the below problem for longer than I care to admit.
I have a docker-compose.yml file which allows me to fire up an ecosystem of 2 containers on my local machine, which is awesome. But I need to be able to deploy to Google Container Engine (GCP). To do so, I am using Kubernetes, deploying to a single node only.
In order to keep the deployment process simple, I am using Kompose, which allows me to deploy my containers on Google Container Engine using my original docker-compose.yml. Which is also very cool. The issue is that, by default, Kompose will deploy each docker service (I have 2) in separate pods; one container per pod. But I really want all containers/services to be in the same pod.
I know there are ways to deploy multiple containers in a single pod, but I am unsure whether I can use Kompose to accomplish this task.
Here is my docker-compose.yml:
version: "2"
services:
server:
image: ${IMAGE_NAME}
ports:
- "3000"
command: node server.js
labels:
kompose.service.type: loadbalancer
ui:
image: ${IMAGE_NAME}
ports:
- "3001"
command: npm run ui
labels:
kompose.service.type: loadbalancer
depends_on:
- server
Thanks in advance.
The thing is, docker-compose does not launch them like that either. They are completely separate. It means, for example, that you can have two containers listening on port 80, because they are independent. If you try to pack them into the same pod you will get a port conflict and end up with a mess. The scenario you want to achieve should be handled at the Dockerfile level to make any sense (although fat [supervisor-based] containers can be considered an antipattern in many cases), which in turn makes your compose file obsolete...
IMO you should embrace how things are, because it does not make sense to map a docker-compose-defined stack to a single pod.
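For illustration only, this is roughly what a hand-written multi-container pod would look like (a hypothetical manifest, not something Kompose will generate from a compose file). Both containers share one network namespace, which is where the port-conflict risk described above comes from:

apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: server
      image: example/app        # hypothetical image
      command: ["node", "server.js"]
      ports:
        - containerPort: 3000
    - name: ui
      image: example/app        # hypothetical image
      command: ["npm", "run", "ui"]
      ports:
        - containerPort: 3001   # must differ from 3000; the pod shares one network namespace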
I'm aware that docker-compose with the classic (now legacy) docker-swarm is able to co-schedule some services on one node (using dependency filters such as link).
I was wondering if this kind of co-scheduling is possible using the modern Docker Engine swarm mode and the new stack deployment introduced in Docker 1.13.
In docker-compose file version 3, links are said to be ignored while deploying a stack in a swarm, so obviously links aren't the solution.
We have a bunch of servers to run batch short-running jobs, and the network between them is not very high speed. We want to run each batch job (which consists of multiple containers) on one server to avoid networking overhead. Is this feature implemented in docker stack or docker swarm mode, or should we use the legacy docker-swarm?
Also, I couldn't find co-scheduling with another container among the placement policies.
@Roman: You are right.
To deploy to a specific node you need to use a placement constraint:
version: '3'
services:
  job1:
    image: example/job1
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
  job2:
    image: example/job2
    deploy:
      placement:
        constraints:
          - node.hostname == node-1
    networks:
      - example
networks:
  example:
    driver: overlay
You can still keep depends_on in the file, though note that docker stack deploy ignores it; it only affects plain docker-compose.
It's worth having a look at dockerize too.
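A quick way to verify the co-scheduling actually happened (the stack name jobs is just an example):

$ docker stack deploy -c docker-compose.yml jobs
$ docker stack ps jobs

docker stack ps lists each task with the NODE it was scheduled on; with the constraints above, both tasks should show node-1.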