Keep containers alive using docker-compose on Windows

I have two services defined in my docker-compose.yml file and expect two containers to be up and running, but the containers stop immediately. When I use the same compose file on Linux, it creates both containers and keeps them running. Is there any known issue on Windows?
I have tried docker-compose up as well as docker-compose run -dT <servicename>, with no luck.
My docker-compose.yml file is:
version: '3'
networks:
  default:
    external:
      name: nat
services:
  awi-service:
    env_file:
      - awi-box.env
    image: awi-box:12.0.0
    ports:
      - 8080:8080
    depends_on:
      - ae-service
  ae-service:
    env_file:
      - ae-box.env
    image: ar-box:12.0.0
    ports:
      - 2217:2217
      - 2218:2218

Related

Set up networking of multiple Docker containers in different projects using docker-compose

Hello, I have multiple projects that have their own Dockerfiles and docker-compose.yml files. I am not too familiar with how I would set up the networking between these projects so they could share the same databases and talk to one another. Does anyone have suggestions?
Right now, in one of the projects I am just pulling all the Dockerfiles into a single docker-compose.yml and setting up all the services I need from all the other projects in this one file. I do not think this is ideal, and it creates a high level of coupling between the services.
version: "3"
services:
db:
image: mysql/mysql-server
ports:
- 3306:3306
mongo:
image: mongo
restart: always
rails_app:
build:
context: ${RAILS_APP_PATH}
dockerfile: Dockerfile
volumes:
- ${RAILS_APP_PATH}:/application
ports:
- 4000:4000
depends_on:
- db
- mongo
links:
- db
- mongo
frontend:
build:
context: ${FRONTEND_PATH}
ports:
- ${EXPOSED_PORT}:${EXPOSED_PORT}
depends_on:
- go_services
links:
- go_services
go_services:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
depends_on:
- db
- mongo
- rails_app
links:
- db
- mongo
- rails_app
The trick is to use an external Docker network. Set up the network, and the containers can talk to each other by their service names.
First, create the network on the host:
docker network create my-net
First compose file:
version: '3.9'
services:
  mymongo:
    image: mongo:latest
    restart: unless-stopped
    container_name: mongo
    environment:
      MONGO_INITDB_DATABASE: mymongo
      MONGO_INITDB_ROOT_USERNAME: root
      MONGO_INITDB_ROOT_PASSWORD: password
    volumes:
      - ./database:/data/db
    ports:
      - "27017:27017"
networks:
  default:
    external: true
    name: my-net
Second compose file:
version: '3.9'
services:
  ui:
    build:
      context: ./build
      dockerfile: Dockerfile_ui
    image: ui
    restart: "no"
    container_name: ui
    ports:
      - "8005:3000"
    command: ["npm", "start"]
networks:
  default:
    external: true
    name: my-net
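With both projects attached to my-net, containers from the second file can reach services from the first by name. A quick way to confirm, assuming both stacks are up and the ui image is Linux-based (getent ships with most base images):

docker exec -it ui getent hosts mymongo

From inside ui, the database is then reachable at mongodb://root:password@mymongo:27017, using the credentials from the first file.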
You can do this without any special Compose setup, if:
- each project is self-contained (they do not share databases)
- the service locations are configurable via environment variables
- you don't mind communicating via the host
If you're thinking about scaling up this project at all, this approach can look attractive. It will work even if you're running each Compose file on a different host, and it translates well into clustered environments like Kubernetes.
Go ahead and break up your Compose file into several independent ones:
# rails/docker-compose.yml
version: '3.8'
services:
  db:
    image: mysql/mysql-server
  app:
    build: .
    ports: ['4000:4000']
    depends_on: [db]
# go/docker-compose.yml
services:
  mongo:
    image: mongo
  service:
    build: .
    ports: ['8080:8080']
    depends_on: [mongo]
    environment:
      - RAILS_APP_URL
The very last line here passes the RAILS_APP_URL environment variable from the host environment into the container.
You can start the Rails application independently:
docker-compose -f ./rails/docker-compose.yml up -d
You need to find some hostname where the container can call back to the host. On macOS and Windows hosts, Docker provides a special hostname host.docker.internal for this. You can then connect the client container to the published port of its server:
export RAILS_APP_URL=http://host.docker.internal:4000
docker-compose -f ./go/docker-compose.yml up
If you're doing development, you can run the service you're working on locally, with its dependencies in containers, and point the environment variable at the container's published port:
go build -o ./server ./cmd/server
export RAILS_APP_URL=http://localhost:4000
./server
If you want to run this setup on multiple hosts but without using a dedicated cluster manager like Docker Swarm or Kubernetes, set the environment variable to point at the DNS name of the host running the service. If you did want to translate this to Kubernetes, a Helm "chart" would be analogous, containing the Deployment, Service, etc. and dependencies for a single component, and you could configure the other service's URL through Helm values.
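If you did go the Helm route, the URL would typically surface as a chart value. A hypothetical excerpt (the chart layout and value name here are illustrative, not part of the original setup):

# values.yaml
railsAppUrl: http://rails-app:4000

# templates/deployment.yaml (env section of the client container)
env:
  - name: RAILS_APP_URL
    value: {{ .Values.railsAppUrl }}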

Running Docker services in a docker-compose file at different times

I have a Docker Compose file that looks like the following. I want to run the httpd service first and start tomcat after some time. Is it possible to do that with only this one compose file, without using two docker-compose files?
version: '2'
services:
  tomcat:
    expose:
      - "8009"
    ports:
      - 8080:8080
    image: 192.168.56.1:5000/tomcat
  httpd:
    volumes:
      - ./logs:/var/log/apache2
    ports:
      - 80:80
    image: 192.168.56.1:5000/apache
You can specify a dependency between the services by using the depends_on key, e.g.:
services:
  tomcat:
    ...
    depends_on:
      - httpd
  httpd:
    ...
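Note that depends_on alone only controls start order; it does not wait for httpd to become ready, let alone delay tomcat by "some time". If tomcat must start only once httpd is actually serving, one option is a healthcheck combined with the long depends_on form. This is a sketch, assuming Compose file format 2.1+ and that curl is available inside the apache image:

version: '2.1'
services:
  httpd:
    image: 192.168.56.1:5000/apache
    volumes:
      - ./logs:/var/log/apache2
    ports:
      - 80:80
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/"]
      interval: 10s
      retries: 5
  tomcat:
    image: 192.168.56.1:5000/tomcat
    ports:
      - 8080:8080
    depends_on:
      httpd:
        condition: service_healthy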

How to network between multiple containers of the same image in docker-compose?

I am using docker-compose and my configuration file is simply:
version: '3.7'
volumes:
  mongodb_data: {}
services:
  mongodb:
    image: mongo:4.4.3
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongodb_data:/data/db
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=super-secure-password
  rocket:
    build:
      context: .
    depends_on:
      - mongodb
    image: rocket:dev
    dns:
      - 1.1.1.1
      - 8.8.8.8
    volumes:
      - .:/var/rocket
    ports:
      - "30301-30309:30300"
I start MongoDB with docker-compose up, and then in new terminal windows run two Node.js applications, each with all the source code in /var/rocket, with:
# 1st Node.js application
docker-compose run --service-ports rocket
# 2nd Node.js application
docker-compose run --service-ports rocket
The problem is that the 2nd Node.js application service needs to communicate with the 1st Node.js application service on port 30300. I was able to get this working by referencing the 1st Node.js application by the id of the Docker container:
Connect to 1st Node.js application service on: tcp://myapp_myapp_run_837785c85abb:30300 from the 2nd Node.js application service.
Obviously this does not work long term, as the container id changes every time I bring docker-compose up and down. Is there a standard way to do networking when you start multiple containers of the same image from docker-compose?
You can run the same image multiple times in the same docker-compose.yml file:
version: '3.7'
services:
  mongodb: { ... }
  rocket1:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30301:30300"
  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
As described in Networking in Compose, the containers can communicate using their respective service names and their "normal" port numbers, like rocket1:30300; any ports: are ignored for this. You shouldn't need to manually docker-compose run anything.
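If the application in one copy needs the other copy's address, pass it in as configuration rather than hard-coding it; PEER_URL below is a hypothetical variable name that the Node.js code would read:

  rocket2:
    build: .
    depends_on:
      - mongodb
    ports:
      - "30302:30300"
    environment:
      # "rocket1" is resolved by Compose's built-in DNS on the default network
      - PEER_URL=tcp://rocket1:30300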
Well you could always give specific names to your two Node containers:
$ docker-compose run --name rocket1 --service-ports rocket
$ docker-compose run --name rocket2 --service-ports rocket
And then use:
tcp://rocket1:30300

How to Share a Docker-Compose Volume in Distributed Cassandra Containers using Docker

I have configured a distributed version of Cassandra using Docker Compose.
Here is my docker-compose.yml file:
version: '3.0'
services:
  cassandra-masters:
    image: strapdata/elassandra
    environment:
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-masters
  cassandra-slaves1:
    image: strapdata/elassandra
    environment:
      CASSANDRA_SEEDS: tasks.cassandra-masters
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-slaves1
    depends_on:
      - cassandra-masters
After deploying the compose file with sudo docker stack deploy elassandra --compose-file docker-compose.yml, everything works well and I can see the services with the docker service ls command.
Problem: I don't know how to use volumes with this distributed set of containers. Is it like the normal docker-compose configuration found on Docker's site, or is it different?
Solution: I have tried named volumes like the following. There isn't any difference between this (distributed) approach and the normal approach; the only thing to keep in mind is that the volume should be shared:
version: '3.0'
services:
  cassandra-masters:
    image: strapdata/elassandra
    environment:
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-masters
    volumes:
      - app-volume:/var/lib/cassandra
  cassandra-slaves1:
    image: strapdata/elassandra
    environment:
      CASSANDRA_SEEDS: tasks.cassandra-masters
      CASSANDRA_LISTEN_ADDRESS: tasks.cassandra-slaves1
    depends_on:
      - cassandra-masters
    volumes:
      - app-volume:/var/lib/cassandra
volumes:
  app-volume:
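One caveat when deploying with docker stack: a plain named volume is created independently on every node, so data is not automatically shared across the swarm. Truly sharing the volume requires a driver backed by shared storage. Below is a sketch using the local driver with an NFS export, where the server address and export path are placeholders:

volumes:
  app-volume:
    driver: local
    driver_opts:
      type: nfs
      o: addr=10.0.0.10,rw
      device: ":/exports/cassandra"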

ASP.NET Core application is not running after Docker host or container restart

I'm running containers with the restart policy set to always through docker-compose. It has two images, RabbitMQ and ASP.NET Core, and everything works fine the first time.
When I restart the host or the containers, RabbitMQ runs fine and is accessible through localhost:8080 even after restarting, but although the ASP.NET Core container is running, I can't reach it through its internal or external ports (localhost:5001/api/home/get).
Docker Compose file:
version: '3.4'
services:
  aspnetcoreenvironmentvariables.api:
    image: aspnetcoreenvironmentvariables
    build:
      context: .
      dockerfile: AspNetCoreEnvironmentVariables.API/Dockerfile
    restart: always
    depends_on:
      - rabbitmq
  rabbitmq:
    image: rabbitmq:3-management
    restart: always
Docker Compose override:
version: '3.4'
services:
  aspnetcoreenvironmentvariables.api:
    environment:
      - ASPNETCORE_ENVIRONMENT=Production
      - ConnectionStrings__DefaultConnection="connection"
    ports:
      - "5001:80"
  rabbitmq:
    ports:
      - "8080:15672"
Below is the error from Docker:
