Puppet agent inside Docker stuck in a restart loop

I'm trying to play around with configuring agents and such through Docker. I have this compose file:
services:
  puppetserver:
    container_name: puppetserver
    hostname: puppet
    image: puppet/puppetserver
    links:
      - puppetdb
    networks:
      - puppetnet
    depends_on:
      - puppetdb
  puppetdb:
    container_name: puppetdb
    hostname: puppetdb
    image: puppet/puppetdb
    networks:
      - puppetnet
    restart: always
  agent:
    container_name: agent
    hostname: agent
    image: puppet/puppet-agent
    links:
      - puppetserver
    networks:
      - puppetnet
    restart: always
networks:
  puppetnet:
    name: puppetnet
After I spin it up, the agent restarts a few times while waiting for the Puppet server to come up, but then it goes into a restart loop and stays there forever.

I read the puppet/puppet-agent page more closely and it says:
Note that this is of limited use outside testing, in that this code changes the running container, which then exits.
So the infinite restarts are to be expected.
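Since the agent image applies the catalog once and then exits, one way to stop the looping is to change the agent's restart policy so Compose stops relaunching it. A minimal sketch, touching only the agent service from the compose file above (using on-failure here is an assumption about the desired behaviour, not something taken from the image docs):

agent:
  container_name: agent
  hostname: agent
  image: puppet/puppet-agent
  links:
    - puppetserver
  networks:
    - puppetnet
  # let the agent run once and exit; only restart it if the run itself failed
  restart: on-failure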

Related

Running multiple microservices with multiple docker-compose files and NATS

I am new to microservices and I have a project to set up multiple microservices. The project is structured like this.
Every NestJS application has:
- an API application exposed on a port
- a database
- a docker-compose file responsible for creating the containers for that microservice
Now what I have is:
NestJS MICROSERVICE APP 1
- API exposed on port 5000
- Postgres database working on 5432
- NATS running on 4222
NestJS MICROSERVICE APP 2
- API exposed on port 5001
- Postgres database working on 5433
- NATS cannot run on 4222 as it is already occupied. If I change the port, how am I going to use the same message broker in both services?
The problem is that I want to use the same NATS message broker for the second microservice and every newly created microservice. My docker-compose file for NestJS APP 1 is as follows.
version: '3.9'
services:
  api:
    container_name: nest_app_1
    image: nest_app_1
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 127.0.0.1:5000:5000
    env_file:
      - .env
    depends_on:
      - db
      - nats
    networks:
      - main
  db:
    container_name: postgres
    image: postgres:latest
    ports:
      - 127.0.0.1:5432:5432
    volumes:
      - ./data:/var/lib/postgresql/data
    env_file:
      - .env
    networks:
      - main
  pgadmin:
    container_name: pgadmin
    image: dpage/pgadmin4
    ports:
      - 127.0.0.1:8080:80
    env_file:
      - .env
    networks:
      - main
  nats:
    image: nats-streaming:latest
    entrypoint:
      - /nats-streaming-server
      - -cid
      - main_cluster
    ports:
      - "127.0.0.1:4222:4222"
      - "127.0.0.1:6222:6222"
      - "127.0.0.1:8222:8222"
    restart: always
    tty: true
    networks:
      - main
networks:
  main:
    driver: bridge
The second NestJS microservice's docker-compose is as follows.
version: '3.9'
services:
  api:
    container_name: nest_app_2
    image: nest_app_2
    build:
      dockerfile: Dockerfile
      context: .
    ports:
      - 127.0.0.1:5001:5001
    env_file:
      - .env
    depends_on:
      - app_db_2
    networks:
      - main
  app_db_2:
    container_name: postgres_2
    image: postgres:latest
    ports:
      - 127.0.0.1:5433:5432
    volumes:
      - ./data:/var/lib/postgresql/data
    env_file:
      - .env
    networks:
      - main
  pgadmin:
    container_name: pgadmin_2
    image: dpage/pgadmin4
    ports:
      - 127.0.0.1:8081:80
    env_file:
      - .env
    networks:
      - main
  nats:
    image: nats-streaming:latest
    entrypoint:
      - /nats-streaming-server
      - -cid
      - main_cluster
    ports:
      - "127.0.0.1:4222:4222"
    restart: always
    tty: true
    networks:
      - main
networks:
  main:
    driver: bridge
Now I want to use NATS to communicate between both apps, so that I can publish a message from microservice 1 and subscribe to it from microservice 2, and so on.
Yes, sure, the host ports get occupied if you publish them through the host network stack; you can only bind one service to a given ip:port.
It looks like you are trying to start two NATS instances and have them join the same NATS cluster. But you probably don't need two instances just for development; you just want to see messages passing through one broker.
Option 1: just put everything in one compose file, use depends_on, and share the same NATS node between both services.
Option 2: use a separate compose stack to provision your NATS infrastructure and reference it with external_links.
Option 3: define a custom network for the NATS cluster where every NATS container gets its own IP.
But I would start with option 1.
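A minimal sketch of option 1, with both APIs and a single NATS broker in one compose file. The databases and pgadmin are omitted for brevity, and reusing the nest_app_1 and nest_app_2 images built earlier is an assumption:

version: '3.9'
services:
  nats:
    image: nats-streaming:latest
    entrypoint:
      - /nats-streaming-server
      - -cid
      - main_cluster
    ports:
      - "127.0.0.1:4222:4222"
  api_1:
    image: nest_app_1
    ports:
      - 127.0.0.1:5000:5000
    depends_on:
      - nats
  api_2:
    image: nest_app_2
    ports:
      - 127.0.0.1:5001:5001
    depends_on:
      - nats

On the default network Compose creates for this project, both apps can reach the broker at nats://nats:4222, so neither service needs its own NATS container.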

Docker-compose: Start container after another with timeout

Is it possible to start myapp-1 with myapp-2, then sleep for 30 seconds and only then start myapp-3?
Tried this docker-compose.yml with no luck.
version: '3'
services:
  myapp-1:
    container_name: myapp-1
    image: myapp:latest
    restart: always
  myapp-2:
    container_name: myapp-2
    image: myapp:latest
    restart: always
  test-sleep:
    image: busybox
    command: ["/bin/sleep", "30"]
    depends_on:
      - "myapp-1"
      - "myapp-2"
  myapp-3:
    container_name: myapp-3
    image: myapp:latest
    restart: always
    depends_on:
      - "test-sleep"
The docker-compose.yml you proposed could not address your use case, as the depends_on property does not wait for the dependencies to be ready (or terminated), but only for them to be started (i.e., in your example, myapp-3 is started as soon as the /bin/sleep 30 command has been started).
See e.g. the corresponding doc:
depends_on does not wait for [dependencies] to be “ready” before starting [the service] - only until they have been started. If you need to wait for a service to be ready, see Controlling startup order for more on this problem and strategies for solving it.
The link above mentions several tools (including wait-for-it) that can be used to wait until some service dependencies are ready (provided they expose a service at a given TCP port).
Otherwise, if you just want to wait for 30s before starting myapp-3, assuming the Dockerfile of myapp-3 contains CMD ["/prog", "first argument"], you could just get rid of test-sleep and write something like:
version: '3'
services:
  myapp-1:
    container_name: myapp-1
    image: myapp:latest
    restart: always
  myapp-2:
    container_name: myapp-2
    image: myapp:latest
    restart: always
  myapp-3:
    container_name: myapp-3
    image: myapp:latest
    restart: always
    command:
      - '/bin/sh'
      - '-c'
      - '/bin/sleep 30 && /prog "first argument"'
    depends_on:
      - "myapp-1"
      - "myapp-2"

Using docker-compose to start containers, but one of the containers is not able to connect to the other

I am using docker-compose to start two containers (named fd & fl4j). Second container connects to the first on startup.
If I just use "host" networking and plain "docker run", everything works fine.
With docker-compose and a defined bridge network (loggernw), the second container fails to connect to the first. It may not be relevant, but the second container is a Java Spring Boot app.
Additional info: even without docker-compose, using "docker run" with a defined bridge network, the connection attempt also fails. Also, within the second app, I am using the string "127.0.0.1" to attempt the connection.
docker-compose below -
version: '3.8'
services:
  fd:
    image: fluentwithes
    container_name: fd
    ports:
      - 24224:24224
    expose:
      - "24224"
    volumes:
      - /home/hrishikesh/work/bitbucket/logger/integration/docker/runs/fluentd:/fluentd/etc
    networks:
      - loggernw
  fl4j:
    image: fluentl4java
    container_name: fl4j
    ports:
      - 9090:9090
    expose:
      - "9090"
    networks:
      - loggernw
networks:
  loggernw:
    driver: bridge
Probably the second container tries to connect before the first is running properly.
Try using depends_on in the second container as given below.
However, I think this only prevents the second container from starting before the first has started. You might still have the issue if the first did not finish its startup in time; then your service in the second container has to do some retries, so maybe restart: always is enough.
version: '3.8'
services:
  fd:
    image: fluentwithes
    container_name: fd
    ports:
      - 24224:24224
    expose:
      - "24224"
    volumes:
      - /home/hrishikesh/work/bitbucket/logger/integration/docker/runs/fluentd:/fluentd/etc
    networks:
      - loggernw
  fl4j:
    depends_on:
      - fd
    restart: always
    image: fluentl4java
    container_name: fl4j
    ports:
      - 9090:9090
    expose:
      - "9090"
    networks:
      - loggernw
networks:
  loggernw:
    driver: bridge
Edit:
127.0.0.1 is wrong, I think. You want to put the service name there instead, since the IP might change.
Try putting "fd:24224" as the connection string in the second container.
Further information can be found here: https://docs.docker.com/network/bridge/
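A hedged sketch of one way to wire that up in the compose file, assuming the Java app reads the fluentd address from environment variables (FLUENTD_HOST and FLUENTD_PORT are hypothetical names; the fluentl4java image would have to be written to read them):

fl4j:
  depends_on:
    - fd
  restart: always
  image: fluentl4java
  container_name: fl4j
  environment:
    # "fd" is resolved by Docker's embedded DNS on the loggernw network
    - FLUENTD_HOST=fd
    - FLUENTD_PORT=24224
  ports:
    - 9090:9090
  networks:
    - loggernw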

Docker mis-forwarding ports

I have several domains sharing one public IP (EC2 instance). My setup is like this:
/home/ubuntu contains docker-compose.yml:
version: '3'
services:
  nginx-proxy:
    image: "jwilder/nginx-proxy"
    container_name: nginx-proxy
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - "80:80"
    restart: "always"
This creates a network named ubuntu_default which will allow other compose instances to join. The nginx-proxy image creates reverse proxies for these other compose instances so that you can visit example.com and be routed to the appropriate UI within the appropriate compose instance.
/home/ubuntu/example.com/project-1 contains a docker-compose.yml like:
version: '3'
services:
  db:
    build: "./db" # mongo
    volumes:
      - "./data:/data/db"
    restart: "always"
  api:
    build: "./api" # a node backend
    ports:
      - "9005:9005"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8005:8005"
    restart: "always"
    environment:
      - VIRTUAL_HOST=project-1.example.com # this tells nginx-proxy which domain to proxy
      - VIRTUAL_PORT=8005 # this tells nginx-proxy which port to proxy
networks:
  default:
    external:
      name: ubuntu_default
/home/ubuntu/testing.com/project-2 contains a docker-compose.yml like:
version: '3'
services:
  db:
    build: "./db" # postgres
    volumes:
      - "./data:/var/lib/postgresql/data"
    restart: "always"
  api:
    build: "./api" # a python backend
    ports:
      - "9000:9000"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8000:8000"
    restart: "always"
    environment:
      - VIRTUAL_HOST=testing.com,www.testing.com # tells nginx-proxy which domains to proxy
      - VIRTUAL_PORT=8000 # tells nginx-proxy which port to proxy
networks:
  default:
    external:
      name: ubuntu_default
So basically:
project-1.example.com:80 forwards to the UI running on :8005
project-1.example.com:80/api forwards to the API running on :9005
testing.com forwards to the UI running on :8000
testing.com/api forwards to the API running on :9000
...and that all works perfectly as long as I only run one project at a time. The moment I start both Compose instances, the /api URLs start clashing. I can sit on one of them and refresh repeatedly; sometimes I'll see the response for example.com/api and sometimes the one for testing.com/api.
I have no idea what's going on at this point. Maybe the premise I'm working from is fundamentally flawed, but it seems like an intended use of Docker/Compose. I'm open to suggestions to accomplish the same thing another way.
Docker containers communicate using DNS lookups on their network. If multiple containers have the same alias on the same network, Docker will round-robin load balance between those containers with each new network connection. If you don't want containers to talk to each other, then you don't want them on the same Docker network. The good news is that you solve this by using more than one network, and by not putting the api and db services on the frontend proxy network:
version: '3'
services:
  db:
    build: "./db" # postgres
    volumes:
      - "./data:/var/lib/postgresql/data"
    restart: "always"
  api:
    build: "./api" # a python backend
    ports:
      - "9000:9000"
    restart: "always"
    depends_on:
      - db
  ui:
    build: "./ui" # a react front end
    ports:
      - "8000:8000"
    restart: "always"
    networks:
      - default
      - proxy
    environment:
      - VIRTUAL_HOST=testing.com,www.testing.com # tells nginx-proxy which domains to proxy
      - VIRTUAL_PORT=8000 # tells nginx-proxy which port to proxy
networks:
  proxy:
    external:
      name: ubuntu_default
If you do not override the default network, docker will create one for your compose project and use it for any containers not assigned to another network.
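Applying the same idea to the project-1 stack, only the ui service joins the proxy network, while db and api fall back to the project's own default network. A sketch of just the parts that change (ports, build contexts, and the db and api services stay as in the original file):

services:
  ui:
    build: "./ui" # a react front end
    ports:
      - "8005:8005"
    restart: "always"
    networks:
      - default
      - proxy
    environment:
      - VIRTUAL_HOST=project-1.example.com
      - VIRTUAL_PORT=8005
networks:
  proxy:
    external:
      name: ubuntu_default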

Docker hostnames are not resolved in a custom network

I have the following configuration in my docker-compose.yml file.
version: '3.3'
services:
  service-1:
    container_name: 'service-1'
    build: './service-1'
    depends_on:
      - 'mongo'
      - 'consul'
    networks:
      backend:
        aliases:
          - service-1
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      frontend:
      backend:
        aliases:
          - service-2
    depends_on:
      - 'mongo'
      - 'consul'
  consul:
    image: 'consul:latest'
    networks:
      backend:
        aliases:
          - consul
  mongo:
    image: 'mongo:latest'
    networks:
      backend:
        aliases:
          - mongo
networks:
  frontend:
  backend:
    internal: true
When my containers start, they are not able to communicate with each other using hostnames.
Most of the containers use the mongo db container, but they cannot even reach it and I am getting the following error.
Error connecting to mongo : no reachable servers
Please help me solve the problem, I'm stuck.
Thanks.
You've got a lot of unneeded settings in the compose file; here's a stripped-down version that would work just as well:
version: '3.3'
services:
  service-1:
    build: './service-1'
    networks:
      - backend
  service-2:
    build: './service-2'
    ports:
      - '8825:8825'
      - '8835:8835'
    networks:
      - frontend
      - backend
  consul:
    image: 'consul:latest'
    networks:
      - backend
  mongo:
    image: 'mongo:latest'
    networks:
      - backend
networks:
  frontend:
  backend:
    internal: true
You automatically get an alias of the service name for each container, so there's no need to duplicate that. You also lose the ability to scale a service if you give it a container name. I'd also recommend moving the build step out of the compose file and using an image name for the apps you're building locally.
Now for the likely issue: you have depends_on in your compose file. At best, this will not do what you're looking for. All it checks is that the other container has been created and started, not that the application inside is ready to serve traffic, and a DB may take time to become available. At worst, you'll get an error that it's unsupported if you try to move this into swarm mode.
Instead of depending on Docker for this, update your application entrypoint to check for the external dependencies and wait a minute or two for them to become available before failing. A very simple example tool for this is wait-for-it, which is written as a bash shell script; a sketch of the same idea is shown below.
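A hedged sketch of that approach for service-1, written inline in the compose file rather than in the image's entrypoint script. It assumes the image has a POSIX shell and nc (netcat) available and that its normal startup command is /app/service-1; both are assumptions, not taken from the original files:

service-1:
  build: './service-1'
  networks:
    - backend
  entrypoint:
    - '/bin/sh'
    - '-c'
    # poll mongo:27017 until it accepts connections, then hand off to the real process
    - |
      until nc -z mongo 27017; do
        echo "waiting for mongo..."
        sleep 2
      done
      exec /app/service-1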
