I have the following docker-compose.yml file:
freeradius:
  image: freeradius/freeradius-server:latest-alpine
  restart: always
  volumes:
    - ./radius/raddb/users:/etc/raddb/users:ro
    - ./radius/raddb/clients.conf:/etc/raddb/clients.conf:ro
  ports:
    - "1812-1813:1812-1813/udp"
  command: [ "radiusd", "-X", "-t" ] # Debug mode with colour
my-service:
  image: 'my-service:latest'
  container_name: my-service
  ports:
    - '443:443'
  environment:
    # some env variable
  extra_hosts:
    - "host.docker.internal:host-gateway"
The above file runs while the integration tests run, and the containers are stopped after the tests finish. From one of the test cases I call a class that contains the code below to test communication with the AAA server:
radius.authenticate('secret', username, password, host='freeradius', port=1812)
The authentication above works fine on my local machine (macOS), but when I start the build from our Jenkins server I get an error connecting to the RADIUS server; my guess is that the hostname is not resolving in the Jenkins build.
Any idea why this happens in the Jenkins build, and what changes I need to make so that the integration tests pass there?
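If it helps, the check I had in mind to confirm that guess (assuming ping is available in the my-service image) is to compare where the service name actually resolves:

docker-compose exec my-service ping -c 1 freeradius   # resolves inside the compose network
ping -c 1 freeradius                                  # typically fails on the Jenkins agent itself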
As a bit of context, I am fairly new to Docker and Docker Compose, and until recently I'd never even heard of Docker Swarm. I shouldn't be the one responsible for the task I've been given, but it's not like I can offload it to someone else...
So, the idea is to have two different physical machines to host a web server. One of the machines will run an Express.js server plus a Redis database, while the other machine hosts the system database (a Postgres DB).
Up until now I had a docker-compose.yaml file which created all these services and ran them.
version: '3.8'
services:
  server:
    image: server
    build:
      context: .
      target: build-node
    volumes:
      - ./:/src/app
      - /src/app/node_modules
    container_name: server
    ports:
      - 3000:3000
    depends_on:
      - postgres
      - redis
    entrypoint:
      ['./wait-for-it.sh', '-t', '30', 'postgres:5432', '--', 'yarn', 'dev']
    networks:
      - servernet
  # postgres database
  postgres:
    image: postgres
    user: postgres
    restart: always
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - ./data:/var/lib/postgresql/data # persist data even if container shuts down
      - ./db_scripts/startup.sh:/docker-entrypoint-initdb.d/c_startup.sh
      #- ./db_scripts/db.sql:/docker-entrypoint-initdb.d/a_db.sql
      #- ./db_scripts/db_population.sql:/docker-entrypoint-initdb.d/b_db_population.sql
    ports:
      - '5432:5432'
    networks:
      - servernet
  # pgadmin for managing postgis db (runs at localhost:5050)
  # To add the above postgres server to pgadmin, use hostname as defined by docker: 'postgres'
  pgadmin:
    image: dpage/pgadmin4
    restart: always
    environment:
      - PGADMIN_DEFAULT_EMAIL=${PGADMIN_DEFAULT_EMAIL}
      - PGADMIN_DEFAULT_PASSWORD=${PGADMIN_DEFAULT_PASSWORD}
    depends_on:
      - postgres
    ports:
      - 5050:80
    networks:
      - servernet
  redis:
    image: redis
    networks:
      - servernet
networks:
  servernet:
I would naturally run this script with docker-compose up and that was the end of my concerns, everything running together on localhost. But now, with this setup I have no idea what to do. From what I've read, I have to create a swarm, but then how do I go about running everything from the same place (or with one command)? And how do I specify which services are to be executed on which machine?
Additionally, here is my Dockerfile in case it's useful:
FROM node as build-node
WORKDIR /src/app
COPY package.json .
COPY yarn.lock .
COPY wait-for-it.sh .
COPY . .
RUN yarn
RUN yarn style:fix
RUN yarn lint:fix
RUN yarn build
EXPOSE 3000
ENTRYPOINT yarn dev
Is my current docker-compose script even capable of being used with this new setup?
This is really over my head and I've got no idea where to start. The Docker documentation is also a bit confusing since I don't have much knowledge of Docker to begin with...
Thanks in advance!
You first need to learn what Docker Swarm is and how it works.
Docker Swarm is a container orchestration tool, meaning that it allows the user to manage multiple containers deployed across multiple host machines.
To answer your questions briefly:
How do I go about running everything from the same place?
You can use the docker stack deploy command to deploy a set of services. And yes, you run it from one host machine; you don't have to run it on the different machines. That machine is called the manager (master) node.
The good news is that you can still use your docker-compose file, perhaps with slight modifications.
So, to summarize, the steps you need to take are the following:
set up Docker Swarm (1 manager and 1 worker, as you have only 2 machines)
make sure it is working fine (communication between the nodes)
prepare your docker-compose file and deploy your stack from the manager node (see the command sketch below)
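As a rough sketch, the whole flow looks something like this (the IP address, the token, and the stack name mystack are placeholders; the join command and token are printed by the init step):

# On the machine chosen as the manager:
docker swarm init --advertise-addr <MANAGER-IP>

# On the second machine, joining as a worker:
docker swarm join --token <WORKER-TOKEN> <MANAGER-IP>:2377

# Back on the manager: verify both nodes are up, then deploy the stack
docker node ls
docker stack deploy -c docker-compose.yaml mystack
docker stack services mystack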
As part of a school challenge I need to run a Jenkins environment using Docker on port 7070:9090.
I'm trying, so far unsuccessfully, to change the default access port for Jenkins (8080) in a Docker container.
Here's my code:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins-image
    ports:
      - "7070:8080"
    volumes:
      - "jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
I managed to change the host port to 7070, but not Jenkins's default access port of 8080.
All the tutorials I've found online only explain how to change the host-side port.
Any advice on how to change port 8080 and still have Jenkins running?
The access port is handled by Docker rather than Jenkins. The syntax is HOST:CONTAINER, so if Jenkins is running on port 7070 inside your container, the following should work for you:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins-image
    ports:
      - "8080:7070"
    volumes:
      - "jenkins_home:/var/jenkins_home"
      - "/var/run/docker.sock:/var/run/docker.sock"
I am currently setting up a Buildkite build pipeline which runs an API in one Docker container and an application in a second container alongside it, while running Cypress tests (which also run within the second container).
I use the following docker compose file:
version: '3'
services:
  testing-image:
    build:
      context: ../
      dockerfile: ./deploy/Dockerfile-cypress
    image: cypress-testing
  cypress:
    image: cypress-testing
    ipc: host
    depends_on:
      - api
  db:
    image: postgres:latest
    ports:
      - "54320:5432"
  redis:
    image: redis:latest
    ports:
      - "63790:6379"
  api:
    build:
      context: ../api/
      dockerfile: ./Dockerfile
    image: api
    command: /env/development/command.sh
    links:
      - db
      - redis
    depends_on:
      - db
      - redis
The application runs in the cypress container when started by buildkite. It then starts the cypress tests and some of them pass. However, any test that requires communication with the API fails because the cypress container is unable to see localhost within the API container. I am able to enter the API container using a terminal and have verified that it is working perfectly internally using cURL.
I have tried various URLs within the cypress container to try to reach the API, which is available on port 8080 within the API container, including api://api:8080 and http://api:8080, but none of them can reach the API.
Does anybody know what could be going on here?
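One thing that might help narrow it down is calling the API by its compose service name from inside the cypress container (assuming curl is available in the cypress-testing image), for example:

docker-compose run --rm cypress curl -v http://api:8080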
I am trying to upload my backend to Google Cloud Run. I'm using Docker-Compose with 2 components: a Golang Server and a Postgres DB.
When I run Docker-Compose locally, everything works great! When I upload to Gcloud with
gcloud builds submit . --tag gcr.io/BACKEND_NAME
gcloud run deploy --image gcr.io/BACKEND_NAME --platform managed
Gcloud's health check fails, getting stuck on "Deploying... Revision deployment finished. Waiting for health check to begin." and throws "Cloud Run error: Container failed to start. Failed to start and then listen on the port defined by the PORT environment variable. Logs for this revision might contain more information."
I understand that Google Cloud Run provides a PORT env variable, which I tried to account for in my docker-compose.yml. But the command still fails. I'm out of ideas, what could be wrong here?
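The closest thing I can think of is spelling out the container port at deploy time, something like the sketch below (untested; --port is just my guess at the relevant flag, and as far as I understand Cloud Run only runs the single image, not the whole compose file, so the db service would not be started there):

gcloud builds submit . --tag gcr.io/BACKEND_NAME
gcloud run deploy --image gcr.io/BACKEND_NAME --platform managed --port 8000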
Here is my docker-compose.yml
version: '3'
services:
  db:
    image: postgres:latest # use latest official postgres version
    container_name: db
    restart: "always"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
    volumes:
      - database-data:/var/lib/postgresql/data/ # persist data even if container shuts down
  api:
    container_name: api
    depends_on:
      - db
    restart: on-failure
    build: .
    ports:
      # Bind GCR provided incoming PORT to port 8000 of our api
      - "${PORT}:8000"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=db
volumes:
  database-data: # named volumes can be managed easier using docker-compose
and the api container is a Golang binary, which waits for a connection to be made with the Postgres DB before calling http.ListenAndServe(":8000", handler).
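For reference, nothing sets ${PORT} automatically outside Cloud Run, so for the local runs it has to come from the shell or an .env file, along the lines of:

PORT=8000 docker-compose up --build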
I have a docker compose file that defines a service that will run my application and a service that the application depends on:
services:
  frontend:
    build:
      context: .
    volumes:
      - "../.:/opt/app"
    ports:
      - "8080:8080"
    links:
      - redis
    image: node
    command: ['yarn', 'start']
  redis:
    image: redis
    expose:
      - "6379"
For development this compose file publishes 8080 so that I can access the running code from a browser.
In Jenkins, however, I can't publish that port, as two jobs running simultaneously would then conflict trying to bind to the same port on the Jenkins host.
Is there a way to prevent docker-compose from binding service ports? Like an inverse of the --service-ports flag?
For context:
In Jenkins I run tests using docker-compose run frontend yarn test, which won't map ports and so isn't a problem.
The issue arises when I try to run end-to-end browser tests against the application. I use a container to run CodeceptJS tests against a running instance of the app. In that case I need the frontend to start before I run the tests, as they will fail if the app is not up.
Q. Is there a way to prevent docker-compose from binding service ports?
It makes no sense to prevent something that you are explicitly asking for: docker-compose will start things exactly as the docker-compose.yml file indicates.
I propose duplicating the frontend service using extends:
version: "2"
services:
frontend-base:
build:
context: .
volumes:
- "../.:/opt/app"
image: node
command: ['yarn', 'start']
frontend:
extends: frontend-base
links:
- redis
ports:
- "8080:8080"
frontend-test:
extends: frontend-base
links:
- redis
command: ['yarn', 'test']
redis:
image: redis
expose:
- "6379"
So use it like this:
docker-compose run frontend # in dev environment
docker-compose run frontend-test # in jenkins
Note that extends: is not available in version: "3", but they plan to bring it back in the future.
To avoid publishing a service port on a fixed host port, you just need to write a single port in the ports section.
Instead of using this:
ports:
  - 8080:8080
Just use this one (below):
ports:
  - 8080
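With only the container port listed, Docker still publishes it, but on a random free host port, so parallel Jenkins jobs won't collide. When the browser tests need to reach the app from the host, the assigned port can be looked up with:

# Prints the host address/port bound to container port 8080, e.g. 0.0.0.0:32769
docker-compose port frontend 8080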