I have the below docker-compose.yml:
version: "2"
services:
api:
build:
context: .
dockerfile: ./build/dev/Dockerfile
container_name: "project-api"
volumes:
# 1. mount your workdir path
- .:/app
depends_on:
- mongodb
links:
- mongodb
- mysql
nginx:
image: nginx:1.10.3
container_name: "project-nginx"
ports:
- 80:80
restart: always
volumes:
- ./build/dev/nginx.conf:/etc/nginx/conf.d/default.conf
- .:/app
links:
- api
depends_on:
- api
mongodb:
container_name: "project-mongodb"
image: mongo:latest
environment:
- MONGO_DATA_DIR=/data/db
- MONGO_LOG_DIR=/dev/null
ports:
- "27018:27017"
command: mongod --smallfiles --logpath=/dev/null # --quiet
mysql:
container_name: "gamestore-mysql"
image: mysql:5.7.23
ports:
- "3306:3306"
environment:
MYSQL_DATABASE: project_test
MYSQL_USER: user
MYSQL_PASSWORD: user
MYSQL_ROOT_PASSWORD: root
And the below .gitlab-ci.yml:
test:
  stage: test
  image: docker:latest
  services:
    - docker:dind
  variables:
    DOCKER_DRIVER: overlay2
  before_script:
    - apk add --no-cache py-pip
    - pip install docker-compose
  script:
    - docker-compose up -d
    - docker-compose exec -T api ls -la
    - docker-compose exec -T api composer install
    - docker-compose exec -T api php core/init --env=Development --overwrite=y
    - docker-compose exec -T api vendor/bin/codecept -c core/common run
    - docker-compose exec -T api vendor/bin/codecept -c core/rest run
When I run my GitLab pipeline it fails, because (I think) GitLab can't work with services run by docker-compose.
The error says that MySQL refuses the connection.
I need this connection because my Codeception tests exercise my models and API actions.
I want to test every branch on each push, and if the tests pass, deploy develop to the test server and master to production.
What is the best way to run my tests in GitLab CI/CD and then deploy to my servers?
You should use GitLab CI services instead of docker-compose.
You have to pick one image as your main one, in which your commands will run, and treat the other containers as services.
Sadly, CI services in GitLab cannot have files mounted into them; you have to be able to configure them with env variables, or you need to build your own image with the files baked in (you can do that in a CI stage).
I would suggest not using nginx and using the built-in PHP server for tests. If that's not possible (you have a specific nginx config), you will need to build your own nginx image with the files copied inside it.
Also, for PHP (the api service in docker-compose.yml, I assume), you need to either build the image ahead of time or copy the commands from your Dockerfile into the script.
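If you build it ahead of time, a build stage pushing to the project's GitLab registry could look like this (a sketch using GitLab's predefined registry variables; the image name and tag are just examples):
build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # build from the dev Dockerfile and push it, so the test stage can pull it as its image
    - docker build -f build/dev/Dockerfile -t $CI_REGISTRY_IMAGE/php:latest .
    - docker push $CI_REGISTRY_IMAGE/php:latest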
So the result should be something like:
test:
  stage: test
  image: custom-php-image # built from ./build/dev/Dockerfile
  services:
    - name: mysql:5.7.23
      alias: gamestore-mysql
    - name: mongo:latest
      alias: project-mongodb
      command: ["mongod", "--smallfiles", "--logpath=/dev/null"]
  variables:
    MYSQL_DATABASE: project_test
    MYSQL_USER: user
    MYSQL_PASSWORD: user
    MYSQL_ROOT_PASSWORD: root
    MONGO_DATA_DIR: /data/db
    MONGO_LOG_DIR: /dev/null
  script:
    - ls -la
    - composer install
    - php core/init --env=Development --overwrite=y
    - php -S localhost:8000 & # You need to configure your built-in PHP server, probably here
    - vendor/bin/codecept -c core/common run
    - vendor/bin/codecept -c core/rest run
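Two caveats here: php -S blocks, which is why it is backgrounded with & above, and the MySQL service may still be initializing when the first test runs. A small wait loop before the codecept commands guards against that (a sketch, assuming mysqladmin exists in your PHP image):
    - |
      for i in $(seq 1 30); do
        mysqladmin ping -h gamestore-mysql -u user -puser --silent && break
        sleep 1
      done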
I don't know your app, so you will probably have to make some tweaks.
More on that:
https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#define-image-and-services-from-gitlab-ciyml
https://docs.gitlab.com/ee/ci/services/
http://php.net/manual/en/features.commandline.webserver.php
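The question also asked about deploying per branch; a minimal sketch for that part (deploy.sh is a hypothetical placeholder for however you reach your servers, e.g. ssh/rsync):
deploy-test:
  stage: deploy
  script:
    - ./deploy.sh test-server
  only:
    - develop
deploy-production:
  stage: deploy
  script:
    - ./deploy.sh production-server
  only:
    - master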
I have some integration tests that, in order to run successfully, require a running Postgres database, set up via docker-compose, and my Go app running from main.go. Here is my docker-compose:
version: "3.9"
services:
postgres:
image: postgres:12.5
user: postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: my-db
ports:
- "5432:5432"
volumes:
- data:/var/lib/postgresql/data
- ./initdb:/docker-entrypoint-initdb.d
networks:
default:
driver: bridge
volumes:
data:
driver: local
and my GitHub Actions workflow is as follows:
jobs:
  unit:
    name: Test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres:12.5
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_DB: my-db
        ports:
          - 5432:5432
    env:
      GOMODCACHE: "${{ github.workspace }}/.go/mod/cache"
      TEST_RACE: true
    steps:
      - name: Initiate Database
        run: psql -f initdb/init.sql postgresql://postgres:password@localhost:5432/my-db
      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v0
      - name: Authenticate with GCP
        id: auth
        uses: google-github-actions/auth@v0
        with:
          credentials_json: ${{ secrets.GCP_ACTIONS_SECRET }}
      - name: Configure Docker
        run: |
          gcloud auth configure-docker "europe-docker.pkg.dev,gcr.io,eu.gcr.io"
      - name: Set up Docker BuildX
        uses: docker/setup-buildx-action@v1
      - name: Start App
        run: |
          VERSION=latest make images
          docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-db?sslmode=disable' --name='app' image/app
      - name: Tests
        env:
          POSTGRES_DB_URL: //postgres:password@localhost:5432/my-db?sslmode=disable
          GOMODCACHE: ${{ github.workspace }}/.go/pkg/mod
        run: |
          make test-integration
          docker stop app
My tests run just fine locally, firing off the docker-compose setup with docker-compose up and running the app from main.go. However, in GitHub Actions I am getting the following error:
failed to connect to `host=/tmp user=nonroot database=`: dial error (dial unix /tmp/.s.PGSQL.5432: connect: no such file or directory
What am I missing? Thanks
I think this code has more than one problem.
Problem one:
In your workflow I don't see you run docker-compose up anywhere, so I would assume that the Compose-defined Postgres is not running.
Problem two:
is in this line: docker run -d -p 3000:3000 -e POSTGRES_DB_URL='//postgres:password@localhost:5432/my-app?sslmode=disable' --name='app' image/app
You point the host of Postgres to localhost, which works on your local machine, since there localhost is your local computer. But with docker run you are not running the app on your local machine; you are running it in a Docker container, where localhost points inside the container itself.
Possible solution for both
As you are already using docker-compose, I suggest you also add your test web server there.
Change your docker-compose file to:
version: "3.9"
services:
webapp:
build: image/app
environment:
POSTGRES_DB_URL='//postgres:password#postgres:5432/my-app?sslmode=disable'
ports:
- "3000:3000"
depends_on:
- "postgres"
postgres:
image: postgres:12.5
user: postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
POSTGRES_DB: my-app
ports:
- "5432:5432"
volumes:
- data:/var/lib/postgresql/data
- ./initdb:/docker-entrypoint-initdb.d
networks:
default:
driver: bridge
volumes:
data:
driver: local
If you now run docker-compose up, both services will be available and it should work. Though I am not a GitHub Actions expert, I might have missed something. At least this way you can run your tests locally the same way as in CI, which I always see as a big plus.
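In the workflow, this compose-only approach would then replace both the services: block and the manual docker run. A sketch of the relevant steps (assuming docker-compose, or the docker compose plugin, is available on the runner):
      - uses: actions/checkout@v3
      - name: Start stack
        run: docker-compose up -d --build
      - name: Tests
        run: make test-integration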
What you are missing is setting up an actual Postgres client inside the GitHub Actions runner (that is why there is no psql tool to be found).
Set it up as a step:
- name: Install PostgreSQL client
  run: |
    # sudo is needed on GitHub-hosted runners, which run as a non-root user
    sudo apt-get update
    sudo apt-get install --yes postgresql-client
Apart from that, if you run everything through docker-compose, you will need to wait for Postgres to be up and running (healthy and accepting connections).
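One declarative way to do that (a sketch, assuming a Compose version that supports healthcheck together with depends_on conditions) is to mark the database healthy via pg_isready and gate the api on it:
services:
  db:
    image: postgres:9.5-alpine
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U root"]
      interval: 2s
      retries: 30
  api:
    depends_on:
      db:
        condition: service_healthy
The setup below instead takes the polling approach, in a script.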
Consider the following docker-compose:
version: '3.1'
services:
  api:
    build: .
    depends_on:
      - db
    ports:
      - 8080:8080
    environment:
      - RUN_UP_MIGRATION=true
      - PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable
    command: ./entry
  db:
    image: postgres:9.5-alpine
    restart: always
    environment:
      - POSTGRES_USER=root
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
    volumes:
      - ./db:/docker-entrypoint-initdb.d/
There are a couple of things you need to notice. First of all, in the environment section of the api we have PSQL_CONN_STRING=postgres://gotstock_user:123@host.docker.internal:5432/gotstockapi?sslmode=disable, which is the connection string to the db, passed as an env variable. Notice the host is host.docker.internal.
Besides that, we have command: ./entry in the api section. The entry file contains the following #!/bin/ash script:
#!/bin/ash
NOT_READY=1
while [ $NOT_READY -gt 0 ] # <- loop that waits till postgres is ready to accept connections
do
  pg_isready --dbname=gotstockapi --host=host.docker.internal --port=5432 --username=gotstock_user
  NOT_READY=$?
  sleep 1
done;
./gotstock-api & # <- actually executes the build of the api (backgrounded so the tests below can run)
sleep 10
go test -v ./it # <- executes the integration-tests
And finally, in order for the Postgres client tools (which provide pg_isready) to work in the above script, the Dockerfile of the api looks like this:
# syntax=docker/dockerfile:1
FROM golang:1.19-alpine3.15
RUN apk add build-base
WORKDIR /app
COPY go.mod go.sum ./
RUN go mod download && go mod verify
COPY . .
RUN apk add postgresql-client
RUN go build -o gotstock-api
EXPOSE 8080
Notice RUN apk add postgresql-client which installs the client.
Happy hacking! =)
I'm building 2 docker containers, "app" and "db", via a docker-compose file.
The app server just installs java/tomcat via a Dockerfile which is what docker-compose uses to build.
The db server uses an MS SQL image.
When I run:
docker-compose up
I follow that with a build process of software I need to load which deploys a war to the tomcat directory in the app server and builds the database in the database server.
My problem is: the build process can reference localhost:8080 to install/patch the software on the app server, and localhost:1433 to install/patch the database portion on the database server. However, when I start Tomcat the system doesn't come online, because the app server can't connect to the database server via "localhost:1433"; it requires me to jump in after the build and update the properties file to the Docker-internal IP address, and THEN it works.
My question is: how can I get my localhost and my app container to reference the DB in the same manner in a database URL?
Dockerfile for app server:
FROM centos:centos7
COPY apache-tomcat-9.0.20.tar.gz /tmp/
WORKDIR /tmp/
RUN yum -y update
RUN yum -y install java-11-openjdk-devel
RUN tar -xf apache-tomcat-9.0.20.tar.gz
RUN mv apache-tomcat-9.0.20 /opt/tomcat/
RUN export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.212.b04-0.el7_6.x86_64/
RUN export PATH=$PATH:$JAVA_HOME/jre/bin
RUN export CATALINA_HOME=/opt/tomcat/
RUN export PATH=$PATH:$CATALINA_HOME/bin
WORKDIR /opt/tomcat/webapps
RUN mkdir testapp
Docker-Compose File:
version: '3.3'
services:
  db:
    image: "mcr.microsoft.com/mssql/server:2017-latest"
    restart: always
    volumes:
      - db_data:/var/lib/mssql
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=Test123
    network_mode: bridge
    hostname: db
    ports:
      - "1433:1433"
  app:
    build: './testapp'
    volumes:
      - './system/build:/opt/tomcat/webapps/testapp/'
    ports:
      - "8080:8080"
      - "8009:8009"
    network_mode: bridge
    tty: true
    depends_on:
      - db
volumes:
  db_data:
Bring your services onto the same network and target each service by its service name. For that you need to define a docker network, like below. With the following example I can access the DB at mongo:27017.
services:
  mongo:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - ./data/db:/data/db
    networks:
      - my-net
  spring:
    depends_on:
      - mongo
    image: docker-spring-http-alpine
    ports:
      - "8080:8080"
    networks:
      - my-net
networks:
  my-net:
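Applied to the question's setup, note that its compose file sets network_mode: bridge on both services; on Docker's default bridge network, service-name DNS resolution does not work, so dropping those lines (or using a named network as above) is what lets the app resolve db. The database URL then differs only by host (a sketch; the JDBC details are placeholders):
# from the app container, over the shared compose network:
jdbc:sqlserver://db:1433;databaseName=mydb
# from the host, via the published port:
jdbc:sqlserver://localhost:1433;databaseName=mydb
If you want literally the same URL in both places, adding the line "127.0.0.1 db" to the host's /etc/hosts is one option.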
I've been using localstack to develop a service against locally. I've just been running their docker image via docker run --rm -p 4567-4583:4567-4583 -p 8080:8080 localstack/localstack, and then I manually run a small script to set up my S3 buckets, SQS queues, etc.
Now, I'd like to make this easier for others, so I thought I'd just add a Dockerfile and docker-compose.yml file. Unfortunately, when I try to get this up and running using docker-compose up, I get an error that the command from my setup script can't connect to the localstack services.
make_bucket failed: s3://localbucket Could not connect to the endpoint URL: "http://localhost:4572/localbucket"
Dockerfile:
FROM localstack/localstack
# since this is just local dev setup, localstack doesn't require anything specific here.
ENV AWS_DEFAULT_REGION='[useast1]'
ENV AWS_ACCESS_KEY_ID='[lloyd]'
ENV AWS_SECRET_ACCESS_KEY='[christmas]'
COPY bin/localSetup.sh /localSetup.sh
COPY fixtures/notifications.json /notifications.json
RUN ["chmod", "+x", "/localSetup.sh"]
RUN pip install awscli
# expose service & web dashboard ports
EXPOSE 4567-4582 8080
ENTRYPOINT ["/localSetup.sh"]
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localhost:4572 s3 mb s3://localbucket
#additional similar calls but left off for brevity
I've tried switching localhost to 127.0.0.1 in my script commands, but I wind up with the same error. I'm probably missing something silly here.
There is another way to create your custom AWS resources when localstack freshly starts up. Since you already have a bash script for your resources, you can simply volume-mount your script into localstack's init hook: /docker-entrypoint-initaws.d/ on older localstack versions, or /etc/localstack/init/ready.d/ on current ones (the latter is what the compose file below uses).
So my docker-compose file would be:
localstack:
  image: localstack/localstack:latest
  container_name: localstack_aws
  ports:
    - '4566:4566'
  volumes:
    - './localSetup.sh:/etc/localstack/init/ready.d/init-aws.sh'
Also, I would prefer awslocal over aws --endpoint-url in the bash script, as it takes care of the credentials and endpoint for you.
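With awslocal, the setup script from the question reduces to:
#!/bin/bash
awslocal s3 mb s3://localbucket
# additional similar calls left off for brevity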
Try adding a hostname to the docker-compose file and editing your entrypoint file to reflect that hostname.
docker-compose.yml
version: '3'
services:
  localstack:
    build: .
    hostname: localstack
    ports:
      - "8080:8080"
      - "4567-4582:4567-4582"
localSetup.sh
#!/bin/bash
aws --endpoint-url=http://localstack:4572 s3 mb s3://localbucket
This was the docker-compose-dev.yaml I used for testing an app against localstack. I brought it up with docker-compose -f docker-compose-dev.yaml up, and used the same localSetup.sh you did.
version: '3'
services:
  localstack:
    image: localstack/localstack
    hostname: localstack
    ports:
      - "4567-4584:4567-4584"
      - "${PORT_WEB_UI-8082}:${PORT_WEB_UI-8082}"
    environment:
      - SERVICES=s3
      - DEBUG=1
      - DATA_DIR=${DATA_DIR- }
      - PORT_WEB_UI=${PORT_WEB_UI- }
      - DOCKER_HOST=unix:///var/run/docker.sock
    volumes:
      - "${TMPDIR:-/tmp/localstack}:/tmp/localstack"
      - "/var/run/docker.sock:/var/run/docker.sock"
    networks:
      - backend
  sample-app:
    image: "sample-app/sample-app:latest"
    networks:
      - backend
    links:
      - localstack
    depends_on:
      - "localstack"
networks:
  backend:
    driver: 'bridge'
I'm trying to dockerize my Laravel app. On my local machine I have MySQL and MongoDB, and I have a script that runs mysql restore and mongorestore to restore a production DB.
In the local environment I don't have problems, because MySQL and MongoDB are installed locally.
Now, I created a docker-compose.yml with the build instructions:
version: '3'
services:
  app:
    build:
      context: .
      dockerfile: .docker/Dockerfile.dev
    image: laravel-docker
    ports:
      - 8080:80
    volumes:
      - /srv/app/vendor
      - .:/srv/app
    links:
      - mariadb
      - redis
  redis:
    image: redis:latest
  mariadb:
    image: mariadb:latest
    environment:
      - MYSQL_ALLOW_EMPTY_PASSWORD=yes
      - MYSQL_USER=root
  mongo:
    image: mongo
All works fine, but I need to run the mysql command from the app container (which should reach the MySQL server running in the mariadb container).
Just install a MySQL client in your app container.
Add something like this to your Dockerfile (on newer Debian-based images the package is named default-mysql-client):
RUN apt-get update && apt-get install -y mysql-client
Then you should be able to connect to your MySQL server from your app container, using the service name as the host, as long as they belong to the same network:
mysql -u USERNAME -pPASSWORD -h mariadb
(Note there is no space after -p; with a space, mysql prompts for a password and treats PASSWORD as the database name.)
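Since the goal was restoring a production dump, the same hostnames work inside the restore script too (a sketch; database and dump file names are placeholders):
mysql -h mariadb -u root test < production-dump.sql
mongorestore --host mongo --port 27017 ./dump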
I am trying to create a MySQL database schema while docker-compose up is being executed:
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
links:
- web
onrun:
command: "docker exec -i test_mysql_1 mysql -uroot -proot test <dummy1.sql"
I tried onrun but this is not working.
I am building the first image, but pulling the second image from Docker Hub.
Kindly help with how to execute that command after docker-compose up.
There is nothing like onrun in docker-compose. It will only bring up the containers and execute their commands. You have a few possible options:
Use mysql Image Initialization
mysql:
  image: mysql:latest
  environment:
    - MYSQL_ROOT_PASSWORD=root
    - MYSQL_DATABASE=test
  volumes:
    - ./dummy1.sql:/docker-entrypoint-initdb.d/dummy1.sql
  ports:
    - "3306:3306"
You can put your SQL files inside /docker-entrypoint-initdb.d in the container; they are executed automatically when the database is first initialized.
Use bash script
docker-compose up -d
# Give some time for mysql to come up
sleep 20
# -T disables TTY allocation, which would otherwise break the stdin redirection
docker-compose exec -T mysql mysql -uroot -proot test < dummy1.sql
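A fixed sleep can be flaky. A variant of the same script that polls until the server actually answers:
docker-compose up -d
until docker-compose exec -T mysql mysqladmin ping -uroot -proot --silent; do
  sleep 1
done
docker-compose exec -T mysql mysql -uroot -proot test < dummy1.sql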
Use another docker service to initialize the DB
version: "2"
services:
web:
build: docker
ports:
- "8080:8080"
environment:
- MYSQL_ROOT_PASSWORD=root
mysql:
image: mysql:latest
environment:
- MYSQL_ROOT_PASSWORD=root
- MYSQL_DATABASE=test
ports:
- "3306:3306"
mysqlinit:
image: mysql:latest
volumes:
- ./dummy1.sql:/dump/dummy1.sql
command: bash -c "sleep 20 && mysql -h mysql -uroot -proot test < /dump/dummy1.sql"
Here you run another service that inits the DB for you, like mysqlinit in the above example.
When a container is started for the first time, a new database with the specified name will be created and initialized with the provided configuration variables. Furthermore, it will execute files with extensions .sh, .sql and .sql.gz that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order.
From https://hub.docker.com/_/mysql/
That is the convenient way many databases (postgresql, mysql, ...) initialize themselves on container creation. You should create a *.sql / *.sh file and bind it via a volume into the new container:
db:
  image: mysql:latest
  volumes:
    - ./db/entrypoint:/docker-entrypoint-initdb.d
  environment:
    - MYSQL_ROOT_PASSWORD=iamgroot
    - MYSQL_DATABASE=gotg
This loads all your sql / sh files into the container which are then automatically executed.
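For completeness, a minimal ./db/entrypoint/01-init.sql could look like this (contents hypothetical):
CREATE TABLE IF NOT EXISTS guardians (
  id INT PRIMARY KEY,
  name VARCHAR(64)
);
INSERT INTO guardians VALUES (1, 'Groot');
Note that these entrypoint scripts only run when the data directory is empty; after changing them, run docker-compose down -v to wipe the volume and re-initialize.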