Why does a database container with MariaDB fail during a GitLab CI/CD pipeline?

I'm trying to add a new job to an existing pipeline: start MariaDB and execute tests and migrations against it. I have this database job:
.db-job:
  image: mariadb:${MARIA_DB_IMAGE_VERSION}
  script:
    - echo "SELECT 'OK';" | mysql --user=root --password="$DATABASE_PASSWORD" --host="$jdbc:mysql://localhost" "$DATABASE_SCHEMA"
I have a stage for the database:
db:install:
  stage: db
  extends: .db-job
  services:
    - name: mariadb
      alias: db
  needs: [ ]
  script:
    - cd "$BACKEND_DIR"
    - pwd
  cache:
    policy: pull-push

db:migrate:
  stage: db
  extends: .maven-job
  script:
    - cd "$BACKEND_DIR"
    - pwd
    - mvn --version
    - mvn -Dflyway.user="$DATABASE_PASSWORD" -Dflyway.schemas="DATABASE_SCHEMA" flyway:migrate
  cache:
    policy: pull-push
The database job passes, but the logs contain this error:
Health check error:
start service container: Error response from daemon: Cannot link to a non running container
Service container logs:
2022-11-29T11:35:15.589696287Z 2022-11-29 11:35:15+00:00 [Note] [Entrypoint]: Entrypoint script for MariaDB Server 1:10.10.2+maria~ubu2204 started.
2022-11-29T11:35:15.656319311Z 2022-11-29 11:35:15+00:00 [ERROR] [Entrypoint]: mariadbd failed while attempting to check config
How can I fix this problem?
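One thing worth checking (a sketch, not a verified fix): the official mariadb image aborts at startup unless a root-password variable such as MARIADB_ROOT_PASSWORD (or MARIADB_ALLOW_EMPTY_ROOT_PASSWORD) is set, and a service that dies this way surfaces exactly as a "Cannot link to a non running container" health-check error. Assuming the existing $DATABASE_PASSWORD should become the root password and the db alias should be used as the host, the job could look like this:
db:install:
  stage: db
  extends: .db-job
  services:
    - name: mariadb:${MARIA_DB_IMAGE_VERSION}   # pin the same version as the job image
      alias: db
  variables:
    # Job variables are forwarded to service containers; without a root-password
    # setting the mariadb entrypoint exits and the service never starts.
    MARIADB_ROOT_PASSWORD: "$DATABASE_PASSWORD"
    MARIADB_DATABASE: "$DATABASE_SCHEMA"
  script:
    - echo "SELECT 'OK';" | mysql --user=root --password="$DATABASE_PASSWORD" --host=db "$DATABASE_SCHEMA"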

Related

Automation tests passed in local but failed in Jenkins get Error: connect ECONNREFUSED 127.0.0.1:3000

In my project I am using React for the frontend, Python/Django for the backend, and Cypress for end-to-end testing. Locally all the test cases pass, but when I push my code to GitHub the step named "Run tests" fails.
Below is the tests.yml file:
name: Tests
on:
  push:
    branches:
      - '*'
      - '*/*'
      - '!master'
      - '!main'
jobs:
  tests:
    runs-on: ubuntu-20.04
    steps:
      - name: Fetch apiUrl
        run: echo ::set-output name=apiUrl::$(jq -r .env.apiUrl cypress.json)/ping
        id: fetchApiUrl
      - name: Ensure mochawesome
        run: 'jq -r ".[\"devDependencies\"] |= (.mochawesome = \"^6.2.2\")" package.json > package.json.tmp && mv package.json.tmp package.json'
      - name: Install deps
        run: npm install
      - name: Update test reporter
        run: 'jq -M ". + {\"reporter\": \"mochawesome\", \"reporterOptions\": { \"reportDir\": \"cypress/results\", \"overwrite\": false, \"html\": false, \"json\": true }}" cypress.json > cypress.json.tmp && mv cypress.json.tmp cypress.json'
      - name: Run tests
        uses: cypress-io/github-action@v2
        with:
          build: npm run build
          start: npm run start
          wait-on: ${{ steps.fetchApiUrl.outputs.apiUrl }}
Below is cypress.json
{
  "baseUrl": "http://localhost:3000",
  "env": {
    "apiUrl": "http://localhost:3000/api"
  }
}
I am getting the error below:
> echo build your fullstack app here
build your fullstack app here
start server "npm run start command "npm run start"
current working directory "/home/runner/work/transaction-management-fullstack-level-1_83d83d9-h61jhd-cihba8/transaction-management-fullstack-level-1_83d83d9-h61jhd-cihba8"
waiting on "http://localhost:3000/api/ping" with timeout of 60 seconds
/usr/local/bin/npm run start
> account-management-fullstack-level-1@0.0.1 start
> echo start your fullstack app here
start your fullstack app here
http://localhost:3000/api/ping timed out on retry 91 of 3, elapsed 90258ms, limit 90000ms
Error: connect ECONNREFUSED 127.0.0.1:3000
Where are you running your tests in Jenkins? Inside Docker, with the help of docker-compose? Then you should change your URL. If your docker-compose looks like this:
version: "3"
services:
test-repo:
image: test-repo
container_name: test-repo
network_mode: host
then your URL should be http://test-repo:3000
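If the Cypress runner itself is a compose service, the base URL can point at the service name instead of localhost. A sketch only: the cypress/included image, its version tag, and the CYPRESS_baseUrl override are assumptions, not part of the original setup, and it assumes both services share the default compose network rather than host networking.
version: "3"
services:
  test-repo:
    image: test-repo
    container_name: test-repo
  cypress:
    image: cypress/included:12.17.4   # version is illustrative
    depends_on:
      - test-repo
    environment:
      # Overrides baseUrl from cypress.json so tests hit the service by name
      - CYPRESS_baseUrl=http://test-repo:3000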

MySQL Container Wait Timeout

When trying to wait for the mysql docker container, I'm met with: Problem with dial: dial tcp 127.0.0.1:3306: connect: connection refused. Sleeping 1s
# This config is equivalent to both the '.circleci/extended/orb-free.yml' and the base '.circleci/config.yml'
version: 2.1

# Orbs are reusable packages of CircleCI configuration that you may share across projects, enabling you to create encapsulated, parameterized commands, jobs, and executors that can be used across multiple projects.
# See: https://circleci.com/docs/2.0/orb-intro/
orbs:
  node: circleci/node@5.0.1

# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  version: 2
  node: # This is the name of the workflow, feel free to change it to better match your workflow.
    # Inside the workflow, you define the jobs you want to run.
    jobs:
      - build_and_test:
          # This is the node version to use for the `cimg/node` tag
          # Relevant tags can be found on the CircleCI Developer Hub
          # https://circleci.com/developer/images/image/cimg/node
          # If you are using yarn, change the line below from "npm" to "yarn"
          filters:
            branches:
              only:
                - master

executors:
  node:
    docker:
      - image: cimg/node:16.14.2

jobs:
  build_and_test:
    executor: node
    docker:
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/v$DOCKERIZE_VERSION/dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz
      - run:
          name: Wait for db
          command: dockerize -wait tcp://127.0.0.1:3306 -timeout 10s
I do see that the container is installed under the spin-up environment step, so I believe it should be running:
Starting container cimg/mysql:8.0
cimg/mysql:8.0:
using image cimg/mysql@sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
pull stats: Image was already available so the image was not pulled
time to create container: 21ms
image is cached as cimg/mysql:8.0, but refreshing...
8.0: Pulling from cimg/mysql
Digest: sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
Status: Image is up to date for cimg/mysql:8.0
Time to upload agent and config: 369.899813ms
Time to start containers: 407.510271ms
However, nothing I've looked into so far has pointed me toward a solution.
Your job should be defined like the following. To test that the SQL container is up you can just use nc -vz localhost 3306, but the MySQL container takes time to initialize, so wait about 2 minutes before testing.
jobs:
  build_and_test:
    docker:
      # Primary container image where all steps run.
      - image: cimg/node:16.14.2
      # Secondary container image on common network.
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run: sleep 120 && nc -vz localhost 3306
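If a fixed sleep feels wasteful, another option (a sketch, not tested) is to keep the dockerize wait from the question but give MySQL a longer timeout instead of 10s:
    steps:
      - checkout
      - run:
          name: Wait for db
          # Polls port 3306 until it accepts connections, for up to 2 minutes,
          # rather than sleeping unconditionally.
          command: dockerize -wait tcp://127.0.0.1:3306 -timeout 2m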

Github Actions db service container not reachable

I have the following Github Actions pipeline:
name: Elixir CI
on:
  push:
    branches:
      - '*'
  pull_request:
    branches:
      - '*'
jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_PORT: 5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v2
      - name: Docker Setup Buildx
        uses: docker/setup-buildx-action@v1.6.0
        with:
          install: true
      - name: building image
        env:
          DATABASE_HOST: postgres
          DATABASE_PORT: 5432
        run: |
          docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
I have a single build step for an Elixir app: the dockerfile is a multistage one, the first stage runs the tests and builds the production app, and the second copies the application folder/tar.
DATABASE_HOST is the variable that my Elixir app looks for to connect to the test environment.
I need to run tests against Postgres, so I spawn a service container with it. I have executed the build both in a container and outside of it, but I always get the following error:
...
#19 195.9 14:10:58.624 [error] GenServer #PID<0.9316.0> terminating
#19 195.9 ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
#19 195.9 (db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
#19 195.9 (connection 1.1.0) lib/connection.ex:622: Connection.enter_connect/5
#19 195.9 (stdlib 3.14.2.2) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
#19 195.9 Last message: nil
...
So apparently postgres:5432 is not reachable. Am I missing something?
I think the problem is in DATABASE_HOST: postgres.
The service container publishes port 5432 to the host, so during docker build you should use the host's IP address to reach that postgres service, like this:
      - name: building image
        env:
          DATABASE_PORT: 5432
        run: |
          DATABASE_HOST=$(ifconfig -a eth0|grep inet|grep -v 127.0.0.1|grep -v inet6|awk '{print $2}'|tr -d "addr:")
          docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
The above first uses ifconfig to get the virtual machine's IP (the Docker host's IP), then passes it to docker build so the build containers can reach postgres.
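An alternative sketch (untested): run the build on the host network so the service port published on the runner's localhost is reachable as 127.0.0.1. Note that with the buildx builder installed by setup-buildx-action, host networking may need to be explicitly allowed for the build.
      - name: building image
        env:
          DATABASE_HOST: 127.0.0.1
          DATABASE_PORT: 5432
        run: |
          # --network host lets build-time steps reach ports published on the
          # runner, e.g. the postgres service container on 5432.
          docker build --network host --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .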

Authentication Error when Building and Pushing docker image to ACR using Azure DevOps Pipelines and docker-compose

I am trying to build and push a docker image to ACR using Azure DevOps pipelines. I have to build it with a docker-compose.yml file to be able to use openvpn in the container.
When I run the pipeline I get the following error. Does anyone have an idea of how to solve this?
Starting: DockerCompose
==============================================================================
Task : Docker Compose
Description : Build, push or run multi-container Docker applications. Task can be used with Docker or Azure Container registry.
Version : 0.183.0
Author : Microsoft Corporation
Help : https://aka.ms/azpipes-docker-compose-tsg
==============================================================================
/usr/local/bin/docker-compose -f /home/vsts/work/1/s/src/docker-compose.yml -f /home/vsts/agents/2.188.2/.docker-compose.1624362077551.yml -p Compose up -d
Creating network "composeproject_default" with the default driver
Pulling getstatus (***/getstatus:)...
Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]Creating network "composeproject_default" with the default driver
##[error]Pulling getstatus (***/getstatus:)...
##[error]Head https://***/v2/getstatus/manifests/latest: unauthorized: authentication required, visit https://aka.ms/acr/authorization for more information.
##[error]The process '/usr/local/bin/docker-compose' failed with exit code 1
Finishing: DockerCompose
My azure-pipelines.yml looks like this:
# Docker
# Build and push an image to Azure Container Registry
# https://learn.microsoft.com/azure/devops/pipelines/languages/docker

trigger:
- main

resources:
- repo: self

variables:
  # Container registry service connection established during pipeline creation
  dockerRegistryServiceConnection: '*****************************'
  imageRepository: 'getstatus'
  containerRegistry: 'composeproject.azurecr.io'
  dockerfilePath: '$(Build.SourcesDirectory)/Dockerfile'
  tag: '$(Build.BuildId)'
  # Agent VM image name
  vmImageName: 'ubuntu-latest'

stages:
- stage: Build
  displayName: Build and push stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: Docker@2
      displayName: Build and push an image to container registry
      inputs:
        command: buildAndPush
        repository: $(imageRepository)
        dockerfile: $(dockerfilePath)
        containerRegistry: $(dockerRegistryServiceConnection)
        tags: |
          $(tag)
    - task: DockerCompose@0
      inputs:
        containerregistrytype: 'Azure Container Registry'
        dockerComposeFile: '**/docker-compose.yml'
        action: 'Run a Docker Compose command'
        dockerComposeCommand: 'up -d'
And the docker-compose.yml like this:
version: "3.3"
services:
getstatus:
image: composeproject.azurecr.io/getstatus
restart: always
sysctls:
- net.ipv6.conf.all.disable_ipv6=0
cap_add:
- NET_ADMIN
devices:
- /dev/net/tun
volumes:
- /etc/timezone:/etc/timezone:ro
I think your Docker Compose task is missing a couple of parameters.
Try adding azureContainerRegistry: composeproject.azurecr.io
and azureSubscriptionEndpoint: $(dockerRegistryServiceConnection).
I'm not sure why the credentials supplied in the Docker@2 task don't persist since they're in the same stage, but then I could fill an encyclopedia with what I'm not sure about when it comes to Azure Pipelines.
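As a sketch of what the task would look like with those inputs added (the service connection variable name is taken from the pipeline above):
    - task: DockerCompose@0
      inputs:
        containerregistrytype: 'Azure Container Registry'
        azureSubscriptionEndpoint: $(dockerRegistryServiceConnection)
        azureContainerRegistry: composeproject.azurecr.io
        dockerComposeFile: '**/docker-compose.yml'
        action: 'Run a Docker Compose command'
        dockerComposeCommand: 'up -d'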

Getting Error when creating a Windows Docker Container on Kaniko/Gitlab

I'm trying to create a Windows Docker container using Kaniko/Gitlab.
Here is the error I see:
Resolving secrets
00:00
Preparing the "docker-windows" executor
Using Docker executor with image gcr.io/kaniko-project/executor:v1.6.0-debug ...
Pulling docker image gcr.io/kaniko-project/executor:v1.6.0-debug ...
WARNING: Failed to pull image with policy "always": no matching manifest for windows/amd64 10.0.17763 in the manifest list entries (docker.go:147:0s)
ERROR: Preparation failed: failed to pull image "gcr.io/kaniko-project/executor:v1.6.0-debug" with specified policies [always]: no matching manifest for windows/amd64 10.0.17763 in the manifest list entries (docker.go:147:0s)
The .gitlab-ci.yml file:
image:
  name: microsoft/iis:latest
  entrypoint: [""]

.build_variables: &build_variables
  TAG: "docker-base-windows-2019-std-core"
  AWS_ACCOUNT: "XXXXXXXXXX"
  AWS_REGION: "XXXXXXX"
  REGISTRY: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

.build_script: &build_script
  script:
    - echo "{\"credsStore\":\"ecr-login\"}" > /kaniko/.docker/config.json
    - /kaniko/executor --context $CI_PROJECT_DIR --dockerfile $CI_PROJECT_DIR/Dockerfile --destination $REGISTRY:$TAG

stages:
  - build-docker-image

build_image_dev:
  variables:
    <<: *build_variables
  stage: build-docker-image
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  tags: ['XXXXX']
  <<: *build_script
  rules:
    - if: '$CI_PIPELINE_SOURCE == "merge_request_event"'
    - if: '$CI_COMMIT_BRANCH == "main"'
    - if: $CI_COMMIT_TAG
The Dockerfile:
FROM Microsoft/iis:latest
CMD [ "cmd" ]
You have the error:
no matching manifest for windows/amd64
which means that no variant of that image exists for your platform. It happens, for instance, if you develop on Windows and your server is Linux.
This error implies your host machine's OS is not compatible with the OS of the Docker image you are trying to pull.
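For the pipeline above specifically, gcr.io/kaniko-project/executor only publishes Linux manifests, so pulling it under the docker-windows executor will always fail this way; the job would have to run on a runner that uses the Linux Docker executor (whether kaniko can then build the Windows iis image is a separate question). A minimal sketch with an illustrative runner tag:
build_image_dev:
  stage: build-docker-image
  image:
    name: gcr.io/kaniko-project/executor:v1.6.0-debug
    entrypoint: [""]
  # Placeholder tag: select a runner registered with the Linux docker
  # executor so the linux/amd64 kaniko image can actually be pulled.
  tags: ['linux-docker']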
