MySQL Container Wait Timeout - docker

When trying to wait for the mysql docker container, I'm met with: Problem with dial: dial tcp 127.0.0.1:3306: connect: connection refused. Sleeping 1s
# This config is equivalent to both the '.circleci/extended/orb-free.yml' and the base '.circleci/config.yml'
version: 2.1
# Orbs are reusable packages of CircleCI configuration that you may share across projects, enabling you to create encapsulated, parameterized commands, jobs, and executors that can be used across multiple projects.
# See: https://circleci.com/docs/2.0/orb-intro/
orbs:
  node: circleci/node@5.0.1
# Invoke jobs via workflows
# See: https://circleci.com/docs/2.0/configuration-reference/#workflows
workflows:
  version: 2
  node: # This is the name of the workflow, feel free to change it to better match your workflow.
    # Inside the workflow, you define the jobs you want to run.
    jobs:
      - build_and_test:
          # This is the node version to use for the `cimg/node` tag
          # Relevant tags can be found on the CircleCI Developer Hub
          # https://circleci.com/developer/images/image/cimg/node
          # If you are using yarn, change the line below from "npm" to "yarn"
          filters:
            branches:
              only:
                - master
executors:
  node:
    docker:
      - image: cimg/node:16.14.2
jobs:
  build_and_test:
    executor: node
    docker:
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/v$DOCKERIZE_VERSION/dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && tar -C /usr/local/bin -xzvf dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-v$DOCKERIZE_VERSION.tar.gz
      - run:
          name: Wait for db
          command: dockerize -wait tcp://127.0.0.1:3306 -timeout 10s
I do see that the container is started in the spin-up environment step, so I believe it should be running:
Starting container cimg/mysql:8.0
cimg/mysql:8.0:
using image cimg/mysql@sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
pull stats: Image was already available so the image was not pulled
time to create container: 21ms
image is cached as cimg/mysql:8.0, but refreshing...
8.0: Pulling from cimg/mysql
Digest: sha256:76f5b1dbd079f2fef5fe000a5c9f15f61df8747f28c24ad93bb42f8ec017a8df
Status: Image is up to date for cimg/mysql:8.0
Time to upload agent and config: 369.899813ms
Time to start containers: 407.510271ms
However, nothing I've looked into so far has pointed me toward a solution.

Your job should be defined like the following. To test the MySQL container, you can just use nc -vz localhost 3306, but the MySQL container takes time to initialize, so wait about two minutes before testing (see the polling sketch after the config).
jobs:
  build_and_test:
    docker:
      # Primary container image where all steps run.
      - image: cimg/node:16.14.2
      # Secondary container image on common network.
      - image: cimg/mysql:8.0
        auth:
          username: myuser
          password: $DOCKERHUB_PASSWORD
        environment:
          MYSQL_HOST: 127.0.0.1
          MYSQL_DATABASE: mydatabase
          MYSQL_USER: user
          MYSQL_PASSWORD: passw0rd
    steps:
      - checkout
      - run: sleep 120 && nc -vz localhost 3306
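A fixed sleep works, but it wastes time when MySQL comes up faster and still fails when it comes up slower. A minimal polling sketch to use in place of the sleep (assuming nc is available in the primary image, and noting that MySQL's entrypoint may briefly open the port during initialization before restarting):

# Poll the port every 2s, for up to ~2 minutes, instead of a fixed sleep.
for i in $(seq 1 60); do
  nc -z localhost 3306 && echo "MySQL is up" && exit 0
  sleep 2
done
echo "MySQL did not become reachable in time" >&2
exit 1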

Related

Github Actions db service container not reachable

I have the following Github Actions pipeline:
name: Elixir CI
on:
  push:
    branches:
      - '*'
  pull_request:
    branches:
      - '*'
jobs:
  build:
    name: Build and test
    runs-on: ubuntu-latest
    services:
      postgres:
        image: postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_PASSWORD: password
          POSTGRES_PORT: 5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    steps:
      - uses: actions/checkout@v2
      - name: Docker Setup Buildx
        uses: docker/setup-buildx-action@v1.6.0
        with:
          install: true
      - name: building image
        env:
          DATABASE_HOST: postgres
          DATABASE_PORT: 5432
        run: |
          docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
I have a single build step for an Elixir app. The Dockerfile is a multi-stage one: the first stage runs the tests and builds the production app, and the second copies the application folder/tar.
DATABASE_HOST is the variable that my Elixir app looks for to connect to the test environment.
I have the need to run tests against Postgres, so I spawn a container service with it. I have executed the build both in a container and outside of it, but I always have the following error:
...
#19 195.9 14:10:58.624 [error] GenServer #PID<0.9316.0> terminating
#19 195.9 ** (DBConnection.ConnectionError) tcp connect (postgres:5432): non-existing domain - :nxdomain
#19 195.9 (db_connection 2.4.1) lib/db_connection/connection.ex:100: DBConnection.Connection.connect/2
#19 195.9 (connection 1.1.0) lib/connection.ex:622: Connection.enter_connect/5
#19 195.9 (stdlib 3.14.2.2) proc_lib.erl:226: :proc_lib.init_p_do_apply/3
#19 195.9 Last message: nil
...
So apparently postgres:5432 is not reachable. Am I missing something?
I think the problem is in DATABASE_HOST: postgres.
The service container publishes port 5432 to the host, so the docker build should use the host's IP address to reach the Postgres service, like this:
- name: building image
  env:
    DATABASE_PORT: 5432
  run: |
    DATABASE_HOST=$(ifconfig -a eth0 | grep inet | grep -v 127.0.0.1 | grep -v inet6 | awk '{print $2}' | tr -d "addr:")
    docker build --build-arg DATABASE_HOST=$DATABASE_HOST -t backend:test -f Dockerfile.ci .
This first uses ifconfig to get the virtual machine's IP (the Docker host's IP), then passes it to docker build so the build container can reach Postgres.
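If you'd rather not parse ifconfig output, here are a couple of hedged alternatives for obtaining the host address (assuming the standard Ubuntu runner):

# First address reported by the VM; usually the runner's primary IP.
DATABASE_HOST=$(hostname -I | awk '{print $1}')
# Or, on the default bridge network, containers can usually reach the host at 172.17.0.1.
DATABASE_HOST=172.17.0.1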

Port 4466 already in use error after migrating from GraphQL Yoga to Apollo Server 2

I have a local app that had a backend of Prisma and GraphQL Yoga. I migrated from Yoga to Apollo Server 2 and believe I have the configuration set up correctly. However, when I go to 'run dev' I am getting an error that port 4466 is already in use.
I thought perhaps I needed to restart my docker images and did try that.
CONTAINER ID   IMAGE                       COMMAND                  CREATED          STATUS          PORTS                    NAMES
f14c004ae0d2   prismagraphql/prisma:1.34   "/bin/sh -c /app/sta…"   30 minutes ago   Up 30 minutes   0.0.0.0:4466->4466/tcp   backend_prisma_1
0c5f3517e990   mysql                       "docker-entrypoint.s…"   5 months ago     Up 21 minutes   3306/tcp, 33060/tcp      latinconexiones_mysql-db_1
This is my docker-compose.yml file
version: '3'
services:
  prisma:
    image: prismagraphql/prisma:1.34
    restart: always
    ports:
      - "4466:4466"
    environment:
      PRISMA_CONFIG: |
        port: 4466
        # uncomment the next line and provide the env var PRISMA_MANAGEMENT_API_SECRET=my-secret to activate cluster security
        # managementApiSecret: my-secret
        databases:
          default:
            connector: mysql
            host: host.docker.internal
            database: test_db
            user: root
            password: root
            rawAccess: true
            port: '8889'
            migrations: false
How can I solve this? It feels like re-initializing Prisma with a different port may work, but that feels like overkill?
Check with docker ps whether any container is using that port; if so, stop it if you don't need it, or change the port mapping of your current container.
It may also be that a non-containerized app is using that port; check with: sudo lsof -i -P -n | grep LISTEN | grep 4466
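Recent Docker versions can also ask docker ps directly which container publishes the port; a small sketch (the container name here is the one from the docker ps output above):

# List containers publishing port 4466, then stop the offender.
docker ps --filter "publish=4466" --format "{{.ID}} {{.Names}}"
docker stop backend_prisma_1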

How to target container in GitHub actions?

I am trying to use GitHub Actions to fire up a Postgres container for my tests. I have a script called build.sh that runs when npm run build is invoked via GitHub Actions, and this script calls restore-schema.sh (shown below).
The issue is that when restore-schema.sh runs, I keep getting Error: no such container: postgres. GitHub Actions names the container some arbitrary string. Is there a way to run docker exec against the image, or to somehow name the postgres container that GitHub Actions creates? I've looked through both sets of documentation to no avail.
How should I go about this? I noticed that the Docker run ps screenshot shows the command docker-entrypoint.sh. Should I use this instead? Do I specify the Dockerfile inside .github/workflows/?
I've tried to include as much relevant information as possible; please comment if you need anything else.
Screenshots from GitHub Actions:
- Initialize containers
- Docker run ps (docker ps showing the name postgres)
- Run npm run build --if-present (where the issue is occurring)
build.sh
#!/bin/sh
# Import core db schema
./.deploy/postgres/restore-schema.sh
.deploy/postgres/restore-schema.sh
#!/bin/sh
docker exec -it postgres psql \
  --username postgres \
  --password dev \
  coredb < .deploy/postgres/db-schema.sql
.github/workflows/test-api-gateway.yml
name: API Gateway CI
on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master, develop ]
jobs:
  build:
    runs-on: ubuntu-latest
    services: # Service containers to run with `container-job`
      # Label used to access the service container
      postgres:
        # Docker Hub image
        image: postgres
        # Provide the password for postgres
        env:
          POSTGRES_USER: postgres
          POSTGRES_DB: coredb
          POSTGRES_PASSWORD: dev
        # Set health checks to wait until postgres has started
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
        ports:
          - 5432:5432
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: docker ps
      - run: chmod +x build.sh .deploy/postgres/restore-schema.sh
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
Try the --name option:
options: >-
  --health-cmd pg_isready
  --health-interval 10s
  --health-timeout 5s
  --health-retries 5
  --name postgres
https://docs.github.com/en/actions/reference/workflow-syntax-for-github-actions#jobsjob_idservices
jobs.<job_id>.services.options: Additional Docker container resource options. For a list of options, see "docker create options."
Another solution I've seen is to use the last created container:
docker exec -it $(docker ps --latest --quiet) bash
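One caveat once the container is named: the restore script pipes a file into docker exec with -it, which fails in CI ("the input device is not a TTY"), and psql's --password flag only forces a prompt rather than accepting a value. A sketch of the script adjusted for non-interactive use, assuming a reasonably recent Docker with exec -e support:

#!/bin/sh
# -i keeps stdin open for the redirect; no -t, since the CI runner has no TTY.
# PGPASSWORD supplies the password non-interactively.
docker exec -i -e PGPASSWORD=dev postgres psql \
  --username postgres \
  coredb < .deploy/postgres/db-schema.sql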

docker-compose unlink network from child containers when stopping parent containers?

This is a continuation of my journey of creating multiple Docker projects dynamically. One thing I did not mention previously: to make this process dynamic (I want devs to specify which projects they want to use), I'm using Ansible to bring up the local environment.
The logic is:
- run ansible-playbook run.yml -e "{projectsList: ['app-admin']}", providing the list of projects I want to start
- stop the existing main containers (in case they are running from a previous run)
- start the main containers
- depending on the provided list of projects, run the role tasks (I have a separate role for each supported project):
  - stop the existing child project containers (in case they are running from a previous run)
  - start the child project containers
  - apply some configuration depending on the role
And here is the issue (again) with the network: when I stop the main containers, it fails with the message:
error while removing network: network appnetwork has active endpoints
It makes sense, as the child containers use the same network, but so far I don't see a way to change the ordering of the tasks: since I'm using roles, the main Docker tasks always run before the role-specific tasks.
The main Ansible file:
---
#- import_playbook: './services/old.yml'
- hosts: localhost
  gather_facts: true
  vars:
    # add list of all supported projects, THIS SHOULD BE UPDATED FOR EACH NEW PROJECT!
    supportedProjects: ['all', 'app-admin', 'app-landing']
  vars_prompt:
    - name: "ansible_become_pass"
      prompt: "Sudo password"
      private: yes
  pre_tasks:
    # List of projects should be provided
    - fail: msg="List of projects you want to run playbook for not provided"
      when: (projectsList is not defined) or (projectsList|length == 0)
    # Remove unsupported projects from list
    - name: Filter out not supported projects
      set_fact:
        filteredProjectsList: "{{ projectsList | intersect(supportedProjects) }}"
    # Check if any projects remain after filtering
    - fail: msg="None of the projects you provided are supported. Supported projects: {{ supportedProjects }}"
      when: filteredProjectsList|length == 0
    # Always stop existing docker containers
    - name: stop existing common app docker containers
      docker_compose:
        project_src: ../docker/common/
        state: absent
    - name: start common app docker containers like nginx proxy, redis, mailcatcher etc. (this can take a while if running for the first time)
      docker_compose:
        project_src: ../docker/common/
        state: present
        build: no
        nocache: no
    - name: Get www-data id
      command: docker exec app-php id -u www-data
      register: wwwid
    - name: Get current user group id
      command: id -g
      register: userid
    - name: Register user and www-data ids
      set_fact:
        userid: "{{ userid.stdout }}"
        wwwdataid: "{{ wwwid.stdout }}"
  roles:
    - { role: app-landing, when: '"app-landing" in filteredProjectsList or "all" in filteredProjectsList' }
    - { role: app-admin, when: ("app-admin" in filteredProjectsList) or ("all" in filteredProjectsList) }
And a role example, app-admin/tasks/main.yml:
---
- name: Sync {{name}} with git (can take a while to clone the repo for the first time)
  git:
    repo: "{{gitPath}}"
    dest: "{{destinationPath}}"
    version: "{{branch}}"
- name: stop existing {{name}} docker containers
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: absent
- name: start {{name}} docker containers (this can take a while if running for the first time)
  docker_compose:
    project_src: "{{dockerComposeFileDestination}}"
    state: present
    build: no
    nocache: no
- name: Copy {{name}} env file
  copy:
    src: development.env
    dest: "{{destinationPath}}.env"
    force: no
- name: Set file permissions for local {{name}} project files
  command: chmod -R ug+w {{projectPath}}
  become: yes
- name: Set execute permissions for local {{name}} bin folder
  command: chmod -R +x {{projectPath}}/bin
  become: yes
- name: Set user/group for {{name}} to {{wwwdataid}}:{{userid}}
  command: chown -R {{wwwdataid}}:{{userid}} {{projectPath}}
  become: yes
- name: Composer install for {{name}}
  command: docker-compose -f {{mainDockerComposeFileDestination}}docker-compose.yml exec -T app-php sh -c "cd {{containerProjectPath}} && composer install"
Maybe there is a way to somehow unlink the network when the main containers stop. I thought that marking the network as external in the child containers' compose files, like this:
networks:
  appnetwork:
    external: true
would solve the issue, but it does not.
A quick experiment with an external network:
dc1/dc1.yml
version: "3.0"
services:
  nginx:
    image: nginx
    ports:
      - "8080:80"
    networks:
      - an0
networks:
  an0:
    external: true
dc2/dc2.yml
version: "3.0"
services:
  redis:
    image: redis
    ports:
      - "6379:6379"
    networks:
      - an0
networks:
  an0:
    external: true
Starting and stopping:
$ docker network create -d bridge an0
1e07251e32b0d3248b6e70aa70a0e0d0a94e457741ef553ca5f100f5cec4dea3
$ docker-compose -f dc1/dc1.yml up -d
Creating dc1_nginx_1 ... done
$ docker-compose -f dc2/dc2.yml up -d
Creating dc2_redis_1 ... done
$ docker-compose -f dc1/dc1.yml down
Stopping dc1_nginx_1 ... done
Removing dc1_nginx_1 ... done
Network an0 is external, skipping
$ docker-compose -f dc2/dc2.yml down
Stopping dc2_redis_1 ... done
Removing dc2_redis_1 ... done
Network an0 is external, skipping
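So with an external network, compose skips removing the network and the error goes away. If the task ordering still can't be changed, another option is to force-detach the remaining endpoints before tearing down the main project, so the network can be removed cleanly. A hedged sketch (appnetwork as in the error message):

# Force-disconnect every container still attached to appnetwork.
for c in $(docker network inspect appnetwork -f '{{range .Containers}}{{.Name}} {{end}}'); do
  docker network disconnect -f appnetwork "$c"
done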

Elasticsearch not ready on CircleCI

When I run CircleCI, the first few tests fail due to Elasticsearch not being fully set up yet.
Usually I would use dockerize to wait for Elasticsearch to be ready; however, it does not seem to detect Elasticsearch. Any ideas why? The dockerize command simply times out. The Elasticsearch container does seem to be running, because if I do not wait, Elasticsearch eventually starts working with the tests.
Here is my CircleCI config:
version: 2
jobs:
  build:
    parallelism: 3
    working_directory: ~/export-opportunities
    docker:
      - image: circleci/ruby:2.5.5-node
        environment:
          BUNDLE_JOBS: 3
          BUNDLE_RETRY: 3
          BUNDLE_PATH: vendor/bundle
          PGHOST: localhost
          PGUSER: user
          RAILS_ENV: test
      - image: circleci/postgres:latest
        environment:
          POSTGRES_USER: user
          POSTGRES_DB: circle_test
          POSTGRES_PASSWORD: $POSTGRES_PASSWORD
      - image: circleci/redis:4.0.9
        environment:
          REDIS_URL: "redis://localhost:6379/"
      - image: docker.elastic.co/elasticsearch/elasticsearch:6.3.2
        environment:
          cluster.name: elasticsearch
          xpack.security.enabled: false
          transport.host: localhost
          network.host: 127.0.0.1
          http.port: 9200
          discovery.type: single-node
    branches:
      only: chore/XOT-597-circleci
    steps:
      - checkout # check out the code in the project directory
      # restore bundle cache
      - restore_cache:
          keys:
            - exops-{{ checksum "Gemfile.lock" }}
      - run:
          name: Bundle Install
          command: bundle check || bundle install
      # store bundle cache
      - save_cache:
          key: exops-{{ checksum "Gemfile.lock" }}
          paths:
            - vendor/bundle
      # Database setup
      - run:
          name: install dockerize
          command: wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && sudo tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz && rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
          environment:
            DOCKERIZE_VERSION: v0.3.0
      - run:
          name: Wait for DB
          command: dockerize -wait tcp://localhost:5432 -timeout 1m
      - run:
          name: Database setup
          command: |
            bundle exec rake db:create
            bundle exec rake db:migrate
      # Redis setup
      - run:
          name: Wait for Redis
          command: dockerize -wait tcp://localhost:6379 -timeout 1m
      # DOES NOT WORK:
      - run:
          name: Wait for Elasticsearch
          command: dockerize -wait http://localhost:9200 -timeout 2m
      # Run rspec in parallel
      - run: |
          echo Running test ...
          bundle exec rspec --profile 10 \
            --format RspecJunitFormatter \
            --out test_results/rspec.xml \
            --format progress \
            $(circleci tests glob "spec/**/*_spec.rb" | circleci tests split --split-by=timings)
      # Save test results for timing analysis
      - store_test_results:
          path: test_results
Note I've also tried dockerize -wait tcp://localhost:9200 -timeout 2m, and dockerize -wait http://127.0.0.1:9200 -timeout 2m, and dockerize -wait tcp://127.0.0.1:9200 -timeout 2m to no effect.
I tried adding sleep 10 and sleep 100 however the issue persisted.
The issue was that the tests were running before the index was created. Index creation was triggered by the first test to run, but it took a few seconds, so the first few tests always failed.
My solution was to add the following code, which builds the indices if they are not present, to rails_helper.rb, which runs when the Rails environment starts. In most other environments the indices already exist, so it does not slow anything down.
# Build initial indices if not present, e.g. CircleCI
[Opportunity, Subscription].each do |model|
  unless model.__elasticsearch__.index_exists? index: model.__elasticsearch__.index_name
    model.__elasticsearch__.create_index!(force: true)
    sleep 2
  end
end
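An alternative (or complement) to the sleep is to wait until Elasticsearch actually reports a usable cluster rather than just an open port; the _cluster/health endpoint blocks until the requested status or the timeout is reached. A sketch for a CircleCI run step:

# Wait up to 2 minutes for the cluster to reach at least yellow status.
curl --silent --fail \
  "http://localhost:9200/_cluster/health?wait_for_status=yellow&timeout=120s" \
  || { echo "Elasticsearch not ready" >&2; exit 1; }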
