Docker Compose MariaDB exits with code 0 when using command

I have a docker-compose setup with MariaDB + phpMyAdmin. I'm running some commands inside the db service, but after they finish the service exits with code 0, while I'm expecting a MariaDB server to keep running.
I checked my docker-compose.yml without the commands and it worked fine.
This is my compose:
version: '3.1'
services:
  db:
    image: mariadb:10.3
    command: |
      sh -c "echo 'Starting Commands' && apt-get update && apt-get install -y wget && wget https://downloads.mysql.com/docs/sakila-db.tar.gz && tar xzf sakila-db.tar.gz && echo 'Extraction Finished' && mv sakila-db/sakila-schema.sql /docker-entrypoint-initdb.d/1.sql && mv sakila-db/sakila-data.sql /docker-entrypoint-initdb.d/2.sql && echo 'Finished Commands'"
    environment:
      MYSQL_ROOT_PASSWORD: notSecureChangeMe
  phpmyadmin:
    image: phpmyadmin
    restart: always
    ports:
      - 8080:80
This is the output:
db_1 | Starting Commands
db_1 | Get:1
db_1 | Get:2
db_1 | Get:3
db_1 | Get:BLA BLA BLA
db_1 | Unpacking wget (1.20.3-1ubuntu1) ...
db_1 | Setting up wget (1.20.3-1ubuntu1) ...
db_1 | BLA BLA BLA
db_1 | Connecting to downloads.mysql.com (downloads.mysql.com)|137.254.60.14|:443... connected.
db_1 | HTTP request sent, awaiting response... 200 OK
db_1 | Length: 732133 (715K) [application/x-gzip]
db_1 | Saving to: 'sakila-db.tar.gz'
db_1 | BLA BLA BLA
db_1 | 2021-11-10 23:28:49 (1.25 MB/s) - 'sakila-db.tar.gz' saved [732133/732133]
db_1 |
db_1 | Extraction Finished
db_1 | Finished Commands
root_db_1 exited with code 0
I suppose that "command" could be overriding something, but I cannot find what.

If you look at the original Dockerfile for MariaDB, you will see that it has an ENTRYPOINT and CMD which start the database.
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["mysqld"]
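Compose's "command" replaces the image's CMD (mysqld) while the ENTRYPOINT still runs, so once the sh -c script finishes there is nothing left to keep the container alive. A quick way to confirm what the image would run by default (not from the original answer; this assumes the mariadb:10.3 image is already pulled locally):
# Show the default Entrypoint and Cmd baked into the image
docker image inspect mariadb:10.3 --format 'Entrypoint: {{json .Config.Entrypoint}}  Cmd: {{json .Config.Cmd}}'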
So try adding this to the list of commands you run, like so (notice the last line in the command listing):
db:
  image: mariadb:10.3
  command: |
    sh -c "echo 'Starting Commands' && \
    apt-get update && \
    apt-get install -y wget && \
    wget https://downloads.mysql.com/docs/sakila-db.tar.gz && \
    tar xzf sakila-db.tar.gz && \
    echo 'Extraction Finished' && \
    mv sakila-db/sakila-schema.sql /docker-entrypoint-initdb.d/1.sql && \
    mv sakila-db/sakila-data.sql /docker-entrypoint-initdb.d/2.sql && \
    echo 'Finished Commands' && \
    docker-entrypoint.sh mysqld"
  environment:
    MYSQL_ROOT_PASSWORD: notSecureChangeMe
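After rebuilding, you can watch the entrypoint pick up the files that were moved into /docker-entrypoint-initdb.d before mysqld starts (a sketch, not from the original answer; the exact log wording can differ between image versions):
# Recreate the db service and follow its logs
docker-compose up -d --force-recreate db
docker-compose logs -f db | grep -i -E 'initdb|sakila'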
exec'ing into the container and checking existing databases:
root@c875454e15cb:/# mysql -u root -pnotSecureChangeMe -e "show databases"
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| sakila | <<<<<<<<<<<<<<<<<<<
+--------------------+
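For reference, the root shell and query above can be reproduced with something like this (assuming the compose file above, where the service is named db):
# Open a shell inside the running db service and list databases
docker-compose exec db bash -c 'mysql -u root -p"$MYSQL_ROOT_PASSWORD" -e "show databases"'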

This is the MariaDB definition in my docker-compose.yaml and I don't have a problem with it.
services:
  mariadb:
    image: mariadb:10.6-focal
    restart: always
    ports:
      - 3306:3306
    environment:
      MYSQL_ROOT_PASSWORD: <password>
      MYSQL_DATABASE: <database>
    volumes:
      - mariadb-data:/var/lib/mysql

Database is not created on docker compose up -d

I'm following the MariaDB docs. They say that the database should be created if a .sql file is found in /docker-entrypoint-initdb.d.
I'm working on an Ubuntu Server in an Oracle VirtualBox VM.
My docker-compose.yml looks like this:
version: "3.9"
services:
db:
image: mariadb:10
container_name: mariadb
ports:
- 3306:3306
environment:
- MYSQL_USER=user
- MYSQL_ROOT_PASSWORD=password
- MYSQL_PASSWORD=password
- MARIADB_DATABASE=database // tried with MYSQL_DATABASE and without this line
volumes:
- "db_data:/var/lib/mysql"
- ".database/initdb/dump.sql:/docker-entrypoint-initdb.d/initdb.sql"
# networks:
# - network
volumes:
db_data:
My initdb.sql looks like this (the one that should work in the end looks different, but for simplicity I reduced it to the bare minimum and could not even get this simple one working):
CREATE DATABASE NEWDB;
I honestly don't know where to look or what to do now, because everywhere I looked for a possible solution I found that this is the bare-minimum example that should work.
I tried restarting Docker, deleted all containers, images and volumes, and modified the initdb.sql into:
CREATE USER user WITH PASSWORD 'password';
CREATE DATABASE IF NOT EXISTS database;
GRANT ALL PRIVILEGES ON DATABASE database TO user;
but the database is not initialized when I run docker compose up.
I looked up the container and the initdb.sql was there.
EDIT: It somehow worked when I ran docker compose up with MARIADB_DATABASE=database, but the initdb.sql script still doesn't run, and that's the most important part because it sets up the whole database.
(NOTE: On top of that, I want to set up another PHP container that runs a PHP script to collect data that is stored in the MariaDB container above. The MariaDB instance is connected to a website that reads data from the container.)
Well, I'm using the following stack and it works fine for me.
php-apache:
This is an Apache server that runs all my PHP scripts. You can place your scripts in the ./src directory and it will automatically be mounted into the DocumentRoot directory of the Apache server.
db:
This is the latest Docker image of MariaDB.
adminer:
This is the lightweight database browser which I use for creating and altering my databases. You can just visit localhost:8081 and then enter the following credentials. It makes managing the databases much simpler.
username: root
password: example
version: '3.8'
services:
  php-apache:
    container_name: php-apache
    build:
      context: .
      dockerfile: Dockerfile
    image: php:8.0-apache
    volumes:
      - ./src:/var/www/html/
    ports:
      - 8080:80
  db:
    image: mariadb
    restart: always
    environment:
      MARIADB_ROOT_PASSWORD: example
  adminer:
    image: adminer
    restart: always
    ports:
      - 8081:8080
Dockerfile:
This is a simple Docker image extended from the base php:8.0-apache image, with the MySQL extensions installed for PDO support.
FROM php:8.0-apache
RUN docker-php-ext-install pdo_mysql
RUN docker-php-ext-install mysqli && docker-php-ext-enable mysqli
RUN apt-get update && apt-get upgrade -y
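If it helps, here is a quick check (not part of the original answer) that the extensions installed by this Dockerfile are actually loaded in the running container:
# List the loaded PHP modules inside the php-apache service and filter for the MySQL ones
docker-compose exec php-apache php -m | grep -i -E 'pdo_mysql|mysqli'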
P.S.:
Here you'll have to create all your databases manually via the Adminer GUI. But if you prefer SQL queries via initdb.sql, be my guest. I've just provided this configuration as a suggestion.
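If you'd rather create a database from the command line instead of the Adminer GUI, a sketch using the root password from the compose file above (the database name mydb is just an example):
# Create a database in the running MariaDB service without opening Adminer
docker-compose exec db mysql -uroot -pexample -e "CREATE DATABASE IF NOT EXISTS mydb;"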
I came up with a solution. I created a base Laravel project (installed with: curl -s "https://laravel.build/project-name?with=mariadb" | sudo bash) and modified it a little bit. So here's the docker-compose.yml:
# For more information: https://laravel.com/docs/sail
version: '3'
services:
  laravel.test:
    build:
      context: ./vendor/laravel/sail/runtimes/8.1
      dockerfile: Dockerfile
      args:
        WWWGROUP: '${WWWGROUP}'
    image: sail-8.1/app
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    ports:
      - '${APP_PORT:-80}:80'
      - '${VITE_PORT:-5173}:${VITE_PORT:-5173}'
    environment:
      WWWUSER: '${WWWUSER}'
      LARAVEL_SAIL: 1
      XDEBUG_MODE: '${SAIL_XDEBUG_MODE:-off}'
      XDEBUG_CONFIG: '${SAIL_XDEBUG_CONFIG:-client_host=host.docker.internal}'
    volumes:
      - '.:/var/www/html'
    networks:
      - sail
    depends_on:
      - mariadb
  mariadb:
    image: 'mariadb:10'
    container_name: 'mariadb-10'
    ports:
      - '${FORWARD_DB_PORT:-3306}:3306'
    environment:
      MYSQL_ROOT_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ROOT_HOST: "%"
      MYSQL_DATABASE: '${DB_DATABASE}'
      MYSQL_USER: '${DB_USERNAME}'
      MYSQL_PASSWORD: '${DB_PASSWORD}'
      MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    volumes:
      - 'sail-mariadb:/var/lib/mysql'
      - './vendor/laravel/sail/database/mysql/create-testing-database.sh:/docker-entrypoint-initdb.d/10-create-testing-database.sh'
    networks:
      - sail
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-p${DB_PASSWORD}"]
      retries: 3
      timeout: 5s
networks:
  sail:
    driver: bridge
volumes:
  sail-mariadb:
    driver: local
Here you can see that the "10-create-testing-database.sh" is executed on startup. I tested this container and it created a database, so I just had to modify it a little bit and now the container creates a database and tables on container startup. Here's the "10-create-testing-database.sh":
#!/usr/bin/env bash
mysql --user=root --password="$MYSQL_ROOT_PASSWORD" <<-EOSQL
CREATE DATABASE IF NOT EXISTS database_name;
GRANT ALL PRIVILEGES ON \`testing%\`.* TO '$MYSQL_USER'@'%';
USE database_name;
CREATE TABLE IF NOT EXISTS table_name(
table_entries ...
);
EOSQL
I still don't know why my initial setup did not work. The only difference I see is that the working file is a .sh and the one that didn't work is a .sql (this does not make sense to me, but it is what it is).
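One possible explanation, offered as an assumption rather than a verified diagnosis: the MariaDB image's entrypoint only executes the files in /docker-entrypoint-initdb.d while it is initializing an empty data directory. If the db_data volume already held data from an earlier run, both .sql and .sh init files are skipped. A minimal way to test that (it deletes everything in the volume):
# Remove containers AND named volumes so the next start re-runs the init scripts
docker compose down -v
docker compose up -d
docker compose logs db | grep -i 'docker-entrypoint-initdb.d'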
Dockerfile:
FROM ubuntu:22.04
LABEL maintainer="Taylor Otwell"
ARG WWWGROUP
ARG NODE_VERSION=16
ARG POSTGRES_VERSION=14
WORKDIR /var/www/html
ENV DEBIAN_FRONTEND noninteractive
ENV TZ=UTC
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime && echo $TZ > /etc/timezone
RUN apt-get update \
&& apt-get install -y gnupg gosu curl ca-certificates zip unzip git supervisor sqlite3 libcap2-bin libpng-dev python2 \
&& mkdir -p ~/.gnupg \
&& chmod 600 ~/.gnupg \
&& echo "disable-ipv6" >> ~/.gnupg/dirmngr.conf \
&& echo "keyserver hkp://keyserver.ubuntu.com:80" >> ~/.gnupg/dirmngr.conf \
&& gpg --recv-key 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c \
&& gpg --export 0x14aa40ec0831756756d7f66c4f4ea0aae5267a6c > /usr/share/keyrings/ppa_ondrej_php.gpg \
&& echo "deb [signed-by=/usr/share/keyrings/ppa_ondrej_php.gpg] https://ppa.launchpadcontent.net/ondrej/php/ubuntu jammy main" > /etc/apt/sources.list.d/ppa_ondrej_php.list \
&& apt-get update \
&& apt-get install -y php8.1-cli php8.1-dev \
php8.1-pgsql php8.1-sqlite3 php8.1-gd \
php8.1-curl \
php8.1-imap php8.1-mysql php8.1-mbstring \
php8.1-xml php8.1-zip php8.1-bcmath php8.1-soap \
php8.1-intl php8.1-readline \
php8.1-ldap \
php8.1-msgpack php8.1-igbinary php8.1-redis php8.1-swoole \
php8.1-memcached php8.1-pcov php8.1-xdebug \
&& php -r "readfile('https://getcomposer.org/installer');" | php -- --install-dir=/usr/bin/ --filename=composer \
&& curl -sLS https://deb.nodesource.com/setup_$NODE_VERSION.x | bash - \
&& apt-get install -y nodejs \
&& npm install -g npm \
&& curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | gpg --dearmor | tee /usr/share/keyrings/yarn.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/yarn.gpg] https://dl.yarnpkg.com/debian/ stable main" > /etc/apt/sources.list.d/yarn.list \
&& curl -sS https://www.postgresql.org/media/keys/ACCC4CF8.asc | gpg --dearmor | tee /usr/share/keyrings/pgdg.gpg >/dev/null \
&& echo "deb [signed-by=/usr/share/keyrings/pgdg.gpg] http://apt.postgresql.org/pub/repos/apt jammy-pgdg main" > /etc/apt/sources.list.d/pgdg.list \
&& apt-get update \
&& apt-get install -y yarn \
&& apt-get install -y mysql-client \
&& apt-get install -y postgresql-client-$POSTGRES_VERSION \
&& apt-get -y autoremove \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN setcap "cap_net_bind_service=+ep" /usr/bin/php8.1
RUN groupadd --force -g $WWWGROUP sail
RUN useradd -ms /bin/bash --no-user-group -g $WWWGROUP -u 1337 sail
COPY start-container /usr/local/bin/start-container
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY php.ini /etc/php/8.1/cli/conf.d/99-sail.ini
RUN chmod +x /usr/local/bin/start-container
EXPOSE 8000
ENTRYPOINT ["start-container"]

How to access dockerized rails server using "localhost"

I don't play with Docker often, so I'm really confused here.
Here is my Dockerfile:
ARG RUBY_VERSION
FROM ruby:${RUBY_VERSION}-slim-buster
ARG PG_MAJOR
ARG NODE_MAJOR
ARG BUNDLER_VERSION
ARG YARN_VERSION
# common dependencies
RUN apt-get update -qq \
&& DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
build-essential \
gnupg2 \
curl \
less \
git \
&& apt-get clean \
&& rm -fr /var/cache/apt/archives/* \
&& rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/* \
&& truncate -s 0 /var/log/*log
# add postgresql to sources list
RUN curl -sSL https://www.postgresql.org/media/keys/ACCC4CF8.asc | apt-key add - \
&& echo 'deb http://apt.postgresql.org/pub/repos/apt/ buster-pgdg main' $PG_MAJOR > /etc/apt/sources.list.d/pgdg.list
# add nodejs to sources list
RUN curl -sL https://deb.nodesource.com/setup_$NODE_MAJOR.x | bash -
# add yarn to sources list
RUN curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - \
&& echo 'deb http://dl.yarnpkg.com/debian/ stable main' > /etc/apt/sources.list.d/yarn.list
# application dependencies
# we use an external aptfile for that
COPY Aptfile /tmp/Aptfile
RUN apt-get update -qq && DEBIAN_FRONTEND=noninteractive apt-get -yq dist-upgrade && \
DEBIAN_FRONTEND=noninteractive apt-get install -yq --no-install-recommends \
libpq-dev \
postgresql-client-$PG_MAJOR \
nodejs \
yarn=$YARN_VERSION-1 \
$(cat /tmp/Aptfile | xargs) && \
apt-get clean && \
rm -fr /var/lib/apt/lists/* /tmp/* /var/tmp/* && \
truncate -s 0 /var/log/*log
# configure bundler
ENV LANG=C.UTF-8 \
BUNDLE_JOBS=4 \
BUNDLE_RETRY=3
# upgrade rubygems and install required bundler version
RUN gem update --system && \
gem install bundler:${BUNDLER_VERSION}
# create a directory for the app code
RUN mkdir -p /app
WORKDIR /app
And here is my docker-compose.yml:
version: '3.8'
services:
  app: &app
    build:
      context: .dockerdev
      dockerfile: Dockerfile
      args:
        BUNDLER_VERSION: '2.1.4'
        NODE_MAJOR: '11'
        PG_MAJOR: '13'
        RUBY_VERSION: '2.7.2'
        YARN_VERSION: '1.22.5'
    image: example-dev:1.0.0
    tmpfs:
      - /tmp
  backend: &backend
    <<: *app
    stdin_open: true
    tty: true
    volumes:
      - .:/app:cached
      - rails_cache:/app/tmp/cache
      - bundle:/usr/local/bundle
      - node_modules:/app/node_modules
      - packs:/app/public/packs
      - .dockerdev/.psqlrc:/root/.psqlrc:ro
    environment:
      - NODE_ENV=development
      - RAILS_ENV=${RAILS_ENV:-development}
      - REDIS_URL=redis://redis:6379/
      - DATABASE_URL=postgres://postgres:postgres@postgres:5432
      - BOOTSNAP_CACHE_DIR=/usr/local/bundle/_bootsnap
      - WEBPACKER_DEV_SERVER_HOST=webpacker
      - HISTFILE=/app/log/.bash_history
      - PSQL_HISTFILE=/app/log/.psql_history
      - EDITOR=vi
      - MALLOC_ARENA_MAX=2
      - WEB_CONCURRENCY=${WEB_CONCURRENCY:-1}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
  runner:
    <<: *backend
    command: /bin/bash
    ports:
      - '3000:3000'
      - '3002:3002'
  rails:
    <<: *backend
    command: bundle exec rails server -b 0.0.0.0
    ports:
      - '3000:3000'
  sidekiq:
    <<: *backend
    command: bundle exec sidekiq -C configs/sidekiq.yml
  postgres:
    image: postgres:13.1
    volumes:
      - .psqlrc:/root/.psqlrc:ro
      - postgres:/var/lib/postgresql/data
      - ./log:/root/log:cached
      - ./.dockerdev/init.sql:/docker-entrypoint-initdb.d/init.sql
    environment:
      - PSQL_HISTFILE=/root/log/.psql_history
      - POSTGRES_PASSWORD=postgres
    ports:
      - 5432
    healthcheck:
      test: pg_isready -U postgres -h 127.0.0.1
      interval: 5s
  redis:
    image: redis:5.0-alpine
    volumes:
      - redis:/data
    ports:
      - 6379
    healthcheck:
      test: redis-cli ping
      interval: 1s
      timeout: 3s
      retries: 30
  webpacker:
    <<: *app
    command: ./bin/webpack-dev-server
    ports:
      - '3035:3035'
    volumes:
      - .:/app:cached
      - bundle:/usr/local/bundle
      - node_modules:/app/node_modules
      - packs:/app/public/packs
    environment:
      - NODE_ENV=${NODE_ENV:-development}
      - RAILS_ENV={RAILS_ENV:-development}
      - WEBPACKER_DEV_SERVER_HOST=0.0.0.0
volumes:
  postgres:
  redis:
  bundle:
  node_modules:
  rails_cache:
  packs:
Everything works just fine: I can run my runner, install gems, play with the db, run rails, etc.
Until I try to reach http://localhost:3000, which only shows:
This site can’t be reached
localhost refused to connect.
Here are the steps I take to run the rails server:
$ docker-compose run --rm rails
Creating okamii-saas_rails_run ... done
=> Booting Puma
=> Rails 6.0.3.4 application starting in development
=> Run `rails server --help` for more startup options
Puma starting in single mode...
* Version 4.3.6 (ruby 2.7.2-p137), codename: Mysterious Traveller
* Min threads: 5, max threads: 5
* Environment: development
* Listening on tcp://0.0.0.0:3000
Use Ctrl-C to stop
And here is the result of docker-compose ps:
Name Command State Ports
------------------------------------------------------------------------------------------------------------
okamii-saas_postgres_1 docker-entrypoint.sh postgres Up (healthy) 0.0.0.0:32789->5432/tcp
okamii-saas_rails_run_6fff67202995 bundle exec rails server - ... Up
okamii-saas_redis_1 docker-entrypoint.sh redis ... Up (healthy) 0.0.0.0:32788->6379/tcp
I have the feeling that the empty Ports column for okamii-saas_rails_run_6fff67202995 is a sign something is wrong, but I don't know why it is empty or what I am supposed to do about it. (cf. EDIT 1)
As a note, I know the title says "using localhost", but I really can't access it at all AFAIK :)
===
EDIT 1:
That's not entirely true. I figured out that by adding EXPOSE 3000 to my Dockerfile, docker-compose ps will show something in the Ports column for my container, but that did not change things a bit.
Here is an updated view of docker-compose ps when using EXPOSE 3000:
Name Command State Ports
------------------------------------------------------------------------------------------------------------
okamii-paas_postgres_1 docker-entrypoint.sh postgres Up (healthy) 0.0.0.0:32771->5432/tcp
okamii-paas_rails_run_d812907346b4 bundle exec rails server - ... Up 3000/tcp
okamii-paas_redis_1 docker-entrypoint.sh redis ... Up (healthy) 0.0.0.0:32770->6379/tcp
EDIT 2:
From what I can read in the docs about EXPOSE, it only acts as documentation. It does not do anything else, which explains why using it does not change anything.
EDIT 3:
I just tried running docker-compose up -d rails instead of docker-compose run rails, and this message appeared:
$ dc up -d rails
Creating network "okamii-paas_default" with the default driver
Creating okamii-paas_postgres_1 ... done
Creating okamii-paas_redis_1 ... done
Creating okamii-paas_rails_1 ...
Creating okamii-paas_rails_1 ... error
ERROR: for okamii-paas_rails_1 Cannot start service rails: driver failed programming external connectivity on endpoint okamii-paas_rails_1 (5d07dfedfc5c979133ce61a237327edb149a0a6793a85f61f6ad8218a60a510b): Bind for 0.0.0.0:3000 failed: port is already allocated
ERROR: for rails Cannot start service rails: driver failed programming external connectivity on endpoint okamii-paas_rails_1 (5d07dfedfc5c979133ce61a237327edb149a0a6793a85f61f6ad8218a60a510b): Bind for 0.0.0.0:3000 failed: port is already allocated
ERROR: Encountered errors while bringing up the project.
I don't understand where the conflict comes from.
After @dbugger's suggestion, I tweaked the port publication from "3000:3000" to "3000". Now something shows in the Ports column, but of course the mapping is wrong.
Name Command State Ports
------------------------------------------------------------------------------------------------
okamii-paas_postgres_1 docker-entrypoint.sh postgres Up (healthy) 0.0.0.0:32784->5432/tcp
okamii-paas_rails_1 bundle exec rails server - ... Up 0.0.0.0:32786->3000/tcp
okamii-paas_redis_1 docker-entrypoint.sh redis ... Up (healthy) 0.0.0.0:32785->6379/tcp
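For what it's worth, docker-compose run creates a one-off container and does not publish the ports declared under the service's ports: section unless --service-ports is passed, whereas docker-compose up does publish them. A sketch of what that would look like with the compose file above:
# One-off container, but with the service's declared ports (3000:3000) published
docker-compose run --rm --service-ports rails
# Or run it as a normal service; note that the runner service also publishes 3000,
# which would explain the "port is already allocated" error when both are started
docker-compose up rails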

CircleCI 2.1 build failing

I am having some issues setting up my CircleCI config.yml file to accommodate a Cypress e2e test after I upgraded it to version 2.1. It keeps failing with the following error:
#!/bin/sh -eo pipefail
# ERROR IN CONFIG FILE:
# [#/jobs/build] 0 subschemas matched instead of one
# 1. [#/jobs/build] only 1 subschema matches out of 2
# | 1. [#/jobs/build] 2 schema violations found
# | | 1. [#/jobs/build] extraneous key [branches] is not permitted
# | | | Permitted keys:
# | | | - description
# | | | - parallelism
# | | | - macos
# | | | - resource_class
# | | | - docker
# | | | - steps
# | | | - working_directory
# | | | - machine
# | | | - environment
# | | | - executor
# | | | - shell
# | | | - parameters
# | | | Passed keys:
# | | | - working_directory
# | | | - docker
# | | | - steps
# | | | - branches
# | | 2. [#/jobs/build/docker/0] extraneous key [env] is not permitted
# | | | Permitted keys:
# | | | - image
# | | | - name
# | | | - entrypoint
# | | | - command
# | | | - user
# | | | - environment
# | | | - aws_auth
# | | | - auth
# | | | Passed keys:
# | | | - image
# | | | - env
# 2. [#/jobs/build] expected type: String, found: Mapping
# | Job may be a string reference to another job
#
# -------
# Warning: This configuration was auto-generated to show you the message above.
# Don't rerun this job. Rerunning will have no effect.
false
This is my yml file:
version: 2.1
jobs:
  build:
    working_directory: ~/myapp-web
    docker:
      - image: node:10.13.0-stretch
        env:
          - DISPLAY=:99
          - CHROME_BIN=/usr/bin/google-chrome
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "package.json" }}
            # fallback to using the latest cache if no exact match is found
            - v1-dependencies-
      - run:
          name: Install Dependencies
          command: |
            npm install -g @angular/cli
            npm install
            npm install -g firebase-tools
            apt-get -y -qq update
            apt-get -y -qq install gconf-service libasound2 libatk1.0-0 libc6 libcairo2 libcups2 libdbus-1-3 libexpat1 libfontconfig1 libgcc1 libgconf-2-4 libgdk-pixbuf2.0-0 libglib2.0-0 libgtk-3-0 libnspr4 libpango-1.0-0 libpangocairo-1.0-0 libstdc++6 libx11-6 libx11-xcb1 libxcb1 libxcomposite1 libxcursor1 libxdamage1 libxext6 libxfixes3 libxi6 libxrandr2 libxrender1 libxss1 libxtst6 ca-certificates fonts-liberation libappindicator1 libnss3 lsb-release xdg-utils wget
            if [[ "$CIRCLE_BRANCH" == "master" ]]; then
              apt-get -y -qq update
              apt-get -y -qq install python-dev
              curl -O https://bootstrap.pypa.io/get-pip.py
              python get-pip.py --user
              echo 'export PATH=/root/.local/bin:$PATH' >> ~/.bash_profile
              source ~/.bash_profile
              pip install awscli --upgrade --user
              ~/.local/bin/aws configure set default.s3.signature_version s3v4
            fi
            cd /root/myapp-web/src/app/functions/ && npm install
      - save_cache:
          paths:
            - node_modules
          key: v1-dependencies-{{ checksum "package.json" }}
      - run:
          name: Run Tests
          command:
            npm run test-headless
      - run:
          name: Deploy to AWS
          command: |
            if [[ "$CIRCLE_BRANCH" == "master" ]]; then
              ng build --prod --configuration=production --progress=false
              ~/.local/bin/aws --region eu-west-2 s3 sync /root/myapp-web/dist/myapp-web/ s3://$AWS_BUCKET_TARGET --delete --exclude '.git/*'
            fi
      - run:
          name: Deploy to Firebase
          command: |
            cd /root/myapp-web/src/app/functions/
            if [[ "$CIRCLE_BRANCH" == "develop" ]]; then
              firebase use myapp-dev
            fi
            if [[ "$CIRCLE_BRANCH" == "master" ]]; then
              firebase use myapp-live
            fi
            firebase deploy --token=$FIREBASE_TOKEN --non-interactive
    branches:
      only:
        - develop
        - master
orbs:
  cypress: cypress-io/cypress@1
workflows:
  test_then_build:
    jobs:
      - cypress/run:
          start: npm run serve
          wait-on: 'http://localhost:4200'
I guess the location where you are filtering branches is wrong. You should filter branches in the workflow, not in the job. I also haven't worked with orbs, so I'm not sure about the orbs location either.
branches:
  only:
    - develop
    - master
This may help: https://support.circleci.com/hc/en-us/articles/115015953868-Filter-branches-for-jobs-and-workflows
You can only filter branches in jobs if you have only one job and you aren't using the workflows keyword.
i.e.
jobs:
  build:
    branches:
      only:
        - master
        - /rc-.*/
Otherwise you need to use it like so with workflows:
workflows:
  version: 2
  build:
    jobs:
      - test:
          filters:
            branches:
              only:
                - master
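As a side note (not from the original answers), schema errors like the one above can usually be caught before pushing by validating the config locally, assuming the CircleCI CLI is installed:
# Validate the configuration file against the 2.1 schema
circleci config validate .circleci/config.yml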

Can't get dep and dockerize working together in docker-compose (but they work separately). Why?

I have a curious situation where my docker-compose build won't complete when I use dockerize to wait for databases etc. to be ready and dep to load my Go dependencies.
Here's an extract from docker-compose.yml (there are mosquitto, postgres, and python containers in addition to the golang container shown below):
version: '3.3'
services:
  foobar_container:
    image: foobar_image
    container_name: foobar
    build:
      context: ./build_foobar
      dockerfile: Dockerfile.foobar
    #command: dockerize -wait tcp://mosquitto:1883 -wait tcp://postgres:5432 -timeout 200s /go/src/foobar/main
    volumes:
      - ./foobar:/go
    stdin_open: true
    tty: true
    external_links:
      - mosquitto
      - postgres
    ports:
      - 1833
      - 8001
    depends_on:
      - mosquitto
      - postgres
Here's my Dockerfile.foobar
FROM golang:latest
WORKDIR /go
RUN apt-get update && apt-get install -y wget mosquitto-clients net-tools
ENV DOCKERIZE_VERSION v0.6.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
ADD foobar.sh /foobar.sh
#RUN go build main.go
RUN chmod +x /foobar.sh
Here's my build script foobar.sh:
#!/bin/bash
mkdir -p /go/bin # required directory that may have been overwriten by docker-compose `volumes` param
echo "++++++++ Downloading Golang dependencies ... ++++++++"
cd /go/src/foobar
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
echo "++++++++ Installing Golang dependencies ... ++++++++"
dep ensure
echo "++++++++ Testing MQTT message broker ... ++++++++"
until [[ $(mosquitto_sub -h "mosquitto" -t '$SYS/#' -C 1 | cut -c 1-9) = "mosquitto" ]]; do
echo "++++++++ Message broker is not ready. Waiting one second... ++++++++"
sleep 1
done
echo "++++++++ Building application... ++++++++"
go build main.go
If I uncomment the command line of docker-compose.yml, my foobar.sh won't run past the curl line. No error is output; the execution just stops.
If I comment out everything from curl onwards and uncomment the command line, the setup runs to completion (however the foobar container needs to be started manually). My python container (which depends on the postgres, go, and mosquitto containers) sets up fine.
What's going wrong?
There are a couple of things I found. First, the execution order: you must ensure foobar.sh gets executed first. As another recommendation, I wouldn't override the entire /go folder inside the container using volumes; instead use a subfolder, e.g. /go/github.com/my-project.
I got an app running using this configuration, based on yours:
build_foobar/Dockerfile.foobar:
FROM golang:latest
WORKDIR /go
RUN apt-get update && apt-get install -y wget mosquitto-clients net-tools
ENV DOCKERIZE_VERSION v0.6.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
ADD foobar.sh /foobar.sh
# RUN go build main.go
RUN chmod +x /foobar.sh
build_foobar/foobar.sh:
#!/bin/bash
# mkdir -p /go/bin # required directory that may have been overwriten by docker-compose `volumes` param
echo "++++++++ Downloading Golang dependencies ... ++++++++"
cd /go/src/foobar
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
echo "++++++++ Installing Golang dependencies ... ++++++++"
dep ensure
echo "++++++++ Testing MQTT message broker ... ++++++++"
until [[ $(mosquitto_sub -h "mosquitto" -t '$SYS/#' -C 1 | cut -c 1-9) = "mosquitto" ]]; do
echo "++++++++ Message broker is not ready. Waiting one second... ++++++++"
sleep 1
done
echo "++++++++ Building application... ++++++++"
go build main.go
dockerize -wait tcp://mosquitto:1883 -wait tcp://postgres:5432 -timeout 200s /go/src/foobar/main
foobar/main.go: place your app main file
docker-compose.yml:
version: '3.3'
services:
  foobar_container:
    image: foobar_image
    container_name: foobar
    build:
      context: ./build_foobar
      dockerfile: Dockerfile.foobar
    # command: dockerize -wait tcp://mosquitto:1883 -wait tcp://postgres:5432 -timeout 200s /go/src/foobar/main
    # command: /bin/bash
    command: /foobar.sh
    volumes:
      - ./foobar:/go/src/foobar
    stdin_open: true
    tty: true
    external_links:
      - mosquitto
      - postgres
    depends_on:
      - mosquitto
      - postgres
    ports:
      - 1833
      - 8001
  mosquitto:
    image: eclipse-mosquitto
  postgres:
    image: postgres
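A possible way to bring this up and watch the build script run, assuming the layout above (not from the original answer):
# Rebuild the image and start the stack; foobar.sh runs as the container's command
docker-compose up --build
# In another terminal, follow just the Go container's output
docker-compose logs -f foobar_container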

Docker compose connection issue

I'm trying to run a Rails application in Docker, but I have an issue with the docker-compose network, I think...
My Dockerfile looks like this:
FROM ruby:2.3-slim
RUN apt-get update \
&& apt-get install -qq -y --no-install-recommends \
build-essential \
nodejs \
libpq-dev \
git \
tzdata \
libxml2-dev \
libxslt-dev \
ssh \
&& rm -rf /var/lib/apt/lists/*
ENV APP_HOME /var/apps/books-store
RUN mkdir -p $APP_HOME
WORKDIR $APP_HOME
ENV GEM_HOME /var/apps/books-store/vendor/bundle
ENV PATH $GEM_HOME/bin:$PATH
ENV BUNDLE_PATH $GEM_HOME
ENV BUNDLE_BIN $BUNDLE_PATH/bin
EXPOSE 3000
My docker-compose.yml looks like this:
version: '2'
services:
  database:
    image: postgres
    volumes:
      - ./data/pgdata:/pgdata
    ports:
      - '5555:5432'
    env_file:
      - '.env'
  web:
    links:
      - database
    build: .
    volumes:
      - .:/var/apps/books-store
    ports:
      - '3000:3000'
    command: [bundle, exec, puma]
    env_file:
      - '.env'
    stdin_open: true
    tty: true
When I run docker-compose up, the logs show the Rails server starting successfully, but when I try to access localhost:3000 from the host browser it does not work, and I cannot understand why. What am I doing wrong?
docker ps:
407b59a2fa99 bookstore_web "bundle exec puma" About a minute ago Up 41 seconds 0.0.0.0:3000->3000/tcp bookstore_web
1837fc3e3f387 postgres "docker-entrypoint..." About a minute ago Up 49 seconds 0.0.0.0:5555->5432/tcp bookstore_database_1
docker-compose logs web:
Attaching to bookstore_web_1
web_1 | Puma starting in single mode...
web_1 | * Version 3.6.2 (ruby 2.3.3-p222), codename: Sleepy Sunday Serenity
web_1 | * Min threads: 0, max threads: 16
web_1 | * Environment: development
web_1 | * Listening on tcp://0.0.0.0:9292
web_1 | Use Ctrl-C to stop
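For what it's worth, the logs above show Puma listening on its default port 9292 inside the container, while the compose file publishes 3000:3000, so nothing is bound to container port 3000. A sketch of two ways to line these up, assuming the compose file above:
# Option 1: make Puma listen on 3000 so the existing '3000:3000' mapping works,
# e.g. command: [bundle, exec, puma, -b, tcp://0.0.0.0:3000] in docker-compose.yml,
# which is equivalent to running this inside the container:
bundle exec puma -b tcp://0.0.0.0:3000
# Option 2: keep Puma on 9292 and publish '3000:9292' instead, then browse to
# http://localhost:3000 on the host as before.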
