Caching node dependencies in Bitbucket Pipelines when docker build runs the Dockerfile - docker

I'm trying to use a custom dependency cache in Bitbucket Pipelines for a build that runs through a Dockerfile.
This is my bitbucket-pipelines.yml:
hml:
  - step:
      caches:
        - node-cache
      name: Tests and build
      services:
        - docker
      volumes:
        - "$BITBUCKET_CLONE_DIR/node_modules:/root/node_modules"
        - "$BITBUCKET_CLONE_DIR:/code"
      script:
        # - apt update
        # - apt-get install -y curl
        # - curl -L "https://github.com/docker/compose/releases/download/1.25.0/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
        # - chmod +x /usr/local/bin/docker-compose
        # - echo 'DIALOGFLOW_PROJECT_ID=a' > .env
        # - docker-compose up -d
        # - docker exec api npm run test
        - docker image inspect $(docker image ls -aq) --format {{.Size}} | awk '{totalSizeInBytes += $0} END {print totalSizeInBytes}'
        - echo $BITBUCKET_CLONE_DIR/node_modules
        - docker build -t cloudia/api .
        - docker save --output api.docker cloudia/api
      artifacts:
        - api.docker
  - step:
      name: Deploy
      services:
        - docker
      deployment: staging
      script:
        - apt-get update
        - apt-get install -y curl unzip python jq
        - curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
        - mv awscli-bundle.zip /tmp/awscli-bundle.zip
        - unzip /tmp/awscli-bundle.zip -d /tmp
        - /tmp/awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
        - docker load --input ./api.docker
        - chmod +x ./deploy_hml.sh
        - ./deploy_hml.sh
definitions:
  caches:
    node-cache: node_modules
  services:
    docker:
      memory: 2048
And here's my Dockerfile:
FROM node:10.15.3
WORKDIR /code
# Using some comments for tests
COPY [ "package*.json", "/code/" ]
RUN npm install --silent
COPY . /code
RUN npm run build
EXPOSE 5000
CMD npm start
My pipeline runs without problems, but the cache is not working. This is the message I get when the pipeline runs:
Cache "node-cache": Downloading
Cache "node-cache": Not found
How can I set up the cache when docker build runs the Dockerfile?
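A note on why the cache reports "Not found": Bitbucket Pipelines caches only pick up directories that exist on the build host when the step finishes. Because npm install runs inside docker build here, node_modules only ever exists in the image's filesystem, so ./node_modules on the host stays empty and there is nothing to upload. One workaround (a minimal sketch, assuming the image layers fit Bitbucket's cache size limit) is to drop the custom node-cache and use the predefined docker cache, which persists Docker layers between runs, so the RUN npm install layer is reused as long as package*.json is unchanged:

hml:
  - step:
      name: Tests and build
      services:
        - docker
      caches:
        - docker   # predefined cache: keeps image layers between runs
      script:
        - docker build -t cloudia/api .   # npm install layer is served from the layer cache
        - docker save --output api.docker cloudia/api
      artifacts:
        - api.docker

The other common route is to run npm install as a plain script step first (so node_modules lands in $BITBUCKET_CLONE_DIR and the node-cache definition actually has something to save) and COPY it into the image.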

Related

Travis-CI deployment (sort of/technically) fails because it cannot find a decrypted file

One of my five Docker images (the client/frontend) fails to build in the deploy section of my travis.yml config because it cannot find a decrypted file (mdb5-react-ui-kit-pro-essential-master.tar.gz). In my before_install script I build a test image, and there Dockerfile.travis successfully runs COPY mdb5-react-ui-kit-pro-essential-master.tar.gz ./. But when Travis proceeds to the deploy script, which runs deploy.sh using the Dockerfile, the same COPY mdb5-react-ui-kit-pro-essential-master.tar.gz ./ fails. I have added the unencrypted secret files to .gitignore, yet the Travis CI Linux VM seems to ignore this file and warns that I have untracked changes after decrypting the files.
travis.yml:
sudo: required
language: generic
services:
  - docker
env:
  global:
    - SHA=$(git rev-parse HEAD)
    - CLOUDSDK_CORE_DISABLE_PROMPTS=1
    - secure: <some secret>
before_install:
  - openssl aes-256-cbc -K $encrypted_<some key>_key -iv $encrypted_<some iv>_iv -in secretfiles.tar.enc -out secretfiles.tar -d
  - tar xvf secretfiles.tar
  - ls && cd packages/SPA && ls
  - cd ../../
  - sudo apt-get update
  - sudo apt-get -y install libxml2-dev
  - sudo apt install git openssh-client bash
  - git config --global user.name "theCosmicGame"
  - git config --global user.email bridgetmelvin42@gmail.com
  - mkdir -p -m 0600 ~/.ssh && ssh-keyscan git.mdbootstrap.com >> ~/.ssh/known_hosts
  - curl https://sdk.cloud.google.com | bash > /dev/null;
  - source $HOME/google-cloud-sdk/path.bash.inc
  - gcloud components update kubectl
  - gcloud auth activate-service-account --key-file service-account.json
  - gcloud config set project mavata
  - gcloud config set compute/zone us-east1-b
  - gcloud container clusters get-credentials mavata-cluster
  - echo "$DOCKER_PASSWORD" | docker login -u "$DOCKER_ID" --password-stdin
  - docker build -t bridgetmelvin/react-test -f ./packages/SPA/Dockerfile.travis ./packages/SPA --build-arg MDB_INSTALL="$MDB_INSTALL" --build-arg MDB_TOKEN="$MDB_TOKEN" --build-arg MDB_URL="$MDB_URL"
script:
  - docker run -e CI=true bridgetmelvin/react-test npm test
before_script:
  - echo \n .env \n mdb5-react-ui-kit-pro-essential-master.tar.gz \n secretfiles.tar \n service-account.json \n > .gitignore
deploy:
  provider: script
  script: bash ./deploy.sh # THIS FAILS
  on:
    branch: main
Dockerfile.travis:
FROM node:16-alpine as builder
WORKDIR /app
ENV DOCKER_TRAVIS_RUNNING=true
ENV SKIP_PREFLIGHT_CHECK=true
ARG MDB_INSTALL
ENV MDB_INSTALL $MDB_INSTALL
ARG MDB_URL
ARG MDB_TOKEN
ENV MDB_URL $MDB_URL
ENV MDB_TOKEN $MDB_TOKEN
COPY package.json ./
COPY postinstall.js ./
COPY postinstall-travis.sh ./
COPY mdb5-react-ui-kit-pro-essential-master.tar.gz ./ # THIS WORKS
COPY .env ./
RUN echo ${MDB_TOKEN}
RUN npm install
RUN npm install mdb5-react-ui-kit-pro-essential-master.tar.gz --save
RUN ls node_modules
COPY . .
RUN ls
# FROM nginx
EXPOSE 9000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
# COPY --from=builder /app/build /usr/share/nginx/html
CMD ["npm", "run", "start"]
Dockerfile (SPA):
# syntax=docker/dockerfile:1.2
FROM node:16-alpine as builder
WORKDIR /app
ENV DOCKER_RUNNING_PROD=true
RUN ls
COPY package.json ./
COPY postinstall.js ./
COPY mdb5-react-ui-kit-pro-essential-master.tar.gz ./ # THIS FAILS
RUN npm install
RUN npm install mdb5-react-ui-kit-pro-essential-master.tar.gz --save
RUN ls node_modules
COPY . .
RUN npm run build
FROM nginx
EXPOSE 9000
COPY ./nginx/default.conf /etc/nginx/conf.d/default.conf
COPY --from=builder /app/build /usr/share/nginx/html
project directory:
.github
packages/
|  infra/
|  k8s/
|  k8s-dev/
|  k8s-prod/
|  server/
|  SPA/
|  |  config/
|  |  nginx/
|  |  public/
|  |  src/
|  |  Dockerfile
|  |  Dockerfile.travis
|  |  (when decrypted) mdb5-react-ui-kit-pro-essential-master.tar.gz
|  |  package.json
|  |  postinstall.js
.gitignore
travis.yml
deploy.sh
secretfiles.tar.enc
service-account.json.enc
skaffold.yaml
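No answer is included for this one, but one plausible culprit, offered as a guess rather than a confirmed fix: Travis's deploy providers reset the working copy and strip untracked files before the deploy stage unless cleanup is skipped, which would remove the decrypted mdb5-react-ui-kit-pro-essential-master.tar.gz that before_install produced (and would match the "untracked changes" warning). A hedged sketch of the change:

deploy:
  provider: script
  script: bash ./deploy.sh
  skip_cleanup: true   # keep untracked files (the decrypted secrets) around for deploy
  on:
    branch: main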

Gitlab CI Split Docker Build Into Multiple Stages

I have a react/django app that's dockerized. There are two stages to the GitLab CI process, Build_Node and Build_Image. Build_Node just builds the React app and stores it as an artifact. Build_Image runs docker build to build the actual Docker image, and relies on the node step because it copies the built files into the image.
However, the build process for the image takes a long time whenever package dependencies have changed (apt or pip), because it has to reinstall everything.
Is there a way to split the docker build job into multiple parts, so that, say, the apt and pip packages install in the Dockerfile while build_node is still running, and the docker build finishes once that stage is done?
gitlab-ci.yml:
stages:
  - Build Node Frontend
  - Build Docker Image

services:
  - docker:18.03-dind

variables:
  DOCKER_DRIVER: overlay2
  DOCKER_HOST: tcp://localhost:2375
  DOCKER_TLS_CERTDIR: ""

build_node:
  stage: Build Node Frontend
  only:
    - staging
    - production
  image: node:14.8.0
  variables:
    GIT_SUBMODULE_STRATEGY: recursive
  artifacts:
    paths:
      - http
  cache:
    key: "node_modules"
    paths:
      - frontend/node_modules
  script:
    - cd frontend
    - yarn install --network-timeout 100000
    - CI=false yarn build
    - mv build ../http

build_image:
  stage: Build Docker Image
  only:
    - staging
    - production
  image: docker
  script:
    #- sleep 10000
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    #- docker pull $CI_REGISTRY_IMAGE:latest
    #- docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker build --network=host --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest
Dockerfile:
FROM python:3.7-slim
# Add user
ARG APP_USER=abc
RUN groupadd -r ${APP_USER} && useradd --no-log-init -r -g ${APP_USER} ${APP_USER}
WORKDIR /app
ENV PYTHONUNBUFFERED=1
EXPOSE 80
EXPOSE 8080
ADD requirements.txt /app/
RUN set -ex \
    && BUILD_DEPS=" \
       gcc \
    " \
    && RUN_DEPS=" \
       ffmpeg \
       postgresql-client \
       nginx \
       dumb-init \
    " \
    && apt-get update && apt-get install -y $BUILD_DEPS \
    && pip install --no-cache-dir --default-timeout=100000 -r /app/requirements.txt \
    && apt-get purge -y --auto-remove -o APT::AutoRemove::RecommendsImportant=false $BUILD_DEPS \
    && apt-get update && apt-get install -y --no-install-recommends $RUN_DEPS \
    && rm -rf /var/lib/apt/lists/*
# Set uWSGI settings
ENV UWSGI_WSGI_FILE=/app/api/api/wsgi.py UWSGI_HTTP=:8000 UWSGI_MASTER=1 UWSGI_HTTP_AUTO_CHUNKED=1 UWSGI_HTTP_KEEPALIVE=1 UWSGI_LAZY_APPS=1 UWSGI_WSGI_ENV_BEHAVIOR=holy PYTHONUNBUFFERED=1 UWSGI_WORKERS=2 UWSGI_THREADS=4
ENV UWSGI_STATIC_EXPIRES_URI="/static/.*\.[a-f0-9]{12,}\.(css|js|png|jpg|jpeg|gif|ico|woff|ttf|otf|svg|scss|map|txt) 315360000"
ENV PYTHONPATH=$PYTHONPATH:/app/api:/app
ENV DB_PORT=5432 DB_NAME=shittywizard DB_USER=shittywizard DB_HOST=localhost
ADD nginx.conf /etc/nginx/nginx.conf
# Set entrypoint
ADD entrypoint.sh /
RUN chmod 755 /entrypoint.sh
ENTRYPOINT ["dumb-init", "--", "/entrypoint.sh"]
ADD app.tar /app/
RUN python /app/api/manage.py collectstatic --noinput
Sure! Check out the GitLab docs on stages and on building Docker images with GitLab CI.
If you have multiple pipeline steps defined within a stage they will run in parallel. For example, the following pipeline would build the node and image artifacts in parallel and then build the final image using both artifacts.
stages:
  - build
  - bundle

build-node:
  stage: build
  script:
    - # steps to build node and push to artifact registry

build-base-image:
  stage: build
  script:
    - # steps to build image and push to artifact registry

bundle-node-in-image:
  stage: bundle
  script:
    - # pull image artifact
    - # download node artifact
    - # build image on top of base image with node artifacts embedded
Note that all the pushing and pulling and starting and stopping might not save you time depending on your image size relative to build time, but this will do what you're asking for.
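To make the parallel-stage advice concrete with the asker's own commented-out lines: a hedged sketch of build_image that seeds the layer cache from the last pushed image, so the apt and pip layers are reused whenever requirements.txt is unchanged (the || true keeps the very first run, when no latest image exists yet, from failing):

build_image:
  stage: Build Docker Image
  image: docker
  script:
    - tar -cvf app.tar api/ discordbot/ helpers/ http/
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    - docker pull $CI_REGISTRY_IMAGE:latest || true
    - docker build --network=host --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    - docker push $CI_REGISTRY_IMAGE:latest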

GCP Cloud Run error on deploying my Docker image to Google Container Registry / Cloud Run

I'm quite new to Docker and GCP and am trying to find a working way to deploy my Laravel app on GCP.
I already set up CI and selected "cloudbuild.yaml" as the build configuration. I followed innumerable tutorials and read the Google Container docs, so I created a "cloudbuild.yaml" that uses my docker-compose.yml to create my app's stack (app code, database, server).
During the Google Cloud Build process I get:
Step #0: Creating workspace_app_1 ...
Step #0: Creating workspace_web_1 ...
Step #0: Creating workspace_db_1 ...
Step #0: Creating workspace_app_1 ... done
Step #0: Creating workspace_web_1 ... done
Step #0: Creating workspace_db_1 ... done
Finished Step #0
Starting Step #1
Step #1: Already have image (with digest): gcr.io/cloud-builders/docker
Step #1: unable to prepare context: unable to evaluate symlinks in Dockerfile path: lstat /workspace/Dockerfile: no such file or directory
Finished Step #1
ERROR
ERROR: build step 1 "gcr.io/cloud-builders/docker" failed: step exited with non-zero status: 1
docker-compose.yml:
version: "3.8"
volumes:
php-fpm-socket:
db-store:
services:
app:
build:
context: .
dockerfile: ./infra/docker/php/Dockerfile
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
environment:
- DB_CONNECTION=mysql
- DB_HOST=db
- DB_PORT=3306
- DB_DATABASE=${DB_NAME:-laravel_local}
- DB_USERNAME=${DB_USER:-phper}
- DB_PASSWORD=${DB_PASS:-secret}
web:
build:
context: .
dockerfile: ./infra/docker/nginx/Dockerfile
ports:
- ${WEB_PORT:-80}:80
volumes:
- php-fpm-socket:/var/run/php-fpm
- ./backend:/work/backend
db:
build:
context: .
dockerfile: ./infra/docker/mysql/Dockerfile
ports:
- ${DB_PORT:-3306}:3306
volumes:
- db-store:/var/lib/mysql
environment:
- MYSQL_DATABASE=${DB_NAME:-laravel_local}
- MYSQL_USER=${DB_USER:-phper}
- MYSQL_PASSWORD=${DB_PASS:-secret}
- MYSQL_ROOT_PASSWORD=${DB_PASS:-secret}
cloudbuild.yaml
steps:
  # running docker-compose
  - name: 'docker/compose:1.28.4'
    args: ['up', '-d']
  # Build the container image
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '.']
  # Push the container image to Container Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/MY_PROJECT_ID/laravel-docker-1']
  # Deploy container image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'laravel-docker-1', '--image', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '--region', 'europe-west3', '--platform', 'managed']
images:
  - gcr.io/MY_PROJECT_ID/laravel-docker-1
What is wrong in this configuration?
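The lstat error itself has a narrow cause: step #1 runs docker build with context '.' and no -f flag, so it expects a Dockerfile at /workspace/Dockerfile, while this repo keeps its Dockerfiles under ./infra/docker/. Note also that Cloud Run deploys a single container, so the docker-compose up in step #0 contributes nothing to the deployed image. A hedged sketch of the minimal fix, assuming the PHP image is the one to ship, points the build at the existing Dockerfile:

steps:
  # Build from an explicit Dockerfile path instead of the (missing) root Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/MY_PROJECT_ID/laravel-docker-1', '-f', 'infra/docker/php/Dockerfile', '.']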
I solved this issue and deployed a running Laravel 8 application to Google Cloud with the following Dockerfile. PS: Any optimization regarding the FROM and RUN steps is appreciated:
#
# PHP Dependencies
#
FROM composer:2.0 as vendor
WORKDIR /app
COPY database/ database/
COPY composer.json composer.json
COPY composer.lock composer.lock
RUN composer install \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --no-dev \
    --prefer-dist
COPY . .
RUN composer dump-autoload
#
# Frontend
#
FROM node:14.9 as frontend
WORKDIR /app
COPY artisan package.json webpack.mix.js package-lock.json ./
RUN npm audit fix
RUN npm cache clean --force
RUN npm cache verify
RUN npm install -f
COPY resources/js ./resources/js
COPY resources/sass ./resources/sass
RUN npm run development
#
# Application
#
FROM php:7.4-fpm
WORKDIR /app
# Install PHP dependencies
RUN apt-get update -y && apt-get install -y build-essential libxml2-dev libonig-dev
RUN docker-php-ext-install pdo pdo_mysql opcache tokenizer xml ctype json bcmath pcntl
# Install Linux and Python dependencies
RUN apt-get install -y curl wget git file ruby-full locales vim
# Run definitions to make Brew work
RUN localedef -i en_US -f UTF-8 en_US.UTF-8
RUN useradd -m -s /bin/zsh linuxbrew && \
    usermod -aG sudo linuxbrew && \
    mkdir -p /home/linuxbrew/.linuxbrew && \
    chown -R linuxbrew: /home/linuxbrew/.linuxbrew
USER linuxbrew
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install.sh)"
USER root
#RUN chown -R $CONTAINER_USER: /home/linuxbrew/.linuxbrew
ENV PATH "$PATH:/home/linuxbrew/.linuxbrew/bin"
#Install Chrome
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN apt install -y ./google-chrome-stable_current_amd64.deb
# Install Python modules (dependencies) of scraper
RUN brew install python3
RUN pip3 install selenium
RUN pip3 install bs4
RUN pip3 install pandas
# Copy Frontend build
COPY --from=frontend app/node_modules/ ./node_modules/
COPY --from=frontend app/public/js/ ./public/js/
COPY --from=frontend app/public/css/ ./public/css/
COPY --from=frontend app/public/mix-manifest.json ./public/mix-manifest.json
# Copy Composer dependencies
COPY --from=vendor app/vendor/ ./vendor/
COPY . .
RUN cp /app/drivers/chromedriver /usr/local/bin
#COPY .env.prod ./.env
COPY .env.local-docker ./.env
# Copy the scripts to docker-entrypoint-initdb.d which will be executed on container startup
COPY ./docker/ /docker-entrypoint-initdb.d/
COPY ./docker/init_db.sql .
RUN php artisan config:cache
RUN php artisan route:cache
CMD php artisan serve --host=0.0.0.0 --port=8080
EXPOSE 8080
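Taking up the request for optimizations, one low-risk tweak (a sketch, not a drop-in rewrite): merge the consecutive apt-get layers into a single RUN that also clears the apt lists, and collapse the three pip3 installs into one, which trims layers and image size without changing behavior:

# one layer for system packages, with the apt cache cleaned up afterwards
RUN apt-get update -y \
    && apt-get install -y build-essential libxml2-dev libonig-dev \
       curl wget git file ruby-full locales vim \
    && rm -rf /var/lib/apt/lists/*
# one pip layer instead of three
RUN pip3 install selenium bs4 pandas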

How to get a report file out of Docker and into a GitLab repository

I created a Docker image that runs automated tests, and I run it from a GitLab script. Everything works except the report file: I cannot get it out of the container and into the repository, and the docker cp command is not working. My GitLab script and Dockerfile:
Gitlab script:
stages:
  - test

test:
  image: docker:latest
  services:
    - name: docker:dind
      entrypoint: ["env", "-u", "DOCKER_HOST"]
      command: ["dockerd-entrypoint.sh"]
  variables:
    DOCKER_HOST: tcp://docker:2375/
    DOCKER_DRIVER: overlay2
    # See https://github.com/docker-library/docker/pull/166
    DOCKER_TLS_CERTDIR: ""
  stage: test
  before_script:
    - docker login -u "xxx" -p "yyy" docker.io
  script:
    - docker run --name authContainer "rrr/image:0.0.1"
  after_script:
    - docker cp authContainer:/artifacts $CI_PROJECT_DIR/artifacts/
  artifacts:
    when: always
    paths:
      - $CI_PROJECT_DIR/artifacts/test-result.xml
    reports:
      junit:
        - $CI_PROJECT_DIR/artifacts/test-result.xml
Dockerfile:
FROM mcr.microsoft.com/dotnet/core/sdk:2.1
COPY /publish /Spinelle.AutomaticTests
WORKDIR /Spinelle.AutomaticTests
RUN apt-get update -y
RUN apt install unzip
RUN wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
RUN dpkg -i google-chrome-stable_current_amd64.deb; apt-get -fy install
RUN curl https://chromedriver.storage.googleapis.com/84.0.4147.30/chromedriver_linux64.zip -o /usr/local/bin/chromedriver
RUN unzip -o /usr/local/bin/chromedriver -d /Spinelle.AutomaticTests
RUN chmod 777 /Spinelle.AutomaticTests
CMD dotnet vstest /Parallel Spinelle.AutomaticTests.dll --TestAdapterPath:. --logger:"nunit;LogFilePath=/artifacts/test-result.xml;MethodFormat=Class;FailureBodyFormat=Verbose"
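No answer is included here, but two details of the after_script line are worth checking (a hedged sketch, assuming the tests really do write to /artifacts inside the container): docker cp fails when a destination path ending in / does not yet exist on the runner, and if it does exist, copying the /artifacts directory into artifacts/ nests it, leaving the XML at artifacts/artifacts/test-result.xml, where the artifacts: paths never look. Creating the directory first and copying the directory's contents with a trailing /. avoids both:

after_script:
  - mkdir -p "$CI_PROJECT_DIR/artifacts"
  # the trailing /. copies the directory's contents rather than the directory itself
  - docker cp authContainer:/artifacts/. "$CI_PROJECT_DIR/artifacts/"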

Can't get dep and dockerize working together in docker-compose (but they work separately). Why?

I have a curious situation where my docker-compose build won't complete when I use dockerize to wait for databases etc. to be ready and use dep to load my Go dependencies.
Here's an extract from docker-compose.yml (there are mosquitto, postgres, and python containers in addition to the golang container shown below)
version: '3.3'
services:
  foobar_container:
    image: foobar_image
    container_name: foobar
    build:
      context: ./build_foobar
      dockerfile: Dockerfile.foobar
    #command: dockerize -wait tcp://mosquitto:1883 -wait tcp://postgres:5432 -timeout 200s /go/src/foobar/main
    volumes:
      - ./foobar:/go
    stdin_open: true
    tty: true
    external_links:
      - mosquitto
      - postgres
    ports:
      - 1833
      - 8001
    depends_on:
      - mosquitto
      - postgres
Here's my Dockerfile.foobar
FROM golang:latest
WORKDIR /go
RUN apt-get update && apt-get install -y wget mosquitto-clients net-tools
ENV DOCKERIZE_VERSION v0.6.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
ADD foobar.sh /foobar.sh
#RUN go build main.go
RUN chmod +x /foobar.sh
Here's my build script foobar.sh:
#!/bin/bash
mkdir -p /go/bin # required directory that may have been overwritten by the docker-compose `volumes` param
echo "++++++++ Downloading Golang dependencies ... ++++++++"
cd /go/src/foobar
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
echo "++++++++ Installing Golang dependencies ... ++++++++"
dep ensure
echo "++++++++ Testing MQTT message broker ... ++++++++"
until [[ $(mosquitto_sub -h "mosquitto" -t '$SYS/#' -C 1 | cut -c 1-9) = "mosquitto" ]]; do
echo "++++++++ Message broker is not ready. Waiting one second... ++++++++"
sleep 1
done
echo "++++++++ Building application... ++++++++"
go build main.go
If I uncomment the command line of docker-compose.yml, foobar.sh won't run past the curl line. No error is output; execution just stops.
If I comment out everything from curl onwards and uncomment the command line, setup runs to completion (though the foobar container needs to be started manually). My python container (which depends on the postgres, go, and mosquitto containers) sets up fine.
What's going wrong?
There are a couple of things I found. First, the execution order: you must ensure foobar.sh gets executed first. As another recommendation, I wouldn't override the entire /go folder inside the container with a volume; instead, use a subfolder, e.g. /go/github.com/my-project.
I got an app running using this configuration, based on yours:
build_foobar/Dockerfile.foobar:
FROM golang:latest
WORKDIR /go
RUN apt-get update && apt-get install -y wget mosquitto-clients net-tools
ENV DOCKERIZE_VERSION v0.6.0
RUN wget https://github.com/jwilder/dockerize/releases/download/$DOCKERIZE_VERSION/dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& tar -C /usr/local/bin -xzvf dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz \
&& rm dockerize-linux-amd64-$DOCKERIZE_VERSION.tar.gz
ADD foobar.sh /foobar.sh
# RUN go build main.go
RUN chmod +x /foobar.sh
build_foobar/foobar.sh:
#!/bin/bash
# mkdir -p /go/bin # required directory that may have been overwritten by the docker-compose `volumes` param
echo "++++++++ Downloading Golang dependencies ... ++++++++"
cd /go/src/foobar
curl https://raw.githubusercontent.com/golang/dep/master/install.sh | sh
echo "++++++++ Installing Golang dependencies ... ++++++++"
dep ensure
echo "++++++++ Testing MQTT message broker ... ++++++++"
until [[ $(mosquitto_sub -h "mosquitto" -t '$SYS/#' -C 1 | cut -c 1-9) = "mosquitto" ]]; do
echo "++++++++ Message broker is not ready. Waiting one second... ++++++++"
sleep 1
done
echo "++++++++ Building application... ++++++++"
go build main.go
dockerize -wait tcp://mosquitto:1883 -wait tcp://postgres:5432 -timeout 200s /go/src/foobar/main
foobar/main.go: place your app main file
docker-compose.yml:
version: '3.3'
services:
  foobar_container:
    image: foobar_image
    container_name: foobar
    build:
      context: ./build_foobar
      dockerfile: Dockerfile.foobar
    # command: dockerize -wait tcp://mosquitto:1883 -wait tcp://postgres:5432 -timeout 200s /go/src/foobar/main
    # command: /bin/bash
    command: /foobar.sh
    volumes:
      - ./foobar:/go/src/foobar
    stdin_open: true
    tty: true
    external_links:
      - mosquitto
      - postgres
    depends_on:
      - mosquitto
      - postgres
    ports:
      - 1833
      - 8001
  mosquitto:
    image: eclipse-mosquitto
  postgres:
    image: postgres
