I get exit error 6 when building a docker container - docker

I am running sudo docker compose -f docker-compose.development.yml up --build to build my project container, but I'm getting this error:
failed to solve: executor failed running [/bin/sh -c curl -fLo install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh && chmod +x install.sh && sh install.sh && cp ./bin/air /bin/air]: exit code: 6
Here's my Dockerfile:
FROM golang:1.17.5-stretch
ARG GIT_DEPLOY_TOKEN_USER
ARG GIT_DEPLOY_TOKEN_PASSWORD
RUN apt-get update && apt-get upgrade -y && \
apt-get install -y git
# private gitlab package credentials
RUN go env -w GOPRIVATE=gitlab.com
RUN touch ~/.netrc
RUN chmod 600 ~/.netrc
RUN echo "machine gitlab.com login ${GIT_DEPLOY_TOKEN_USER} password ${GIT_DEPLOY_TOKEN_PASSWORD}" >> ~/.netrc
WORKDIR /app
# RUN go mod download
# # COPY the source code as the last step
# COPY . .
RUN curl -fLo install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh \
&& chmod +x install.sh && sh install.sh && cp ./bin/air /bin/air
RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.45.2
CMD air
and this is my docker-compose.development.yml file:
services:
graphql:
container_name: chimp-bff-development
build:
context: .
dockerfile: dev.Dockerfile
args:
GIT_DEPLOY_TOKEN_USER: $GIT_DEPLOY_TOKEN_USER
GIT_DEPLOY_TOKEN_PASSWORD: $GIT_DEPLOY_TOKEN_PASSWORD
ports:
- "$PORT:$PORT"
depends_on:
- redis
volumes:
- ./:/app
networks:
- chimpbff-bridge
redis:
hostname: $RDB_HOST_NAME
container_name: chimpbff_redis
build:
context: ./Docker/redis
args:
- APQ_RDB_USERNAME=$APQ_RDB_USERNAME
- APQ_RBD_PASSWORD=$APQ_RBD_PASSWORD
- RBD_PORT=$RBD_PORT
expose:
- $RBD_PORT
volumes:
- ./Docker/data/redis:/data
sysctls:
- net.core.somaxconn=511
# restart: always
logging:
driver: "json-file"
options:
max-file: "5"
max-size: "10m"
networks:
- chimpbff-bridge
networks:
chimpbff-bridge:
driver: bridge
driver_opts:
com.docker.network.bridge.name: chimp-bridge
What does exit code 6 mean, and what should I do to fix it?
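For reference, the exit code here comes from curl, not from Docker: curl's documented exit code 6 is CURLE_COULDNT_RESOLVE_HOST ("Couldn't resolve host"), meaning the RUN step could not look up raw.githubusercontent.com, which usually points to a DNS problem in the build environment rather than a bug in the Dockerfile itself. A quick way to reproduce it (assuming curl is installed; the hostname below is a deliberately unresolvable placeholder):

```shell
# Trigger curl exit code 6 by pointing it at an unresolvable host.
# `.invalid` is a reserved TLD that DNS resolvers must never resolve.
rc=0
curl -fsSo /dev/null https://no-such-host.invalid/ || rc=$?
echo "curl exit code: $rc"   # 6 when name resolution fails
```

If this is the cause, checking the Docker daemon's DNS configuration (or retrying on a network where the build host can resolve public names) is the usual next step.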

Related

Jenkins agent container is failing to init:

I have the following docker compose configuration:
version: '3.3'
services:
jenkins:
image: jenkins-ansible
build: ansible
restart: on-failure
privileged: true
user: root
ports:
- 8080:8080
- 5000:5000
container_name: jenkins
volumes:
- /home/juliano/workspace/docker-projects/jenkins/volume/:/var/jenkins_home
- /var/run/docker.sock:/var/run/docker.sock
- /usr/local/bin/docker:/usr/local/bin/docker
jenkins-agent-1:
build:
context: jenkins-agent
restart: on-failure
expose:
- "22"
container_name: jenkins-agent-1
environment:
- JENKINS_AGENT_SSH_PUBKEY=ssh-rsa omitted
- JAVA_HOME=/opt/java/openjdk/bin/java
depends_on:
- jenkins
volumes:
- /home/juliano/workspace/docker-projects/jenkins/volume/:/var/jenkins_home
jenkins-agent-2:
# image: jenkins/ssh-agent:jdk11
build:
context: jenkins-agent
restart: on-failure
expose:
- "22"
container_name: jenkins-agent-2
environment:
- JENKINS_AGENT_SSH_PUBKEY=ssh-rsa omitted
- JAVA_HOME=/opt/java/openjdk/bin/java
depends_on:
- jenkins
volumes:
- /home/juliano/workspace/docker-projects/jenkins/volume/:/var/jenkins_home
remote_host:
container_name: remote-host
image: remote-host
build:
context: ubuntu18.04
And I'm receiving the following error message:
+ env
+ [[ ssh-rsa omitted == ssh-* ]]
+ write_key 'ssh-rsa omitted'
+ local ID_GROUP
++ stat -c %U:%G /home/jenkins
+ ID_GROUP=jenkins:jenkins
+ mkdir -p /home/jenkins/.ssh
+ echo 'ssh-rsa omitted'
+ chown -Rf jenkins:jenkins /home/jenkins/.ssh
+ chmod 0700 -R /home/jenkins/.ssh
+ [[ '' == ssh-* ]]
+ env
+ grep _
/usr/local/bin/setup-sshd: line 54: /etc/environment: Permission denied
The jenkins-agent dockerfile is:
FROM jenkins/ssh-agent
USER root
RUN apt-get update && apt-get install python3 -y
RUN apt-get install curl -y
RUN apt-get install python3-distutils -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
python3 get-pip.py && \
pip install ansible --upgrade
USER jenkins
Previously, I was using jenkins/ssh-agent:jdk11 to build the agents and it was working well. Then I unsuccessfully tried to install Ansible into the agents through the jenkins-agent Dockerfile (receiving the aforementioned error). Now, even if I change jenkins-agent back to jenkins/ssh-agent:jdk11, I run into the same problem.
Could anyone kindly help me, please?
I changed the jenkins-agent/Dockerfile and removed USER root and USER jenkins.
Now it is working.
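Based on the fix above, the working jenkins-agent/Dockerfile would presumably look like this (a sketch, assuming the base image already builds as root, which is what the fix implies):

```dockerfile
# Sketch of the fixed jenkins-agent/Dockerfile with the USER lines removed,
# so the setup-sshd entrypoint keeps the root permissions it expects
# (e.g. to write /etc/environment) when the container starts.
FROM jenkins/ssh-agent
RUN apt-get update && apt-get install -y python3 curl python3-distutils
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
    python3 get-pip.py && \
    pip install ansible --upgrade
```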

I want to run a cron job with Docker (Alpine) and the whenever gem

Assumption
I'm using docker-compose and want to use the whenever gem to run a cron process that deletes records at a certain time in Rails. Upon research I found that I have to install and run cron inside Docker, but I can't find any information about cron processing in Rails on Alpine. Can anyone tell me how to do this?
What I want to achieve
I want to execute a specific process once a day.
Code
Here is my Dockerfile:
FROM ruby:2.7.1-alpine
ARG WORKDIR
ENV RUNTIME_PACKAGES="linux-headers libxml2-dev make gcc libc-dev nodejs tzdata postgresql-dev postgresql git" \
DEV_PACKAGES="build-base curl-dev" \
HOME=/${WORKDIR} \
LANG=C.UTF-8 \
TZ=Asia/Tokyo
RUN echo ${HOME}
WORKDIR ${HOME}
COPY Gemfile* ./
RUN apk update && \
apk upgrade && \
apk add --no-cache ${RUNTIME_PACKAGES} && \
apk add --virtual build-dependencies --no-cache ${DEV_PACKAGES} && \
bundle install -j4 && \
apk del build-dependencies
COPY . .
CMD ["rails", "server", "-b", "0.0.0.0"]
Here is my Docker Compose file:
version: '3.8'
services:
db:
image: postgres:12.3-alpine
environment:
TZ: UTC
PGTZ: UTC
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
volumes:
- ./api/tmp/db:/var/lib/postgresql/data
api:
build:
context: ./api
args:
WORKDIR: $WORKDIR
command: /bin/sh -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
environment:
POSTGRES_PASSWORD: $POSTGRES_PASSWORD
API_DOMAIN: "localhost:$FRONT_PORT"
APP_URL: "http://localhost:$API_PORT"
volumes:
- ./api:/$WORKDIR
depends_on:
- db
ports:
- "$API_PORT:$CONTAINER_PORT"
mailcatcher:
image: schickling/mailcatcher
ports:
- "1080:1080"
- "1025:1025"
front:
build:
context: ./front
args:
WORKDIR: $WORKDIR
CONTAINER_PORT: $CONTAINER_PORT
API_URL: "http://localhost:$API_PORT"
command: yarn run dev
volumes:
- ./front:/$WORKDIR
ports:
- "$FRONT_PORT:$CONTAINER_PORT"
depends_on:
- api
Actual processing
/config/schedule.rb
require File.expand_path(File.dirname(__FILE__) + "/environment")
ENV.each { |k, v| env(k, v) }
set :output, "#{Rails.root}/log/cron.log"
set :environment, :development
every 1.days do
runner "User.guest_reset"
end
What I tried
I did a lot of research and found plenty of information on using cron with apt, but could not find anything on using apk.
Separate cron into another service in your docker-compose.yml, using the same image as your Rails app image (built by the Dockerfile). Then run cron and whenever --update-crontab in the command.
docker-compose.yml
version: '3'
services:
app:
image: myapp
depends_on:
- 'db'
build:
context: .
command: bash -c "rm -f tmp/pids/server.pid &&
bundle exec rails s -p 3000 -b '0.0.0.0'"
volumes:
- ".:/myapp"
cron:
image: myapp
command: bash -c "touch log/cron.log && cron && whenever --update-crontab &&
crontab -l && tail -f log/cron.log"
volumes:
- '.:/myapp'
db:
image: postgres:13
ports: # 127.0.0.1 to only expose the port to loopback
- '127.0.0.1:5432:5432'
volumes:
- 'postgres_dev:/var/lib/postgresql/data'
Dockerfile
FROM ruby:3.0.1
RUN apt-get update -qq && apt-get install -y postgresql-client cron vim \
&& mkdir /myapp
WORKDIR /myapp
ENV BUNDLE_WITHOUT=development:test
COPY Gemfile Gemfile.lock ./
RUN bundle install --jobs 20 --retry 5
COPY package.json ./
RUN npm install --check-files --production
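The answer above uses Debian's cron package; on Alpine (as in the original Dockerfile), BusyBox ships its own crond in the base image, so a hypothetical equivalent of the cron service could look like this (a sketch, assuming the same myapp image name as the answer):

```yaml
# Hypothetical Alpine variant of the cron service above: BusyBox's crond is
# already present, so no apk package is needed -- just start the daemon.
# `crond -b` runs it in the background; `-l 8` sets the log level.
cron:
  image: myapp                 # same image as the Alpine-based app service
  command: sh -c "touch log/cron.log &&
    bundle exec whenever --update-crontab &&
    crond -b -l 8 &&
    tail -f log/cron.log"
  volumes:
    - '.:/myapp'
```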

'Can't open' error while using docker-compose in a dockerized Jenkins container

I want to run docker-compose inside a dockerized Jenkins container. My docker-compose file works on my local machine, but when I try to run CD in Jenkins it fails with this error:
gunicorn-backend | sh: 0: Can't open /code/gunicorn/gunicorn_start.sh
jenkinsfile
#!groovy
node {
environment {
Django_secret_key = credentials('Django_secret_key')
}
stage("Checkout") {
checkout scm
}
stage('Stop previous containers') {
dir('backend') {
withEnv(["PATH=$PATH:/usr/local/bin"]){
sh """
docker-compose -p LBS_Platform down
"""
}
}
}
stage('Run current containers') {
dir('backend') {
withEnv(["PATH=$PATH:/usr/local/bin"]){
sh """
docker-compose -p LBS_Platform up --build
"""
}
}
}
}
Jenkins Dockerfile and docker-compose file:
# dockerfile
FROM jenkins/jenkins:lts
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.6/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose ; chmod +x /usr/local/bin/docker-compose
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
# docker-compose file
version: "3"
services:
jenkins:
privileged: true
build:
context: ./
container_name: jenkins
restart: always
user: root
ports:
- 8083:8080
- 50003:50000
expose:
- "8080"
- "50000"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock"
- "./jenkins_home:/var/jenkins_home"
environment:
TZ: "Asia/Seoul"
volumes:
jenkins_home:
driver: local
The docker-compose setup I want to run in the Jenkins container:
# dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN apt-get -y update
ARG Django_secret_key
ENV Django_secret_key $Django_secret_key
ENV BOARD_DEBUG 1
# TODO: fix user and group later
# the user to run as
ENV USER root
# how many worker processes should Gunicorn spawn
ENV NUM_WORKERS 3
# which settings file should Django use
ENV DJANGO_SETTINGS_MODULE backend.settings
# WSGI module name
ENV DJANGO_WSGI_MODULE backend.wsgi
ENV PORT 8000
RUN echo "Starting $NAME as $(whoami)"
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y netcat
RUN chmod 755 /code/gunicorn/gunicorn_start.sh
ENTRYPOINT ["sh", "/code/gunicorn/gunicorn_start.sh"]
# docker-compose file
networks:
app-tier:
driver: bridge
services:
gunicorn-backend:
restart: always
container_name: gunicorn-backend
build:
context: .
args:
Django_secret_key: "${Django_secret_key}"
command: bash -c "pipenv run python manage.py runserver 0.0.0.0:8000"
volumes:
- .:/code
networks:
- app-tier
ports:
- "8000:8000"
nginx-backend:
restart: always
container_name: nginx-backend
image: nginx:latest
volumes:
- ./nginx/config:/etc/nginx/conf.d
- ./nginx/logs:/var/backend-logs
expose:
- "80"
- "443"
ports:
- "80:80"
- "443:443"
networks:
- app-tier
depends_on:
- gunicorn-backend
environment:
- NGINX_HOST=0.0.0.0
- NGINX_PORT=80
# gunicorn/gunicorn_start.sh
#!/bin/bash
# Name of the application
NAME="backend"
# https://stackoverflow.com/questions/4774054/reliable-way-for-a-bash-script-to-get-the-full-path-to-itself
SCRIPT_PATH=$(dirname `which $0`)
# Django project directory
# the path of .
DJANGODIR=$SCRIPT_PATH
# /Users/Han/programming/DjangoCRUDBoard
PARENT_PATH=$(cd $SCRIPT_PATH ; cd .. ; pwd)
# we will communicate using this unix socket
SOCKFILE=$PARENT_PATH/run/gunicorn.sock
echo $PARENT_PATH
# Activate the virtual environment
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
cd $PARENT_PATH
# # Block until the DB is reachable (migrations require a DB connection)
# while ! nc -z database 5432; do sleep 1; done;
pip install --upgrade pip
pip install pipenv
pipenv install
pipenv run python manage.py makemigrations
pipenv run python manage.py migrate
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
# using pipenv
exec pipenv run gunicorn ${DJANGO_WSGI_MODULE}:application \
-b 0.0.0.0:$PORT \
--name $NAME \
--workers $NUM_WORKERS \
--user=$USER \
--bind=unix:$SOCKFILE \
--log-level=debug \
--log-file=-
docker and docker-compose themselves work in the Jenkins container, but I don't understand why this error shows up:
gunicorn-backend | sh: 0: Can't open /code/gunicorn/gunicorn_start.sh
Is there any way to solve this problem?
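The thread has no accepted answer, but one common cause of this symptom is worth noting as a hypothesis: when docker-compose inside the Jenkins container talks to the host's daemon through the mounted /var/run/docker.sock, bind mounts such as .:/code are resolved against the host's filesystem, not the Jenkins container's workspace, so /code ends up empty and the ENTRYPOINT script is missing. A sketch of a possible fix, under that assumption:

```yaml
# Hypothetical fix: drop the bind mount so the copy baked into the image by
# `COPY . /code/` is used. Paths in `volumes:` are resolved by the Docker
# daemon on the *host*, not inside the Jenkins container running compose.
gunicorn-backend:
  build:
    context: .
    args:
      Django_secret_key: "${Django_secret_key}"
  # volumes:
  #   - .:/code    # host path does not contain the checked-out code
```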

Gitlab CI pipeline pull fails without trace

After updating Docker (to the newest 19.03.13) and Postgres (from 12 to 13), my GitLab pipeline now fails without any trace. It is triggered, but the pull stage fails after one second with no output.
The GitLab runner is running and is not shared with other projects.
Docker is connected to the registry and can build and push updated images.
I have tried cloning to a new repo and redoing the GitLab runner registration. I haven't found any similar issues posted here and have run out of ideas.
Any help will be much appreciated!
The pipeline output (i.e. no output):
My .gitlab-ci.yml
stages:
- pull
- build
- lint
- push
- cleanup
- deploy
before_script:
- docker login -u "gitlab-ci-token" -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"
pull:
stage: pull
allow_failure: true
script:
- docker pull "$CI_REGISTRY_IMAGE":latest
build:
stage: build
script:
- docker build --tag="$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME" --cache-from="$CI_REGISTRY_IMAGE":latest .
lint:
stage: lint
script:
- docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml run app ls
- docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml run app cat tox.ini
- export CI_PIPELINE_ID=$CI_PIPELINE_ID
- export CI_COMMIT_REF_NAME=$CI_COMMIT_REF_NAME
- docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml run app flake8 .
push image:
stage: push
only:
- master
- tags
script:
- docker tag "$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME" "$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_NAME"
- docker push "$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_NAME"
push latest:
stage: push
script:
- docker tag "$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME" "$CI_REGISTRY_IMAGE":latest
- docker push "$CI_REGISTRY_IMAGE":latest
cleanup:
stage: cleanup
when: always
script:
- docker rmi -f "$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME"
- docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml down --remove-orphans
deploy:
stage: deploy
when: manual
only:
- master
- tags
script:
- docker-compose -f docker-compose.deploy.yml pull
- docker-compose -f docker-compose.deploy.yml down --remove-orphans
- docker-compose -f docker-compose.deploy.yml up -d
My docker-compose.ci.yml
version: "3"
services:
app:
image: "${CI_PIPELINE_ID}:${CI_COMMIT_REF_NAME}"
My docker-compose.yml
version: "3"
services:
backend:
image: registry.gitlab.com/my_account/my_project:latest
env_file:
- dev.env
ports:
- "8000:8000"
- "4777:22"
volumes:
- ./backend:/backend
command: "/usr/sbin/sshd -D"
depends_on:
- postgres
postgres:
image: postgres:latest
restart: always
env_file:
- dev.env
ports:
- "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
volumes:
postgres:
static-files:
media-files:
My docker-compose.deploy.yml
version: "3"
services:
backend:
image: registry.gitlab.com/my_account/my_project:latest
command: "sh /scripts/run.sh"
env_file:
- dev.env
depends_on:
- postgres
volumes:
- media-files:/media-files
- static-files:/static-files
- frontend:/frontend-build
postgres:
image: postgres:latest
env_file:
- dev.env
ports:
- "5432:5432"
volumes:
- postgres:/var/lib/postgresql/data
nginx:
image: nginx:latest
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx:/etc/nginx/conf.d
- /etc/letsencrypt:/etc/letsencrypt
- static-files:/static-files/
- media-files:/media-files/
- frontend:/frontend
volumes:
postgres:
static-files:
media-files:
frontend:
My Dockerfile
FROM continuumio/miniconda:latest
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update && apt-get upgrade -y && apt-get install -qqy \
wget \
bzip2 \
graphviz \
libssl-dev \
openssh-server
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get update && apt-get install -y nodejs
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i '/PermitRootLogin/c\PermitRootLogin yes' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN mkdir -p /backend
COPY ./backend/requirements.yml /backend/requirements.yml
RUN /opt/conda/bin/conda env create -f /backend/requirements.yml
ENV PATH /opt/conda/envs/backend/bin:$PATH
ENV PYTHONDONTWRITEBYTECODE 1
RUN echo "source activate backend" >~/.bashrc
COPY ./scripts /scripts
RUN chmod +x ./scripts*
COPY ./backend /backend
COPY ./frontend /frontend
WORKDIR /frontend
RUN npm install && npm run build
WORKDIR /backend
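One thing worth checking (an assumption, not something confirmed in the thread): CI_BUILD_TOKEN is a long-deprecated alias of CI_JOB_TOKEN, and newer GitLab versions only guarantee the latter, which would make the docker login in before_script fail silently before any job output. A sketch of the tweak:

```yaml
# Hypothetical tweak to .gitlab-ci.yml: use CI_JOB_TOKEN, the variable
# current GitLab versions actually set, instead of the old CI_BUILD_TOKEN.
before_script:
  - docker login -u "gitlab-ci-token" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
```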

docker-compose build throwing error ERROR: Untar re-exec error: signal: killed: output:

I'm trying to set up Docker for an existing QuorraJS application
(https://quorrajs.org/docs/v1/preface/quickstart.html), but I'm having issues when trying to run docker-compose build.
I am still quite new to Docker and not sure what I am doing wrong.
Dockerfile:
FROM node:latest
MAINTAINER Erkan Demir <erkan.demir@peopleplan.com.au>
#Add everything in the current directory to our image
ADD . /var/www
RUN cd /var/www; \
npm install \
npm install -g quorra-cli \
EXPOSE 3000:3000
CMD["quorra ride"]
docker-compose.yml
version: '2'
services:
web:
container_name: quorra-web
build: .
ports:
- '3000:3000'
volumes:
- .:/var/www
links:
- db
depends_on:
- db
db:
container_name: quorra-db
image: mysql
ports:
- '3000:3000'
volumes:
- /var/lib/mysql
environment:
MYSQL_ROOT_PASSWORD: password
MYSQL_DATABASE: Petbarn_DB
MYSQL_USER: root
MYSQL_PASSWORD: password
There are a few things wrong in your Dockerfile; try it like this:
FROM node:latest
MAINTAINER Erkan Demir <erkan.demir@peopleplan.com.au>
#Add everything in the current directory to our image
ADD . /var/www
RUN cd /var/www/ && \
npm install && \
npm install -g quorra-cli
EXPOSE 3000
CMD ["quorra", "ride"]
Try adding && and removing the trailing \ in your Dockerfile:
...
RUN cd /var/www; \
npm install \
&& npm install -g quorra-cli
...
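Separately, an observation not raised in the answers above: the compose file publishes host port 3000 for both the web and db services, which will collide, and MySQL does not listen on 3000 anyway. A hedged sketch of the db service's ports:

```yaml
# Side note: MySQL listens on 3306, and two services cannot both publish
# host port 3000. If the database only needs to be reachable from `web`
# via the link, the ports mapping can be dropped entirely.
db:
  image: mysql
  ports:
    - '3306:3306'
```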
