After updating Docker (to the newest 19.03.13) and Postgres (from 12 to 13), my GitLab pipeline now fails without any trace: it is triggered, but the pull job fails after about one second with no output.
The GitLab runner is running and is not shared with other projects.
Docker is connected to the registry and can build and push updated images.
I have tried cloning into a new repo and redoing the GitLab runner registration. I haven't found any similar issues posted here and have run out of ideas of what to try.
Any help will be much appreciated!
The pipeline output (i.e. no output)
My .gitlab-ci.yml:
stages:
  - pull
  - build
  - lint
  - push
  - cleanup
  - deploy

before_script:
  - docker login -u "gitlab-ci-token" -p "$CI_BUILD_TOKEN" "$CI_REGISTRY"

pull:
  stage: pull
  allow_failure: true
  script:
    - docker pull "$CI_REGISTRY_IMAGE":latest

build:
  stage: build
  script:
    - docker build --tag="$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME" --cache-from="$CI_REGISTRY_IMAGE":latest .

lint:
  stage: lint
  script:
    - docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml run app ls
    - docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml run app cat tox.ini
    - export CI_PIPELINE_ID=$CI_PIPELINE_ID
    - export CI_COMMIT_REF_NAME=$CI_COMMIT_REF_NAME
    - docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml run app flake8 .

push image:
  stage: push
  only:
    - master
    - tags
  script:
    - docker tag "$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME" "$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_NAME"
    - docker push "$CI_REGISTRY_IMAGE":"$CI_COMMIT_REF_NAME"

push latest:
  stage: push
  script:
    - docker tag "$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME" "$CI_REGISTRY_IMAGE":latest
    - docker push "$CI_REGISTRY_IMAGE":latest

cleanup:
  stage: cleanup
  when: always
  script:
    - docker rmi -f "$CI_PIPELINE_ID":"$CI_COMMIT_REF_NAME"
    - docker-compose -p "$CI_PIPELINE_ID" -f docker-compose.ci.yml down --remove-orphans

deploy:
  stage: deploy
  when: manual
  only:
    - master
    - tags
  script:
    - docker-compose -f docker-compose.deploy.yml pull
    - docker-compose -f docker-compose.deploy.yml down --remove-orphans
    - docker-compose -f docker-compose.deploy.yml up -d
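Since the job dies with no output at all, one way to get some signal is a throwaway debug job (a sketch only; the job name and every command in it are my additions) that prints what the runner actually sees before the real pull runs. Note that newer GitLab versions expose the job token as `CI_JOB_TOKEN`; `CI_BUILD_TOKEN` is deprecated and may be empty, which would make the `before_script` login fail instantly:

```yaml
debug:
  stage: pull
  script:
    - docker --version
    - docker info
    - echo "registry image is $CI_REGISTRY_IMAGE"
    - docker login -u gitlab-ci-token -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    - docker pull "$CI_REGISTRY_IMAGE":latest
```

If even this job produces no output, the problem is likely in the runner/executor itself rather than in the pipeline definition.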
My docker-compose.ci.yml
version: "3"
services:
  app:
    image: "${CI_PIPELINE_ID}:${CI_COMMIT_REF_NAME}"
My docker-compose.yml
version: "3"
services:
  backend:
    image: registry.gitlab.com/my_account/my_project:latest
    env_file:
      - dev.env
    ports:
      - "8000:8000"
      - "4777:22"
    volumes:
      - ./backend:/backend
    command: "/usr/sbin/sshd -D"
    depends_on:
      - postgres
  postgres:
    image: postgres:latest
    restart: always
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
volumes:
  postgres:
  static-files:
  media-files:
My docker-compose.deploy.yml
version: "3"
services:
  backend:
    image: registry.gitlab.com/my_account/my_project:latest
    command: "sh /scripts/run.sh"
    env_file:
      - dev.env
    depends_on:
      - postgres
    volumes:
      - media-files:/media-files
      - static-files:/static-files
      - frontend:/frontend-build
  postgres:
    image: postgres:latest
    env_file:
      - dev.env
    ports:
      - "5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
  nginx:
    image: nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - /etc/letsencrypt:/etc/letsencrypt
      - static-files:/static-files/
      - media-files:/media-files/
      - frontend:/frontend
volumes:
  postgres:
  static-files:
  media-files:
  frontend:
My Dockerfile
FROM continuumio/miniconda:latest
ENV LANG=C.UTF-8 LC_ALL=C.UTF-8
RUN apt-get update && apt-get upgrade -y && apt-get install -qqy \
    wget \
    bzip2 \
    graphviz \
    libssl-dev \
    openssh-server
RUN apt-get update && apt-get install -y curl
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash -
RUN apt-get update && apt-get install -y nodejs
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i '/PermitRootLogin/c\PermitRootLogin yes' /etc/ssh/sshd_config
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
RUN mkdir -p /backend
COPY ./backend/requirements.yml /backend/requirements.yml
RUN /opt/conda/bin/conda env create -f /backend/requirements.yml
ENV PATH /opt/conda/envs/backend/bin:$PATH
ENV PYTHONDONTWRITEBYTECODE 1
RUN echo "source activate backend" >~/.bashrc
COPY ./scripts /scripts
RUN chmod +x ./scripts*
COPY ./backend /backend
COPY ./frontend /frontend
WORKDIR /frontend
RUN npm install && npm run build
WORKDIR /backend
Related
This error message appears for 3 images in the composed Docker setup:
exec /usr/bin/entrypoint.sh: no such file or directory
All of the affected images run Ruby services:
Sidekiq, Webpack (run via the Ruby executable), and the web (Rails) service.
I have tried changing every command to load the Gemfile environment with bundle exec, but nothing worked.
Dockerfile
FROM ruby:2.6.6
RUN apt-get update -qq \
    && apt-get install -y curl build-essential libpq-dev postgresql \
       nodejs postgresql-client && \
    curl -sL https://deb.nodesource.com/setup_14.x | bash - && \
    curl -sS https://dl.yarnpkg.com/debian/pubkey.gpg | apt-key add - && \
    echo "deb https://dl.yarnpkg.com/debian/ stable main" | tee /etc/apt/sources.list.d/yarn.list && \
    apt-get update && apt-get install -y nodejs yarn
ADD . /app
WORKDIR /app
RUN gem install bundler:2.3.22
RUN bundle install
RUN yarn install --check-files
RUN gem install foreman
COPY entrypoint.sh /usr/bin/
RUN chmod +x /usr/bin/entrypoint.sh
ENTRYPOINT ["entrypoint.sh"]
EXPOSE 80
CMD ["bash"]
docker-compose.yml
version: '3.3'
services:
  db:
    image: postgres
    ports:
      - 5423:5432
    volumes:
      - ./tmp/db:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: *****
  redis:
    image: redis
    ports:
      - "6379:6379"
    volumes:
      - 'redis:/data'
    depends_on:
      - db
  webpack:
    build: .
    command: sh -c 'rm -rf public/packs/* || true && bin/webpack-dev-server --host 0.0.0.0 --port 3035 -w'
    volumes:
      - .:/app
      - /app/node_modules
    ports:
      - "3035:3035"
    depends_on:
      - db
  web:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && rails s -b 0.0.0.0 -p 80"
    volumes:
      - .:/app
    ports:
      - "80:80"
    depends_on:
      - db
      - redis
      - webpack
      - chrome
    env_file: .env_docker
    environment:
      RAILS_ENV: development
      RAILS_MAX_THREADS: 5
  sidekiq:
    build: .
    command: bundle exec sidekiq -C config/sidekiq.yml
    volumes:
      - .:/app
    depends_on:
      - db
      - redis
    env_file: .env_docker
    environment:
      RAILS_MAX_THREADS: 5
  chrome:
    image: selenium/standalone-chrome
    ports:
      - "4444:4444"
    volumes:
      - /dev/shm:/dev/shm
    depends_on:
      - db
      - redis
      - webpack
      - sidekiq
volumes:
  redis:
  postgres:
This looks the same as entrypoint.sh exec: #: not found, but that question was never resolved.
I really want to switch my development OS from Debian to Windows and work only with containers, without looking at Linux or WSL alternatives.
I am running sudo docker compose -f docker-compose.development.yml up --build to build my project container,
but I'm getting this error:
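One frequent cause of `exec entrypoint.sh: no such file or directory` when the file plainly exists (an assumption about this case, not something visible in the logs) is Windows-style CRLF line endings: the kernel then looks for an interpreter literally named `/bin/sh\r`, which does not exist. A quick sketch of detecting and fixing that:

```shell
# Simulate a script saved with Windows (CRLF) line endings:
printf '#!/bin/sh\r\necho hello\r\n' > entrypoint.sh
# Strip the carriage returns in place (GNU sed):
sed -i 's/\r$//' entrypoint.sh
# Verify no CR bytes remain:
grep -q "$(printf '\r')" entrypoint.sh || echo "no CR left"
```

Re-checking out the repo with `git config core.autocrlf input` (or a `.gitattributes` entry forcing LF for `*.sh`) prevents the endings from coming back.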
failed to solve: executor failed running [/bin/sh -c curl -fLo install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh && chmod +x install.sh && sh install.sh && cp ./bin/air /bin/air]: exit code: 6
Here's my Dockerfile:
FROM golang:1.17.5-stretch
ARG GIT_DEPLOY_TOKEN_USER
ARG GIT_DEPLOY_TOKEN_PASSWORD
RUN apt-get update && apt-get upgrade -y && \
    apt-get install -y git
# private gitlab package credentials
RUN go env -w GOPRIVATE=gitlab.com
RUN touch ~/.netrc
RUN chmod 600 ~/.netrc
RUN echo "machine gitlab.com login ${GIT_DEPLOY_TOKEN_USER} password ${GIT_DEPLOY_TOKEN_PASSWORD}" >> ~/.netrc
WORKDIR /app
# RUN go mod download
# # COPY the source code as the last step
# COPY . .
RUN curl -fLo install.sh https://raw.githubusercontent.com/cosmtrek/air/master/install.sh \
    && chmod +x install.sh && sh install.sh && cp ./bin/air /bin/air
RUN curl -sSfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | sh -s -- -b $(go env GOPATH)/bin v1.45.2
CMD air
And this is my docker-compose.development.yml file:
services:
  graphql:
    container_name: chimp-bff-development
    build:
      context: .
      dockerfile: dev.Dockerfile
      args:
        GIT_DEPLOY_TOKEN_USER: $GIT_DEPLOY_TOKEN_USER
        GIT_DEPLOY_TOKEN_PASSWORD: $GIT_DEPLOY_TOKEN_PASSWORD
    ports:
      - "$PORT:$PORT"
    depends_on:
      - redis
    volumes:
      - ./:/app
    networks:
      - chimpbff-bridge
  redis:
    hostname: $RDB_HOST_NAME
    container_name: chimpbff_redis
    build:
      context: ./Docker/redis
      args:
        - APQ_RDB_USERNAME=$APQ_RDB_USERNAME
        - APQ_RBD_PASSWORD=$APQ_RBD_PASSWORD
        - RBD_PORT=$RBD_PORT
    expose:
      - $RBD_PORT
    volumes:
      - ./Docker/data/redis:/data
    sysctls:
      - net.core.somaxconn=511
    # restart: always
    logging:
      driver: "json-file"
      options:
        max-file: "5"
        max-size: "10m"
    networks:
      - chimpbff-bridge
networks:
  chimpbff-bridge:
    driver: bridge
    driver_opts:
      com.docker.network.bridge.name: chimp-bridge
What does exit code 6 mean, and what should I do to fix it?
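The exit code here comes from curl, not from Docker, and curl's documented exit codes pinpoint the failure: 6 means the host name could not be resolved, i.e. DNS lookup of raw.githubusercontent.com failed inside the build container (commonly a Docker daemon DNS configuration issue; adding a `"dns"` entry to `/etc/docker/daemon.json` or checking the network the build runs on are the usual remedies, though that is an assumption about this particular setup). A small reference sketch for the codes you are most likely to hit (the helper function is my own, the meanings are from the EXIT CODES section of `man curl`):

```shell
# Map common curl exit codes to their documented meanings.
explain_curl_exit() {
  case "$1" in
    6)  echo "could not resolve host (DNS failure)";;
    7)  echo "failed to connect to host";;
    22) echo "HTTP error returned (curl was run with -f/--fail)";;
    *)  echo "see the EXIT CODES section of 'man curl'";;
  esac
}

explain_curl_exit 6
```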
I have the following docker compose configuration:
version: '3.3'
services:
  jenkins:
    image: jenkins-ansible
    build: ansible
    restart: on-failure
    privileged: true
    user: root
    ports:
      - 8080:8080
      - 5000:5000
    container_name: jenkins
    volumes:
      - /home/juliano/workspace/docker-projects/jenkins/volume/:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
      - /usr/local/bin/docker:/usr/local/bin/docker
  jenkins-agent-1:
    build:
      context: jenkins-agent
    restart: on-failure
    expose:
      - "22"
    container_name: jenkins-agent-1
    environment:
      - JENKINS_AGENT_SSH_PUBKEY=ssh-rsa omitted
      - JAVA_HOME=/opt/java/openjdk/bin/java
    depends_on:
      - jenkins
    volumes:
      - /home/juliano/workspace/docker-projects/jenkins/volume/:/var/jenkins_home
  jenkins-agent-2:
    # image: jenkins/ssh-agent:jdk11
    build:
      context: jenkins-agent
    restart: on-failure
    expose:
      - "22"
    container_name: jenkins-agent-2
    environment:
      - JENKINS_AGENT_SSH_PUBKEY=ssh-rsa omitted
      - JAVA_HOME=/opt/java/openjdk/bin/java
    depends_on:
      - jenkins
    volumes:
      - /home/juliano/workspace/docker-projects/jenkins/volume/:/var/jenkins_home
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: ubuntu18.04
And I'm receiving the following error message:
+ env
+ [[ ssh-rsa omitted == ssh-* ]]
+ write_key 'ssh-rsa omitted'
+ local ID_GROUP
++ stat -c %U:%G /home/jenkins
+ ID_GROUP=jenkins:jenkins
+ mkdir -p /home/jenkins/.ssh
+ echo 'ssh-rsa omitted'
+ chown -Rf jenkins:jenkins /home/jenkins/.ssh
+ chmod 0700 -R /home/jenkins/.ssh
+ [[ '' == ssh-* ]]
+ env
+ grep _
/usr/local/bin/setup-sshd: line 54: /etc/environment: Permission denied
The jenkins-agent dockerfile is:
FROM jenkins/ssh-agent
USER root
RUN apt-get update && apt-get install python3 -y
RUN apt-get install curl -y
RUN apt-get install python3-distutils -y
RUN curl -O https://bootstrap.pypa.io/get-pip.py && \
    python3 get-pip.py && \
    pip install ansible --upgrade
USER jenkins
Previously, I was using jenkins/ssh-agent:jdk11 to build the agents and it was working well. Then I unsuccessfully tried to install Ansible into the agents through the jenkins-agent Dockerfile (receiving the aforementioned error). Now, even if I change jenkins-agent back to jenkins/ssh-agent:jdk11, it runs into the same problem.
Could anyone kindly help me, please?
I changed the jenkins-agent/Dockerfile and removed USER root and USER jenkins.
Now it is working.
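For reference, the working Dockerfile simply never switches users: the image's /usr/local/bin/setup-sshd entrypoint writes /etc/environment, which fails once the final `USER jenkins` makes the entrypoint run unprivileged. A sketch reconstructing the fix (the consolidated RUN formatting is mine, and it assumes the base image itself runs as root, so no `USER` instructions are needed at all):

```dockerfile
FROM jenkins/ssh-agent
# No USER root / USER jenkins: stay with the base image's user so the
# setup-sshd entrypoint can still write /etc/environment at startup.
RUN apt-get update && apt-get install -y python3 python3-distutils curl \
    && curl -O https://bootstrap.pypa.io/get-pip.py \
    && python3 get-pip.py \
    && pip install ansible --upgrade
```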
Here is my docker-compose file. It does not even run the wait_for_db command. I tried putting the commands in a bash script, but that didn't work either. I want to run manage.py commands and also run Celery and Celery beat. I would appreciate help writing these commands.
version: "3.7"
services:
  web:
    build: .
    command: >
      sh -c "
      python app/manage.py wait_for_db &&
      python app/manage.py makemigrations &&
      python app/manage.py makemigrations csvreader &&
      python app/manage.py migrate &&
      python app/manage.py wait_for_migrate &&
      python app/manage.py create_admin --username admin --password admin --noinput --email admin@admin.com &&
      python app/manage.py runserver 0.0.0.0:8000 &
      celery -A app --workdir app worker --loglevel=info &
      celery -A app --workdir app beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler "
    volumes:
      - .:/djangoapp
    ports:
      - "8000:8000"
      - "23:22"
    depends_on:
      - db
      - broker
    environment:
      - DB_HOST=db
      - DB_PORT=5432
      - DB_NAME=mycsv
      - DB_USER=postgres
      - DB_PASSWORD=password
      - CELERY_BROKER=amqp://admin:password@broker:5672//
    restart: on-failure
  db:
    image: postgres:13.3-alpine
    environment:
      - POSTGRES_DB=mycsv
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
    ports:
      - "5432:5432"
  broker:
    image: rabbitmq
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=password
And here is my Dockerfile
FROM python:3.8
ENV PYTHONUNBUFFERED=1
WORKDIR /djangoapp
RUN apt-get update
RUN apt-get install -y python3-dev build-essential
COPY requirements.txt requirements.txt
RUN pip install -U pip setuptools wheel
RUN pip install -r requirements.txt
EXPOSE 8000
EXPOSE 22
COPY . /djangoapp
It seems that none of the commands are being run
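Chaining everything through `&` in a single `sh -c` is fragile: the backgrounded processes detach from the shell, their failures are silent, and Compose only supervises the last foreground process. The usual pattern is one process per service. A sketch of that split (the `worker` and `beat` service names are mine, and the original `environment`, `volumes`, and `db`/`broker` services would still need to be carried over):

```yaml
version: "3.7"
services:
  web:
    build: .
    command: >
      sh -c "python app/manage.py wait_for_db &&
             python app/manage.py migrate &&
             python app/manage.py runserver 0.0.0.0:8000"
    depends_on:
      - db
      - broker
  worker:
    build: .
    command: celery -A app --workdir app worker --loglevel=info
    depends_on:
      - db
      - broker
  beat:
    build: .
    command: celery -A app --workdir app beat -l INFO --scheduler django_celery_beat.schedulers:DatabaseScheduler
    depends_on:
      - db
      - broker
```

This way each process gets its own logs and restart policy, which also makes "none of the commands ran" much easier to diagnose.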
I want to run docker-compose inside a dockerized Jenkins container. My docker-compose file works on my local machine, but when I try to do CD in Jenkins it fails with this error:
gunicorn-backend | sh: 0: Can't open /code/gunicorn/gunicorn_start.sh
My Jenkinsfile:
#!groovy
node {
    environment {
        Django_secret_key = credentials('Django_secret_key')
    }
    stage("Checkout") {
        checkout scm
    }
    stage('Stop previous containers') {
        dir('backend') {
            withEnv(["PATH=$PATH:/usr/local/bin"]) {
                sh """
                docker-compose -p LBS_Platform down
                """
            }
        }
    }
    stage('Run current containers') {
        dir('backend') {
            withEnv(["PATH=$PATH:/usr/local/bin"]) {
                sh """
                docker-compose -p LBS_Platform up --build
                """
            }
        }
    }
}
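Separately from the file error, note that `docker-compose up --build` without `-d` never returns, so that stage would block the Jenkins build indefinitely even once the script is found. A detached sketch of the same stage:

```groovy
stage('Run current containers') {
    dir('backend') {
        withEnv(["PATH=$PATH:/usr/local/bin"]) {
            // -d detaches, so the sh step (and the stage) can finish
            sh "docker-compose -p LBS_Platform up --build -d"
        }
    }
}
```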
My Jenkins Dockerfile and docker-compose file:
# dockerfile
FROM jenkins/jenkins:lts
ARG HOST_UID=1004
ARG HOST_GID=999
USER root
RUN curl -fsSL https://get.docker.com -o get-docker.sh && sh get-docker.sh
RUN curl -L "https://github.com/docker/compose/releases/download/1.28.6/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose ; chmod +x /usr/local/bin/docker-compose
RUN usermod -u $HOST_UID jenkins
RUN groupmod -g $HOST_GID docker
RUN usermod -aG docker jenkins
USER jenkins
# docker-compose file
version: "3"
services:
  jenkins:
    privileged: true
    build:
      context: ./
    container_name: jenkins
    restart: always
    user: root
    ports:
      - 8083:8080
      - 50003:50000
    expose:
      - "8080"
      - "50000"
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock"
      - "./jenkins_home:/var/jenkins_home"
    environment:
      TZ: "Asia/Seoul"
volumes:
  jenkins_home:
    driver: local
The docker-compose setup I want to run inside the Jenkins container:
# dockerfile
FROM python:3.9
ENV PYTHONUNBUFFERED 1
RUN apt-get -y update
ARG Django_secret_key
ENV Django_secret_key $Django_secret_key
ENV BOARD_DEBUG 1
# TODO: fix the user and group later
# the user to run as
ENV USER root
# how many worker processes should Gunicorn spawn
ENV NUM_WORKERS 3
# which settings file should Django use
ENV DJANGO_SETTINGS_MODULE backend.settings
# WSGI module name
ENV DJANGO_WSGI_MODULE backend.wsgi
ENV PORT 8000
RUN echo "Starting $NAME as $(whoami)"
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive apt-get install -y netcat
RUN chmod 755 /code/gunicorn/gunicorn_start.sh
ENTRYPOINT ["sh", "/code/gunicorn/gunicorn_start.sh"]
# docker-compose file
networks:
  app-tier:
    driver: bridge
services:
  gunicorn-backend:
    restart: always
    container_name: gunicorn-backend
    build:
      context: .
      args:
        Django_secret_key: "${Django_secret_key}"
    command: bash -c "pipenv run python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    networks:
      - app-tier
    ports:
      - "8000:8000"
  nginx-backend:
    restart: always
    container_name: nginx-backend
    image: nginx:latest
    volumes:
      - ./nginx/config:/etc/nginx/conf.d
      - ./nginx/logs:/var/backend-logs
    expose:
      - "80"
      - "443"
    ports:
      - "80:80"
      - "443:443"
    networks:
      - app-tier
    depends_on:
      - gunicorn-backend
    environment:
      - NGINX_HOST=0.0.0.0
      - NGINX_PORT=80
# gunicorn/gunicorn_start.sh
#!/bin/bash
# Name of the application
NAME="backend"
# https://stackoverflow.com/questions/4774054/reliable-way-for-a-bash-script-to-get-the-full-path-to-itself
SCRIPT_PATH=$(dirname `which $0`)
# Django project directory
# the "." path
DJANGODIR=$SCRIPT_PATH
# /Users/Han/programming/DjangoCRUDBoard
PARENT_PATH=$(cd $SCRIPT_PATH ; cd .. ; pwd)
# we will communicate using this unix socket
SOCKFILE=$PARENT_PATH/run/gunicorn.sock
echo $PARENT_PATH
# Activate the virtual environment
export DJANGO_SETTINGS_MODULE=$DJANGO_SETTINGS_MODULE
export PYTHONPATH=$DJANGODIR:$PYTHONPATH
# Create the run directory if it doesn't exist
RUNDIR=$(dirname $SOCKFILE)
test -d $RUNDIR || mkdir -p $RUNDIR
cd $PARENT_PATH
# # Block until the DB is reachable (migrations need a DB connection)
# while ! nc -z database 5432; do sleep 1; done;
pip install --upgrade pip
pip install pipenv
pipenv install
pipenv run python manage.py makemigrations
pipenv run python manage.py migrate
# Start your Django Unicorn
# Programs meant to be run under supervisor should not daemonize themselves (do not use --daemon)
# using pipenv
exec pipenv run gunicorn ${DJANGO_WSGI_MODULE}:application \
    -b 0.0.0.0:$PORT \
    --name $NAME \
    --workers $NUM_WORKERS \
    --user=$USER \
    --bind=unix:$SOCKFILE \
    --log-level=debug \
    --log-file=-
docker and docker-compose themselves work inside the Jenkins container, but I don't understand why this error appears:
gunicorn-backend | sh: 0: Can't open /code/gunicorn/gunicorn_start.sh
Is there any solution to this problem?
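One likely culprit (an assumption, not something visible in the logs): because docker-compose inside Jenkins talks to the host daemon through the mounted /var/run/docker.sock, the bind mount `.:/code` is resolved against the *host* filesystem, not against the Jenkins container's workspace. The host has no checkout at that path, so /code ends up empty and sh cannot open gunicorn_start.sh. A sketch of the service without the bind mount, relying instead on the `COPY . /code/` already baked into the image:

```yaml
gunicorn-backend:
  restart: always
  container_name: gunicorn-backend
  build:
    context: .
    args:
      Django_secret_key: "${Django_secret_key}"
  # no `- .:/code` bind mount: with a socket-mounted daemon that path
  # resolves on the host and shadows the /code copied into the image
  networks:
    - app-tier
  ports:
    - "8000:8000"
```

This explains why the same file works locally: there the relative path resolves to the real project directory on the machine running the daemon.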