Error when deploying a service to Dokku using Jenkins and Ansible
"stderr_lines": [
"remote: fatal: bad object worktrees/dokku-18872-git_build_app_repo.gn4dvN/HEAD ",
"fatal: bad object worktrees/dokku-18872-git_build_app_repo.gn4dvN/HEAD",
Playbook:
Jenkins cloning git repository
Create tmp dir
Copy repository to tmp dir
Git init
Git add Dokku
Git push to Dokku
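The tasks below reference tempdir.path; the "Create tmp dir" step itself is not shown, but it presumably registers the directory with something like this sketch (assuming the ansible.builtin.tempfile module; the task name and suffix are placeholders):

- name: Create tmp dir
  ansible.builtin.tempfile:
    state: directory
    suffix: dokku_build
  register: tempdir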
- name: Copy project files and files needed by Dokku to directory
  shell: >
    mkdir -p {{ tempdir.path }} &&
    cp -R ../api {{ tempdir.path }} &&
    cp -R ../fixtures {{ tempdir.path }} &&
    cp -R ../{{ app_settings_dir }} {{ tempdir.path }} &&
    cp -R ../flc {{ tempdir.path }} &&
    cp -R ../periodic_tasks {{ tempdir.path }} &&
    cp -R ../service {{ tempdir.path }} &&
    cp -R ../manage.py {{ tempdir.path }} &&
    cp -R ../requirements.txt {{ tempdir.path }} &&
    cp ../conf/DOKKU_SCALE {{ tempdir.path }} &&
    cp ../conf/Procfile {{ tempdir.path }} &&
    cp ../conf/nginx.static.conf {{ tempdir.path }} &&
    cp ../conf/app.json {{ tempdir.path }} &&
    cp ../conf/runtime.txt {{ tempdir.path }}

- name: Copy files needed by Docker to the root of temporary directory
  shell: >
    cp ../conf/Dockerfile {{ tempdir.path }}

- name: Make new git repository from temporary directory
  shell:
    cmd: >
      git init &&
      git add . &&
      git commit -m 'Init commit'
    chdir: "{{ tempdir.path }}"

- name: Push our new repository to Dokku
  shell:
    cmd: >
      git remote add {{ app_name }} ssh://dokku@{{ ansible_host }}:{{ ansible_port }}/{{ app_name }} &&
      git push -f {{ app_name }} master
    chdir: "{{ tempdir.path }}"
  environment:
    GIT_SSH_COMMAND: ssh -o StrictHostKeyChecking=no -i {{ dokku_ssh_private_key_file }}
I need to copy one file from the project to the server where the application will be deployed. This must be done before deploying the application.
At the moment, I can connect to the server and create the folder I need there. But how do I put the necessary file there?
Here is the stage in which this should be done:
deploy_image:
  stage: deploy_image
  image: alpine:latest
  services:
    - docker:20.10.14-dind
  before_script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      docker login -u $REGISTER_USER -p $REGISTER_PASSWORD $REGISTER
  script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      mkdir $PROJECT_NAME || true
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      cd $PROJECT_NAME
    # here you need to somehow send the file to the server
  after_script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP docker logout
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP exit
  only:
    - main
Help me please.
Use rsync:
# you can install it in the before_script as well
apk update && apk add rsync
rsync -avx <local files> root@${SERVER_IP}:${PROJECT_NAME}/
Alternatively (to rsync), use scp, which should already be available since the openssh-client package is installed:
scp <local files> root@${SERVER_IP}:${PROJECT_NAME}/
That way, no additional apk update/add is needed.
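Putting it together in the job above, the copy step could be one more line in the script section; a sketch reusing the same $ID_RSA and $SERVER_IP variables (the source path ./config/myfile.conf is just a placeholder):

script:
  - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP "mkdir -p $PROJECT_NAME"
  # copy the needed file (placeholder path) into the project directory on the server
  - scp -i $ID_RSA -o StrictHostKeyChecking=no ./config/myfile.conf root@$SERVER_IP:$PROJECT_NAME/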
We have a project on Bitbucket, jb_common, at bitbucket.org/company/jb_common.
I'm trying to run a container that requires a package from another private repo, bitbucket.org/company/jb_utils.
Dockerfile:
FROM golang
# create a working directory
WORKDIR /app
# add source code
COPY . .
### ADD ssh keys for bitbucket
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
RUN mkdir -p /root/.ssh && \
chmod 0700 /root/.ssh && \
echo "StrictHostKeyChecking no " > /root/.ssh/config && ls /root/.ssh/config
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
chmod 600 /root/.ssh/id_rsa && \
chmod 600 /root/.ssh/id_rsa.pub
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/" && cat /root/.gitconfig
RUN cat /root/.ssh/id_rsa
RUN export GOPRIVATE=bitbucket.org/company/
RUN echo "${ssh_prv_key}"
RUN go get bitbucket.org/company/jb_utils
RUN cp -R .env.example .env && ls -la /app
#RUN go mod download
RUN go build -o main .
RUN cp -R /app/main /main
### Delete ssh credentials
RUN rm -rf /root/.ssh/
ENTRYPOINT [ "/main" ]
and this bitbucket-pipelines.yml:
image: python:3.7.4-alpine3.10

pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - echo $SSH_PRV_KEY
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(echo $SSH_PRV_KEY)" --build-arg ssh_pub_key="$(echo $SSH_PUB_KEY)" .
            - docker push $IMAGE:$TAG
In the pipeline I build the image and push it to ECR.
I have already added repository variables on Bitbucket with the SSH private and public keys:
https://i.stack.imgur.com/URAsV.png
On my local machine the Docker image builds successfully using the command
docker build -t jb_common --build-arg ssh_prv_key="$(cat ~/docker_key/id_rsa)" --build-arg ssh_pub_key="$(cat ~/docker_key/id_rsa.pub)" .
https://i.stack.imgur.com/FZuNo.png
But on Bitbucket I get this error:
go: bitbucket.org/compaany/jb_utils#v0.1.2: reading https://api.bitbucket.org/2.0/repositories/company/jb_utils?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The user that owns these SSH keys has admin access to both private repos.
While debugging, I added a step to bitbucket-pipelines.yml (echo $SSH_PRV_KEY) to check that the variables are forwarded into the container on Bitbucket; here is the result:
https://i.stack.imgur.com/FjRof.png
RESOLVED!!!
Bitbucket Pipelines does not currently support line breaks in environment variables, so base64-encode the private key by running:
base64 -w 0 < private_key
Copy the output into the corresponding Bitbucket repository variable.
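For example, to sanity-check the encoded value locally before pasting it into the variable (a quick sketch using GNU coreutils; the file names are placeholders):

base64 -w 0 < ~/.ssh/id_rsa > id_rsa.b64          # single line, safe for the variable
base64 --decode < id_rsa.b64 | diff - ~/.ssh/id_rsa && echo "round-trip OK"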
Then I edited my bitbucket-pipelines.yml to:
image: python:3.7.4-alpine3.10

pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - apk add --update coreutils
            - mkdir -p ~/.ssh
            - (umask 077 ; echo $SSH_PRV_KEY | base64 --decode > ~/.ssh/id_rsa)
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" .
            - docker push $IMAGE:$TAG
I need to mount an S3 bucket in a Kubernetes pod, and I am using this guide to help me. It works perfectly; however, the pod gets stuck indefinitely in the "Terminating" status when I delete it, and I don't know why.
Here is the .yaml:
apiVersion: v1
kind: Pod
metadata:
  name: worker
spec:
  volumes:
    - name: mntdatas3fs
      emptyDir: {}
    - name: devfuse
      hostPath:
        path: /dev/fuse
  restartPolicy: Always
  containers:
    - image: nginx
      name: s3-test
      securityContext:
        privileged: true
      volumeMounts:
        - name: mntdatas3fs
          mountPath: /var/s3fs:shared
    - name: s3fs
      image: meain/s3-mounter
      imagePullPolicy: IfNotPresent
      securityContext:
        privileged: true
      env:
        - name: S3_REGION
          value: "us-east-1"
        - name: S3_BUCKET
          value: "xxxxxxx"
        - name: AWS_KEY
          value: "xxxxxx"
        - name: AWS_SECRET_KEY
          value: "xxxxxx"
      volumeMounts:
        - name: devfuse
          mountPath: /dev/fuse
        - name: mntdatas3fs
          mountPath: /var/s3fs:shared
Here is the Dockerfile of meain/s3-mounter used by the s3fs container:
FROM alpine:3.3
ENV MNT_POINT /var/s3fs
ARG S3FS_VERSION=v1.86
RUN apk --update --no-cache add fuse alpine-sdk automake autoconf libxml2-dev fuse-dev curl-dev git bash; \
    git clone https://github.com/s3fs-fuse/s3fs-fuse.git; \
    cd s3fs-fuse; \
    git checkout tags/${S3FS_VERSION}; \
    ./autogen.sh; \
    ./configure --prefix=/usr; \
    make; \
    make install; \
    make clean; \
    rm -rf /var/cache/apk/*; \
    apk del git automake autoconf;
RUN mkdir -p "$MNT_POINT"
COPY run.sh run.sh
CMD ./run.sh
Here is the run.sh copied into the container:
#!/bin/sh
set -e
echo "$AWS_KEY:$AWS_SECRET_KEY" > passwd && chmod 600 passwd
s3fs "$S3_BUCKET" "$MNT_POINT" -o passwd_file=passwd && tail -f /dev/null
I had this exact problem with a very similar setup. s3fs mounts the S3 bucket at /var/s3fs, and the mount has to be unmounted before the pod can terminate cleanly. This is done with umount /var/s3fs. See https://manpages.ubuntu.com/manpages/xenial/man1/s3fs.1.html.
So in your case, adding
lifecycle:
  preStop:
    exec:
      command: ["sh", "-c", "umount /var/s3fs"]
to the s3fs container should fix it.
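For clarity, a sketch of where this would sit in the pod spec above (abbreviated; only the lifecycle block is new, everything else stays as in the original spec):

containers:
  - name: s3fs
    image: meain/s3-mounter
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "umount /var/s3fs"]
    # securityContext, env and volumeMounts unchanged from the original spec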
I'm trying to finish my first GitHub Action with CI/CD and a Heroku deploy, and I get this error.
Error image:
This is my public repo.
https://github.com/jovicon/the_empire_strikes_back_challenge
Everything is up to date in the "development" branch.
This is my test job (full file):
Note: when I comment out the Pylint step, everything works fine.
test:
  name: Test Docker Image
  runs-on: ubuntu-latest
  needs: build
  steps:
    - name: Checkout master
      uses: actions/checkout@v1
    - name: Log in to GitHub Packages
      run: echo ${GITHUB_TOKEN} | docker login -u ${GITHUB_ACTOR} --password-stdin docker.pkg.github.com
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    - name: Pull image
      run: |
        docker pull ${{ env.IMAGE }}:latest || true
    - name: Build image
      run: |
        docker build \
          --cache-from ${{ env.IMAGE }}:latest \
          --tag ${{ env.IMAGE }}:latest \
          --file ./backend/Dockerfile.prod \
          "./backend"
    - name: Run container
      run: |
        docker run \
          -d \
          --name fastapi-tdd \
          -e PORT=8765 \
          -e ENVIRONMENT=dev \
          -e DATABASE_TEST_URL=sqlite://sqlite.db \
          -p 5003:8765 \
          ${{ env.IMAGE }}:latest
    - name: Pytest
      run: docker exec fastapi-tdd python -m pytest .
    - name: Pylint
      run: docker exec fastapi-tdd python -m pylint app/
    - name: Black
      run: docker exec fastapi-tdd python -m black . --check
    - name: isort
      run: docker exec fastapi-tdd /bin/sh -c "python -m isort ./*/*.py --check-only"
Here is my Dockerfile.prod too:
# pull official base image
FROM python:3.8.3-slim-buster
# create directory for the app user
RUN mkdir -p /home/app
# create the app user
RUN addgroup --system app && adduser --system --group app
# create the appropriate directories
ENV HOME=/home/app
ENV APP_HOME=/home/app/web
RUN mkdir $APP_HOME
WORKDIR $APP_HOME
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV ENVIRONMENT prod
ENV TESTING 0
# install system dependencies
RUN apt-get update \
&& apt-get -y install netcat gcc postgresql \
&& apt-get clean
# install python dependencies
RUN pip install --upgrade pip
COPY ./requirements.txt .
COPY ./dev-requirements.txt .
RUN pip install -r requirements.txt
RUN pip install -r dev-requirements.txt
# add app
COPY . .
RUN chmod 755 $HOME
# chown all the files to the app user
RUN chown -R app:app $APP_HOME
# change to the app user
USER app
# run gunicorn
CMD gunicorn --bind 0.0.0.0:$PORT app.main:app -k uvicorn.workers.UvicornWorker
You're setting the $HOME directory permissions to 755, but $HOME is still owned by the default (root) user; chown -R app:app $APP_HOME targets only $APP_HOME, which is just a subdirectory of $HOME.
As a consequence, the app user doesn't have write permission to $HOME, and pylint can't create the directory /home/app/.pylint.d.
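A minimal sketch of the fix, assuming the intent is for the app user to own its whole home directory rather than only $APP_HOME (replace the chown line in the Dockerfile):

# give the app user ownership of everything under its home, so tools
# like pylint can create /home/app/.pylint.d at runtime
RUN chown -R app:app $HOME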
I have a server in Iran and I want to use GitLab CI to open an SSH tunnel to my server.
But thanks to Google Cloud services, GitLab cannot reach Iranian IPs.
Is there any way to use a middle server outside Iran to open a proxy tunnel from GitLab to that proxy server, and from there to my Iranian server, and then use Docker to pull an image from the GitLab registry?
Note that Iranian servers can't connect to GitLab, and GitLab can't connect to Iranian servers either.
Thank you.
I have succeeded with the following code:
before_script:
  - apt-get update -y
  - apt-get install openssh-client curl -y

integration:
  stage: integration
  script:
    - mkdir ~/.ssh/
    - eval $(ssh-agent -s)
    - echo "$SSH_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh -fN -L 1029:localhost:1729 user@$HOST -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no 2>&1
    - ssh -fN -L 9013:localhost:9713 user@$HOST -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no 2>&1
This also worked for me:
deploy:
  environment:
    name: production
    url: http://example.com
  image: ubuntu:latest
  stage: deploy
  only:
    - master
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    ## Install rsync to create mirror between runner and host.
    - apt-get install -y rsync
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 --decode)
    - ssh -o StrictHostKeyChecking=no $SSH_USER@"$SSH_HOST" 'ls -la && ssh user@host "cd ~/api && docker-compose pull && docker-compose up -d"'
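An alternative to the nested ssh call, if the middle server allows it, is OpenSSH's ProxyJump (-J) option; a sketch (the final host name is a placeholder, and it assumes keys for both hops are loaded in the agent):

# jump through the middle server ($SSH_HOST) straight to the final host
ssh -o StrictHostKeyChecking=no -J $SSH_USER@$SSH_HOST user@final-host \
  "cd ~/api && docker-compose pull && docker-compose up -d"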
I also described everything that I did in Farsi here:
https://virgol.io/@aminkt