Copy file to remote server from GitLab CI - Docker

I need to copy one file from the project to the server where the application will be deployed. This must be done before deploying the application.
At the moment I can connect to the server and create the folder I need there. But how do I put the necessary file there?
This is the stage in which it should be done:
deploy_image:
  stage: deploy_image
  image: alpine:latest
  services:
    - docker:20.10.14-dind
  before_script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      docker login -u $REGISTER_USER -p $REGISTER_PASSWORD $REGISTER
  script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      mkdir $PROJECT_NAME || true
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      cd $PROJECT_NAME
    # here you need to somehow send the file to the server
  after_script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP docker logout
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP exit
  only:
    - main
Help me please.

Use rsync:
# you can install it in the before_script as well
apk update && apk add rsync
rsync -avx <local files> root@${SERVER_IP}:${PROJECT_NAME}/

Alternatively to rsync, use scp (which, since the openssh-client package is installed, comes with it):
scp <local files> root@${SERVER_IP}:${PROJECT_NAME}/
That way, no additional apk update/add is needed.
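Note that each ssh invocation in the job is a separate session, so the cd $PROJECT_NAME line has no effect on later commands; give scp the full remote path instead. A minimal sketch of the script section with scp wired in (config/app.env is a placeholder for whatever file you need to copy):
script:
  - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP "mkdir -p $PROJECT_NAME"
  # copy the file from the runner's checkout straight into the remote project folder
  - scp -i $ID_RSA -o StrictHostKeyChecking=no config/app.env root@$SERVER_IP:$PROJECT_NAME/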

Related

Docker, go get from bitbucket private repo

We have a project on Bitbucket, jb_common, at bitbucket.org/company/jb_common.
I'm trying to build a container that requires a package from another private repo, bitbucket.org/company/jb_utils.
Dockerfile:
FROM golang
# create a working directory
WORKDIR /app
# add source code
COPY . .
### ADD ssh keys for bitbucket
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    echo "StrictHostKeyChecking no " > /root/.ssh/config && ls /root/.ssh/config
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/" && cat /root/.gitconfig
RUN cat /root/.ssh/id_rsa
RUN export GOPRIVATE=bitbucket.org/company/
RUN echo "${ssh_prv_key}"
RUN go get bitbucket.org/company/jb_utils
RUN cp -R .env.example .env && ls -la /app
#RUN go mod download
RUN go build -o main .
RUN cp -R /app/main /main
### Delete ssh credentials
RUN rm -rf /root/.ssh/
ENTRYPOINT [ "/main" ]
and this bitbucket-pipelines.yml:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - echo $SSH_PRV_KEY
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(echo $SSH_PRV_KEY)" --build-arg ssh_pub_key="$(echo $SSH_PUB_KEY)" .
            - docker push $IMAGE:$TAG
In the pipeline I build the image and push it to ECR.
I have already added repository variables on Bitbucket with the SSH private and public keys:
https://i.stack.imgur.com/URAsV.png
On my local machine the Docker image builds successfully using:
docker build -t jb_common --build-arg ssh_prv_key="$(cat ~/docker_key/id_rsa)" --build-arg ssh_pub_key="$(cat ~/docker_key/id_rsa.pub)" .
https://i.stack.imgur.com/FZuNo.png
But on Bitbucket I get this error:
go: bitbucket.org/company/jb_utils@v0.1.2: reading https://api.bitbucket.org/2.0/repositories/company/jb_utils?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The user that owns these SSH keys has admin access to both private repos.
While debugging I added a step to bitbucket-pipelines.yml (echo $SSH_PRV_KEY) to confirm that the variable is forwarded into the container on Bitbucket; the result:
https://i.stack.imgur.com/FjRof.png
RESOLVED!!!
Pipelines does not currently support line breaks in environment variables, so base64-encode the private key by running:
base64 -w 0 < private_key
Copy the output into your Bitbucket repository variable.
Then I edited my bitbucket-pipelines.yml to:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - apk add --update coreutils
            - mkdir -p ~/.ssh
            - (umask 077 ; echo $SSH_PRV_KEY | base64 --decode > ~/.ssh/id_rsa)
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" .
            - docker push $IMAGE:$TAG
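To sanity-check the encode/decode round trip before touching the pipeline, a quick local test (assuming the key sits at ~/docker_key/id_rsa as in the local build above; base64 -w 0 is the GNU flag, which is why coreutils is installed in the step):
# encode exactly as you would for the repository variable
ENCODED=$(base64 -w 0 < ~/docker_key/id_rsa)
# decode the way the pipeline does and compare against the original
echo "$ENCODED" | base64 --decode | diff - ~/docker_key/id_rsa && echo "round trip OK"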

How do I set up GitLab CI/CD for a Tor hidden service, using Docker images?

I have been trying to do it already, but some errors appeared and I don't know what else to do.
I think the problem here is that rsync is trying to connect to the host directly on port 22, when it should be going through the Tor SOCKS proxy on port 9150.
The only job that ever fails is deploy:tor. Compilation is fine.
This is the error output:
1615506885 PERROR torsocks[15]: socks5 libc connect: Connection refused (in socks5_connect() at socks5.c:202)
ssh: connect to host [MASKED] port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(228) [sender=v3.13.0_rc2-264-g725ac7fb]
torrc:
## The directory for keeping all the keys/etc
DataDirectory /var/lib/tor
## Tor opens a socks proxy on port 9150
SocksPort 0.0.0.0:9150
Dockerfile:
FROM alpine:3.13
RUN apk update \
    && apk upgrade \
    && apk add --no-cache \
        tor --update-cache --repository http://dl-4.alpinelinux.org/alpine/edge/community/ --allow-untrusted \
        bash \
        torsocks \
        rsync \
        openssh-client \
        sshpass \
        ca-certificates \
    && update-ca-certificates \
    && rm -rf /var/cache/apk/*
EXPOSE 9150
ADD ./torrc /etc/tor/torrc
RUN chown -R tor /etc/tor
USER tor
CMD /usr/bin/tor -f /etc/tor/torrc
.gitlab-ci.yml:
variables:
  JEKYLL_ENV: production
  LC_ALL: C.UTF-8
stages:
  - compile
  - deploy
compile:pages:
  image: ruby
  stage: compile
  before_script:
    - gem install bundler
    - bundle install
  script:
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  cache:
    paths:
      - public/
  only:
    - master
deploy:tor:
  image: riservatoxyz/pages-hidden-service
  stage: deploy
  before_script:
    - echo "starting hidden service deploy job!"
  script:
    - torsocks rsync -zv --progress --rsh="/usr/bin/sshpass -p $TOR_SSH_PASSWORD ssh -o StrictHostKeyChecking=no -l $TOR_SSH_USERNAME" public/_site/* $TOR_SSH_HOST:www
    - echo "build sent to hidden service!"
  artifacts:
    paths:
      - public
  only:
    - master
Other Details
I am using sshpass because I have not been able to set up passwordless SSH login on my hidden service host.
It's not a big deal, though, since I'm using secret and masked variables.
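One thing worth checking (my observation, not from the original post): the torsocks PERROR means it could not reach its own SOCKS proxy, and GitLab CI replaces the image's CMD with the job script, so the tor daemon from the Dockerfile above may never be started inside the deploy job. A sketch of a before_script that starts tor in the job and waits for the SocksPort (assuming busybox nc and timeout from the alpine base):
before_script:
  # the runner overrides the image CMD, so start tor ourselves, in the background
  - /usr/bin/tor -f /etc/tor/torrc &
  # wait until the SOCKS port from torrc accepts connections
  - timeout 60 sh -c 'until nc -z 127.0.0.1 9150; do sleep 1; done'
  # torsocks defaults to port 9050; point it at the SocksPort configured above
  - export TORSOCKS_PORT=9150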

SSH into Azure web-app container running with non root user

I am running an Elasticsearch and Kibana service within a container using an Azure Web App for Containers service. I wanted to check SSH connectivity to this container using Azure's Web SSH console feature. I followed the Microsoft documentation for SSH into custom containers (https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#enable-ssh), which shows the example of running the container as the default root user.
My issue is that the Elasticsearch process does not run as root, so I had to make the sshd process run as the elasticsearch user. I was able to get the sshd process running, and it accepts the SSH connection from my host; however, the credentials I am setting in the Dockerfile (elasticsearch:Docker!) throw an Access Denied error. Any idea where I am going wrong here?
Dockerfile
FROM openjdk:jre-alpine
ARG ek_version=6.5.4
RUN apk add --quiet --no-progress --no-cache nodejs wget \
    && adduser -D elasticsearch \
    && apk add openssh \
    && echo "elasticsearch:Docker!" | chpasswd
# Copy the sshd_config file to the /etc/ssh/ directory
COPY startup.sh /home/elasticsearch/
RUN chmod +x /home/elasticsearch/startup.sh && \
    chown elasticsearch /home/elasticsearch/startup.sh
COPY sshd_config /home/elasticsearch/
USER elasticsearch
WORKDIR /home/elasticsearch
ENV ES_TMPDIR=/home/elasticsearch/elasticsearch.tmp ES_DATADIR=/home/elasticsearch/elasticsearch/data
RUN wget -q -O - https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ek_version}.tar.gz \
    | tar -zx \
    && mv elasticsearch-${ek_version} elasticsearch \
    && mkdir -p ${ES_TMPDIR} ${ES_DATADIR} \
    && wget -q -O - https://artifacts.elastic.co/downloads/kibana/kibana-oss-${ek_version}-linux-x86_64.tar.gz \
    | tar -zx \
    && mv kibana-${ek_version}-linux-x86_64 kibana \
    && rm -f kibana/node/bin/node kibana/node/bin/npm \
    && ln -s $(which node) kibana/node/bin/node \
    && ln -s $(which npm) kibana/node/bin/npm
EXPOSE 9200 5601 2222
ENTRYPOINT ["/home/elasticsearch/startup.sh"]
startup.sh script
#!/bin/sh
# Generating hostkey
ssh-keygen -f /home/elasticsearch/ssh_host_rsa_key -N '' -t rsa
# starting sshd process
echo "Starting SSHD"
/usr/sbin/sshd -f sshd_config
# Starting the ES stack
echo "Starting ES"
sh elasticsearch/bin/elasticsearch -E http.host=0.0.0.0 & kibana/bin/kibana --host 0.0.0.0
sshd_config file
Port 2222
HostKey /home/elasticsearch/ssh_host_rsa_key
PidFile /home/elasticsearch/sshd.pid
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
The error I am getting: (screenshot omitted)

Please check and verify that your Docker image supports SSH. It would appear that you have done everything correctly, so one of the final troubleshooting steps left at this point is to verify that your image supports SSH to begin with.
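One way to take Azure out of the equation (a suggestion; es-ssh-test is just a placeholder tag) is to run the image locally, publish the sshd port from sshd_config, and try the same credentials:
# build and run the image locally, publishing the sshd port
docker build -t es-ssh-test .
docker run -d -p 2222:2222 es-ssh-test
# try the credentials set in the Dockerfile (elasticsearch / Docker!)
ssh -p 2222 elasticsearch@localhost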

Unable to build a docker image in a Bitbucket pipeline

When I try to build an image for my application, an image that relies upon BuildKit, I receive the error: failed to dial gRPC: unable to upgrade to h2c, received 403
I can build standard Docker images, but anything that relies on BuildKit fails.
Specifically, the command that fails is:
docker build --ssh default --no-cache -t worker $BITBUCKET_CLONE_DIR/worker
My bitbucket-pipelines.yml is as follows; the first two docker build commands work and the images are generated, but the third, which relies on BuildKit, does not.
image: docker:stable
pipelines:
  default:
    - step:
        name: build
        size: 2x
        script:
          - docker build -t alpine-base $BITBUCKET_CLONE_DIR/supporting/alpine-base
          - docker build -t composer-xv:latest $BITBUCKET_CLONE_DIR/supporting/composer-xv
          - apk add openssh-client
          - eval `ssh-agent`
          - export DOCKER_BUILDKIT=1
          - docker build --ssh default --no-cache -t worker $BITBUCKET_CLONE_DIR/worker
          - docker images
        services:
          - docker
        caches:
          - docker
My Dockerfile is as follows:
# syntax=docker/dockerfile:1.0.0-experimental
FROM composer:1.7 as phpdep
COPY application/database/ database/
COPY application/composer.json composer.json
COPY application/composer.lock composer.lock
# Install PHP dependencies in 'vendor'
RUN --mount=type=ssh composer install \
    --ignore-platform-reqs \
    --no-dev \
    --no-interaction \
    --no-plugins \
    --no-scripts \
    --prefer-dist
#
# Final image build stage
#
FROM alpine-base:latest as final
ADD application /app/application
COPY --from=phpdep /app/vendor/ /app/application/vendor/
ADD entrypoint.sh /entrypoint.sh
RUN \
    apk update && \
    apk upgrade && \
    apk add \
        php7 php7-mysqli php7-mcrypt php7-gd \
        php7-curl php7-xml php7-bcmath php7-mbstring \
        php7-zip php7-bz2 ca-certificates php7-openssl php7-zlib \
        php7-bcmath php7-dom php7-json php7-phar php7-pdo_mysql php7-ctype \
        php7-session php7-fileinfo php7-xmlwriter php7-tokenizer php7-soap \
        php7-simplexml && \
    cd /app/application && \
    cp .env.example .env && \
    chown nobody:nobody /app/application/.env && \
    sed -i 's/;openssl.capath=/openssl.capath=\/etc\/ssl\/certs/' /etc/php7/php.ini && \
    sed -i 's/memory_limit = 128M/memory_limit = 1024M/' /etc/php7/php.ini && \
    apk del --purge curl wget && \
    mkdir -p /var/log/workers && \
    mkdir -p /run/php && \
    echo "export PS1='WORKER \h:\w\$ '" >> /etc/profile
COPY files/logrotate.d/ /etc/logrotate.d/
CMD ["/entrypoint.sh"]
Bitbucket Pipelines didn't support DOCKER_BUILDKIT at the time, see https://jira.atlassian.com/browse/BCLOUD-17590?focusedCommentId=3019597&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-3019597 ; they said they were waiting for https://github.com/moby/buildkit/pull/2723 to be fixed.
You could try again, though, as since July 2022 you have:
Announcing support for Docker BuildKit in Bitbucket Pipelines
(Jayant Gawali, Atlassian Team)
We are happy to announce that one of the top voted features for Bitbucket Pipelines, Docker BuildKit is now available. You can now build Docker images with the BuildKit utility.
With BuildKit you can take advantage of the various features it provides like:
Performance: BuildKit uses parallelism and caching internally to build images faster.
Secrets: Mount secrets and build images safely.
Cache: Mount caches to save re-downloading all external dependencies every time.
SSH: Mount SSH Keys to build images.
Configuring your bitbucket-pipelines.yaml
BuildKit is now available with the Docker Daemon service.
It is not enabled by default and can be enabled by setting the environment variable DOCKER_BUILDKIT=1 in the pipelines configuration.
pipelines:
  default:
    - step:
        script:
          - export DOCKER_BUILDKIT=1
          - docker build --secret id=mysecret,src=mysecret.txt .
        services:
          - docker
To learn more about how to set it up, please refer to the support documentation, and for information on Docker BuildKit, visit Docker Docs: Build images with BuildKit.
Please note:
Use multi-stage builds to utilise parallelism.
Caching is not shared across different builds and it’s limited to the build running on the same docker node where the build runs.
With BuildKit, secrets can be mounted securely as shown above.
For restrictions and limitations please refer to the restrictions section of our support documentation.
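Applied to the question's failing step, a minimal sketch (assuming the repository key is stored base64-encoded in a secured variable, here called SSH_KEY, a made-up name):
script:
  - export DOCKER_BUILDKIT=1
  - apk add openssh-client
  - eval `ssh-agent`
  # load the deploy key so that --ssh default can forward the agent into the build
  - echo "$SSH_KEY" | base64 -d | ssh-add -
  - docker build --ssh default --no-cache -t worker $BITBUCKET_CLONE_DIR/worker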

How do I use an SSH proxy to run a deploy job in GitLab CI?

I have a server in Iran and I want to use GitLab CI to open an SSH tunnel to my server.
But thanks to Google Cloud services, GitLab cannot see Iranian IPs.
Is there any way to use a middle server outside Iran to open a proxy tunnel from GitLab to my proxy server, and from that to my Iran server, then use Docker to pull an image from the GitLab registry?
Consider that Iran servers can't connect to GitLab, and GitLab cannot connect to Iran servers either.
Thank you
I succeeded with code like this:
before_script:
  - apt-get update -y
  - apt-get install openssh-client curl -y
integration:
  stage: integration
  script:
    - mkdir ~/.ssh/
    - eval $(ssh-agent -s)
    - echo "$SSH_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh -fN -L 1029:localhost:1729 user@$HOST -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no 2>&1
    - ssh -fN -L 9013:localhost:9713 user@$HOST -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no 2>&1
This also worked for me:
deploy:
  environment:
    name: production
    url: http://example.com
  image: ubuntu:latest
  stage: deploy
  only:
    - master
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    ## Install rsync to create mirror between runner and host.
    - apt-get install -y rsync
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 --decode)
    - ssh -o StrictHostKeyChecking=no $SSH_USER@"$SSH_HOST" 'ls -la && ssh user@host "cd ~/api && docker-compose pull && docker-compose up -d"'
I also described everything that I did in Farsi here:
https://virgol.io/#aminkt
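A variant of the chained ssh above worth knowing (a sketch; middle-host and iran-host are placeholder names): OpenSSH's -J/ProxyJump option authenticates both hops from the runner, so one command reaches the final server through the middle one and no private key has to live on the middle server:
# jump through the middle server outside Iran to reach the final host
ssh -o StrictHostKeyChecking=no -J user@middle-host user@iran-host \
  'cd ~/api && docker-compose pull && docker-compose up -d'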
