How to use an SSH proxy to run a deploy job in GitLab CI? - docker

I have a server in Iran and I want to use GitLab CI to open an SSH tunnel to my server.
But thanks to Google Cloud services, GitLab cannot see Iranian IPs.
Is there any way to use an intermediate server outside Iran to open a proxy tunnel from GitLab to that proxy server, and from there to my Iran server, and then use Docker to pull an image from the GitLab registry?
Consider that Iranian servers can't connect to GitLab, and GitLab cannot connect to Iranian servers either.
Thank you

I succeeded with the following code:
before_script:
  - apt-get update -y
  - apt-get install openssh-client curl -y

integration:
  stage: integration
  script:
    - mkdir -p ~/.ssh/
    - eval $(ssh-agent -s)
    - echo "$SSH_KEY" | tr -d '\r' > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - ssh -fN -L 1029:localhost:1729 user@$HOST -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no 2>&1
    - ssh -fN -L 9013:localhost:9713 user@$HOST -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no 2>&1
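To cover the middle-server part of the question: OpenSSH can chain hops with ProxyJump (-J, OpenSSH 7.3+), so the runner never has to reach the Iran host directly. A minimal sketch, where relay.example.com and $IRAN_HOST are hypothetical placeholders and the same key is assumed to work for both hops:

  script:
    # Jump through a relay outside Iran, then open a remote forward so the
    # Iran host can reach the GitLab registry back through the tunnel
    # (untested sketch; the Iran host would still need to resolve
    # registry.gitlab.com to the forwarded port, e.g. via /etc/hosts).
    - ssh -fN -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -J relayuser@relay.example.com -R 8443:registry.gitlab.com:443 user@$IRAN_HOST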

This also worked for me:
deploy:
  environment:
    name: production
    url: http://example.com
  image: ubuntu:latest
  stage: deploy
  only:
    - master
  before_script:
    - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'
    ## Install rsync to create mirror between runner and host.
    - apt-get install -y rsync
    - mkdir -p ~/.ssh
    - chmod 700 ~/.ssh
    - eval $(ssh-agent -s)
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - echo "$SSH_KNOWN_HOSTS" > ~/.ssh/known_hosts
    - chmod 644 ~/.ssh/known_hosts
  script:
    - ssh-add <(echo "$SSH_PRIVATE_KEY" | base64 --decode)
    - ssh -o StrictHostKeyChecking=no $SSH_USER@"$SSH_HOST" 'ls -la && ssh user@host "cd ~/api && docker-compose pull && docker-compose up -d"'
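Note that $SSH_PRIVATE_KEY above is expected to hold a base64-encoded key: the process substitution decodes it on the fly for ssh-add, which sidesteps CI variables mangling line breaks. One way to prepare the variable locally, assuming GNU base64:

base64 -w 0 < ~/.ssh/id_rsa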
I also described everything I did in Farsi here:
https://virgol.io/#aminkt

Related

Copy file to remote server from gitlab ci

I need to copy one file from the project to the server where the application will be deployed. This must be done before deploying the application.
At the moment, I can connect to the server and create the folder I need there, but how do I put the necessary file there?
This is the stage in which it should be done:
deploy_image:
  stage: deploy_image
  image: alpine:latest
  services:
    - docker:20.10.14-dind
  before_script:
    - chmod og= $ID_RSA
    - apk update && apk add openssh-client
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      docker login -u $REGISTER_USER -p $REGISTER_PASSWORD $REGISTER
  script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      mkdir $PROJECT_NAME || true
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP \
      cd $PROJECT_NAME
    # here you need to somehow send the file to the server
  after_script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP docker logout
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP exit
  only:
    - main
Help me please.
Use rsync:
# you can install it in the before_script as well
apk update && apk add rsync
rsync -avx <local files> root@${SERVER_IP}:${PROJECT_NAME}/
Alternatively (to rsync), use scp (which, since the ssh package is installed, should come with it):
scp <local files> root@${SERVER_IP}:${PROJECT_NAME}/
That way, no additional apk update/add is needed.
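One caveat with the question's job: each "- ssh ..." line opens a fresh SSH session, so the "cd $PROJECT_NAME" step has no effect on later commands. A sketch of the script section with the copy step filled in, reusing the question's variables (docker-compose.yml is just a stand-in for whichever file you need to send):

  script:
    - ssh -i $ID_RSA -o StrictHostKeyChecking=no root@$SERVER_IP "mkdir -p $PROJECT_NAME"
    # scp accepts the same -i/-o options as ssh
    - scp -i $ID_RSA -o StrictHostKeyChecking=no docker-compose.yml root@$SERVER_IP:$PROJECT_NAME/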

Docker, go get from bitbucket private repo

We have a project on Bitbucket, jb_common, at bitbucket.org/company/jb_common.
I'm trying to build a container that requires a package from another private repo, bitbucket.org/company/jb_utils.
Dockerfile:
FROM golang
# create a working directory
WORKDIR /app
# add source code
COPY . .
### ADD ssh keys for bitbucket
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && apt-get install -y ca-certificates git-core ssh
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    echo "StrictHostKeyChecking no " > /root/.ssh/config && ls /root/.ssh/config
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN git config --global url."git@bitbucket.org:".insteadOf "https://bitbucket.org/" && cat /root/.gitconfig
# debug only: prints the private key into the build log; remove once things work
RUN cat /root/.ssh/id_rsa
# ENV, not RUN export: an exported variable would not survive into later RUN steps
ENV GOPRIVATE=bitbucket.org/company/
# debug only
RUN echo "${ssh_prv_key}"
RUN go get bitbucket.org/company/jb_utils
RUN cp -R .env.example .env && ls -la /app
#RUN go mod download
RUN go build -o main .
RUN cp -R /app/main /main
### Delete ssh credentials
RUN rm -rf /root/.ssh/
ENTRYPOINT [ "/main" ]
and this bitbucket-pipelines.yml:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - echo $SSH_PRV_KEY
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(echo $SSH_PRV_KEY)" --build-arg ssh_pub_key="$(echo $SSH_PUB_KEY)" .
            - docker push $IMAGE:$TAG
In the pipeline I build the image and push it to ECR.
I have already added repository variables on Bitbucket with the SSH private and public keys:
https://i.stack.imgur.com/URAsV.png
On my local machine the Docker image builds successfully using:
docker build -t jb_common --build-arg ssh_prv_key="$(cat ~/docker_key/id_rsa)" --build-arg ssh_pub_key="$(cat ~/docker_key/id_rsa.pub)" .
https://i.stack.imgur.com/FZuNo.png
But on Bitbucket I get this error:
go: bitbucket.org/company/jb_utils@v0.1.2: reading https://api.bitbucket.org/2.0/repositories/company/jb_utils?fields=scm: 403 Forbidden
server response: Access denied. You must have write or admin access.
The user these SSH keys belong to has admin access on both private repos.
While debugging, I added a step to bitbucket-pipelines.yml (echo $SSH_PRV_KEY) to check that the variables are forwarded into the container on Bitbucket; the result:
https://i.stack.imgur.com/FjRof.png
RESOLVED!!!
Bitbucket Pipelines does not currently support line breaks in environment variables, so base64-encode the private key by running:
base64 -w 0 < private_key
Copy the output into a Bitbucket repository variable.
Then edit bitbucket-pipelines.yml to:
image: python:3.7.4-alpine3.10
pipelines:
  branches:
    master:
      - step:
          services:
            - docker
          caches:
            - pip
          script:
            - apk add --update coreutils
            - mkdir -p ~/.ssh
            - (umask 077 ; echo $SSH_PRV_KEY | base64 --decode > ~/.ssh/id_rsa)
            - pip3 install awscli
            - IMAGE="$AWS_IMAGE_PATH/jb_common"
            - TAG=1.0.${BITBUCKET_BUILD_NUMBER}
            - aws ecr get-login-password --region $AWS_DEFAULT_REGION | docker login --username AWS --password-stdin $AWS_IMAGE_PATH
            - aws ecr list-images --repository-name "jb_common" --region $AWS_DEFAULT_REGION
            - docker build -t $IMAGE:$TAG --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" .
            - docker push $IMAGE:$TAG

How do I setup GitLab CI/CD for a tor hidden service, using Docker images?

I have been trying to do this, but some errors appeared and I don't know what else to do.
I think the problem is that rsync is trying to connect to the host on port 22, when it should go through the Tor SOCKS proxy on port 9150.
The only job that ever fails is deploy:tor; compilation is fine.
This is the error output:
1615506885 PERROR torsocks[15]: socks5 libc connect: Connection refused (in socks5_connect() at socks5.c:202)
ssh: connect to host [MASKED] port 22: Connection refused
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: unexplained error (code 255) at io.c(228) [sender=v3.13.0_rc2-264-g725ac7fb]
torrc:
## The directory for keeping all the keys/etc
DataDirectory /var/lib/tor
## Tor opens a socks proxy on port 9150
SocksPort 0.0.0.0:9150
Dockerfile:
FROM alpine:3.13
RUN apk update \
    && apk upgrade \
    && apk add --no-cache \
        tor --update-cache --repository http://dl-4.alpinelinux.org/alpine/edge/community/ --allow-untrusted \
        bash \
        torsocks \
        rsync \
        openssh-client \
        sshpass \
        ca-certificates \
    && update-ca-certificates \
    && rm -rf /var/cache/apk/*
EXPOSE 9150
ADD ./torrc /etc/tor/torrc
RUN chown -R tor /etc/tor
USER tor
CMD /usr/bin/tor -f /etc/tor/torrc
.gitlab-ci.yml:
variables:
  JEKYLL_ENV: production
  LC_ALL: C.UTF-8

stages:
  - compile
  - deploy

compile:pages:
  image: ruby
  stage: compile
  before_script:
    - gem install bundler
    - bundle install
  script:
    - bundle exec jekyll build -d public
  artifacts:
    paths:
      - public
  cache:
    paths:
      - public/
  only:
    - master

deploy:tor:
  image: riservatoxyz/pages-hidden-service
  stage: deploy
  before_script:
    - echo "starting hidden service deploy job!"
  script:
    - torsocks rsync -zv --progress --rsh="/usr/bin/sshpass -p $TOR_SSH_PASSWORD ssh -o StrictHostKeyChecking=no -l $TOR_SSH_USERNAME" public/_site/* $TOR_SSH_HOST:www
    - echo "build sent to hidden service!"
  artifacts:
    paths:
      - public
  only:
    - master
Other details:
I am using sshpass because I was unable to set up passwordless SSH login on my hidden-service host.
That's not a big deal, though, since I'm using secret and masked variables.
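One thing that fits the socks5 "Connection refused" error: GitLab runners replace the image's CMD with the job script, so the tor process from the Dockerfile's CMD never starts inside the deploy container, and torsocks finds nothing listening on 9150. A hedged sketch of starting tor within the job itself (the sleep is a crude stand-in for properly waiting on bootstrap):

deploy:tor:
  image: riservatoxyz/pages-hidden-service
  stage: deploy
  before_script:
    # start the SOCKS proxy the image's CMD would have started
    - tor -f /etc/tor/torrc &
    # give tor time to bootstrap before torsocks needs port 9150
    - sleep 15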

SSH into Azure web-app container running with non root user

I am running an Elasticsearch and Kibana service within a container using Azure's Web App for Containers service. I wanted to check SSH connectivity for this container using Azure's web SSH console feature. I followed the Microsoft documentation for SSH into custom containers, https://learn.microsoft.com/en-us/azure/app-service/configure-custom-container?pivots=container-linux#enable-ssh, which shows an example of running the container as the default root user.
My issue is that the Elasticsearch process does not run as root, so I had to make the sshd process run as the elasticsearch user. I was able to get sshd running, and it accepts the SSH connection from my host; however, the credentials I set in the Dockerfile (elasticsearch:Docker!) throw an Access Denied error. Any idea where I am going wrong here?
Dockerfile
FROM openjdk:jre-alpine
ARG ek_version=6.5.4
RUN apk add --quiet --no-progress --no-cache nodejs wget \
    && adduser -D elasticsearch \
    && apk add openssh \
    && echo "elasticsearch:Docker!" | chpasswd
# Copy the startup script and sshd_config into the user's home directory
COPY startup.sh /home/elasticsearch/
RUN chmod +x /home/elasticsearch/startup.sh && \
    chown elasticsearch /home/elasticsearch/startup.sh
COPY sshd_config /home/elasticsearch/
USER elasticsearch
WORKDIR /home/elasticsearch
ENV ES_TMPDIR=/home/elasticsearch/elasticsearch.tmp ES_DATADIR=/home/elasticsearch/elasticsearch/data
RUN wget -q -O - https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-oss-${ek_version}.tar.gz \
    | tar -zx \
    && mv elasticsearch-${ek_version} elasticsearch \
    && mkdir -p ${ES_TMPDIR} ${ES_DATADIR} \
    && wget -q -O - https://artifacts.elastic.co/downloads/kibana/kibana-oss-${ek_version}-linux-x86_64.tar.gz \
    | tar -zx \
    && mv kibana-${ek_version}-linux-x86_64 kibana \
    && rm -f kibana/node/bin/node kibana/node/bin/npm \
    && ln -s $(which node) kibana/node/bin/node \
    && ln -s $(which npm) kibana/node/bin/npm
EXPOSE 9200 5601 2222
ENTRYPOINT ["/home/elasticsearch/startup.sh"]
startup.sh script
#!/bin/sh
# Generate the host key
ssh-keygen -f /home/elasticsearch/ssh_host_rsa_key -N '' -t rsa
# Start the sshd process
echo "Starting SSHD"
/usr/sbin/sshd -f sshd_config
# Start the ES stack (Elasticsearch in the background, Kibana in the foreground)
echo "Starting ES"
sh elasticsearch/bin/elasticsearch -E http.host=0.0.0.0 & kibana/bin/kibana --host 0.0.0.0
sshd_config file
Port 2222
HostKey /home/elasticsearch/ssh_host_rsa_key
PidFile /home/elasticsearch/sshd.pid
ListenAddress 0.0.0.0
LoginGraceTime 180
X11Forwarding yes
Ciphers aes128-cbc,3des-cbc,aes256-cbc,aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-sha1-96
StrictModes yes
SyslogFacility DAEMON
PasswordAuthentication yes
PermitEmptyPasswords no
PermitRootLogin yes
Subsystem sftp internal-sftp
The error I am getting: (screenshot not included)
Please check and verify that your Docker image supports SSH. It would appear that you have done everything correctly, so one of the final troubleshooting steps left at this point is to verify that your image supports SSH to begin with.
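Two things worth checking here. First, when sshd itself runs as a non-root user it cannot read /etc/shadow, so password authentication for any account fails with access denied even when the password is correct. Second, the Azure docs pair the Docker! password specifically with the root account for their web SSH console. Running the daemon in the foreground with debug output usually shows the exact reason a login is rejected; a diagnostic sketch using the paths from the question:

/usr/sbin/sshd -D -d -e -f /home/elasticsearch/sshd_config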

docker : how to share ssh-keys between containers?

I've 4 containers configured like the following (docker-compose.yml):
version: '3'

networks:
  my-ntwk:
    ipam:
      config:
        - subnet: 172.20.0.0/24

services:
  f-app:
    image: f-app
    tty: true
    container_name: f-app
    hostname: f-app.info.my
    ports:
      - "22:22"
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.5
    extra_hosts:
      - "f-db.info.my:172.20.0.6"
      - "p-app.info.my:172.20.0.7"
      - "p-db.info.my:172.20.0.8"
    depends_on:
      - f-db
      - p-app
      - p-db
  f-db:
    image: f-db
    tty: true
    container_name: f-db
    hostname: f-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.6
  p-app:
    image: p-app
    tty: true
    container_name: p-app
    hostname: p-app.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.7
  p-db:
    image: p-db
    tty: true
    container_name: prod-db
    hostname: p-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.8
Each image is built from the same Dockerfile:
FROM openjdk:8
RUN apt-get update && \
    apt-get install -y openssh-server
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
Now I want to be able to connect from f-app to any other machine without typing a password, i.e. by running ssh myuser@f-db.info.my.
I know that I need to exchange SSH keys between the servers (that's not a problem). My problem is how to do it with Docker containers, and when (at build time or at runtime)!
For passwordless SSH you need a user and SSH keys configured in the containers: the source container needs the private key, and the destination container needs the corresponding public key in its authorized_keys file.
Here is the working Dockerfile
FROM openjdk:7
RUN apt-get update && \
    apt-get install -y openssh-server vim
EXPOSE 22
RUN useradd -rm -d /home/nf2/ -s /bin/bash -g root -G sudo -u 1001 ubuntu
USER ubuntu
WORKDIR /home/ubuntu
RUN mkdir -p /home/nf2/.ssh/ && \
    chmod 0700 /home/nf2/.ssh && \
    touch /home/nf2/.ssh/authorized_keys && \
    chmod 600 /home/nf2/.ssh/authorized_keys
COPY ssh-keys/ /keys/
RUN cat /keys/ssh_test.pub >> /home/nf2/.ssh/authorized_keys
USER root
ENTRYPOINT service ssh start && bash
The docker-compose file remains the same; here is a test script you can try:
#!/bin/bash
set -e
echo "start docker-compose"
docker-compose up -d
echo "list of containers"
docker-compose ps
echo "starting ssh test from f-db to f-app"
docker exec -it f-db sh -c "ssh -i /keys/ssh_test ubuntu@f-app"
For further detail, you can try the above working example, docker-container-ssh:
git clone git@github.com:Adiii717/docker-container-ssh.git
cd docker-container-ssh
./test.sh
You can replace the keys; they were used for testing purposes only.
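If you want to regenerate the test keypair that the Dockerfile copies into /keys/, something like this should match the repo's layout (ssh_test is the filename the test script expects):

ssh-keygen -t rsa -b 4096 -f ssh-keys/ssh_test -N ""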
If you are using docker-compose, an easy choice is to forward the SSH agent, like this:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # forward the local machine's SSH agent into the container
  environment:
    SSH_AUTH_SOCK: /ssh-agent
For SSH agent forwarding on macOS hosts, instead of mounting the path in $SSH_AUTH_SOCK you have to mount /run/host-services/ssh-auth.sock.
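In compose terms, that macOS variant would look roughly like this (the socket path is provided by Docker Desktop):

something:
  container_name: something
  volumes:
    - /run/host-services/ssh-auth.sock:/ssh-agent # Docker Desktop's host agent socket
  environment:
    SSH_AUTH_SOCK: /ssh-agent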
Alternatively, it's a harder problem if you need to use SSH at build time, for example if you're using git clone, or in my case pip and npm, to download from a private repository.
The solution I found is to add your keys using the --build-arg flag. Then you can use the experimental --squash flag (added in 1.13) to merge the layers so that the keys are no longer available after removal. Here's my solution:
Build command
$ docker build -t example --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" --squash .
Dockerfile
FROM openjdk:8
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
RUN apt-get update && \
    apt-get install -y openssh-server && \
    apt-get install -y openssh-client
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
If you're using Docker 1.13+ and/or have experimental features enabled, you can append --squash to the build command, which merges the layers, removing the SSH keys and hiding them from docker history.
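A newer alternative to the --build-arg/--squash approach (not what the answer above used, but worth knowing): BuildKit can mount the host's SSH agent for just the RUN steps that need it, so the key never lands in a layer at all. A sketch, assuming Docker 18.09+ and an agent with the key loaded:

# syntax=docker/dockerfile:1
FROM openjdk:8
RUN apt-get update && apt-get install -y git openssh-client
RUN mkdir -p -m 0700 /root/.ssh && ssh-keyscan github.com >> /root/.ssh/known_hosts
# the agent socket is only available during this RUN step
RUN --mount=type=ssh git clone git@github.com:company/private-repo.git

Built with (company/private-repo is a placeholder):

DOCKER_BUILDKIT=1 docker build --ssh default -t example .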
