Jenkins SSH remote hosts: "Can't connect to server" (Docker)

I can reach the target host over SSH with both the password and the private key from a shell inside the Jenkins container. But when I configure SSH sites in Jenkins with the same host, user, and private key, I get the following error:
Docker logs:
2022-09-23 05:06:52.357+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Auth fail
2022-09-23 05:06:52.367+0000 [id=71] SEVERE o.j.h.p.SSHBuildWrapper$DescriptorImpl#doLoginCheck: Can't connect to server
docker-compose.yml:
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins/jenkins
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: fedora
      dockerfile: Dockerfile
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      - "MYSQL_ROOT_PASSWORD=PASSWORD"
    volumes:
      - "$PWD/db:/var/lib/mysql"
    networks:
      - net
networks:
  net:
Dockerfile:
FROM fedora
RUN yum update -y
RUN yum -y install unzip
RUN yum -y install openssh-server
# create the SSH user (the original mixed two usernames; unified here so the build succeeds)
RUN useradd madchabelo && \
    echo "madchabelo:Password" | chpasswd && \
    mkdir /home/madchabelo/.ssh && \
    chmod 700 /home/madchabelo/.ssh
COPY remote-ki.pub /home/madchabelo/.ssh/authorized_keys
RUN chown madchabelo:madchabelo -R /home/madchabelo/.ssh/ && \
    chmod 600 /home/madchabelo/.ssh/authorized_keys
RUN ssh-keygen -A
RUN yum -y install mysql
RUN curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" && \
    unzip awscliv2.zip && \
    ./aws/install
RUN yum -y install vim
CMD /usr/sbin/sshd -D
I tried with the IP address and got the same error.
Regards

When generating the private key, use the following command. On Ubuntu 20.04 and later, ssh-keygen defaults to a newer key format that the Jenkins SSH plugin's SSH library does not accept, so you need to force the PEM format:
ssh-keygen -t ecdsa -m PEM -f remote-key
For a more detailed explanation, see the link below:
https://community.jenkins.io/t/ssh-connection-auth-fail/4121/7
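As a quick check, a PEM key's first line should read "-----BEGIN EC PRIVATE KEY-----". A minimal sketch of regenerating the pair and rebuilding the remote host, assuming the compose layout above (where the public key is baked in as fedora/remote-ki.pub):
ssh-keygen -t ecdsa -m PEM -f remote-key -N ""   # writes remote-key and remote-key.pub
head -n 1 remote-key                             # expect: -----BEGIN EC PRIVATE KEY-----
cp remote-key.pub fedora/remote-ki.pub           # the Dockerfile copies this into authorized_keys
docker-compose build remote_host && docker-compose up -d
Then paste the contents of remote-key (the private key) into the Jenkins SSH site configuration.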

Related

docker: how to share ssh-keys between containers?

I have 4 containers configured as follows (docker-compose.yml):
version: '3'
networks:
  my-ntwk:
    ipam:
      config:
        - subnet: 172.20.0.0/24
services:
  f-app:
    image: f-app
    tty: true
    container_name: f-app
    hostname: f-app.info.my
    ports:
      - "22:22"
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.5
    extra_hosts:
      - "f-db.info.my:172.20.0.6"
      - "p-app.info.my:172.20.0.7"
      - "p-db.info.my:172.20.0.8"
    depends_on:
      - f-db
      - p-app
      - p-db
  f-db:
    image: f-db
    tty: true
    container_name: f-db
    hostname: f-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.6
  p-app:
    image: p-app
    tty: true
    container_name: p-app
    hostname: p-app.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.7
  p-db:
    image: p-db
    tty: true
    container_name: prod-db
    hostname: p-db.info.my
    networks:
      my-ntwk:
        ipv4_address: 172.20.0.8
Each image is built from the same Dockerfile:
FROM openjdk:8
RUN apt-get update && \
    apt-get install -y openssh-server
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
Now I want to be able to connect from f-app to any other machine without typing a password, i.e. when running: ssh myuser@f-db.info.my.
I know that I need to exchange ssh-keys between the servers (that's not a problem). My problem is how to do it with Docker containers, and when (at build time or at runtime)!
For passwordless SSH you need to configure SSH keys in the containers: the private key must be present in the source container, and the matching public key must be added to authorized_keys in the destination container.
Here is a working Dockerfile:
FROM openjdk:7
RUN apt-get update && \
    apt-get install -y openssh-server vim
EXPOSE 22
RUN useradd -rm -d /home/nf2/ -s /bin/bash -g root -G sudo -u 1001 ubuntu
USER ubuntu
WORKDIR /home/ubuntu
RUN mkdir -p /home/nf2/.ssh/ && \
    chmod 0700 /home/nf2/.ssh && \
    touch /home/nf2/.ssh/authorized_keys && \
    chmod 600 /home/nf2/.ssh/authorized_keys
COPY ssh-keys/ /keys/
RUN cat /keys/ssh_test.pub >> /home/nf2/.ssh/authorized_keys
USER root
ENTRYPOINT service ssh start && bash
docker-compose will remain the same; here is a test script you can try:
#!/bin/bash
set -e
echo "start docker-compose"
docker-compose up -d
echo "list of containers"
docker-compose ps
echo "starting ssh test from f-db to f-app"
docker exec -it f-db sh -c "ssh -i /keys/ssh_test ubuntu@f-app"
For further details, you can try the working example docker-container-ssh:
git clone git@github.com:Adiii717/docker-container-ssh.git
cd docker-container-ssh
./test.sh
You can replace the keys; they were used for testing purposes only.
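To swap in your own pair, something along these lines should work (a sketch; the ssh-keys/ssh_test filename is the one the Dockerfile and test script above expect):
ssh-keygen -t rsa -b 4096 -f ssh-keys/ssh_test -N ""   # new key pair, no passphrase
docker-compose build && ./test.sh                      # rebuild the images and re-run the ssh test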
If you are using docker-compose, an easy option is to forward the SSH agent, like this:
something:
  container_name: something
  volumes:
    - $SSH_AUTH_SOCK:/ssh-agent # Forward local machine SSH key to docker
  environment:
    SSH_AUTH_SOCK: /ssh-agent
For SSH agent forwarding on macOS hosts, do not mount the path in $SSH_AUTH_SOCK; mount /run/host-services/ssh-auth.sock instead.
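A sketch of the macOS variant (assuming Docker Desktop, which exposes the host's agent at that fixed path):
something:
  container_name: something
  volumes:
    - /run/host-services/ssh-auth.sock:/ssh-agent # Docker Desktop's host-side agent socket
  environment:
    SSH_AUTH_SOCK: /ssh-agent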
Alternatively, you can do it like this:
It's a harder problem if you need to use SSH at build time, for example if you're running git clone, or in my case pip and npm downloads from a private repository.
The solution I found is to add your keys using the --build-arg flag, and then use the experimental --squash flag (added in Docker 1.13) to merge the layers so that the keys are no longer visible after removal. Here's my solution:
Build command
$ docker build -t example \
    --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" \
    --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" \
    --squash .
Dockerfile
FROM openjdk:8
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        openssh-client \
        libmysqlclient-dev
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts
# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
EXPOSE 22
RUN useradd -s /bin/bash -p $(openssl passwd -1 myuser) -d /home/nf2/ -m myuser
ENTRYPOINT service ssh start && bash
If you're using Docker 1.13+ and/or have experimental features enabled, you can append --squash to the build command, which merges the layers, removing the SSH keys and hiding them from docker history.
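As a quick sanity check after the build, you can inspect the layer metadata; with --squash the intermediate layers that carried the key material are collapsed (a sketch; exact output varies by Docker version):
$ docker history --no-trunc example   # look for leaked key material in the layer commands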

New code changes exist in live container but are not reflected in the browser

I am using Docker with the open-source BI tool Apache Superset. I have added a new file, specifically a .geojson file, in the CountryMap directory. Now, when I rebuild using docker-compose up --build or make changes in the frontend, Docker does not pick up the changes, and I get a file-not-found error when trying to run a query. Yet when I look inside the container via docker exec -it container_id bash, the new file is there.
Dockerfile:
FROM python:3.6-jessie
RUN useradd --user-group --create-home --no-log-init --shell /bin/bash superset
# Configure environment
ENV LANG=C.UTF-8 \
    LC_ALL=C.UTF-8
RUN apt-get update -y
# Install dependencies to fix `curl https support error` and `delaying package configuration warning`
RUN apt-get install -y apt-transport-https apt-utils
# Install superset dependencies
# https://superset.incubator.apache.org/installation.html#os-dependencies
RUN apt-get install -y build-essential libssl-dev \
    libffi-dev python3-dev libsasl2-dev libldap2-dev libxi-dev
# Install extra useful tools for development
RUN apt-get install -y vim less postgresql-client redis-tools
# Install nodejs for custom build
# https://superset.incubator.apache.org/installation.html#making-your-own-build
# https://nodejs.org/en/download/package-manager/
RUN curl -sL https://deb.nodesource.com/setup_10.x | bash - \
    && apt-get install -y nodejs
WORKDIR /home/superset
COPY requirements.txt .
COPY requirements-dev.txt .
COPY contrib/docker/requirements-extra.txt .
RUN pip install --upgrade setuptools pip \
    && pip install -r requirements.txt -r requirements-dev.txt -r requirements-extra.txt \
    && rm -rf /root/.cache/pip
RUN pip install gevent
COPY --chown=superset:superset superset superset
ENV PATH=/home/superset/superset/bin:$PATH \
    PYTHONPATH=/home/superset/superset/:$PYTHONPATH
USER superset
RUN cd superset/assets \
    && npm ci \
    && npm run build \
    && rm -rf node_modules
COPY contrib/docker/docker-init.sh .
COPY contrib/docker/docker-entrypoint.sh /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
HEALTHCHECK CMD ["curl", "-f", "http://localhost:8088/health"]
EXPOSE 8088
docker-compose.yml:
version: '2'
services:
  redis:
    image: redis:3.2
    restart: unless-stopped
    ports:
      - "127.0.0.1:6379:6379"
    volumes:
      - redis:/data
  postgres:
    image: postgres:10
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_USER: superset
    ports:
      - "127.0.0.1:5432:5432"
    volumes:
      - postgres:/var/lib/postgresql/data
  superset:
    build:
      context: ../../
      dockerfile: contrib/docker/Dockerfile
    restart: unless-stopped
    environment:
      POSTGRES_DB: superset
      POSTGRES_USER: superset
      POSTGRES_PASSWORD: superset
      POSTGRES_HOST: postgres
      POSTGRES_PORT: 5432
      REDIS_HOST: redis
      REDIS_PORT: 6379
      # If using production, comment development volume below
      #SUPERSET_ENV: production
      SUPERSET_ENV: development
      # PYTHONUNBUFFERED: 1
    user: root:root
    ports:
      - 8088:8088
    depends_on:
      - postgres
      - redis
    volumes:
      # this is needed to communicate with the postgres and redis services
      - ./superset_config.py:/home/superset/superset/superset_config.py
      # this is needed for development, remove with SUPERSET_ENV=production
      - ../../superset:/home/superset/superset
volumes:
  postgres:
    external: false
  redis:
    external: false
Why do I get the file-not-found error?
Try using absolute paths in volumes:
volumes:
  - /home/me/my_project/superset_config.py:/home/superset/superset/superset_config.py
  - /home/me/my_project/superset:/home/superset/superset
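If you'd rather not hardcode the path, compose also expands environment variables; a sketch assuming you run docker-compose from the directory that contains superset_config.py:
volumes:
  - ${PWD}/superset_config.py:/home/superset/superset/superset_config.py
  - ${PWD}/superset:/home/superset/superset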
It is because docker-compose is using its cache: if the Dockerfile and the docker-compose.yml have not changed, it does not recreate the containers. To avoid this, use the following flag:
--force-recreate
    Recreate containers even if their configuration and image haven't changed.
For development purposes I like to use the following switch as well:
-V, --renew-anon-volumes
    Recreate anonymous volumes instead of retrieving data from the previous containers.
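Put together, a development rebuild then looks like this:
docker-compose up --build --force-recreate --renew-anon-volumes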

docker not found inside container despite the daemon socket being passed as a volume

Can anybody help me? This docker-compose file worked for me a few days ago, with the docker command available inside the container, but now it throws docker: not found inside the container.
The docker binary on the host is at /usr/local/bin/docker. The host is a Mac.
Any ideas? Thanks.
version: '3'
services:
  jenkins:
    container_name: jenkins
    image: jenkins
    build:
      context: jenkins
    # entrypoint: /var/jenkins_home/entrypoint
    ports:
      - "8080:8080"
    volumes:
      - $PWD/jenkins_home:/var/jenkins_home
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - AWS_ACCESS_KEY_ID=xxxxx
      - AWS_SECRET_ACCESS_KEY=xxxxx
    networks:
      - net
  remote_host:
    container_name: remote-host
    image: remote-host
    build:
      context: centos
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - net
  db_host:
    container_name: db
    image: mysql:5.7
    environment:
      - MYSQL_ROOT_PASSWORD=1234
    networks:
      - net
networks:
  net:
The Dockerfile for the remote_host service is the following:
# assumed base image (the FROM line is missing above; the compose build context is "centos")
FROM centos
RUN yum install -y openssh-server
RUN useradd remote_user && \
    echo "1234" | passwd remote_user --stdin && \
    mkdir /home/remote_user/.ssh && \
    chmod 700 /home/remote_user/.ssh
COPY remote-key.pub /home/remote_user/.ssh/authorized_keys
RUN chown remote_user:remote_user -R /home/remote_user && \
    chmod 600 /home/remote_user/.ssh/authorized_keys
RUN /usr/sbin/sshd-keygen > /dev/null 2>&1
RUN yum install -y mysql
RUN yum install -y epel-release && \
    yum install -y python-pip && \
    pip install --upgrade pip && \
    pip install awscli
# CMD /usr/sbin/sshd-keygen -D
CMD tail -f /dev/null
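Note that mounting /var/run/docker.sock only shares the daemon's socket; it does not put a docker client binary inside the container, so the image has to install one itself. A minimal sketch for this centos-based image (the static-binary URL and version are illustrative, not from the original post):
# install just the docker CLI from the static release archive
RUN curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz \
    | tar xzf - --strip-components=1 -C /usr/local/bin docker/docker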

How to define a docker cli service in docker-compose

I have a docker-compose file that runs a few services.
services:
  cli:
    build:
      context: .
      dockerfile: docker/cli/Dockerfile
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
  drupal:
    container_name: drupal
    build:
      context: .
      dockerfile: docker/DockerFile.drupal
      args:
        DOC_ROOT: /var/www/html/drupal8site
    ports:
      - 80:80
    volumes:
      - ./drupal8site:/var/www/html/drupal8site
    restart: always
    environment:
      APACHE_DOCUMENT_ROOT: /var/www/html/drupal8site/web
  mysql:
    image: mysql:5.7
    ports:
      - 3306:3306
    volumes:
      - ./mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root
I would like to add another service: a container in which I can run CLI commands (composer, drush for Drupal, php, etc.).
The following Dockerfile is how I initially defined the cli service, but the container stops right after it starts. How do I define it so that it is part of my docker-compose setup, shares my mounted volume, and lets me connect interactively and run CLI commands?
FROM php:7.2-cli
# various programs
RUN apt-get update \
    && apt-get install vim --assume-yes \
    && apt-get install git --assume-yes \
    && apt-get install mysql-client --assume-yes
CMD ["bash"]
Thanks,
Yaron
If you want to run automated scripts on docker images, that is really a job for a CI pipeline; you could use CloudFoundry or OpenStack for that. But there are several other questions in this post:
1.) How can I share my mounted volume?
You can pass a volume with the -v option when starting a container, e.g.:
docker run -it -d -v $(pwd)/localFolder:/exposedFolderFromDocker mydockerhub/myawesomeimage
2.) Can I interactively connect to it and run CLI commands on it?
docker exec -it docker_cli_1 bash
I recommend implementing the features of a docker image in that image's own Dockerfile, for example copying and running a prepared shell script:
# your Dockerfile
FROM php:7.2-cli
# various programs
RUN apt-get update \
    && apt-get install vim --assume-yes \
    && apt-get install git --assume-yes \
    && apt-get install mysql-client --assume-yes
# individual changes
COPY your_script.sh /
RUN chown root:root /your_script.sh && \
    chmod 0755 /your_script.sh
# a folder to expose
VOLUME /exposedFolderFromDocker
# note: only the last CMD in a Dockerfile takes effect
CMD ["/your_script.sh"]

Problem creating docker-compose and Dockerfile; causes "Error response from daemon"

This is my first Dockerfile project with docker-compose. In my project I am trying to create a docker-compose file.
node/Dockerfile
FROM centos:latest
MAINTAINER braulio@braulioti.com.br
LABEL Description="Site Brau.io - API NodeJS"
EXPOSE 3000
RUN yum update -y \
    && yum install -y java-1.8.0-openjdk \
    && yum update -y \
    && yum install -y epel-release \
    && yum install -y nodejs \
    && yum install -y psmisc \
    && npm install -g forever \
    && npm install -g typescript
RUN rm -rf /etc/localtime && ln -s /usr/share/zoneinfo/Brazil/East /etc/localtime
RUN mkdir -p /app
VOLUME ["/app"]
docker-compose.yml
version: '3'
services:
  node:
    build: node
    image: docker_node
    ports:
      - "8082:3000"
    container_name: "brau_io_api"
    volumes:
      - /app/brau_io/api:/app/
    command: /bin/bash
This project results in:
Error response from daemon: Container 65cecc8bdc923c3f596dba91fd059b8268fd390737391d4d91afa7d34325bea1 is not running
In docker-compose you create services, and you can link them. For example:
docker-compose.yml
version: '3'
services:
  my_app:
    build: .
    image: my_app:1.0.0
    container_name: my_app_container
    command: ... # you can run a bash file or a command
I created a docker-compose file with a my_app service that builds the my_app image.
You can rewrite it for your node container.
Reference
I enabled the tty option in my docker-compose.yml file and it works like a charm (see Reference).
This is my final docker-compose.yml file:
docker-compose.yml
version: '3'
services:
  node:
    build: node
    image: docker_node
    ports:
      - "8082:3000"
    container_name: "brau_io_api"
    volumes:
      - /app/brau_io/api:/app/
    command: /bin/bash
    tty: true
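With tty: true the container keeps running, so you can then open a shell in it:
docker exec -it brau_io_api /bin/bash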
