sshpass - scp error - lost connection

Any idea why I'm getting this? I can successfully ping the server from my local machine, which sshpass shows valid output, and I'm using Cygwin64.
$ sshpass -p loriK0ba scp SSUA-HG.war SSUA-RCd.war loki12:/tmp/ARUNKS
lost connection
To install sshpass, I followed the steps below; afterwards, -V shows valid output:
# IMPORTANT: You need to have sshpass utility installed on your local machine.
# ============================================================================
# HOW to get sshpass:
# STEPS: Run the following commands in Cygwin console window - This is a one time action.
# a. cd /cygdrive/c
# b. wget http://downloads.sourceforge.net/project/sshpass/sshpass/1.05/sshpass-1.05.tar.gz
# c. tar -xvzpf sshpass-1.05.tar.gz
# d. cd sshpass-1.05
# e. sh configure
# f. make install
# g. which sshpass
# ============================================================================
sh configure; sleep 5; make install; sleep 5; which sshpass
Version of sshpass is:
$ sshpass -V
sshpass 1.05 (C) 2006-2011 Lingnu Open Source Consulting Ltd.
This program is free software, and can be distributed under the terms of the GPL
See the COPYING file for more information.
PS: When I run sshpass with the following command, it doesn't error out, but it also doesn't create the giga.txt file on the target server (so it isn't reaching the server / doing anything). The same user (case sensitive) exists on both the local box (where Cygwin is running) and the target machine.
sshpass -p loriK0ba ssh -q loki12 'rm -rf /tmp/ARUNKS/* 2>/dev/null; mkdir /tmp/ARUNKS 2>/dev/null'
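One way to surface the underlying error (a debugging sketch using standard scp/ssh options, not part of the original setup) is to re-run the copy verbosely; a first-time host-key prompt is a common cause of "lost connection", since sshpass only answers the password prompt:
# Verbose output shows the real ssh error hidden behind "lost connection"
sshpass -p loriK0ba scp -v SSUA-HG.war SSUA-RCd.war loki12:/tmp/ARUNKS
# If the failure is an unaccepted host key, accepting it non-interactively is one
# workaround (trade-off: weaker host verification)
sshpass -p loriK0ba scp -o StrictHostKeyChecking=no SSUA-HG.war SSUA-RCd.war loki12:/tmp/ARUNKS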

Related

SSH keys for Docker executor

I have created an image where I run some tasks.
I want to be able to push some files to a remote server that runs Windows Server 2022.
The gitlab-runner runs on an Ubuntu machine.
I managed to do that using shell executors. But now I want to do the same inside a docker container.
Using the following guide
https://docs.gitlab.com/ee/ci/ssh_keys/#ssh-keys-when-using-the-docker-executor
I don't understand under which user I should create the keys.
In a shell executor case I used gitlab-runner user in which I created a pair of keys. I added the public key to the server that I want to push files to and it worked.
For the Docker executor, I added the same private key into a GitLab CI/CD variable, as the guide suggests.
Then inside the job I added the following:
before_script:
- 'command -v ssh-agent >/dev/null || ( apt-get update -y && apt-get install openssh-client -y )'
- eval $(ssh-agent -s)
- echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
- mkdir -p ~/.ssh
- chmod 700 ~/.ssh
script:
- scp -P <port> myfile.txt username@ip:remote_path
But the job fails with errors
Host key verification failed.
lost connection
Should I use the same private key from gitlab-runner user?
PS: The echo "$SSH_PRIVATE_KEY" works. I can see the key I added in the gitlab CI/CD variable.
I used something similar in my CI process and it works like a charm. I recall I used a base64-encoded runner key due to some formatting errors:
- echo $GITLAB_RUNNER_SSH_KEY | base64 -d > $HOME/.ssh/runner_key
- chmod -R 600 ~/.ssh
- eval $(ssh-agent -s)
- ssh-add $HOME/.ssh/runner_key
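The "Host key verification failed" part usually also needs the target host added to known_hosts inside the job. A minimal sketch, where <port> and <ip> are the same placeholders used in the question and the variable is assumed to have been created with base64 -w0 (GNU base64) from the runner key:
before_script:
- mkdir -p ~/.ssh && chmod 700 ~/.ssh
- ssh-keyscan -p <port> <ip> >> ~/.ssh/known_hosts   # trust the target host non-interactively
- chmod 644 ~/.ssh/known_hosts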

How to access root folder inside a Docker container

I am new to Docker, and am attempting to build an image that involves performing an npm install. Some of our dependencies come from private repos we have, and I am hitting an SSH-related issue:
I realised I was not supplying any SSH details to my Dockerfile, and came across various posts online about how to do this by passing args to the docker build command.
So, taken from there, I have added the following to my Dockerfile before the npm install command gets run:
ARG ssh_prv_key
ARG ssh_pub_key
RUN apt-get update && \
    apt-get install -y \
        git \
        openssh-server \
        libmysqlclient-dev

# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan github.com > /root/.ssh/known_hosts

# Add the keys and set permissions
RUN echo "$ssh_prv_key" > /root/.ssh/id_rsa && \
    echo "$ssh_pub_key" > /root/.ssh/id_rsa.pub && \
    chmod 600 /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa.pub
Running the docker build command again with the correct args supplied, I do see further activity in the console that suggests my SSH key is being utilised. But I am getting "no hostkey alg" messages and still the same 'Host key verification failed' error. I was wondering if I could view the log file the error references.
Do I need to get the image running in order to be able to connect to it and browse the 'root' folder?
I hope I have made sense, please be gentle I am a docker noob!
Thanks
The lines that start with ---> in the docker build output are valid Docker image IDs. You can pick any of these and docker run them:
docker run --rm -it 59c45dac474a sh
If a step is actually failing, one useful debugging trick is to launch the image built in the step before it and run the command by hand.
Remember that anyone who has your image can do this; the way you’ve built it, if you ever push your image to any repository, your ssh private key is there for the taking, and you should probably consider it compromised. That’s doubly true since it will also be there in plain text in docker history output.
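For illustration, a sketch of how such build args are typically supplied (the image tag and key paths here are assumed examples), which also shows how easily the key can be read back afterwards:
# Pass the keys as build args (tag and paths are illustrative)
docker build \
  --build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)" \
  --build-arg ssh_pub_key="$(cat ~/.ssh/id_rsa.pub)" \
  -t myimage .
# Anyone holding the image can recover the key from the build metadata
docker history --no-trunc myimage | grep ssh_prv_key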

Docker, how to deal with ssh keys, known_hosts and authorized_keys

In Docker, how do you cope with the requirement of configuring known_hosts, authorized_keys and ssh connectivity in general, when containers have to talk to external systems?
For example, I'm running a Jenkins container and trying to check out a project from GitHub in a job, but the connection fails with the error "host key verification failed".
This could be solved by logging into the container, connecting to GitHub manually and trusting the host key when prompted. However, this isn't a proper solution, as everything needs to be 100% automated (I'm building a CI pipeline with Ansible and Docker). Another (clunky) solution would be to provision the running container with Ansible, but this would make things messy and hard to maintain. The Jenkins container doesn't even have an ssh daemon, and I'm not sure how to ssh into a container from another host. A third option would be to use my own Dockerfile extending the Jenkins image, where ssh is configured, but that would be hardcoding and would lock the container to this specific environment.
So what is the correct way with docker to manage (and automate) connectivity with external systems?
To trust the github.com host, you can issue this command when you start or build your container:
ssh-keyscan -t rsa github.com >> ~/.ssh/known_hosts
This will add GitHub's public key to your known_hosts file.
If everything is done in the Dockerfile it's easy.
In my Dockerfile:
ARG PRIVATE_SSH_KEY
# Authorize SSH Host
RUN mkdir -p /root/.ssh && \
    chmod 0700 /root/.ssh && \
    ssh-keyscan example.com > /root/.ssh/known_hosts && \
# Add the keys and set permissions
    echo "$PRIVATE_SSH_KEY" > /root/.ssh/id_rsa && \
    chmod 600 /root/.ssh/id_rsa
...do stuff with private key
# Remove SSH keys
RUN rm -rf /root/.ssh/
You obviously need to pass the private key as an argument to the build (docker-compose build or docker build).
One solution is to mount the host's ssh keys into Docker with the following options:
docker run -v /home/<host user>/.ssh:/home/<docker user>/.ssh <image>
This works perfectly for git.
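For example, a one-off clone using the mounted keys might look like the sketch below (the image and repository names are placeholders, and the container user is assumed to be root); mounting with :ro keeps the keys read-only inside the container:
docker run --rm -v $HOME/.ssh:/root/.ssh:ro <image> \
  git clone git@github.com:<org>/<repo>.git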
There is a small trick, but the git version should be > 2.3:
export GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no"
git clone git@gitlab.com:some/another/repo.git
or simply
GIT_SSH_COMMAND="ssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git@...
You can even point to a private key file path like this:
GIT_SSH_COMMAND="ssh -i /path/to/private_key_file -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no" git clone git@...
This is how I do it, not sure if you will like this solution though. I have a private git repository containing authorized_keys with a collection of public keys. Then, I use ansible to clone this repository and replace authorized_keys:
- git: repo=my_repo dest=my_local_folder force=yes accept_hostkey=yes
- shell: "cp my_local_folder/authorized_keys ~/.ssh/"
Using accept_hostkey is what actually allows me to automate the process (I trust the source, of course).
Try this:
Log into the host, then:
sudo mkdir /var/jenkins_home/.ssh/
sudo ssh-keyscan -t rsa github.com >> /var/jenkins_home/.ssh/known_hosts
The Jenkins container sets its home location to the persistent volume mapping; as such, running this on the host system produces the required result.
A more detailed version of the answer provided by @Konstantin Suvorov, if you are going to use a Dockerfile.
In my Dockerfile I just added:
COPY my_rsa /root/.ssh/my_rsa # copy rsa key
RUN chmod 600 /root/.ssh/my_rsa # make it accessible
RUN apt-get -y install openssh-server # install openssh
RUN ssh-keyscan my_hostname >> ~/.ssh/known_hosts # add hostname to known_hosts
Note that "my_hostname" and "my_rsa" are your hostname and your RSA key.
This made ssh work in Docker without any issues, so I could connect to DBs.
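One detail worth adding (my addition, not part of the answer above): because the key is not named id_rsa, ssh will not pick it up automatically, so an ssh config entry pointing at it can be created in the same Dockerfile:
RUN printf "Host my_hostname\n  IdentityFile /root/.ssh/my_rsa\n" >> /root/.ssh/config # tell ssh which key to use for my_hostname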

How can I change the group of a file when executing in a Travis CI build?

I've got a Python Travis CI build in which a unit test attempts to change the group of a file on the filesystem. The file was previously created by the unit test, so the user executing the test owns the file.
I'm able to start a sub-shell in which I can run chgrp commands (per the Travis guidelines), but unfortunately, this screws up the virtualenv set up for my specific Python version (and who knows what else).
How to replicate (in Travis CI script):
language: python
sudo: true
python:
- "3.4"
- "3.5"
before_install:
- sudo apt-get -qq update
- sudo gpasswd -a $USER fuse
script:
- touch testfile
- chgrp fuse testfile | echo 0 # this does not work - bad
- sudo -E su $USER -c "chgrp fuse testfile" # the sudo / su wrapper is required per Travis instructions, see link above - good
- python --version # reports 3.4 or 3.5 - good
- sudo -E su $USER -c "python --version" # always reports 2.7 - bad
- sudo -E su $USER -c "python --version" # always reports 3.2 - bad
As I've commented in the block above, running a command which attempts to change the group of the testfile (which is what my unit test code is doing) only works when wrapped with sudo -E su $USER -c.
Unfortunately, when I do this, I lose the ability to access python 3.4 and 3.5 in those script phases (which I've specified above) in the virtualenv that Travis has set up for me.
Any idea how I can achieve both of my goals? (Allowing chgrp from the Travis non-root user while simultaneously not mucking with the virtualenv or the python on the path?)
When you create a new group, you have to log out and log in again to be able to use chgrp.
Using sudo is a way around this behavior. Since you're already using it for groupadd and usermod, I suggest changing the last line to sudo chgrp newtravisgroup newfile.
You can also use su to create a new login shell where newtravisgroup will be available but using sudo as mentioned above is the simplest way.
Edit:
When you use su, PATH is reset. That's the reason python reverts back to the system python. You can activate the virtualenv again before running your test.
sudo -E su $USER -c "source $VIRTUAL_ENV/bin/activate; python --version"

Package synchronization with opkg

We're using the BeagleBone Black running Angstrom Linux and the opkg package manager to power some of our systems. We need to ensure that we have consistent and reliable access to specific versions of opkg packages. I've set up an in-house opkg repository. Is there any way to sync packages between repositories? E.g., I'd like to copy specific packages from public / not always accessible repositories to our internal repository, both for speed and reliable access.
After some fooling around with various packages, etc, I found a way of cloning (parts of) a repository using an Ubuntu system. Here's the steps I took:
# Install apache
sudo apt-get install apache2
# Install git
sudo apt-get install git
# Download the opkg-utils from the Yocto Project
git clone http://git.yoctoproject.org/git/opkg-utils
# Build the opkg-utils
cd opkg-utils && make; cd -
# Move them to a common directory
mv opkg-utils /usr/local/share
# Add them to my path
echo "PATH=\"\$PATH:/usr/local/share/opkg-utils\"" >> /etc/environment
# Update my environment
source /etc/environment
# Create the structure of my repository
mkdir -p /var/www/repositories/opkg/beaglebone
# Create an index for the packages
opkg-make-index -l Packages.filelist -p Packages /var/www/repositories/opkg/beaglebone
cd /var/www/repositories/opkg/beaglebone
gzip -c Packages > Packages.gz
On my client BeagleBone Blacks, to setup access to this repository:
echo "src/gz reponame http://myserver/repositories/opkg/beaglebone" > /etc/opkg/rms-feed.conf
chmod 666 /etc/opkg/reponame-feed.conf
opkg update
On my developer machines, any time I need to backup a package:
#!/bin/bash
###############################################################################
#
# bbb_clone_package_to_internal_repo.sh
#
# Description:
# Clones an ipkg / opkg package to the internal repository server so that it can be deployed
# to BeagleBone Black clients on demand. This is so that we can have backups in
# the event that a public server becomes temporarily or permanently
# inaccessible.
#
# Pre-conditions:
# 1) The given package file must exist at the path specified.
#
# Post-conditions:
# 1) The given package file will be sent to the internal repository server.
# 2) The opkg repository indexes will all be updated
#
# Parameters:
# -p <file path.opk> : The package to be cloned
#
###############################################################################
PACKAGE_FILE_PATH=""
SERVER="myserver"
ERR_INVALID_PACKAGE_FILE_NAME=1
ERR_PACKAGE_FILE_NOT_ACCESSIBLE=2
ERR_FAILED_TO_COPY_PACKAGE_TO_SERVER=3
ERR_FAILED_TO_DEPLOY_PACKAGE_ON_SERVER=4
usage()
{
cat << EOF
usage: $0 [options]
This script copies a remote ipkg/opkg file to the $SERVER server for subsequent
deployment to BeagleBone Black boards.
OPTIONS:
-p <file path.[io]pk> The package file to be deployed
-h,? Show this message
EOF
}
while getopts "p:h?" OPTION
do
case $OPTION in
p)
PACKAGE_FILE_PATH="$OPTARG"
;;
h)
usage
exit
;;
?)
usage
exit
;;
esac
done
if [[ -z "$PACKAGE_FILE_PATH" || ! ( "$PACKAGE_FILE_PATH" =~ \.[io]pk$ ) ]]; then
echo "The package file must not be blank and must have an .ipk or .opk suffix"
exit $ERR_INVALID_PACKAGE_FILE_NAME
fi
# Retrieve the package
wget -q "$PACKAGE_FILE_PATH"
RESULT="$?"
if [[ $RESULT -ne 0 ]]; then
echo "Failed to retrieve file $PACKAGE_FILE_PATH with result $RESULT"
exit $ERR_PACKAGE_FILE_NOT_ACCESSIBLE
fi
# Deploy the package to myserver
PACKAGE_FILE_NAME="$(basename $PACKAGE_FILE_PATH)"
REPOSITORY_ROOT="/var/www/repositories/opkg/beaglebone"
scp "$PACKAGE_FILE_NAME" root#$SERVER:$REPOSITORY_ROOT
RESULT="$?"
if [[ $RESULT -ne 0 ]]; then
echo "Failed to copy file $PACKAGE_FILE_NAME to server with result $RESULT"
exit $ERR_FAILED_TO_COPY_PACKAGE_TO_SERVER
fi
ssh root@$SERVER "chmod 644 $REPOSITORY_ROOT/$PACKAGE_FILE_NAME; opkg-make-index -l $REPOSITORY_ROOT/Packages.filelist -p $REPOSITORY_ROOT/Packages -r $REPOSITORY_ROOT/Packages $REPOSITORY_ROOT && gzip -c $REPOSITORY_ROOT/Packages > $REPOSITORY_ROOT/Packages.gz"
RESULT="$?"
if [[ $RESULT -ne 0 ]]; then
echo "Failed to deploy file $PACKAGE_FILE_NAME in repository with result $RESULT"
exit $ERR_FAILED_TO_DEPLOY_PACKAGE_ON_SERVER
fi
exit 0
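Typical usage on a developer machine is then a single call per package; the URL below is purely illustrative:
./bbb_clone_package_to_internal_repo.sh -p http://feeds.example.com/armv7a/mypackage_1.0-r0_armv7a.ipk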
