Install Python dependencies on another server using Jenkins Pipeline

Hi, is there a way to SSH into a server, activate a virtualenv, and install requirements from my Jenkins Pipeline project?
I have tried this, but it does not seem to maintain my virtualenv session:
node {
    sh '''
        ssh server virtualenv myvenv
        ssh server source myvenv/bin/activate && which python
    '''
}

I found the solution: you have to run it like this:
ssh server "source myvenv/bin/activate; which python"
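The reason the first attempt fails: each `ssh` invocation starts a fresh remote shell, so a virtualenv activated in one invocation is gone by the next. The effect can be sketched locally, with `bash -c` standing in for `ssh server` (the variable name is purely for illustration):

```shell
# Each `ssh host cmd` starts a brand-new remote shell, just as each
# `bash -c` below starts a brand-new local shell.

# State set in one invocation...
bash -c 'DEMO_VENV_SESSION_STATE=active'
# ...is gone in the next: this prints only "separate: "
bash -c 'echo "separate: $DEMO_VENV_SESSION_STATE"'

# Chaining everything inside ONE invocation keeps the state alive:
bash -c 'DEMO_VENV_SESSION_STATE=active; echo "combined: $DEMO_VENV_SESSION_STATE"'
# prints "combined: active"
```

So over SSH everything has to happen in one remote command, e.g. `ssh server "virtualenv myvenv && source myvenv/bin/activate && pip install -r requirements.txt"`.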

Related

Why is this Todo app build failing in Jenkins when deploying on AWS Linux using Docker file in WSL2?

I was trying to deploy a simple CD pipeline using Docker by SSH'ing into my AWS Linux EC2 instance from the WSL2 terminal. The job fails every time with the following error:
Started by user Navdeep Singh
Running as SYSTEM
Building on the built-in node in workspace /var/lib/jenkins/workspace/todo-dev
[todo-dev] $ /bin/sh -xe /tmp/jenkins6737039323529850559.sh
+ cd /home/ubuntu/project/django-todo
/tmp/jenkins6737039323529850559.sh: 2: cd: can't cd to /home/ubuntu/project/django-todo
Build step 'Execute shell' marked build as failure
Finished: FAILURE
Dockerfile contents:
FROM python:3
RUN pip install django==3.2
COPY . .
RUN python manage.py migrate
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
The error cd: can't cd to /home/ubuntu/project/django-todo is not the actual problem: your agent node is not online, so the build fell back to the built-in node.
To fix it, open the agent's page in the Jenkins web UI after setting the agent up; it shows the commands you need to run on the agent machine's terminal. Make sure the Jenkins public IP used in those commands matches your instance's public IP.
To fix this issue, follow these steps.
For the agent:
Change the IP here (44.203.138.174:8080) to your EC2 IP, then run:
1. curl -sO http://44.203.138.174:8080/jnlpJars/agent.jar
2. java -jar agent.jar -jnlpUrl http://44.203.138.174:8080/manage/computer/todo%2Dagent/jenkins-agent.jnlp -secret beb62de0f81bfd06e4cd81d1b896d85d38f82b87b21ef8baef3389e651c9f72c -workDir "/home/ubuntu"
For the job:
sudo vi /etc/sudoers
then add this line below the root entry in the sudoers file:
jenkins ALL=(ALL) NOPASSWD: ALL
3. Then go to the ubuntu home directory (cd ..) and run these commands:
grep ^ubuntu /etc/group
id jenkins
sudo adduser jenkins ubuntu
grep ^ubuntu /etc/group
4. Restart Jenkins and log in again:
sudo systemctl restart jenkins
Then you are good to go.

Download SQLPlus to Jenkins server

I'm trying to run SQLPlus in a Jenkins build (I have SQLPlus installed on my local machine). Currently, I have a variable in my build called PATH:
PATH=/usr/local/Cellar/instantclient
My build is stable and Jenkins is able to connect to the database and execute commands. Is there a way to run SQLPlus commands on a Jenkins server that doesn't have SQLPlus installed? I'm trying to avoid using the SQLPlus Script Runner plugin as well.
You can install SQLPlus through the pipeline itself; see the following example. That said, if you have a static Jenkins instance, the best option is to install whatever tools are required on the server itself.
pipeline {
    agent any
    stages {
        stage('SetUpSQLPlus') {
            steps {
                echo 'Setup'
                cleanWs()
                sh """
                    curl -LO https://download.oracle.com/otn_software/linux/instantclient/216000/instantclient-sqlplus-linux.x64-21.6.0.0.0dbru.zip
                    unzip instantclient-sqlplus*.zip
                    cd instantclient_21_6
                    chmod +x sqlplus
                    ./sqlplus --version
                """
            }
        }
    }
}
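Building on this, a later stage could run an actual query with the unzipped client. This is only a sketch, assuming a reachable database; DB_USER, DB_PASS and DB_HOST are hypothetical placeholders, and LD_LIBRARY_PATH is pointed at the client directory so sqlplus can find its shared libraries:

```
stage('RunQuery') {
    steps {
        sh '''
            cd instantclient_21_6
            # sqlplus needs the instant client libraries on LD_LIBRARY_PATH
            export LD_LIBRARY_PATH=$PWD
            # -S suppresses the banner; the SQL is piped in on stdin
            echo "SELECT 1 FROM dual;" | ./sqlplus -S ${DB_USER}/${DB_PASS}@//${DB_HOST}:1521/ORCL
        '''
    }
}
```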

How to correctly pass ssh key file from Jenkins credentials variable into to docker build command?

This question is a follow up to this question
How to pass jenkins credentials into docker build command?
I am getting the SSH key file from the Jenkins credential store in my Groovy pipeline and passing it into the docker build command via --build-arg, so that I can check out and build artifacts from private Git repos from within my Docker container.
The credential store id is cicd-user, which works as expected for checking out my private repos from my Groovy Jenkinsfile:
checkout([$class: 'GitSCM',
    userRemoteConfigs: [[credentialsId: 'cicd-user', url: 'ssh://git@bitbucket.myorg.co:7999/A/software.git']]])
I access it and try to pass the same to docker build command:
withCredentials([sshUserPrivateKey(credentialsId: 'cicd-user', keyFileVariable: 'FILE')]) {
    sh "cd ${WORKSPACE} && docker build -t ${some-name} --build-arg USERNAME=cicd-user --build-arg PRIV_KEY_FILE=\$FILE --network=host -f software/tools/jenkins/${some-name}/Dockerfile ."
}
In the Dockerfile I do:
RUN echo "$PRIV_KEY_FILE" > /home/"$USERNAME"/.ssh/id_rsa && \
    chmod 700 /home/"$USERNAME"/.ssh/id_rsa
RUN echo "Host bitbucket.myorg.co\n\tStrictHostKeyChecking no\n" >> ~/.ssh/config
But I am seeing the following errors:
Load key "/home/cicd-user/.ssh/id_rsa": invalid format
git@bitbucket.mycomp.co: Permission denied (publickey)
fatal: could not read from remote repository
In the past I have passed the SSH private key as a --build-arg from outside by cat'ing it, like below:
--build-arg ssh_prv_key="$(cat ~/.ssh/id_rsa)"
Should I do something similar, e.g.
--build-arg PRIV_KEY_FILE="$(cat $FILE)"
Any idea what might be going wrong, or where I should look to debug this correctly?
I ran into the same issue yesterday and I think I've come up with a workable solution.
Here are the basic steps I took, using the sshagent plugin to manage the ssh-agent within the Jenkins job. You could probably use withCredentials as well, though that's not what I ended up finding success with.
The ssh-agent (or alternatively the key) can be made available to specific build steps using the docker build command's --ssh flag (feature reference). It's important to note that for this to work (at the current time) you need to set DOCKER_BUILDKIT=1. If you forget to do this, the --ssh flag is ignored and the SSH connection will fail. Once that's set, the ssh-agent started by the plugin can be forwarded into the build.
A cut-down look at the pipeline:
pipeline {
    agent {
        // ...
    }
    environment {
        // Necessary to enable Docker buildkit features such as --ssh
        DOCKER_BUILDKIT = "1"
    }
    stages {
        // other stages
        stage('Docker Build') {
            steps {
                // Start the ssh agent and add the private key(s) that will be needed in docker build
                sshagent(['credentials-id-of-private-key']) {
                    // Make the default ssh agent (the one configured above) accessible in the build
                    sh 'docker build --ssh default .'
                }
            }
        }
        // other stages
    }
}
In the Dockerfile it's necessary to explicitly give the lines that need it access to the ssh agent. This is done by including --mount=type=ssh in the relevant RUN command.
For me, this looked roughly like this:
FROM node:14
# Retrieve the Bitbucket host key
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan bitbucket.org >> ~/.ssh/known_hosts
...
# Mount the ssh agent for the install
RUN --mount=type=ssh npm i
...
With this configuration, the npm install was able to install a private git repo stored on Bitbucket by utilizing the SSH private key within docker build via sshagent.
After spending a week on this, I found a somewhat reasonable way to do it. Just add
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
to your Dockerfile, and it will be able to install private packages as needed.
Then pass your GIT_ACCESS_TOKEN (you can create one in your GitHub account settings with the proper permissions) when building the image, like:
docker build --build-arg GIT_ACCESS_TOKEN=yourtoken -t imageNameAndTag .
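One detail that's easy to miss: a --build-arg value is only visible during the build if the Dockerfile declares a matching ARG. A minimal sketch of the relevant Dockerfile lines (the base image here is just an example):

```
FROM python:3
# Declare the build argument so --build-arg GIT_ACCESS_TOKEN=... reaches the build
ARG GIT_ACCESS_TOKEN
# Rewrite ssh GitHub URLs to token-authenticated https URLs
RUN git config --global url."https://${GIT_ACCESS_TOKEN}@github.com".insteadOf "ssh://git@github.com"
```

Bear in mind that build args are recorded in the image metadata (visible via docker history), so for anything sensitive the BuildKit --ssh approach described earlier is safer.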

Jenkins pipeline - use ssh agent to clone a repository in another machine through ssh

Use case:
I have a Jenkins pipeline to update my development environment.
My dev env is an EC2 aws instance with docker compose.
The automation was written along the lines of:
withAWS(profile: 'default') {
    sh "ssh -o StrictHostKeyChecking=no -i ~/my-key.pem user@123.456.789 /bin/bash -c 'run some command like docker pull'"
}
Now I have other test environments, and they all have some sort of docker-compose file, configuration and property files, which requires me to go over all of them when something needs to change.
To help with that, I created a new repository to keep all the different environment configurations. My plan is to have a clone of this repo in all my development and test environments, so when I need to change something, I can just do it locally, push it, and have my Jenkins pipeline update the repository in whichever environment it is updating.
My Jenkins has an SSH credential for my repo (it is used in another job that clones the repo and runs tests on the source code), so I want to use that same credential.
Question: can I somehow, through SSH'ing into another machine, use the Jenkins ssh-agent credentials to clone/update a Bitbucket repository?
Edit:
I changed the pipeline to:
script {
    def hgCommand = "hg clone ssh://hg@bitbucket.org/my-repo"
    sshagent(['12345']) {
        sh "ssh -o StrictHostKeyChecking=no -i ~/mykey.pem admin@${IP_ADDRESS} /bin/bash -c '\"${hgCommand}\"'"
    }
}
And I am getting:
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-FOburguZZlU0/agent.662
SSH_AGENT_PID=664
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/workspace/abc#tmp/private_key_12345.key (rsa w/o comment)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
[test-env-config] Running shell script
+ ssh -o StrictHostKeyChecking=no -i /home/jenkins/mykey.pem admin@123.456.789 /bin/bash -c "hg clone ssh://hg@bitbucket.org/my-repo"
remote: Warning: Permanently added the RSA host key for IP address '765.432.123' to the list of known hosts.
remote: Permission denied (publickey).
abort: no suitable response from remote hg!
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 664 killed;
[ssh-agent] Stopped.
First, some background to understand the reasoning (this is pure SSH, nothing Jenkins- or Mercurial-specific): the ssh-agent utility works by creating a UNIX domain socket that is then used by ssh. The ssh command attempts to communicate with the agent if it finds the environment variable SSH_AUTH_SOCK. In addition, ssh can be instructed to forward the agent via -A. For more details, see the man pages of ssh-agent and ssh.
So, assuming that your withAWS context makes the environment variable SSH_AUTH_SOCK (set by the plugin) available, I think it should be enough to:
- add -A to your ssh invocation
- in the 'run some command like docker pull' part, add the hg clone command, making sure you use the ssh:// scheme in the Mercurial URL.
Security observation: -o StrictHostKeyChecking=no should be used as a last resort. From your example, the IP address of the target is fixed, so you should do the following:
- remove the -o StrictHostKeyChecking=no
- one-shot: get the host fingerprint of 123.456.789 (for example by SSH'ing into it and then looking for the associated line in your $HOME/.ssh/known_hosts). Save that line in a file, say 123.456.789.fingerprint
- make the file 123.456.789.fingerprint available to Jenkins when it invokes your sample code. This can be done by committing the file to the repo that contains the Jenkins pipeline; that is safe, since it contains no secrets.
Finally, change your ssh invocation to something like ssh -o UserKnownHostsFile=/path/to/123.456.789.fingerprint ...
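Putting these suggestions together, the edited pipeline might end up looking roughly like this (the credential id, key path and fingerprint file path are placeholders carried over from the question):

```
script {
    def hgCommand = "hg clone ssh://hg@bitbucket.org/my-repo"
    sshagent(['12345']) {
        // -A forwards the local agent (holding the Jenkins credential) to the
        // remote machine, so hg there can authenticate against Bitbucket with it
        sh "ssh -A -o UserKnownHostsFile=/path/to/123.456.789.fingerprint " +
           "-i ~/mykey.pem admin@${IP_ADDRESS} /bin/bash -c '\"${hgCommand}\"'"
    }
}
```

Note that agent forwarding also has to be permitted by the remote machine's sshd configuration (AllowAgentForwarding, which is on by default in OpenSSH).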

Using Docker for Windows in Jenkins Declarative Pipeline

I am setting up a CI workflow with Jenkins declarative pipeline and Docker-for-Windows agents through Dockerfile.
Note: It is unfortunately currently not a solution to use a Linux-based docker daemon, since I need to run Windows binaries.
Setup: the Jenkins master runs on Linux 16.04 through Docker. The Jenkins build agent is:
- Windows 10 Enterprise 1709 (16299.551)
- Docker-for-Windows 17.12.0-ce
Docker 18.x gave me headaches when trying to use Windows containers, so I rolled back to 17.x. I still had some issues running with Jenkins, with nohup not being on the path, but that was solved by adding the Git binaries to the Windows search path (another reference). I suspect my current issue may be related.
Code: I am trying to initialize a Jenkinsfile and run a simple hello-world-printout within.
/Jenkinsfile
pipeline {
    agent none
    stages {
        stage('Docker Test') {
            agent {
                dockerfile {
                    filename 'Dockerfile'
                    label 'windocker'
                }
            }
            steps {
                println 'Hello, World!'
            }
        }
    }
}
/Dockerfile
FROM python:3.7-windowsservercore
RUN python -m pip install --upgrade pip
Basically, this should be a clean image that simply prints "Hello, World!". But it fails on Jenkins!
Output from the log:
[C:\jenkins\workspace\dockerfilecd4c215a] Running shell script
+ docker build -t cbe5e0bb1fa45f7ec37a2b15566f84aa9bd08f5d -f Dockerfile .
Sending build context to Docker daemon 337.4kB
Step 1/2 : FROM python:3.7-windowsservercore
---> 340689b75c39
Step 2/2 : RUN python -m pip install --upgrade pip
---> Using cache
---> a93f446a877f
Successfully built a93f446a877f
Successfully tagged cbe5e0bb1fa45f7ec37a2b15566f84aa9bd08f5d:latest
[C:\jenkins\workspace\dockerfilecd4c215a] Running shell script
+ docker inspect -f . cbe5e0bb1fa45f7ec37a2b15566f84aa9bd08f5d
.
Cannot run program "id": CreateProcess error=2, The system cannot find the file specified
The issue is that Windows is not supported at the moment: the plugin calls the Linux id command to get the current user id.
There is an open pull request and a JIRA ticket at Jenkins to support the Windows Docker pipeline:
https://issues.jenkins-ci.org/browse/JENKINS-36776
https://github.com/jenkinsci/docker-workflow-plugin/pull/148
