Jenkins is configured to deploy a PCF application, and the PCF login credentials are configured in Jenkins as variables. Is there a way to fetch the PCF login credential details from those Jenkins variables?
echo "Pushing PCF App"
cf login -a https://api.pcf.org.com -u $cduser -p $cdpass -o ORG -s ORG_Dev
cf push pcf-app-04v2_$BUILD_NUMBER -b java_buildpack -n pcf-app-04v2_$BUILD_NUMBER -f manifest-prod.yml -p build/libs/*.jar
cf map-route pcf-app-04v2_$BUILD_NUMBER apps-pr03.cf.org.com --hostname pcf-app
cf delete -f pcf-app
cf rename pcf-app-04v2_$BUILD_NUMBER pcf-app
cf delete-orphaned-routes -f
Rather than fetching the Jenkins credentials outside of Jenkins to run the deployment manually, you can define a simple pipeline in Jenkins and put a custom script in it to perform these tasks. In the script you can access the credentials through the credentials() helper and then use the resulting environment variables in your commands.
E.g.
environment {
    CF_USER = credentials('YOUR_JENKINS_CREDENTIAL_ID')
    CF_PWD = credentials('CF_PWD')
}
def deploy() {
    script {
        sh '''#!/bin/bash
        set -x
        cf login -a https://api.pcf.org.com -u ${CF_USER} -p ${CF_PWD} -o ORG -s ORG_Dev
        pwd
        '''
    }
}
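For completeness, here is a minimal sketch of how this could be wired into a full declarative pipeline running the cf commands from the question (the credential IDs, API endpoint, org/space and route are the placeholders from the snippets above, so adjust them to your setup):
pipeline {
    agent any
    environment {
        CF_USER = credentials('YOUR_JENKINS_CREDENTIAL_ID')
        CF_PWD = credentials('CF_PWD')
    }
    stages {
        stage('Deploy to PCF') {
            steps {
                sh '''#!/bin/bash
                echo "Pushing PCF App"
                cf login -a https://api.pcf.org.com -u ${CF_USER} -p ${CF_PWD} -o ORG -s ORG_Dev
                cf push pcf-app-04v2_${BUILD_NUMBER} -b java_buildpack -n pcf-app-04v2_${BUILD_NUMBER} -f manifest-prod.yml -p build/libs/*.jar
                cf map-route pcf-app-04v2_${BUILD_NUMBER} apps-pr03.cf.org.com --hostname pcf-app
                cf delete -f pcf-app
                cf rename pcf-app-04v2_${BUILD_NUMBER} pcf-app
                cf delete-orphaned-routes -f
                '''
            }
        }
    }
}
Because CF_USER and CF_PWD come from the credentials() binding, Jenkins masks their values in the console output.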
I am writing a declarative pipeline in a Jenkinsfile in order to build and deploy an app.
The deployment is usually done by sshing to the docker host and running:
cd myDirectory
docker stack deploy --compose-file docker-compose.yml foo
I managed to run a single shell command via ssh, but I don't know how to run multiple commands one after the other.
This is what I have now:
pipeline {
agent { label 'onlyMyHost' }
stages {
stage("checkout"){...}
stage("build"){...}
stage("deploy") {
steps {
sshagent(credentials: ['my-sshKey']){
sh 'ssh -o StrictHostKeyChecking=no myUser@foo.bar.com hostname'
sh ("""
ssh -o StrictHostKeyChecking=no myUser@foo.bar.com 'bash -s' < "cd MyDirectory && docker stack deploy --composefile docker-compose.yml foo"
""")
}
}
}
}
}
This fails. What is a good way of running a script on a remote host from my specific Jenkins worker?
Not sure why 'bash -s' is added here. Removing it, and passing the commands directly as the remote command instead of via the < redirection (which treats the string as a file name), should allow you to execute the docker deploy remotely.
Moreover, you can run any number of commands in the same invocation by separating them with ; after each command. For example:
ssh -o StrictHostKeyChecking=no myUser@foo.bar.com "cd MyDirectory && docker stack deploy --compose-file docker-compose.yml foo ; docker ps"
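Inside the Jenkinsfile, the deploy stage could then look roughly like this minimal sketch (host, directory, stack name and credential ID are the placeholders from the question):
stage("deploy") {
    steps {
        sshagent(credentials: ['my-sshKey']) {
            // The whole remote command is passed as a single quoted argument to ssh.
            sh 'ssh -o StrictHostKeyChecking=no myUser@foo.bar.com "cd MyDirectory && docker stack deploy --compose-file docker-compose.yml foo ; docker ps"'
        }
    }
}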
I have created a Docker image using the amazonlinux:2 base image in my Dockerfile. This container will run as a Jenkins build agent on a Linux server and has to make certain AWS API calls. In my Dockerfile, I'm copying a shell script called assume-role.sh.
Code snippet:
COPY ./assume-role.sh .
RUN ["chmod", "+x", "assume-role.sh"]
ENTRYPOINT ["/assume-role.sh"]
CMD ["bash", "--"]
Shell script definition:
#!/usr/bin/env bash
#echo Your container args are: "${1} ${2} ${3} ${4} ${5}"
echo Your container args are: "${1}"
ROLE_ARN="${1}"
AWS_DEFAULT_REGION="${2:-us-east-1}"
SESSIONID=$(date +"%s")
DURATIONSECONDS="${3:-3600}"
#Temporary loggings starts here
id
pwd
ls .aws
cat .aws/credentials
#Temporary loggings ends here
# AWS STS AssumeRole
RESULT=(`aws sts assume-role --role-arn $ROLE_ARN \
--role-session-name $SESSIONID \
--duration-seconds $DURATIONSECONDS \
--query '[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]' \
--output text`)
# Setting up temporary creds
export AWS_ACCESS_KEY_ID=${RESULT[0]}
export AWS_SECRET_ACCESS_KEY=${RESULT[1]}
export AWS_SECURITY_TOKEN=${RESULT[2]}
export AWS_SESSION_TOKEN=${AWS_SECURITY_TOKEN}
echo 'AWS STS AssumeRole completed successfully'
# Making test AWS API calls
aws s3 ls
echo 'test calls completed'
I'm running the Docker container like this:
docker run -d -v $PWD/.aws:/.aws:ro -e XDG_CACHE_HOME=/tmp/go/.cache test-image arn:aws:iam::829327394277:role/myjenkins
What I'm trying to do here is mount the .aws credentials from the host directory into a volume on the container at the root level. The volume mount is successful, and I can see the log output from these lines of the shell script:
ls .aws
cat .aws/credentials
It tells me there is a .aws folder with the credentials inside it at the root level (/). However, the AWS CLI is somehow not picking them up, and as a result the remaining API calls, such as AWS STS AssumeRole, are failing.
Can somebody please advise?
[Output of docker run]
Your container args are: arn:aws:iam::829327394277:role/myjenkins
uid=0(root) gid=0(root) groups=0(root)
/
config
credentials
[default]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXXXP
aws_secret_access_key = e8SYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYxYm
Unable to locate credentials. You can configure credentials by running "aws configure".
AWS STS AssumeRole completed successfully
Unable to locate credentials. You can configure credentials by running "aws configure".
test calls completed
I found the issue finally.
The path was wrong while mounting the .aws volume into the container: the AWS CLI looks for credentials under $HOME/.aws, and since the container runs as root, that is /root/.aws rather than /.aws.
Instead of -v $PWD/.aws:/.aws:ro, it was supposed to be -v $PWD/.aws:/root/.aws:ro.
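Putting that together with the original invocation, the corrected command would look roughly like this (image name, role ARN and environment variable are the ones from the question):
docker run -d -v $PWD/.aws:/root/.aws:ro -e XDG_CACHE_HOME=/tmp/go/.cache test-image arn:aws:iam::829327394277:role/myjenkins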
In my Jenkinsfile I'm running a Maven command to start a database migration. This database is running in a Docker container.
When deploying the database container we're using a Docker secret from the swarm manager node for the password.
Is there any way to use that Docker secret in the Jenkins pipeline script instead of putting the password in plain text? I could use Jenkins credentials, but then I'd need to maintain the same secret in two different places.
sh """$mvn flyway:info \
-Dproject.host=$databaseHost \
-Dproject.port=$databasePort \
-Dproject.schema=$databaseSchema \
-Dproject.user=db_user \
-Dproject.password=db_pass \ // <--- Use a Docker secret here...
"""
You can create a username-and-password credential in Jenkins, for example with the ID database_credential, then use it like this:
environment {
    DATABASE_CREDS = credentials('database_credential')
}
...
sh """$mvn flyway:info \
    -Dproject.host=$databaseHost \
    -Dproject.port=$databasePort \
    -Dproject.schema=$databaseSchema \
    -Dproject.user=${DATABASE_CREDS_USR} \
    -Dproject.password=${DATABASE_CREDS_PSW}
"""
I'm getting passwords back from Ansible as part of a Jenkins pipeline, then wanting to mask these passwords in shell scripts fired from Jenkins.
The difficulty is that these passwords are not pipeline parameters or Jenkins credentials.
I can see that the mask passwords plugin allows regular expressions to be masked when pre-defined in Manage Jenkins. What I want to do here is define a regex (or literal string) to mask programmatically.
What I'd like is something like this:
def password = getPasswordFromAnsible()
maskPassword(password)
sh "applogin -u ${username} -p ${password}"
This sh script should generate the following in the console log:
sh "applogin -u my_username -p ******"
One option is to disable command echoing before you run that particular sensitive command, and follow the logic through descriptive echo statements instead:
sh '''
echo "Attempting to login"
set +x
applogin -u ${username} -p ${password}
set -x
echo "Logged in successfully"
'''
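Note that inside the single-quoted sh ''' ... ''' block, ${username} and ${password} are expanded by the shell, so the values have to be available as environment variables. A minimal sketch of one way to do that with withEnv (the APP_USER/APP_PASSWORD names are illustrative, not from the original post):
script {
    def password = getPasswordFromAnsible()
    // Hand the values to the shell via the environment rather than
    // interpolating them into the command line itself.
    withEnv(["APP_USER=${username}", "APP_PASSWORD=${password}"]) {
        sh '''
        echo "Attempting to login"
        set +x
        applogin -u "$APP_USER" -p "$APP_PASSWORD"
        set -x
        echo "Logged in successfully"
        '''
    }
}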
Use case:
I have a Jenkins pipeline to update my development environment.
My dev env is an EC2 aws instance with docker compose.
The automation was written along the lines of:
withAWS(profile: 'default') {
sh "ssh -o StrictHostKeyChecking=no -i ~/my-key.pem user#$123.456.789 /bin/bash -c 'run some command like docker pull'"
}
Now, I have other test environments, and they all have some sort of docker-compose file, configurations and property files that requires me to go over all of them when something needs to change.
To help with that, I created a new repository to keep all the different environment configurations. My plan is to have a clone of this repo in all my development and test environments, so when I need to change something I can do it locally, push it, and have my Jenkins pipeline update the repository in whichever environment it is updating.
My Jenkins has an SSH credential for my repo (it is used in another job that clones the repo and runs tests on the source code), so I want to use that same credential.
Question: can I somehow, through SSHing into another machine, use the Jenkins ssh-agent credentials to clone/update a Bitbucket repository?
Edit:
I changed the pipeline to:
script {
def hgCommand = "hg clone ssh://hg@bitbucket.org/my-repo"
sshagent(['12345']) {
sh "ssh -o StrictHostKeyChecking=no -i ~/mykey.pem admin@${IP_ADDRESS} /bin/bash -c '\"${hgCommand}\"'"
}
}
And I am getting:
[ssh-agent] Looking for ssh-agent implementation...
[ssh-agent] Exec ssh-agent (binary ssh-agent on a remote machine)
$ ssh-agent
SSH_AUTH_SOCK=/tmp/ssh-FOburguZZlU0/agent.662
SSH_AGENT_PID=664
Running ssh-add (command line suppressed)
Identity added: /home/jenkins/workspace/abc@tmp/private_key_12345.key (rsa w/o comment)
[ssh-agent] Started.
[Pipeline] {
[Pipeline] sh
[test-env-config] Running shell script
+ ssh -o StrictHostKeyChecking=no -i /home/jenkins/mykey.pem admin@123.456.789 /bin/bash -c "hg clone ssh://hg@bitbucket.org/my-repo"
remote: Warning: Permanently added the RSA host key for IP address '765.432.123' to the list of known hosts.
remote: Permission denied (publickey).
abort: no suitable response from remote hg!
$ ssh-agent -k
unset SSH_AUTH_SOCK;
unset SSH_AGENT_PID;
echo Agent pid 664 killed;
[ssh-agent] Stopped.
First, some background to understand the reasoning (this is pure ssh, nothing Jenkins- or Mercurial-specific): the ssh-agent utility works by creating a UNIX domain socket that is then used by ssh. The ssh command attempts to communicate with the agent if it finds the environment variable SSH_AUTH_SOCK. In addition, ssh can be instructed to forward the agent via -A. For more details, see the man pages of ssh-agent and ssh.
So, assuming that your sshagent block makes the environment variable SSH_AUTH_SOCK (set by the SSH Agent plugin) available to the ssh invocation, I think it should be enough to:
add -A to your ssh invocation
in the 'run some command like docker pull' part, add the hg clone command, ensuring you are using the ssh:// scheme for the Mercurial URL.
Security observation: -o StrictHostKeyChecking=no should be used as a last resort. From your example, the IP address of the target is fixed, so you should do the following:
remove the -o StrictHostKeyChecking=no
one-shot: get the host fingerprint of 123.456.789 (for example by ssh-ing into it and then looking for the associated line in your $HOME/.ssh/known_hosts). Save that line in a file, say 123.456.789.fingerprint
make the file 123.456.789.fingerprint available to Jenkins when it is invoking your sample code. This can be done by committing that file to the repo that contains the Jenkins pipeline; it is safe to do so since it doesn't contain secrets.
Finally, change your ssh invocation to something like ssh -o UserKnownHostsFile=/path/to/123.456.789.fingerprint ...
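Putting these suggestions together, the deploy step could look roughly like this (the credential ID, key path, user and repository URL are the ones from the question, and the known-hosts file path is a placeholder):
script {
    def hgCommand = "hg clone ssh://hg@bitbucket.org/my-repo"
    sshagent(['12345']) {
        // -A forwards the agent started by sshagent, so the remote hg clone
        // can authenticate against Bitbucket with the Jenkins SSH credential.
        sh "ssh -A -o UserKnownHostsFile=/path/to/123.456.789.fingerprint -i ~/mykey.pem admin@${IP_ADDRESS} /bin/bash -c '\"${hgCommand}\"'"
    }
}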