I am getting the error below while connecting to an AWS EC2 instance through SSM in Jenkins.
Starting session with SessionId:
sh-4.2$ Cannot perform start session: EOF
Command used in Jenkins (execute shell):
INSTANCE_ID=$(aws ec2 describe-instances --filters "Name=tag:t_name,Values=appdev" --region us-east-1 | jq -r '.Reservations[].Instances[].InstanceId')
echo "INSTANCE_ID: $INSTANCE_ID"
aws ssm start-session --region us-east-1 --target $INSTANCE_ID
Why do you want to start a session in Jenkins for SSM?
start-session is used to initiate a connection to a target (for example, a managed node) for a Session Manager session.
To work with SSM in Jenkins you can pass the commands directly, without needing to create a session, by using send-command.
Example: to untar files on your instance:
aws ssm send-command --instance-ids "${INSTANCE_ID}" \
--region us-east-1 --document-name "AWS-RunShellScript" \
--output-s3-bucket-name "$bucketName" --output-s3-key-prefix "$bucketDir" \
--comment "Untar Files" \
--parameters '{"commands":["tar -xvf /tmp/repo.tar -C /tmp" ]}'
You can pass any number of commands this way.
After each command, you can call a shell function to check the command status and exit if it fails, as in the sketch below.
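A minimal sketch of such a helper, assuming the same region and INSTANCE_ID variable as above (the function name, polling interval, and --query expressions are illustrative, not from the original post):

check_command_status () {
  command_id="$1"
  instance_id="$2"
  # Poll the invocation until it leaves the Pending/InProgress/Delayed states.
  while true; do
    status=$(aws ssm get-command-invocation \
      --region us-east-1 \
      --command-id "$command_id" \
      --instance-id "$instance_id" \
      --query Status --output text)
    case "$status" in
      Pending|InProgress|Delayed) sleep 5 ;;
      Success) echo "Command $command_id succeeded"; return 0 ;;
      *) echo "Command $command_id ended with status: $status"; return 1 ;;
    esac
  done
}

# Usage: capture the CommandId from send-command, then fail the build if the command fails.
command_id=$(aws ssm send-command --instance-ids "${INSTANCE_ID}" \
  --region us-east-1 --document-name "AWS-RunShellScript" \
  --comment "Untar Files" \
  --parameters '{"commands":["tar -xvf /tmp/repo.tar -C /tmp"]}' \
  --query 'Command.CommandId' --output text)
check_command_status "$command_id" "$INSTANCE_ID" || exit 1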
Related
I have a train.py and, using Docker, I pushed an image from my local machine to AWS ECR.
But I am getting this error:
The primary container for production variant variant-1 did not pass the ping
health check. Please check CloudWatch logs for this endpoint.
Here is the complete Dockerfile. What am I missing?
FROM python:3.7
RUN python -m pip install sagemaker-training snowflake-connector-python[pandas] \
pandas scikit-learn boto3 numpy joblib sagemaker flask gevent gunicorn
ENV PYTHONUNBUFFERED=TRUE
ENV PYTHONDONTWRITEBYTECODE=TRUE
ENV PATH="/opt/ml:${PATH}"
# Set up the program in the image
COPY pred_demo_sm/train.py /opt/ml/code/train.py
COPY pred_demo_sm/serve /opt/ml/code/serve
COPY pred_demo_sm/output /opt/ml/output
COPY pred_demo_sm/model /opt/ml/model
WORKDIR /opt/ml
ENTRYPOINT [ "python3.7", "/opt/ml/code/train.py"]
Here is the complete .sh file which builds and pushes the image to AWS ECR:
#!/usr/bin/env bash
# This script shows how to build the Docker image and push it to ECR to be ready for use
# by SageMaker.
# The argument to this script is the image name. This will be used as the image on the local
# machine and combined with the account and region to form the repository name for ECR.
image=$1
if [ "$image" == "" ]
then
echo "Usage: $0 <image-name>"
exit 1
fi
chmod +x pred_demo_sm/train.py
chmod +x pred_demo_sm/serve
chmod +x pred_demo_sm/model/*
# Get the account number associated with the current IAM credentials
account=$(aws sts get-caller-identity --query Account --output text)
if [ $? -ne 0 ]
then
exit 255
fi
# Get the region defined in the current configuration (default to us-west-2 if none defined)
region=$(aws configure get region)
region=${region:-us-west-2}
fullname="${account}.dkr.ecr.${region}.amazonaws.com/${image}:latest"
# If the repository doesn't exist in ECR, create it.
aws ecr describe-repositories --repository-names "${image}" > /dev/null 2>&1
if [ $? -ne 0 ]
then
aws ecr create-repository --repository-name "${image}" > /dev/null
fi
# Get the login command from ECR and execute it directly
aws ecr get-login-password --region "${region}" | docker login --username AWS --password-stdin "${account}".dkr.ecr."${region}".amazonaws.com
# Build the docker image locally with the image name and then push it to ECR
# with the full name.
docker build -t ${image} .
docker tag ${image} ${fullname}
docker push ${fullname}
The training job completes successfully in SageMaker, but deploying the model fails.
Within your Dockerfile, could you try replacing the COPY, WORKDIR, and ENTRYPOINT commands with the following:
COPY pred_demo_sm /opt/program
WORKDIR /opt/program
Make sure that your serve file is executable as well.
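If it helps, the hosting contract can be smoke-tested locally before pushing: SageMaker starts the container with the argument serve and health-checks GET /ping on port 8080, expecting HTTP 200. A rough sketch, reusing the ${fullname} tag from the build script (the container name, port mapping, and sleep are assumptions):

# Run the image the way SageMaker hosting would and check the /ping endpoint.
docker run --rm -d --name sm-serve-test -p 8080:8080 "${fullname}" serve
sleep 10   # give the flask/gunicorn stack a moment to start
curl -i http://localhost:8080/ping   # must return HTTP 200 for the ping health check to pass
docker rm -f sm-serve-test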
I have created a Docker image using the AmazonLinux:2 base image in my Dockerfile. This Docker container will run as a Jenkins build agent on a Linux server and has to make certain AWS API calls. In my Dockerfile, I'm copying a shell script called assume-role.sh.
Code snippet:
COPY ./assume-role.sh .
RUN ["chmod", "+x", "assume-role.sh"]
ENTRYPOINT ["/assume-role.sh"]
CMD ["bash", "--"]
Shell script definition:
#!/usr/bin/env bash
#echo Your container args are: "${1} ${2} ${3} ${4} ${5}"
echo Your container args are: "${1}"
ROLE_ARN="${1}"
AWS_DEFAULT_REGION="${2:-us-east-1}"
SESSIONID=$(date +"%s")
DURATIONSECONDS="${3:-3600}"
#Temporary loggings starts here
id
pwd
ls .aws
cat .aws/credentials
#Temporary loggings ends here
# AWS STS AssumeRole
RESULT=(`aws sts assume-role --role-arn $ROLE_ARN \
--role-session-name $SESSIONID \
--duration-seconds $DURATIONSECONDS \
--query '[Credentials.AccessKeyId,Credentials.SecretAccessKey,Credentials.SessionToken]' \
--output text`)
# Setting up temporary creds
export AWS_ACCESS_KEY_ID=${RESULT[0]}
export AWS_SECRET_ACCESS_KEY=${RESULT[1]}
export AWS_SECURITY_TOKEN=${RESULT[2]}
export AWS_SESSION_TOKEN=${AWS_SECURITY_TOKEN}
echo 'AWS STS AssumeRole completed successfully'
# Making test AWS API calls
aws s3 ls
echo 'test calls completed'
I'm running the Docker container like this:
docker run -d -v $PWD/.aws:/.aws:ro -e XDG_CACHE_HOME=/tmp/go/.cache test-image arn:aws:iam::829327394277:role/myjenkins
What I'm trying to do here is mount the .aws credentials from the host directory into a volume on the container at the root level. The volume mount is successful, and I can see the log output described in the shell script:
ls .aws
cat .aws/credentials
It tells me there is a .aws folder with credentials inside it at the root level (/). However, the AWS CLI is somehow not picking them up, and as a result the remaining API calls, such as AWS STS assume-role, fail.
Can somebody please advise?
[Output of docker run]
Your container args are: arn:aws:iam::829327394277:role/myjenkins
uid=0(root) gid=0(root) groups=0(root)
/
config
credentials
[default]
aws_access_key_id = AKXXXXXXXXXXXXXXXXXXXP
aws_secret_access_key = e8SYYYYYYYYYYYYYYYYYYYYYYYYYYYYYYxYm
Unable to locate credentials. You can configure credentials by running "aws configure".
AWS STS AssumeRole completed successfully
Unable to locate credentials. You can configure credentials by running "aws configure".
test calls completed
I finally found the issue.
The path was wrong when mounting the .aws volume into the container.
Instead of -v $PWD/.aws:/.aws:ro, it should be -v $PWD/.aws:/root/.aws:ro.
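For reference, a sketch of the corrected run command from the question, with only the mount path changed (the container runs as root, so the AWS CLI looks for credentials under /root/.aws):

docker run -d \
  -v "$PWD/.aws:/root/.aws:ro" \
  -e XDG_CACHE_HOME=/tmp/go/.cache \
  test-image arn:aws:iam::829327394277:role/myjenkins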
I'm trying to set up Rundeck inside a Docker container. I want to use Rundeck to provision and manage my Docker fleet. I found an image which ships an ansible-plugin as well. So far, running simple playbooks and auto-discovering my Pi nodes work.
Docker script:
echo "[INFO] prepare rundeck-home directory"
mkdir ../../target/work/home
mkdir ../../target/work/home/rundeck
mkdir ../../target/work/home/rundeck/data
echo -e "[INFO] copy host inventory to rundeck-home"
cp resources/inventory/hosts.ini ../../target/work/home/rundeck/data/inventory.ini
echo -e "[INFO] pull image"
docker pull batix/rundeck-ansible
echo -e "[INFO] start rundeck container"
docker run -d \
--name rundeck-raspi \
-p 4440:4440 \
-v "/home/sebastian/work/workspace/workspace-github/raspi/target/work/home/rundeck/data:/home/rundeck/data" \
batix/rundeck-ansible
Now I want to feed the container with playbooks which should become jobs to run in Rundeck. Can anyone give me a hint on how I can create Rundeck jobs (which should invoke an Ansible playbook) from the outside? Via the API?
One way I can think of is creating the jobs manually once and exporting them as XML or YAML. When the container and Rundeck is up and running I could import the jobs automatically. Is there a certain folder in rundeck-home or somewhere where I can put those files for automatic import? Or is there an API call or something?
Could Jenkins be more suited for this task than Rundeck?
EDIT: I just changed to a Dockerfile:
FROM batix/rundeck-ansible:latest
COPY resources/inventory/hosts.ini /home/rundeck/data/inventory.ini
COPY resources/realms.properties /home/rundeck/etc/realms.properties
COPY resources/tokens.properties /home/rundeck/etc/tokens.properties
# import jobs
ENV RD_URL="http://localhost:4440"
ENV RD_TOKEN="yJhbGciOiJIUzI1NiIs"
ENV rd_api="36"
ENV rd_project="Test-Project"
ENV rd_job_path="/home/rundeck/data/jobs"
ENV rd_job_file="Ping_Nodes.yaml"
# copy job definitions and script
COPY resources/jobs-definitions/Ping_Nodes.yaml /home/rundeck/data/jobs/Ping_Nodes.yaml
RUN curl -kSsv --header "X-Rundeck-Auth-Token:$RD_TOKEN" \
-F yamlBatch=@"$rd_job_path/$rd_job_file" "$RD_URL/api/$rd_api/project/$rd_project/jobs/import?fileformat=yaml&dupeOption=update"
Do you know how I can delay the curl at the end until after the Rundeck service is up and running?
That's right, you can write a script that makes the API call with cURL (pointing to your Docker instance) after deploying your instance, i.e. a script that deploys the instance and then imports the jobs. Here is a basic example (in this example you need the job definition in XML format).
For XML job definition format:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="qNcao2e75iMf1PmxYfUJaGEzuVOIW3Xz"
# specific api call info
rdeck_project="ProjectEXAMPLE"
rdeck_xml_file="HelloWorld.xml"
# api call
curl -kSsv --header "X-Rundeck-Auth-Token:$rdeck_token" \
-F xmlBatch=@"$rdeck_xml_file" "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$rdeck_project/jobs/import?fileformat=xml&dupeOption=update"
For YAML job definition format:
#!/bin/sh
# protocol
protocol="http"
# basic rundeck info
rdeck_host="localhost"
rdeck_port="4440"
rdeck_api="36"
rdeck_token="qNcao2e75iMf1PmxYfUJaGEzuVOIW3Xz"
# specific api call info
rdeck_project="ProjectEXAMPLE"
rdeck_yml_file="HelloWorldYML.yaml"
# api call
curl -kSsv --header "X-Rundeck-Auth-Token:$rdeck_token" \
-F xmlBatch=@"$rdeck_yml_file" "$protocol://$rdeck_host:$rdeck_port/api/$rdeck_api/project/$rdeck_project/jobs/import?fileformat=yaml&dupeOption=update"
Here is the API call.
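Regarding the delay: a RUN instruction executes at image build time, when the Rundeck service is not running yet, so the import has to happen after the container starts. A minimal sketch of waiting for Rundeck before importing (the URL and sleep interval are assumptions):

#!/bin/sh
# Poll the Rundeck web port until it answers, then run the import call.
rdeck_url="http://localhost:4440"
until curl -ks -o /dev/null "$rdeck_url"; do
  echo "Waiting for Rundeck at $rdeck_url ..."
  sleep 5
done
# ... now run the jobs/import curl call from the examples above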
Jenkins is configured to deploy a PCF application. The PCF login credentials are configured in Jenkins as variables. Is there any way to fetch the PCF login credential details from the Jenkins variables?
echo "Pushing PCF App"
cf login -a https://api.pcf.org.com -u $cduser -p $cdpass -o ORG -s ORG_Dev
cf push pcf-app-04v2_$BUILD_NUMBER -b java_buildpack -n pcf-app-04v2_$BUILD_NUMBER -f manifest-prod.yml -p build/libs/*.jar
cf map-route pcf-app-04v2_$BUILD_NUMBER apps-pr03.cf.org.com --hostname pcf-app
cf delete -f pcf-app
cf rename pcf-app-04v2_$BUILD_NUMBER pcf-app
cf delete-orphaned-routes -f
Rather than accessing the Jenkins credentials from outside to run your app manually, you could define a simple pipeline in Jenkins with a script step that performs these tasks. In the pipeline you can access the credentials through the credentials() function and use the resulting environment variables in your commands.
E.g.
environment {
CF_USER = credentials('YOUR JENKINS CREDENTIAL KEY')
CF_PWD = credentials('CF_PWD')
}
def deploy() {
script {
sh '''#!/bin/bash
set -x
cf login -a https://api.pcf.org.com -u ${CF_USER} -p ${CF_PWD} -o ORG -s ORG_Dev
pwd
'''
  }
}
In my Jenkinsfile I'm running a Maven command to start a database migration. This database is running in a Docker container.
When deploying the database container we're using a Docker secret from the swarm manager node for the password.
Is there any way to use that Docker secret in the Jenkins pipeline script instead of putting it in plain text? I could use Jenkins credentials, but then I'd need to maintain the same secrets in two different places.
sh """$mvn flyway:info \
-Dproject.host=$databaseHost \
-Dproject.port=$databasePort \
-Dproject.schema=$databaseSchema \
-Dproject.user=db_user \
-Dproject.password=db_pass \ // <--- Use a Docker secret here...
"""
You can create a username and password credential in Jenkins, for example database_credential, then use it like this:
environment {
DATABASE_CREDS = credentials('database_credential')
}
...
sh """$mvn flyway:info \
-Dproject.host=$databaseHost \
-Dproject.port=$databasePort \
-Dproject.schema=$databaseSchema \
-Dproject.user=${DATABASE_CREDS_USR} \
-Dproject.password=${DATABASE_CREDS_PSW} // <--- The Jenkins credential is used here instead of a Docker secret
"""