Jenkins not able to connect to AWS account

I have a Jenkinsfile to deploy my application to an EKS cluster. On the Jenkins side I installed the AWS Credentials plugin and added a Jenkins credential with my secret key and access key values.
When I run the Jenkins build, the deployment stage fails with the error below.
Unable to connect to the server: getting credentials: exec: executable aws not found
It looks like you are trying to use a client-go credential plugin that is not installed.

I faced a similar issue and found it was a PATH settings issue: basically, aws is not found in PATH. What you could do is add "env" to the code and see what PATH values appear in the console output. To set the PATH correctly:
Manage Jenkins -> Configure System -> Global properties -> Environment variables: name=PATH, value= (Ex: /usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin/ )
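Alternatively, you can check and fix this from within the pipeline itself. A minimal sketch, assuming the AWS CLI lives in /usr/local/bin (replace that with whatever directory actually contains the aws binary on your agent):
pipeline {
    agent any
    stages {
        stage('Debug PATH') {
            steps {
                // print the effective PATH, as suggested above
                sh 'env | grep -i "^path="'
                // prepend an extra directory to PATH for this block only
                withEnv(['PATH+AWSCLI=/usr/local/bin']) {
                    sh 'which aws && aws --version'
                }
            }
        }
    }
}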

Related

Failed to set up Global Tool Configuration in Jenkins - HTTP Status 403 Forbidden

My Jenkins is running in an Azure App Service as a Java web application. As soon as the App Service starts, Jenkins starts up and runs successfully.
I'm accessing the Jenkins UI using the URL https://app-service-url/jenkins
I logged into Jenkins with the initial admin password. The next step is to choose between "Install suggested plugins" and "Select plugins to install";
upon clicking either of these options I'm getting "An error occurred during installation".
However, after a few retries the plugins get installed, but every further operation I perform returns HTTP 403 Forbidden.
I tried to add a JDK in Global Tool Configuration: before adding values it throws an error, and even when I save it ends with a 403 Forbidden result.
I am not able to do anything in Jenkins: I failed to install new plugins, set up basic configuration, run commands in the Jenkins script console, etc.
In all cases I periodically receive HTTP 403 Forbidden.
In the Jenkins system log I found related messages.
Solutions tried:
Tried enabling "Enable proxy compatibility" in Global Security - still 403 Forbidden.
Added hudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION = true in my App Service configuration (similar to setting an environment variable; see the note after this list).
Created an init.groovy script in JENKINS_HOME with the commands below:
import jenkins.model.Jenkins

def instance = Jenkins.instance
instance.setCrumbIssuer(null)
Tried to install the strict-crumb-issuer Jenkins plugin, but it failed to install.
Note: I tried the latest Jenkins version 2.375 as well as downgraded versions (2.361.2, 2.332, etc.).
I'm looking for a solution to fix this "No valid crumb" HTTP 403 Forbidden error.
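A note on the DISABLE_CSRF_PROTECTION attempt above: as far as I know, that flag is read as a Java system property at JVM startup, not as a plain OS environment variable, so it has to reach the Java command line. A hedged sketch of what that could look like as an App Service setting (the exact mechanism depends on how your Java web app is launched):
JAVA_OPTS=-Dhudson.security.csrf.GlobalCrumbIssuerConfiguration.DISABLE_CSRF_PROTECTION=true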

Problems transferring build artifacts from Jenkins running in a docker container

I'm a little bit of a newb with this CI/CD container stuff, so please correct me anywhere I'm wrong.
I can't seem to find out how to send my npm build files, created on my Jenkins instance (workspace), to a remote server. I have a pipeline that successfully pulls in my GitHub repo and does all my fun npm stuff (npm install, test, build). I see my build dir in my Jenkins instance /workspace.
My environment is as follows. We have a server where docker (with Portainer) is installed. Jenkins is running in a container with a volume mounted (my react build dir goes here). No issues with the pipeline or building etc. I just can't figure out how to push my artifacts from my jenkins workspace directory to my 'remote' dev server.
I can successfully open a console in my Jenkins container (via Portainer, as the jenkins user) and scp files from the workspace directory using my remote server creds (but a password is necessary).
I installed and used "Publish Over SSH" Jenkins plugin and get a successful "Test Configuration" from my setup.
I created my RSA keys on the REMOTE machine (that I'm trying to push my build files to).
I then pasted the private key (created without a password) into the plugin at the 'Use password authentication, or use a different key' section. Again, I get a successful test connection.
In my pipeline the last step is deploying, and I use this command:
sh 'scp -r build myusername@xx.xx.xx.xx:/var/files/react-tester'
I get a 'Permission denied (publickey,password).' error. I have no password associated with the RSA key. I tried both ways: creating the RSA key on the remote machine as my remote user, and on the Jenkins machine as the jenkins user. I've read examples of people creating the keys both ways, but I'm not sure on which user/machine combo to create the keys, or which section of the 'Publish Over SSH' plugin to paste them into.
I'm out of ideas.
First, go to "Manage Jenkins" > "Credentials" and add a new credential of type "SSH Username with private key", filling in the "Username" field and your private key (generate one if you haven't done so yet; you can also upload a key file). Don't forget that you have to copy the generated public key to the ~/.ssh/authorized_keys file of that user on the remote server.
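A minimal sketch of that key setup, assuming an ed25519 key with no passphrase (the file name and server address are placeholders):
# generate a key pair on the Jenkins side
ssh-keygen -t ed25519 -f ~/.ssh/jenkins_deploy -N ""
# append the public key to the remote user's authorized_keys
ssh-copy-id -i ~/.ssh/jenkins_deploy.pub myusername@xx.xx.xx.xx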
I'm assuming you're using a declarative or scripted pipeline here. In your code, after you've built your application, you can push it to your server by adding this stage to your pipeline:
pipeline {
    agent any
    stages {
        stage("Pushing changes to remote server") {
            steps {
                script {
                    def remote_server = "1.2.3.4"
                    withCredentials([sshUserPrivateKey(credentialsId: 'my-key', keyFileVariable: 'SSH_KEY', usernameVariable: 'SSH_USERNAME')]) {
                        // -r copies the build directory recursively; the credential
                        // variables are escaped so the shell expands them instead of Groovy
                        sh "scp -i \${SSH_KEY} -r build/ \${SSH_USERNAME}@${remote_server}:/var/files/react-tester/"
                    }
                }
            }
        }
    }
}
Best regards.

Why is a Jenkins script job failing to use proper AWS credentials?

I have a simple Jenkins job that just runs aws ssm send-command, and it fails with:
"An error occurred (AccessDeniedException) when calling the SendCommand operation: User: arn:aws:sts::1234567890:assumed-role/jenkins-live/i-1234567890abc is not authorized to perform: ssm:SendCommand on resource: arn:aws:ssm:us-east-1:1234567890:document/my-document-name"
However, the IAM permissions are correct. To prove it, I directly SSH onto that instance and run the exact same ssm command, and it works. I verify it's using the instance role by running aws sts get-caller-identity and it returns arn:aws:sts::1234567890:assumed-role/jenkins-live/i-1234567890abc which is the same user mentioned in the error message.
So indeed, this assumed role can run the command.
I even modified the Jenkins job to run aws sts get-caller-identity first, and it outputs the same user JSON.
Does Jenkins do some caching that I am unaware of? Why would I get that AccessDeniedException if the jenkins-live user can run the command otherwise?
First, install the AWS Credentials and AWS Steps plugins and register your AWS access key and secret access key in the Jenkins credentials store. Then, the next steps depend on whether you're using a freestyle job or a declarative/scripted pipeline.
If you're using a freestyle job: under "Build Environment", tick "Use secret text(s) or file(s)" and follow the next steps. After that, you're going to have your credentials as variables in your build;
If you're using a declarative/scripted pipeline: enclose your aws calls in a withAWS block, something like this:
withAWS(region: 'us-east-1', credentials: 'my-pretty-credentials') {
    // let's explode something
}
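For reference, a fuller sketch of the same idea in a declarative pipeline, reusing the question's send-command call (the document name, instance id, and credentials id are placeholders):
pipeline {
    agent any
    stages {
        stage('Send SSM command') {
            steps {
                withAWS(region: 'us-east-1', credentials: 'my-pretty-credentials') {
                    // runs with the registered credentials rather than whatever
                    // the agent's instance role happens to provide
                    sh 'aws ssm send-command --document-name my-document-name --targets Key=instanceids,Values=i-1234567890abc'
                }
            }
        }
    }
}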
Best regards.

Jenkins Pipeline gcloud problems in docker

I'm trying to set up a Jenkins pipeline according to
this article, but instead use Google Container Registry to push the Docker images to.
The problem: the part that fails is this Jenkinsfile stage block:
stage ('Push Docker Image To Container Registry') {
    docker.image('google/cloud-sdk:alpine').inside {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file ./gcp-key.json'
    }
}
The Error:
ERROR: (gcloud.components.update) Could not create directory [/.config/gcloud]: Permission denied.
Please verify that you have permissions to write to the parent directory.
I can't run any gcloud command, as the error above is what I get every time.
I tried creating the /.config directory manually, logged into the AWS instance, and opened up the permissions on the folder to everyone, but that didn't help either.
I also can't find anywhere how to properly set up Google Cloud for a Jenkins pipeline using Docker.
Any suggestions are greatly appreciated :)
It looks like it's trying to write data directly into your root file system directory.
The .config directory for gcloud would normally be in the following locations for username and/or root user:
/home/yourusername/.config/gcloud
/root/.config/gcloud
It looks like, for some reason, Jenkins thinks the parent directory should be /.
I would try checking where your Cloud SDK config directories are on the machine you are running this on (and for the user the script runs as):
$ sudo find / -iname "gcloud"
and look for locations similar to those printed above.
Could it be that the Cloud SDK is installed in a non-standard location on the machine?
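If the underlying cause turns out to be that the container user has no usable home directory (so gcloud falls back to /.config), one hedged workaround is to point the config directory somewhere writable via CLOUDSDK_CONFIG, gcloud's documented override for the config location; for example, into the Jenkins workspace:
stage ('Push Docker Image To Container Registry') {
    docker.image('google/cloud-sdk:alpine').inside("-e CLOUDSDK_CONFIG=${env.WORKSPACE}/.gcloud") {
        sh "echo ${env.GOOGLE_AUTH} > gcp-key.json"
        sh 'gcloud auth activate-service-account --key-file ./gcp-key.json'
    }
}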

Jenkins: ansible host not reachable

I am trying to get Jenkins to execute an Ansible playbook,
but I am getting an unreachable host error which I don't get otherwise.
fatal: [vogo-alpha.cloudapp.net]: UNREACHABLE! => {"changed": false, "msg": "Authentication failure.", "unreachable": true}
I have set this variable in the Ansible hosts file:
ansible_ssh_private_key_file=/home/luvpreet/.ssh/id_rsa
I think it is because the jenkins user is running those playbooks and cannot read this private key file. I tried giving the jenkins user a home folder, but it was not successful.
It works if I switch to the user luvpreet and then run the playbooks.
How do I switch to another user via the Jenkins shell?
OR
Is there any other way this problem can be solved ?
There are a couple of possibilities for why your workaround works. Most likely it's because, when running as jenkins, Ansible tries to ssh to your target machine as the jenkins user, which doesn't exist on said machine. I'd approach the problem from a different angle.
First, I'd install the Ansible plugin for Jenkins. This allows you to use the built-in credentials located at "Manage Jenkins > Manage Credentials". There you can copy and paste your key in (or point to a key file located on the Jenkins server) and set the username that will ssh to the target machine. In your job configuration choose "Invoke Ansible Playbook" for your build step rather than a shell step. There will be a "Credentials" parameter where you can specify the ssh key you added earlier. The rest should be pretty self-explanatory.
