Getting an error in the Ansible get_url module:
URL can't contain control characters.
Sample code:
- get_url:
    url: "{{ jenkins_url_repo }}" --no-check-certificate
I've also tried the following, but the error still persists.
url: "{{ jenkins_url_repo }} {{ no-check-cert }}"
url: https://pkg.jenkins.io/redhat-stable/jenkins.repo --no-check-certificate
url: "https://pkg.jenkins.io/redhat-stable/jenkins.repo --no-check-certificate"
url: 'https://pkg.jenkins.io/redhat-stable/jenkins.repo --no-check-certificate'
--no-check-certificate looks like it's supposed to be part of some other command, not part of the URL.
You've probably copied this from somewhere that fetches the file with wget while disabling verification of the remote SSL certificate, presumably because of incorrectly configured local trusted roots.
You probably want to just leave it off and let the certificate be verified so that you don't install malicious software!
If you're really sure you want to ignore the security check, the Ansible equivalent is the validate_certs option.
- get_url:
    url: "{{ jenkins_url_repo }}"
    # WARNING! DISABLING SECURITY! THIS IS DANGEROUS!
    validate_certs: no
    # WARNING! DISABLING SECURITY! THIS IS DANGEROUS!
I am using an SCDF deployment on k8s and trying to add a new Task Application from our internal Maven repo. By default, SCDF seems to only look in the [springRepo] repository. I followed the documentation to add a new maven repo here.
Since the documentation only covers a Cloud Foundry example, I added these lines to the application.yaml section based on my understanding.
spring:
  cloud:
    dataflow:
      task:
        platform:
          local:
            accounts:
              localDev:
                ********
                datasource:
                  uri: xxx
                *********
  maven:
    remote-repositories:
      repo1:
        url: https://repo1
        auth:
          username: user1
          password: pass1
        snapshot-policy:
          update-policy: daily
          checksum-policy: warn
        release-policy:
          update-policy: never
          checksum-policy: fail
While adding the app I used the syntax maven://<groupId>:<artifactId>[:<extension>[:<classifier>]]:<version>. However, when I launch the task, it fails with the error: Failed to resolve maven Resource XXX at configured Remote Repository : [springRepo]
How can I override it to search in my newly added repo? Why is SCDF still only searching the default [springRepo]? Appreciate any help.
The property prefix is maven.remote-repositories but what you have is spring.maven.remote-repositories.
You need to specify:
spring:
  cloud:
    dataflow:
      task:
        platform:
          local:
            accounts:
              localDev:
                ********
                datasource:
                  uri: xxx
                *********
maven:
  remote-repositories:
    repo1:
      url: https://repo1
      ...
Please note that the Kubernetes deployment works with containers rather than maven jar artifacts, and hence you need to have your apps registered with the app's URI using the docker: prefix.
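For example, from the SCDF shell the registration would look something like this (the app name and image coordinates below are hypothetical placeholders):

app register --name my-task --type task --uri docker:myorg/my-task-image:1.0.0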
I am using Ansible (v 2.8) as the provisioner behind a Packer template to build an AMI for a Jenkins master node. For previous versions, the playbook passed successfully. However, as of Jenkins version 2.176.3, the jenkins_plugin module has been throwing:
HTTP Error 403: No valid crumb was included in the request
I have retrieved the crumb and registered it in a variable. I have tried passing it to jenkins_plugin with the http_agent field, but that doesn't work. I tried using attributes, but that didn't help either. Unless I am missing something incredibly basic, I am at the end of my tether.
- name: Get Jenkins Crumb
  uri:
    force_basic_auth: yes
    url_username: ****
    url_password: ****
    url: http://localhost:8080/crumbIssuer/api/json
    return_content: yes
  register: jenkins_crumb
  until: jenkins_crumb.content.find('Please wait while Jenkins is getting ready') == -1
  retries: 10
  delay: 5

- name: Install plugin
  jenkins_plugin:
    name: "{{ item }}"
    version: latest
    force_basic_auth: yes
    url_username: ****
    url_password: ****
    http_agent: "Jenkins-Crumb:{{ jenkins_crumb.json.crumb }}"
  with_items: "{{ jenkins_plugins }}"
I expected installed plugins and a happily built AMI. What I got was "HTTP Error 403: No valid crumb was included in the request" and the Packer build failed.
It looks like a change to the crumb issuer in the 2.176 LTS release requires subsequent calls that use a crumb to also include the web session ID from the initial token-generation call.
CSRF tokens (crumbs) are now only valid for the web session they were created in to limit the impact of attackers obtaining them. Scripts that obtain a crumb using the /crumbIssuer/api URL will now fail to perform actions protected from CSRF unless the scripts retain the web session ID in subsequent requests.
In addition to the suggestion that you temporarily disable CSRF, the same doc suggests that you could disable only the new functionality, rather than CSRF as a whole, which should allow your Packer/Ansible run to complete as-written, just as it previously did.
To disable this improvement you can set the system property hudson.security.csrf.DefaultCrumbIssuer.EXCLUDE_SESSION_ID to true.
EDIT:
Adding the following line to /etc/default/jenkins cleared the CSRF issues in my own playbook (Ansible 2.8.4, Ubuntu 18.04, OpenJDK 11.0.4):
JAVA_ARGS="$JAVA_ARGS -Dhudson.security.csrf.DefaultCrumbIssuer.EXCLUDE_SESSION_ID=true"
Might be a good-enough crutch until tool maintainers catch up with the API changes.
It's exactly the cause @runningEagle mentioned. You need to propagate the initial session cookie value to all subsequent requests along with the crumb.
Required new Ansible code modifications:
...
# Requesting the crumb
- uri:
    url: "<crumb_URL>"
  register: response
...
# Actual action request
- uri:
    url: "<action_URL>"
    headers: '{ ... , "Cookie": "{{ response.set_cookie }}", ... }'
...
I was facing this issue too and, given the pointer that the work needed to be done within a session, I opened a PR for Ansible:
https://github.com/ansible/ansible/issues/61672
https://github.com/ansible/ansible/issues/61673
It is a small change and it should be possible to patch your local installation.
UPDATE: The patch was applied in Ansible v2.8.9 and v2.9.1; be sure to upgrade Ansible if you have an older version.
The solution I ended up applying was to disable CSRF using a handy piece of Groovy, and then re-enable it at the end of the play.
Thanks all for your help and recommendations.
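For reference, a minimal sketch of what that disable step could look like using the jenkins_script module; the admin credential variables are hypothetical placeholders, and re-enabling at the end of the play would set a fresh crumb issuer the same way:

- name: Temporarily disable CSRF protection (re-enable at end of play!)
  jenkins_script:
    url: http://localhost:8080
    user: "{{ jenkins_admin_user }}"          # hypothetical variable
    password: "{{ jenkins_admin_password }}"  # hypothetical variable
    script: |
      // Drop the crumb issuer entirely; restore it later with
      // Jenkins.instance.setCrumbIssuer(new hudson.security.csrf.DefaultCrumbIssuer(true))
      import jenkins.model.Jenkins
      Jenkins.instance.setCrumbIssuer(null)
      Jenkins.instance.save()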
Had tried setting the below per the documentation for /etc/default/jenkins (Most Linux) and /etc/sysconfig/jenkins (RHEL):
hudson.security.csrf.DefaultCrumbIssuer.EXCLUDE_SESSION_ID to true
Thus adding:
JAVA_ARGS="-Dhudson.security.csrf.DefaultCrumbIssuer.EXCLUDE_SESSION_ID=true"
But to no avail. Using @Yuri's answer fixed it for me. See below:
Requested the crumb the same as I did before.
Crumb:
- uri:
    url: "http://localhost:8080/crumbIssuer/api/xml?xpath=concat(//crumbRequestField,\":\",//crumb)"
    return_content: yes
  register: crumb
Action request:
- uri:
    method: POST
    url: "http://localhost:8080/credentials/store/system/domain/_/createCredentials"
    headers:
      Jenkins-Crumb: "{{ crumb.content.split(':')[1] }}"
      Cookie: "{{ crumb.set_cookie }}"
    body: |
      json={
        "": "0",
        "credentials": {
          "scope": "GLOBAL",
          "id": "identification",
          "username": "manu",
          "password": "bar",
          "description": "linda",
          "$class": "com.cloudbees.plugins.credentials.impl.UsernamePasswordCredentialsImpl"
        }
      }
    status_code: 302
Resolution:
Install a plugin named Strict Crumb Issuer.
Go to Manage Jenkins -> Configure Global Security -> CSRF Protection.
Select Strict Crumb Issuer.
Click on Advanced.
Uncheck the Check the session ID box.
Save it.
I am using Ansible to gather information from remote nodes and will then use this information to update relevant RPMs.
The issue I am having is collecting the version numbers of various applications and writing them to a file.
Playbook:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
  - name: Check K8S version.
    shell: kubectl --version
    register: k8s_version
  - debug: msg="{{ k8s_version.stdout }}"
Inventory file:
[kubernetes]
172.29.219.102
172.29.219.105
172.29.219.104
172.29.219.103
Output:
TASK [debug] *******************************************************************
ok: [172.29.219.102] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.103] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.105] => {
"msg": "Kubernetes v1.4.0"
}
ok: [172.29.219.104] => {
"msg": "Kubernetes v1.4.0"
}
The above portion is simple and works. Now I want to write this information to a file. I want something like:
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
Kubernetes v1.4.0
So I added the below line:
- local_action: copy content={{ k8s_version.stdout_lines }} dest=/tmp/test
My /tmp/test looks like :
# cat /tmp/test
["Kubernetes v1.4.0"]
There is only one value here. So I tried something different:
- local_action: lineinfile dest=/tmp/foo line="{{ k8s_version.stdout }}" insertafter=EOF
This resulted in:
# cat /tmp/foo
Kubernetes v1.4.0
I'm trying to figure out why I only see one value, whereas I should see the versions of every node in my inventory file. What am I doing wrong?
What am I doing wrong?
The lineinfile module does not perform the action "add a line to a file"; instead it ensures a given line is present in the file. If all your target nodes have the same version, it won't add the same line multiple times.
The copy module, on the other hand, was overwriting the file on each run.
If you need to register values for all hosts, you can for example create a template which will loop over hosts in the kubernetes group:
- copy:
    content: "{% for host in groups.kubernetes %}{{ hostvars[host].k8s_version.stdout }}\n{% endfor %}"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true
Another way would be to extract the values with map from hostvars, but given you want the values from the kubernetes host group only, I'm not sure it would be prettier (a sketch follows below). And having a for-loop in the template allows you to easily add host names.
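For completeness, a rough sketch of that map-based alternative, assuming the same k8s_version registered variable as above:

- copy:
    # Look up k8s_version for every host in the group, then keep only stdout
    content: "{{ groups.kubernetes | map('extract', hostvars, 'k8s_version') | map(attribute='stdout') | join('\n') }}\n"
    dest: /tmp/test
  delegate_to: localhost
  run_once: true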
According to this post:
Ansible register result of multiple commands
your desired variable is in k8s_version.results. To access it, you need to work with a template where you just iterate over it:
- local_action: template src=my_nodes.j2 dest=/tmp/test
And the template templates/my_nodes.j2:
{% for res in k8s_version.results %}
{{ res.stdout }}
{% endfor %}
The complete playbook would then be:
---
- name: Check Application Versions
  hosts: kubernetes
  tasks:
  - name: Check K8S version.
    shell: kubectl --version
    register: k8s_version
  - local_action: template src=my_nodes.j2 dest=/tmp/test
Description:
For some reason, I can't build or serve my jekyll site without "configuring a repo name". I have no clue why there would be a repo name needed for a local build or how to add the repo name.
This is the first time this has happened. I tried to migrate the default site from "minima" to "jekyll-theme-primer". When I fired it up with minima, it output the default site. I migrated the default post, index.md, and the about page to the default layout. Now it does not fire up and throws the error below. Can somebody explain how to move on from here?
Input:
jekyll -v: jekyll 3.7.2
Expected behaviour:
Tobiass-MBP:tobi.codes Tobias$ bundle exec jekyll serve
Configuration file: /Users/Tobias/Jekyll Blog/tobi.codes/_config.yml
Source: /Users/Tobias/Jekyll Blog/tobi.codes/
Destination: /Users/Tobias/Jekyll Blog/tobi.codes/_site
Incremental build: disabled. Enable with --incremental
Generating...
done in 0.62 seconds.
Auto-regeneration: enabled for ' /Users/Tobias/Jekyll Blog/tobi.codes/'
Server address: http://127.0.0.1:4000/
Server running... press ctrl-c to stop.
Actual behaviour:
Tobiass-MBP:tobi.codes Tobias$ bundle exec jekyll serve
Configuration file: /Users/Tobias/Jekyll Blog/tobi.codes/_config.yml
Source: /Users/Tobias/Jekyll Blog/tobi.codes
Destination: /Users/Tobias/Jekyll Blog/tobi.codes/_site
Incremental build: disabled. Enable with --incremental
Generating...
fatal: Not a git repository (or any of the parent directories): .git
fatal: Not a git repository (or any of the parent directories): .git
Liquid Exception: No repo name found. Specify using PAGES_REPO_NWO environment variables, 'repository' in your configuration, or set up an 'origin' git remote pointing to your github.com repository. in /_layouts/default.html
ERROR: YOUR SITE COULD NOT BE BUILT:
------------------------------------
No repo name found. Specify using PAGES_REPO_NWO environment variables, 'repository' in your configuration, or set up an 'origin' git remote pointing to your github.com repository.
The jekyll-theme-primer theme uses the jekyll-github-metadata plugin.
The error happens in the default layout when site.github is called, because you did not configure it.
You can get rid of this error by copying this file into _layouts/default.html and removing lines 19 to 23:
{% if site.github.private != true and site.github.license %}
  <div class="footer border-top border-gray-light mt-5 pt-3 text-right text-gray">
    This site is open source. {% github_edit_link "Improve this page" %}.
  </div>
{% endif %}
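Alternatively, as the error message itself suggests, you can leave the layout untouched and point jekyll-github-metadata at a repository by adding a repository key to your _config.yml (the value below is a hypothetical placeholder; use your own GitHub owner/name):

# _config.yml
repository: your-github-user/your-repo-name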
I am using Ansible with "docker_container" to deploy a web app to various environments. When the target host is a non-production server I set the "links" option with a variable, e.g.:
links: "{{ var_db_link }}"
.. and this works great ... when var_db_link is actually set.
Problem: I need to be able to leave this unset when deploying to a production host, because in that case the DB is never going to be a linked container. I was expecting that if it was not set, Ansible would ignore the links option and not try to use it. Instead it uses the unset value, which produced the error: FAILED! => {"changed": false, "failed": true, "msg": "Error creating container: 500 Server Error: Internal Server Error (\"{\"message\":\"Could not get container for \"}\")"}
Question: Is this even possible? (Can Ansible be told not to use an option when it has no value set?) Or should I (unfortunately) create separate roles/playbooks for each situation (i.e. with the "links" option set and without it)?
This line in the referenced commit gives a clue as to the solution:
Instead of trying to set it to null, just set it to an empty list:
links: "{{ [] if var_db_link == '' else var_db_link }}"
The behavior you want looks like what is described in Omitting Parameters.
You should do:
links: "{{ var_db_link | default(omit) }}"
You will need Ansible 1.8 or above.
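Put together, a minimal sketch in the context of the question (the container name and image below are hypothetical placeholders):

- docker_container:
    name: webapp
    image: myorg/webapp:latest
    # With default(omit), the links parameter is only passed when
    # var_db_link is defined, so production hosts simply skip it
    links: "{{ var_db_link | default(omit) }}"

This lets the same task serve both environments without separate roles or playbooks.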