Ansible run_once with_items

I'm trying to run this task in Ansible on only the first item in with_items. I am not able to change the cluster_server_names variable, as it is used elsewhere. Do I need a separate task to register a new variable containing only the server name I want, or is there another way?
- name: "Provision public IP in dev"
  uri:
    url: "{{ api_url }}/servers/{{ item }}/publicIPAddresses"
    headers:
      Authorization: "REMOVED"
      Content-Type: "application/json"
      Accept: "application/json"
    method: POST
    body_format: json
    status_code:
      - 200
      - 201
      - 202
    body:
      ports: [{"protocol":"TCP","port":"80"}]
  no_log: false
  register: blueprint
  run_once: true
  with_items:
    - "{{ cluster_server_names | json_query(get_server_names) }}"

I'm afraid I do not have a way to test this, but try the `first` filter:
- name: "Provision public IP in dev"
  uri:
    url: "{{ api_url }}/servers/{{ server }}/publicIPAddresses"
    headers:
      Authorization: "REMOVED"
      Content-Type: "application/json"
      Accept: "application/json"
    method: POST
    body_format: json
    status_code:
      - 200
      - 201
      - 202
    body:
      ports: [{"protocol":"TCP","port":"80"}]
  no_log: false
  register: blueprint
  run_once: true
  vars:
    server: "{{ cluster_server_names | json_query(get_server_names) | first }}"
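If the url should keep referencing `item`, an alternative is to keep the loop but slice the list down to its first element. A sketch, assuming `get_server_names` is the JMESPath expression already defined in the question:

```yaml
- name: "Provision public IP in dev (first server only)"
  uri:
    url: "{{ api_url }}/servers/{{ item }}/publicIPAddresses"
    # ... same parameters as in the original task ...
  run_once: true
  loop: "{{ (cluster_server_names | json_query(get_server_names))[:1] }}"
```

This leaves the task body untouched and also generalizes to "the first N servers" by changing the slice.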


How to return a static response in kong api-gateway

I am using the Kong API gateway in its DB-less mode, and as a result I have a kong.yaml file as below:
_format_version: "2.1"
_transform: true
services:
  - name: service1
    url: http://service1:port/sample-path
    routes:
      - name: service1
        methods:
          - GET
        paths:
          - /service1/sample-path
        strip_path: true
As you see, the api-gateway dispatches requests from the /service1/sample-path endpoint of the api-gateway to http://service1:port/sample-path.
I am looking for a way to add a new service such that when a request is sent to the /oas endpoint of the api-gateway, it loads a YAML or JSON file and returns it as the response.
In short, I am looking for a way to return a static response instead of dispatching the request.
Any idea how to do this?
Pseudo code:
_format_version: "2.1"
_transform: true
services:
  - name: service1
    url: http://service1:port/sample-path
    routes:
      - name: service1
        methods:
          - GET
        paths:
          - /service1/sample-path
        strip_path: true
  - name: oas
    url: STATIC YAML OR JSON FILE
    routes:
      - name: oas
        methods:
          - GET
        paths:
          - /oas
        strip_path: true
You can handle that with the kong-plugin-static-response plugin: https://github.com/IntelliGrape/kong-plugin-static-response
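With that plugin installed, the declarative config would look roughly like this. This is only a sketch of the shape: the plugin name and the config field names below are assumptions for illustration, so check them against the plugin's README before use:

```yaml
  - name: oas
    # The plugin short-circuits the request, so no real upstream is called.
    url: http://localhost:65535/unused
    routes:
      - name: oas
        methods:
          - GET
        paths:
          - /oas
        strip_path: true
    plugins:
      - name: static-response            # plugin name: verify against the README
        config:                          # field names below are assumptions
          status: 200
          content_type: application/json
          body: '{"openapi": "3.0.0"}'
```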

Using gRPC Web with Dart

I have a web application with the following stack:
UI: Flutter Web/Dart
Server: Go
Communication Protocol: gRPC/gRPC-Web
I have defined a few protobufs and compiled them into both Go and Dart successfully. When I run the Go server code, I am able to make gRPC calls with Kreya successfully; however, when I try making the same call from Flutter using grpc/grpc_web.dart, I keep running into the following error:
gRPC Error (code: 2, codeName: UNKNOWN, message: HTTP request completed without a status
(potential CORS issue), details: null, rawResponse: , trailers: {})
Here is my UI Code:
class FiltersService {
  static ResponseFuture<Filters> getFilters() {
    GrpcWebClientChannel channel =
        GrpcWebClientChannel.xhr(Uri.parse('http://localhost:9000'));
    FiltersServiceClient clientStub = FiltersServiceClient(
      channel,
    );
    return clientStub.getFilters(Void());
  }
}
Backend Code:
func StartServer() {
    log.Println("Starting server")
    listener, err := net.Listen("tcp", fmt.Sprintf(":%v", port))
    if err != nil {
        log.Fatalf("Unable to listen to port %v\n%v\n", port, err)
    }
    repositories.ConnectToMongoDB()
    grpcServer = grpc.NewServer()
    registerServices()
    if err = grpcServer.Serve(listener); err != nil {
        log.Fatalf("Failed to serve gRPC\n%v\n", err)
    }
}

// Register services defined in protobufs to call from UI
func registerServices() {
    cardsService := &services.CardsService{}
    protos.RegisterCardsServiceServer(grpcServer, cardsService)
    filtersService := &services.FiltersService{}
    protos.RegisterFiltersServiceServer(grpcServer, filtersService)
}
As mentioned, the API call is successful when Kreya is used to make the call, however the Dart code keeps failing.
I have tried wrapping the gRPC server in the gRPC web proxy, however that also failed from both Dart and Kreya. Here is the code I tried:
func StartProxy() {
    log.Println("Starting server")
    listener, err := net.Listen("tcp", fmt.Sprintf(":%v", port))
    if err != nil {
        log.Fatalf("Unable to listen to port %v\n%v\n", port, err)
    }
    repositories.ConnectToMongoDB()
    grpcServer = grpc.NewServer()
    registerServices()
    grpcWebServer := grpcweb.WrapServer(grpcServer)
    httpServer := &http.Server{
        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            if r.ProtoMajor == 2 {
                grpcWebServer.ServeHTTP(w, r)
            } else {
                w.Header().Set("Access-Control-Allow-Origin", "*")
                w.Header().Set("Access-Control-Allow-Methods", "POST, GET, OPTIONS, PUT, DELETE")
                w.Header().Set("Access-Control-Allow-Headers", "Accept, Content-Type, Content-Length, Accept-Encoding, X-CSRF-Token, Authorization, X-User-Agent, X-Grpc-Web")
                w.Header().Set("grpc-status", "")
                w.Header().Set("grpc-message", "")
                if grpcWebServer.IsGrpcWebRequest(r) {
                    grpcWebServer.ServeHTTP(w, r)
                }
            }
        }),
    }
    httpServer.Serve(listener)
}

func StartServer() {
    StartProxy()
}
I am also aware of Envoy Proxy which can be used in place of this gRPC web proxy, however if I do that, I would be exposing the endpoints on Envoy as REST APIs, which would then forward the request as a gRPC call. From what I understand, this would require maintaining 2 versions of the data models - one for communication between the UI and Envoy (in JSON), and the other for communication between Envoy and the server (as protobuf). Is this the correct understanding? How can I move past this?
*** EDIT: ***
As per the suggestion in the comments, I have tried using Envoy in place of the Go proxy. However, even now, I'm having trouble getting it to work. I'm now getting upstream connect error or disconnect/reset before headers. reset reason: overflow when trying to access the port exposed by Envoy (9001), though I can successfully call the backend service directly from Kreya on port 9000.
Here is my envoy.yaml:
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: host.docker.internal, port_value: 9001 }
      filter_chains:
        - filters:
            - name: envoy.filters.network.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_stream_duration:
                              grpc_timeout_header_max: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: id,token,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      http2_protocol_options: {}
      lb_policy: round_robin
      # win/mac hosts: Use address: host.docker.internal instead of address: localhost in the line below
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: host.docker.internal
                      port_value: 9000
I was able to resolve the issue by taking the suggestion in the comments and using Envoy to proxy instead of the Go proxy, though the solution from the linked post didn't work purely out of the box.
Here is the working envoy.yaml
admin:
  access_log_path: /tmp/admin_access.log
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901
static_resources:
  listeners:
    - name: listener_0
      address:
        socket_address: { address: 0.0.0.0, port_value: 9000 }
      filter_chains:
        - filters:
            - name: envoy.http_connection_manager
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
                codec_type: auto
                stat_prefix: ingress_http
                route_config:
                  name: local_route
                  virtual_hosts:
                    - name: local_service
                      domains: ["*"]
                      routes:
                        - match: { prefix: "/" }
                          route:
                            cluster: greeter_service
                            max_grpc_timeout: 0s
                      cors:
                        allow_origin_string_match:
                          - prefix: "*"
                        allow_methods: GET, PUT, DELETE, POST, OPTIONS
                        allow_headers: keep-alive,user-agent,cache-control,content-type,content-transfer-encoding,custom-header-1,x-accept-content-transfer-encoding,x-accept-response-streaming,x-user-agent,x-grpc-web,grpc-timeout
                        max_age: "1728000"
                        expose_headers: custom-header-1,grpc-status,grpc-message
                http_filters:
                  - name: envoy.filters.http.grpc_web
                  - name: envoy.filters.http.cors
                  - name: envoy.filters.http.router
  clusters:
    - name: greeter_service
      connect_timeout: 0.25s
      type: logical_dns
      lb_policy: round_robin
      http2_protocol_options: {}
      load_assignment:
        cluster_name: cluster_0
        endpoints:
          - lb_endpoints:
              - endpoint:
                  address:
                    socket_address:
                      address: host.docker.internal
                      port_value: 9001
The working Dockerfile
FROM envoyproxy/envoy:v1.20-latest
COPY ./envoy.yaml /etc/envoy/envoy.yaml
CMD /usr/local/bin/envoy -c /etc/envoy/envoy.yaml -l debug
And the commands used to run Envoy
docker build -t envoy .
The previous command has to be run in the same directory as the Dockerfile
docker run -p 9000:9000 -p 9901:9901 envoy
Where 9000 and 9901 are the ports I want to expose and be able to access externally (listed in the envoy.yaml)
**NOTE:**
Make sure to include http2_protocol_options: {} in the cluster. Following some suggested solutions online, I had removed it, which led to a connection reset due to a protocol error. I had been stuck on this issue for hours until I realized that Envoy can forward a request upstream over either HTTP/1.1 or HTTP/2, and that this option is what tells it to use HTTP/2 (which gRPC requires). Adding it back finally allowed me to make a gRPC call using the gRPC-Web client.
Hope this helps anyone else that may be coming across this issue.

Nuxt.js SSR w/ Nest API deployed to AWS in a Docker container

I've tried roughly 5 million variations on the theme here, as well as spent a lot of time poring through the Nuxt docs and I cannot get Nuxt SSR with a Nest backend working when deployed in a docker container to AWS. Below is my current setup. Please let me know if I've left anything out.
Here are the errors I'm getting:
https://www.noticeeverythingcreative.com/contact
This route makes a POST request for page meta to https://www.noticeeverythingcreative.com/api/contact/meta in the component's asyncData method. This produces a big old error from Axios. Below is the part I think is relevant, but let me know if you need more.
{
  errno: 'ECONNREFUSED',
  code: 'ECONNREFUSED',
  syscall: 'connect',
  address: 'xxx.xx.x.x', // IP address of the docker container
  port: 443,
  config: {
    url: 'https://www.noticeeverythingcreative.com/api/contact/meta',
    method: 'post',
    headers: {
      Accept: 'application/json, text/plain, */*',
      connection: 'close',
      'x-real-ip': 'xx.xxx.xxx.xxx', // My IP
      'x-forwarded-for': 'xx.xxx.xxx.xxx', // My IP
      'x-forwarded-proto': 'https',
      'x-forwarded-ssl': 'on',
      'x-forwarded-port': '443',
      pragma: 'no-cache',
      'cache-control': 'no-cache',
      'upgrade-insecure-requests': '1',
      'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.130 Safari/537.36',
      'sec-fetch-user': '?1',
      'sec-fetch-site': 'same-origin',
      'sec-fetch-mode': 'navigate',
      'accept-encoding': 'gzip, deflate',
      'accept-language': 'en-US,en;q=0.9',
      'Content-Type': 'application/json'
    },
    baseURL: 'https://www.noticeeverythingcreative.com'
  }
}
Here's the relevant part of my nuxt.config.js:
mode: 'universal',
srcDir: './src',
rootDir: './',
modules: ['#nuxtjs/axios'],
// NOTE: I get the same errors if I leave this block out
server: {
host: '0.0.0.0',
port: 3002
},
When I deploy I use a Dockerfile that copies all the needed files from my project directory into the container, runs yarn install, exposes port 3002, runs yarn build.prod, and ends with CMD ["yarn", "start"] (see below for relevant package.json scripts).
"scripts": {
  "clean.nuxt": "rimraf .nuxt",
  "build.client": "nuxt build",
  "build.server": "tsc -p tsconfig.server.json", // Transpile TypeScript from `src/server` into `.nuxt/api`
  "build.prod": "run-s clean.nuxt build.client build.server",
  "start": "cross-env NODE_ENV=production node .nuxt/api/index.js"
}
The docker image is built locally and pushed to an ECR repo. I then SSH into my server and run docker-compose up -d with this compose file:
version: '3.2'
services:
  my_service:
    image: link/to/my/image:${TAG:-prod}
    container_name: my_container
    hostname: www.noticeeverythingcreative.com
    restart: unless-stopped
    ports:
      # Http Port
      - 3002:3002
    networks:
      - web-network # External (the actual compose file also has the corresponding networks block at the bottom)
    environment:
      - NODE_ENV=production
      - API_URL=https://www.noticeeverythingcreative.com
      - HOST=www.noticeeverythingcreative.com
      - PORT=3002
      - VIRTUAL_PORT=3002
Here's my server-side controller that handles Nuxt rendering:
src/server/app/nuxt.controller.ts
import { Controller, Get, Request, Response } from '@nestjs/common';
import { join, resolve } from 'path';
import * as config from 'config';

const { Builder, Nuxt } = require('nuxt');
const nuxtConfig = require(join(resolve(), 'nuxt.config.js'));

@Controller()
export class NuxtController
{
    nuxt:any;

    constructor()
    {
        this.nuxt = new Nuxt(nuxtConfig);
        const Env = config as any;

        // Build only in dev mode
        if (Env.name === 'development')
        {
            const builder = new Builder(this.nuxt);
            builder.build();
        }
        else
        {
            this.nuxt.ready();
        }
    }

    @Get('*')
    async root(@Request() req:any, @Response() res:any)
    {
        if (this.nuxt)
        {
            return await this.nuxt.render(req, res);
        }
        else
        {
            res.send('Nuxt is disabled.');
        }
    }
}
Here is the client-side contact component's asyncData and head implementations:
async asyncData(ctx:any)
{
    // fetch page meta from API
    try
    {
        const meta = await ctx.$axios(<any>{
            method: 'post',
            url: `${ ctx.env.apiHost }/contact/meta`,
            headers: { 'Content-Type': 'application/json' }
        });
        return { meta: meta.data };
    }
    catch (error)
    {
        // Redirect to error page or 404 depending on server response
        console.log('ERR: ', error);
    }
}

head()
{
    return this.$data.meta;
}
The issues I'm having only occur in the production environment on the production host. Locally I can run yarn build.prod && cross-env NODE_ENV=development node .nuxt/api/index.js and the app runs and renders without error.
Update
If I allow the Nuxt app to actually run on localhost inside the docker container, I end up with the opposite problem. For example, if I change my nuxt.config.js server and axios blocks to
server: {
  port: 3002, // default: 3000
},
axios: {
  baseURL: 'http://localhost:3002'
}
And change the request to:
const meta = await ctx.$axios(<any>{
  method: 'post',
  // NOTE: relative path here instead of the absolute path above
  url: `/api/contact/meta`,
  headers: { 'Content-Type': 'application/json' }
});
return { meta: meta.data };
A fresh load of https://www.noticeeverythingcreative.com/contact renders fine. This can be confirmed by viewing the page source and seeing that the title has been updated and that there are no console errors. However, if you load the home page (https://www.noticeeverythingcreative.com) and click the contact link in the nav, you'll see POST http://localhost:3002/api/contact/meta net::ERR_CONNECTION_REFUSED.
NOTE: this is the version that is deployed as of the last edit of this question.
I've come up with a solution, but I don't love it, so if anyone has anything better, please post.
I got it working by allowing the Nuxt app to run on localhost inside the docker container, but making the http requests to the actual host (e.g., https://www.noticeeverythingcreative.com/whatever).
So, in nuxt.config.js:
// The server and axios blocks simply serve to set the port as something other than the default 3000
server: {
  port: 3002, // default: 3000
},
axios: {
  baseURL: 'http://localhost:3002'
},
env: {
  apiHost: process.env.NODE_ENV === 'production' ?
    'https://www.noticeeverythingcreative.com/api' :
    'http://localhost:3002/api'
}
In docker-compose.yml I removed anything that would make the host anything but localhost, as well as any env variables that nuxt is counting on (mostly because I can't quite figure out how those work in Nuxt, except that it's not the way I would expect):
version: '3.2'
services:
  my_service:
    image: link/to/my/image:${TAG:-prod}
    container_name: my_container
    # REMOVED
    # hostname: www.noticeeverythingcreative.com
    restart: unless-stopped
    ports:
      # Http Port
      - 3002:3002
    networks:
      - web-network # External (the actual compose file also has the corresponding networks block at the bottom)
    environment:
      # REMOVED
      # - API_URL=https://www.noticeeverythingcreative.com
      # - HOST=www.noticeeverythingcreative.com
      - NODE_ENV=production
      - PORT=3002
      - VIRTUAL_PORT=3002
And when making api requests:
// NOTE: ctx.env.apiHost is https://www.noticeeverythingcreative.com/api
const meta = await ctx.$axios(<any>{
  method: 'post',
  url: `${ ctx.env.apiHost }/contact/meta`,
  headers: { 'Content-Type': 'application/json' }
});
return { meta: meta.data };
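A common alternative to switching on NODE_ENV, as above, is to branch on where the code is actually executing: during SSR the Node process can reach itself on localhost inside the container, while the browser can only reach the public hostname. A minimal sketch; the helper name `apiBase` is hypothetical and the hostnames are the ones from this question:

```javascript
// Pick the API base URL by execution context rather than by NODE_ENV.
// `typeof window === 'undefined'` is true during SSR (Node) and false
// in the browser.
function apiBase() {
  const onServer = typeof window === 'undefined';
  return onServer
    ? 'http://localhost:3002/api'                      // SSR: call ourselves
    : 'https://www.noticeeverythingcreative.com/api';  // browser: public host
}

console.log(apiBase());
```

This keeps SSR requests off the public hostname entirely, which is exactly the path that produced the ECONNREFUSED above.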

scdf2 uaa request failed redirect to dashboard from login

Using the Kubernetes deployer, I cannot log into SCDF2 after applying UAA service security, using the SCDF 2.1.2 image version.
I get a loop between /login and /login?code=xxx from the uaa service because, I think, SCDF2 cannot get the token.
The process:
1) Initial launch of the uaa server.
A uaa service running in a k8s pod, using the following config
(applying https://github.com/making/uaa-on-kubernetes/blob/master/k8s/uaa.yml).
It needs a secret deployed with the cert and key.
When I created the CSR, the CN value for the certificate was "uaa-service", a valid hostname.
Then, uaa-service uses https and the certs:
apiVersion: v1
kind: Service
metadata:
  name: uaa-service
  labels:
    app: uaa
spec:
  type: LoadBalancer
  ports:
    - port: 8443
      nodePort: 8443
      name: uaa
  selector:
    app: uaa
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: uaa
spec:
  replicas: 1
  selector:
    matchLabels:
      app: uaa
  template:
    metadata:
      labels:
        app: uaa
    spec:
      initContainers:
        - image: openjdk:8-jdk-slim
          name: pem-to-keystore
          volumeMounts:
            - name: keystore-volume
              mountPath: /keystores
            - name: uaa-tls
              mountPath: /uaa-tls
          command:
            - sh
            - -c
            - |
              openssl pkcs12 -export \
                -name uaa-tls \
                -in /uaa-tls/tls.crt \
                -inkey /uaa-tls/tls.key \
                -out /keystores/uaa.p12 \
                -password pass:foobar
              keytool -importkeystore \
                -destkeystore /keystores/uaa.jks \
                -srckeystore /keystores/uaa.p12 \
                -deststoretype pkcs12 \
                -srcstoretype pkcs12 \
                -alias uaa-tls \
                -deststorepass changeme \
                -destkeypass changeme \
                -srcstorepass foobar \
                -srckeypass foobar \
                -noprompt
      containers:
        - name: uaa
          image: making/uaa:4.13.0
          command:
            - sh
            - -c
            - |
              mv /usr/local/tomcat/webapps/uaa.war /usr/local/tomcat/webapps/ROOT.war
              catalina.sh run
          ports:
            - containerPort: 8443
          volumeMounts:
            - name: uaa-config
              mountPath: /uaa
              readOnly: true
            - name: server-config
              mountPath: /usr/local/tomcat/conf/server.xml
              subPath: server.xml
              readOnly: true
            - name: keystore-volume
              mountPath: /keystores
              readOnly: true
          env:
            - name: _JAVA_OPTIONS
              value: "-Djava.security.policy=unlimited -Djava.security.egd=file:/dev/./urandom"
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8443
              scheme: HTTPS
            initialDelaySeconds: 90
            timeoutSeconds: 30
            failureThreshold: 50
            periodSeconds: 60
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8443
              scheme: HTTPS
            initialDelaySeconds: 90
            timeoutSeconds: 30
            periodSeconds: 60
            failureThreshold: 50
      volumes:
        - name: uaa-config
          configMap:
            name: uaa-config
            items:
              - key: uaa.yml
                path: uaa.yml
              - key: log4j.properties
                path: log4j.properties
        - name: server-config
          configMap:
            name: uaa-config
            items:
              - key: server.xml
                path: server.xml
        - name: keystore-volume
          emptyDir: {}
        - name: uaa-tls
          secret:
            secretName: uaa-tls
            # kubectl create secret tls uaa-tls --cert=uaa-service.crt --key=uaa-service.key
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: uaa-config
data:
  server.xml: |-
    <?xml version='1.0' encoding='utf-8'?>
    <Server port="-1">
      <Listener className="org.apache.catalina.startup.VersionLoggerListener" />
      <Listener className="org.apache.catalina.core.AprLifecycleListener" SSLEngine="on" />
      <Listener className="org.apache.catalina.core.JreMemoryLeakPreventionListener" />
      <Listener className="org.apache.catalina.mbeans.GlobalResourcesLifecycleListener" />
      <Listener className="org.apache.catalina.core.ThreadLocalLeakPreventionListener" />
      <Service name="Catalina">
        <Connector class="org.apache.coyote.http11.Http11NioProtocol" protocol="HTTP/1.1" connectionTimeout="20000"
                   scheme="https"
                   port="8443"
                   SSLEnabled="true"
                   sslEnabledProtocols="TLSv1.2"
                   ciphers="TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384"
                   secure="true"
                   clientAuth="false"
                   sslProtocol="TLS"
                   keystoreFile="/keystores/uaa.jks"
                   keystoreType="PKCS12"
                   keyAlias="uaa-tls"
                   keystorePass="changeme"
                   bindOnInit="false"/>
        <Connector protocol="org.apache.coyote.http11.Http11NioProtocol"
                   connectionTimeout="20000"
                   port="8989"
                   address="127.0.0.1"
                   bindOnInit="true"/>
        <Engine name="Catalina" defaultHost="localhost">
          <Host name="localhost"
                appBase="webapps"
                unpackWARs="true"
                autoDeploy="false"
                failCtxIfServletStartFails="true">
            <Valve className="org.apache.catalina.valves.RemoteIpValve"
                   remoteIpHeader="x-forwarded-for"
                   protocolHeader="x-forwarded-proto" internalProxies="10\.\d{1,3}\.\d{1,3}\.\d{1,3}|192\.168\.\d{1,3}\.\d{1,3}|169\.254\.\d{1,3}\.\d{1,3}|127\.\d{1,3}\.\d{1,3}\.\d{1,3}|172\.1[6-9]{1}\.\d{1,3}\.\d{1,3}|172\.2[0-9]{1}\.\d{1,3}\.\d{1,3}|172\.3[0-1]{1}\.\d{1,3}\.\d{1,3}"/>
            <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
                   prefix="localhost_access" suffix=".log" rotatable="false" pattern="%h %l %u %t &quot;%r&quot; %s %b"/>
          </Host>
        </Engine>
      </Service>
    </Server>
  log4j.properties: |-
    PID=????
    log4j.rootCategory=INFO, CONSOLE
    log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
    log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
    log4j.appender.CONSOLE.layout.ConversionPattern=[%d{yyyy-MM-dd HH:mm:ss.SSS}] uaa%X{context} - ${PID} [%t] .... %5p --- %c{1}: %m%n
    log4j.category.org.springframework.security=INFO
    log4j.category.org.cloudfoundry.identity=INFO
    log4j.category.org.springframework.jdbc=INFO
    log4j.category.org.apache.http.wire=INFO
  uaa.yml: |-
    logging:
      config: "/uaa/log4j.properties"
    require_https: true
    scim:
      groups:
        zones.read: Read identity zones
        zones.write: Create and update identity zones
        idps.read: Retrieve identity providers
        idps.write: Create and update identity providers
        clients.admin: Create, modify and delete OAuth clients
        clients.write: Create and modify OAuth clients
        clients.read: Read information about OAuth clients
        clients.secret: Change the password of an OAuth client
        scim.write: Create, modify and delete SCIM entities, i.e. users and groups
        scim.read: Read all SCIM entities, i.e. users and groups
        scim.create: Create users
        scim.userids: Read user IDs and retrieve users by ID
        scim.zones: Control a user's ability to manage a zone
        scim.invite: Send invitations to users
        password.write: Change your password
        oauth.approval: Manage approved scopes
        oauth.login: Authenticate users outside of the UAA
        openid: Access profile information, i.e. email, first and last name, and phone number
        groups.update: Update group information and memberships
        uaa.user: Act as a user in the UAA
        uaa.resource: Serve resources protected by the UAA
        uaa.admin: Act as an administrator throughout the UAA
        uaa.none: Forbid acting as a user
        uaa.offline_token: Allow offline access
    oauth:
      clients:
        uaa_admin:
          authorities: clients.read,clients.write,clients.secret,uaa.admin,scim.read,scim.write,password.write
          authorized-grant-types: client_credentials
          override: true
          scope: 'cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,dataflow.view,dataflow.create,dataflow.manage'
          secret: uaa_secret
          id: uaa_admin
      user:
        authorities:
          - openid
          - scim.me
          - cloud_controller.read
          - cloud_controller.write
          - cloud_controller_service_permissions.read
          - password.write
          - scim.userids
          - uaa.user
          - approvals.me
          - oauth.approvals
          - profile
          - roles
          - user_attributes
          - uaa.offline_token
    issuer:
      uri: https://uaa-service:8443
    login:
      url: https://uaa-service:8443
      entityBaseURL: https://uaa-service:8443
      entityID: cloudfoundry-saml-login
      saml:
        nameID: 'urn:oasis:names:tc:SAML:1.1:nameid-format:unspecified'
        assertionConsumerIndex: 0
        signMetaData: true
        signRequest: true
        socket:
          connectionManagerTimeout: 10000
          soTimeout: 10000
    authorize:
      url: https://uaa-service:8443/oauth/authorize
    uaa:
      # The hostname of the UAA that this login server will connect to
      url: https://uaa-service:8443
      token:
        url: https://uaa-service:8443/oauth/token
      approvals:
        url: https://uaa-service:8443/approvals
      login:
        url: https://uaa-service:8443/authenticate
      limitedFunctionality:
        enabled: false
        whitelist:
          endpoints:
            - /oauth/authorize/**
            - /oauth/token/**
            - /check_token/**
            - /login/**
            - /login.do
            - /logout/**
            - /logout.do
            - /saml/**
            - /autologin/**
            - /authenticate/**
            - /idp_discovery/**
          methods:
            - GET
            - HEAD
            - OPTIONS
I think the important values to remember are (I am in doubt about saml):
issuer:
  uri: https://uaa-service:8443
login:
  url: https://uaa-service:8443
  entityBaseURL: https://uaa-service:8443
authorize:
  url: https://uaa-service:8443/oauth/authorize
uaa:
  # The hostname of the UAA that this login server will connect to
  url: https://uaa-service:8443
  token:
    url: https://uaa-service:8443/oauth/token
  approvals:
    url: https://uaa-service:8443/approvals
  login:
    url: https://uaa-service:8443/authenticate
OK, the pod is deployed and running. Remember port 8443 for the uaa-service actions.
2) Update the uaa config with the admin and user users and their role mappings.
Because I cannot install the uaac gem, I run a docker image with the uaac client:
docker run --rm -it cf-uaac bash
then
I need to add the uaa-server pod IP to the docker image:
#echo "10.42.0.1 uaa-service" >> /etc/hosts
#uaac --skip-ssl-validation target https://uaa-service:8443
Unknown key: Max-Age = 86400
Target: http://uaa-service:8443
#uaac token client get uaa_admin -s uaa_secret
Unknown key: Max-Age = 86400
Successfully fetched token via client credentials grant.
Target: http://uaa-service:8443
Context: uaa_admin, from client uaa_admin
OK, I got a uaa_admin token to create the admin user, groups, etc.
Check again that the token is valid:
# uaac token decode
Note: no key given to validate token signature
jti: 8067e0122b20433ab817f684e7335d30
sub: uaa_admin
authorities: clients.read password.write clients.secret clients.write uaa.admin scim.write scim.read
scope: clients.read password.write clients.secret clients.write uaa.admin scim.write scim.read
client_id: uaa_admin
cid: uaa_admin
azp: uaa_admin
grant_type: client_credentials
rev_sig: 7216b9b8
iat: 1565017183
exp: 1565060383
iss: http://uaa-service:8443/oauth/token
zid: uaa
aud: scim uaa_admin password clients uaa
root@bf98436ccc82:/# uaac user add admin -p password --emails admin@mk.com
user account successfully added
root@bf98436ccc82:/# uaac user add user -p password --emails user@mk.com
user account successfully added
=========================================================================================================================================
root@bf98436ccc82:/# uaac group add "dataflow.view"
id: 9796f596-e540-4f3b-a32c-90b1bac5d0cc
meta
version: 0
created: 2019-08-05T15:00:01.014Z
lastmodified: 2019-08-05T15:00:01.014Z
members:
schemas: urn:scim:schemas:core:1.0
displayname: dataflow.view
zoneid: uaa
root@bf98436ccc82:/# uaac group add "dataflow.create"
id: c798e762-bcae-4d1f-8eef-2f7083df2d45
meta
version: 0
created: 2019-08-05T15:00:01.495Z
lastmodified: 2019-08-05T15:00:01.495Z
members:
schemas: urn:scim:schemas:core:1.0
displayname: dataflow.create
zoneid: uaa
root@bf98436ccc82:/# uaac group add "dataflow.manage"
id: 47aeba32-db27-456c-aa12-d5492127fe1f
meta
version: 0
created: 2019-08-05T15:00:01.986Z
lastmodified: 2019-08-05T15:00:01.986Z
members:
schemas: urn:scim:schemas:core:1.0
displayname: dataflow.manage
zoneid: uaa
=========================================================================================================================================
root@bf98436ccc82:/# uaac member add dataflow.view admin
success
root@bf98436ccc82:/# uaac member add dataflow.create admin
success
root@bf98436ccc82:/# uaac member add dataflow.manage admin
success
=========================================================================================================================================
root@bf98436ccc82:/# uaac member add dataflow.view user
success
root@bf98436ccc82:/# uaac member add dataflow.create user
success
root@bf98436ccc82:/# uaac member add dataflow.manage user
success
Now, map admin to the dataflow uaa client.
Important: the redirect url MUST be the same as in the original http request.
scdf2-data-flow-skipper:8844 is my login uri to the scdf2 dashboard
(I can't connect directly to the pod, so I use ssh tunnels instead).
# uaac client add dataflow \
    --name dataflow \
    --scope cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,dataflow.view,dataflow.create,dataflow.manage \
    --authorized_grant_types password,authorization_code,client_credentials,refresh_token \
    --authorities uaa.resource \
    --redirect_uri http://scdf2-data-flow-server:8844/login \
    --autoapprove openid \
    --secret dataflow
# uaac client add skipper \
    --name skipper \
    --scope cloud_controller.read,cloud_controller.write,openid,password.write,scim.userids,dataflow.view,dataflow.create,dataflow.manage \
    --authorized_grant_types password,authorization_code,client_credentials,refresh_token \
    --authorities uaa.resource \
    --redirect_uri http://scdf2-data-flow-skipper:8844/login \
    --autoapprove openid \
    --secret skipper
Using curl to get a valid token and check that the URIs are OK:
curl -k -v -d "username=admin&password=password&client_id=dataflow&grant_type=client_credentials" -u "dataflow:dataflow" https://uaa-service:8443/oauth/token
* Expire in 0 ms for 6 (transfer 0x5632e4386dd0)
* Trying 10.42.0.1...
* TCP_NODELAY set
* Expire in 200 ms for 4 (transfer 0x5632e4386dd0)
* Connected to uaa-service (10.42.0.1) port 8443 (#0)
* Server auth using Basic with user 'dataflow'
> POST /oauth/token HTTP/1.1
> Host: uaa-service:8443
> Authorization: Basic ZGF0YWZsb3c6ZGF0YWZsb3c=
> User-Agent: curl/7.64.0
> Accept: */*
> Content-Length: 81
> Content-Type: application/x-www-form-urlencoded
>
* upload completely sent off: 81 out of 81 bytes
< HTTP/1.1 200
< Cache-Control: no-cache, no-store, max-age=0, must-revalidate
< Pragma: no-cache
< Expires: 0
< X-XSS-Protection: 1; mode=block
< X-Frame-Options: DENY
< X-Content-Type-Options: nosniff
< Cache-Control: no-store
< Pragma: no-cache
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Mon, 05 Aug 2019 15:02:21 GMT
<
* Connection #0 to host uaa-service left intact
{"access_token":"eyJhbGciOiJIUzI1NiIsImtpZCI6ImxlZ2FjeS10b2tlbi1rZXkiLCJ0eXAiOiJKV1QifQ.eyJqdGkiOiJlNmU3YzNiOWVkMmM0ZmI5ODQ5OWE3MmQ2N2EzMjMyYSIsInN1YiI6ImRhdGFmbG93IiwiYXV0aG9yaXRpZXMiOlsidWFhLnJlc291cmNlIl0sInNjb3BlIjpbInVhYS5yZXNvdXJjZSJdLCJjbGllbnRfaWQiOiJkYXRhZmxvdyIsImNpZCI6ImRhdGFmbG93IiwiYXpwIjoiZGF0YWZsb3ciLCJncmFudF90eXBlIjoiY2xpZW50X2NyZWRlbnRpYWxzIiwicmV2X3NpZyI6IjFkMmUwMjVjIiwiaWF0IjoxNTY1MDE3MzQxLCJleHAiOjE1NjUwNjA1NDEsImlzcyI6Imh0dHA6Ly91YWEtc2VydmljZTo4MDgwL29hdXRoL3Rva2VuIiwiemlkIjoidWFhIiwiYXVkIjpbImRhdGFmbG93IiwidWFhIl19.G2f8bIMbUWJOz8kcZYtU37yYhTtMOEJlsrvJFINnUjo","token_type":"bearer","expires_in":43199,"scope":"uaa.resource","jti":"e6e7c3b9ed2c4fb98499a72d67a3232a"}
At this point, it seems the uaa server is running OK and I can get a token from a "docker" process... let's continue using pods.
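To double-check the claims (iss, scope, aud) inside the returned access_token without uaac, the JWT payload can be decoded locally. This is plain base64url decoding of the middle token segment, nothing UAA-specific, and it does not verify the signature:

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Return the (unverified) payload claims of a JWT."""
    segment = token.split(".")[1]          # JWTs are header.payload.signature
    segment += "=" * (-len(segment) % 4)   # restore stripped base64url padding
    return json.loads(base64.urlsafe_b64decode(segment))
```

Running this on the access_token from the curl output above shows the iss claim the server actually issued, which is worth comparing with the issuer.uri configured in uaa.yml.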
3) Deploy Skipper and SCDF2 using UAA security.
Skipper and SCDF2 are deployed using the same values (with different client_id values, of course):
LOGGING_LEVEL_ROOT: DEBUG
KUBERNETES_NAMESPACE: (v1:metadata.namespace)
SERVER_PORT: 8080
SPRING_CLOUD_CONFIG_ENABLED: false
SPRING_CLOUD_DATAFLOW_FEATURES_ANALYTICS_ENABLED: false
SPRING_CLOUD_KUBERNETES_SECRETS_ENABLE_API: true
SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED: true
SPRING_CLOUD_KUBERNETES_SECRETS_PATHS: /etc/secrets
SPRING_CLOUD_KUBERNETES_CONFIG_NAME: scdf2-data-flow-server
SPRING_CLOUD_SKIPPER_CLIENT_SERVER_URI: http://${SCDF2_DATA_FLOW_SKIPPER_SERVICE_HOST}/api
SPRING_CLOUD_DATAFLOW_SERVER_URI: http://${SCDF2_DATA_FLOW_SERVER_SERVICE_HOST}:${SCDF2_DATA_FLOW_SERVER_SERVICE_PORT}
SPRING_CLOUD_DATAFLOW_SECURITY_CF_USE_UAA: true
SECURITY_OAUTH2_CLIENT_CLIENT_ID: dataflow
SECURITY_OAUTH2_CLIENT_CLIENT_SECRET: dataflow
SECURITY_OAUTH2_CLIENT_SCOPE: openid
SPRING_CLOUD_DATAFLOW_SECURITY_AUTHORIZATION_MAP_OAUTH_SCOPES: true
SECURITY_OAUTH2_CLIENT_ACCESS_TOKEN_URI: https://uaa-service:8443/oauth/token
SECURITY_OAUTH2_CLIENT_USER_AUTHORIZATION_URI: https://uaa-service:8443/oauth/authorize
SECURITY_OAUTH2_RESOURCE_USER_INFO_URI: https://uaa-service:8443/userinfo
SECURITY_OAUTH2_RESOURCE_TOKEN_INFO_URI: https://uaa-service:8443/check_token
SPRING_APPLICATION_JSON: { "com.sun.net.ssl.checkRevocation": "false", "maven": { "local-repository": "myLocalrepoMK", "remote-repositories": { "mk-repository": {"url": "http://${NEXUS_SERVICE_HOST}:${NEXUS_SERVICE_PORT}/repository/maven-releases/","auth": {"username": "admin","password": "admin123"}},"spring-repo": {"url": "https://repo.spring.io/libs-release","auth": {"username": "","password": ""}},"spring-repo-snapshot": {"url": "https://repo.spring.io/libs-snapshot/","auth": {"username": "","password": ""}}}} }
Port 8443 is used for pod-to-pod communication...
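Note that the whole `SPRING_APPLICATION_JSON` value must parse as a single JSON object or Spring will silently misread it. A quick sanity check, using an abridged copy of the value above (shortened here; the real value follows the same shape with the full repository list):

```python
import json

# Abridged copy of the SPRING_APPLICATION_JSON value shown above
raw = '''{ "com.sun.net.ssl.checkRevocation": "false",
           "maven": { "local-repository": "myLocalrepoMK",
                      "remote-repositories": { "mk-repository": { "url": "http://nexus/repository/maven-releases/" } } } }'''

cfg = json.loads(raw)  # raises json.JSONDecodeError if the value is malformed
print(cfg["com.sun.net.ssl.checkRevocation"])  # -> false
```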
And the Skipper and SCDF2 config maps:
management:
  endpoints:
    web:
      base-path: /management
  security:
    roles: MANAGE
spring:
  cloud:
    dataflow:
      security:
        authorization:
          map-oauth-scopes: true
          role-mappings:
            ROLE_CREATE: dataflow.create
            ROLE_DEPLOY: dataflow.deploy
            ROLE_DESTROY: dataflow.destroy
            ROLE_MANAGE: dataflow.manage
            ROLE_MODIFY: dataflow.modify
            ROLE_SCHEDULE: dataflow.schedule
            ROLE_VIEW: dataflow.view
          enabled: true
          rules:
            # About
            - GET /about => hasRole('ROLE_VIEW')
            # Audit
            - GET /audit-records => hasRole('ROLE_VIEW')
            - GET /audit-records/** => hasRole('ROLE_VIEW')
            # Boot Endpoints
            - GET /management/** => hasRole('ROLE_MANAGE')
At this point I wonder: why can't I see a login mapping defined?
I deploy Skipper and SCDF2, and the first problem is that every health check returns 401... OK, let's continue...
The request makes no progress after:
http://scdf2-data-flow-server:8844/login?code=ETFX6qfQMw&state=Fudfts
It does not get past the /login page of SCDF2 to the dashboard.
The request hangs at:
http://scdf2-data-flow-server:8844/login&response_type=code&scope=openid&state=5HST0f
I think the whole UAA flow completes and then redirects back to the login of the SCDF security model.
Login and loop.
But what is happening?
The login request arrives at SCDF2, SCDF2 checks with UAA that everything is correct, and comes back to be processed as a new request in SCDF2, which again sends a request to the UAA server...
Then I restart SCDF with debug logging...
The request is now:
GET /login?code=W7luipeEGG&state=7yiI9S HTTP/1.1
and the log shows:
2019-08-12 15:37:58.413 DEBUG 1 --- [nio-8080-exec-5] o.a.tomcat.util.net.SocketWrapperBase : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@39c5463b:org.apache.tomcat.util.net.NioChannel@6160a9db:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:58562]], Read from buffer: [0]
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.net.NioEndpoint : Socket: [org.apache.tomcat.util.net.NioEndpoint$NioSocketWrapper@39c5463b:org.apache.tomcat.util.net.NioChannel@6160a9db:java.nio.channels.SocketChannel[connected local=/127.0.0.1:8080 remote=/127.0.0.1:58562]], Read direct from socket: [593]
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.coyote.http11.Http11InputBuffer : Received [GET /login?code=W7luipeEGG&state=7yiI9S HTTP/1.1
Host: scdf2-data-flow-server:8844
Connection: keep-alive
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 Safari/537.36
DNT: 1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3
Sec-Fetch-Site: none
Accept-Encoding: gzip, deflate
Accept-Language: es-ES,es;q=0.9,en-US;q=0.8,en;q=0.7
Cookie: JSESSIONID=077168452F9CCF4378715DC3FE20D4B2
]
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.t.util.http.Rfc6265CookieProcessor : Cookies: Parsing b[]: JSESSIONID=077168452F9CCF4378715DC3FE20D4B2
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.catalina.connector.CoyoteAdapter : Requested cookie session id is 077168452F9CCF4378715DC3FE20D4B2
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.c.authenticator.AuthenticatorBase : Security checking request GET /login
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] org.apache.catalina.realm.RealmBase : No applicable constraints defined
2019-08-12 15:37:58.414 DEBUG 1 --- [nio-8080-exec-5] o.a.c.authenticator.AuthenticatorBase : Not subject to any constraint
2019-08-12 15:37:58.415 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Set encoding to UTF-8
2019-08-12 15:37:58.415 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Decoding query null UTF-8
2019-08-12 15:37:58.416 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Start processing with input [code=W7luipeEGG&state=7yiI9S]
2019-08-12 15:37:58.425 ERROR 1 --- [nio-8080-exec-5] o.s.c.c.s.OAuthSecurityConfiguration : An error occurred while accessing an authentication REST resource.
But with debug logging I can now see:
2019-08-12 15:37:58.416 DEBUG 1 --- [nio-8080-exec-5] org.apache.tomcat.util.http.Parameters : Start processing with input [code=W7luipeEGG&state=7yiI9S]
2019-08-12 15:37:58.425 ERROR 1 --- [nio-8080-exec-5] o.s.c.c.s.OAuthSecurityConfiguration : An error occurred while accessing an authentication REST resource.
org.springframework.security.authentication.BadCredentialsException: Could not obtain access token
at org.springframework.security.oauth2.client.filter.OAuth2ClientAuthenticationProcessingFilter.attemptAuthentication(OAuth2ClientAuthenticationProcessingFilter.java:107)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:212)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:334)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilterInternal(BasicAuthenticationFilter.java:158)
at org.springframework.security.oauth2.client.filter.OAuth2ClientAuthenticationProcessingFilter.attemptAuthentication(OAuth2ClientAuthenticationProcessingFilter.java:105)
... 66 common frames omitted
Caused by: org.springframework.web.client.ResourceAccessException: I/O error on POST request for "https://uaa-service:8443/oauth/token": sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target; nested exception is javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.springframework.web.client.RestTemplate.doExecute(RestTemplate.java:744)
at org.springframework.web.client.RestTemplate.execute(RestTemplate.java:691)
at org.springframework.security.oauth2.client.token.OAuth2AccessTokenSupport.retrieveToken(OAuth2AccessTokenSupport.java:137)
... 72 common frames omitted
Caused by: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1946)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:316)
at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:310)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1639)
at sun.security.ssl.ClientHandshaker.processMessage(ClientHandshaker.java:223)
at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1037)
at sun.security.ssl.Handshaker.process_record(Handshaker.java:965)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1064)
at sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1367)
at sun.security.ssl.SSLSocketImpl.startHandshake(SSLSocketImpl.java:1395)
at sun.security.ssl.ClientHandshaker.serverCertificate(ClientHandshaker.java:1621)
... 88 common frames omitted
Caused by: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at sun.security.provider.certpath.SunCertPathBuilder.build(SunCertPathBuilder.java:141)
at sun.security.provider.certpath.SunCertPathBuilder.engineBuild(SunCertPathBuilder.java:126)
at java.security.cert.CertPathBuilder.build(CertPathBuilder.java:280)
at sun.security.validator.PKIXValidator.doBuild(PKIXValidator.java:392)
... 94 common frames omitted
2019-08-12 15:37:58.426 DEBUG 1 --- [nio-8080-exec-5] o.a.c.c.C.[Tomcat].[localhost] : Processing ErrorPage[errorCode=0, location=/error]
2019-08-12 15:37:58.427 DEBUG 1 --- [nio-8080-exec-5] o.a.c.c.C.[.[.[/].[dispatcherServlet] : Disabling the response for further output
OK, now we get:
sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
It seems the JVM needs the certificate added to its cacerts or similar...
So, how can I add the new cacert from the uaa-server to the JVM of SCDF2?
Is that the next step to get SCDF2 working with UAA?
What am I doing wrong?
Do I need to add the uaa-service cert to the JVM of the running SCDF2 pod?
Please help!!!
And the problem was:
In the server-deployment, I removed the following config:
#- name: SECURITY_OAUTH2_CLIENT_SCOPE
#  value: 'openid'
Do not apply any configuration about scope anywhere.
Because, if the scope is omitted or null, all scopes are assigned to the client, and no confirmation is needed for third-party permissions...
Warning: you can find a lot of samples using this config... tested?
Do not apply any UAA config to Skipper... only the cacert for UAA into the JKS.
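About that cacert: the usual JVM-side fix is `keytool -importcert -alias uaa -file uaa.pem -keystore $JAVA_HOME/lib/security/cacerts` inside the image or pod (alias and paths here are illustrative, not taken from this deployment). The underlying idea, appending the server certificate to the trust bundle the client consults, can be sketched for a plain PEM bundle:

```python
from pathlib import Path

def add_to_bundle(bundle: Path, cert_pem: str) -> bool:
    """Append a PEM certificate to a CA bundle unless it is already there.

    Mirrors what `keytool -importcert` does for a JKS truststore, but for a
    plain PEM bundle; file names and paths are illustrative only.
    """
    existing = bundle.read_text() if bundle.exists() else ""
    cert_pem = cert_pem.strip()
    if cert_pem in existing:
        return False  # already trusted, nothing to do
    with bundle.open("a") as f:
        f.write(cert_pem + "\n")
    return True
```

For a real JKS truststore keytool remains the tool; mounting a pre-built truststore into the pod and pointing `javax.net.ssl.trustStore` at it avoids rebuilding the image.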

How to parse an XML response in Ansible?

I'm running the panos_op Ansible module and struggling to parse the output.
ok: [localhost] => {
    "result": {
        "changed": true,
        "failed": false,
        "msg": "Done",
        "stdout": "{\"response\": {\"#status\": \"success\", \"result\": \"no\"}}",
        "stdout_lines": [
            "{\"response\": {\"#status\": \"success\", \"result\": \"no\"}}"
        ],
        "stdout_xml": "<response status=\"success\"><result>no</result></response>"
    }
}
This is as close as I can get to assigning the value for "result".
ok: [localhost] => {
    "result.stdout": {
        "response": {
            "#status": "success",
            "result": "no"
        }
    }
}
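For reference, the value being chased lives inside a JSON string in `result.stdout`, so it has to be parsed before `response.result` is reachable, roughly what Ansible's `from_json` filter does. A sketch in Python against a payload like the one shown above:

```python
import json

# JSON string as registered in result.stdout (copied from the output above)
stdout = '{"response": {"#status": "success", "result": "no"}}'

data = json.loads(stdout)
result = data["response"]["result"]
print(result)  # -> no
```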
My goal is to set up a conditional loop for the Ansible task.
tasks:
  - name: Checking for pending changes
    panos_op:
      ip_address: '{{ host }}'
      password: '{{ operator_pw }}'
      username: '{{ operator_user }}'
      cmd: 'check pending-changes'
    register: result
    until: result.stdout.result == "no"
    retries: 10
    delay: 5
    tags: check
How can I make this work?
UPDATE: I've tried it another way, but now I have a new issue trying to deal with a literal "<" char.
tasks:
  - name: Checking for pending changes
    panos_op:
      ip_address: '{{ host }}'
      password: '{{ operator_pw }}'
      username: '{{ operator_user }}'
      cmd: 'check pending-changes'
    register: result
  - fail:
      msg: The Firewall has pending changes to commit.
    when: '"<result>no"' not in result.stdout_xml
ERROR:
did not find expected key
Any help at all would be greatly appreciated.
As I just mentioned in another answer, since Ansible 2.4, there's an xml module.
Playbook
---
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Get result from xml.
      xml:
        xmlstring: "<response status=\"success\"><result>no</result></response>"
        content: "text"
        xpath: "/response/result"
Output
PLAY [localhost] ***************************************************************

TASK [Get result from xml.] ****************************************************
ok: [localhost] => changed=false
  actions:
    namespaces: {}
    state: present
    xpath: /response/result
  count: 1
  matches:
  - result: 'no'
  msg: 1
  xmlstring: |-
    <?xml version='1.0' encoding='UTF-8'?>
    <response status="success"><result>no</result></response>

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0
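For completeness, the same extraction the `xml` module performs can be reproduced with Python's standard `xml.etree`, which is handy for checking an XPath before wiring it into a playbook:

```python
import xml.etree.ElementTree as ET

# The stdout_xml string from the panos_op result above
xml_string = '<response status="success"><result>no</result></response>'

root = ET.fromstring(xml_string)
status = root.get("status")       # attribute on <response>
result = root.findtext("result")  # equivalent of XPath /response/result
print(status, result)  # -> success no
```

On the Ansible side, a plausible form of the conditional the question was after would be `until: (result.stdout | from_json).response.result == "no"`, which also sidesteps the quoting trap with the literal `<` character.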

Resources