Deployment of sample task fails in PCF - spring-cloud-dataflow

spring-cloud-dataflow-server-2.0.1.RELEASE.jar
I am trying to deploy the sample task app on SCDF@PCF.
Deployment fails with the following exception:
Shell side :
No Launcher found for the platform named 'default'. Available platform names are []
org.springframework.cloud.dataflow.rest.client.DataFlowClientException: No Launcher found for the platform named 'default'. Available platform names are []
SCDF Server side :
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT 2019-03-25 13:00:33.815 ERROR 19 --- [io-8080-exec-10] o.s.c.d.s.c.RestControllerAdvice : Caught exception while handling a request
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT java.lang.IllegalStateException: No Launcher found for the platform named 'default'. Available platform names are []
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService.findTaskLauncher(DefaultTaskExecutionService.java:199)
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService.executeTask(DefaultTaskExecutionService.java:151)
2019-03-25T08:00:33.81-0500 [APP/PROC/WEB/0] OUT at org.springframework.cloud.dataflow.server.service.impl.DefaultTaskExecutionService$$FastClassBySpringCGLIB$$422cda43.invoke(<generated>)
Any ideas? Do I need to set a launcher?

It appears you may not have configured a platform for Tasks.
Starting from v2.0, SCDF provides the flexibility to configure multiple platform backends for Tasks, so you can choose from a list of platforms where you'd want to launch the Task. You can read more about the feature in the release highlights blog.
If you haven't already configured the Task platform properties, please use the sample manifest.yml as a reference.
If you have set those properties and you still see this issue, feel free to share the manifest.yml - we can review for correctness. Of course, make sure to remove sensitive creds before sharing it.
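For reference, here is a minimal sketch of what the Task-platform configuration could look like in the server's manifest.yml; the API URL, org, space, domain, credentials, and service name below are placeholders, not values specific to your environment:

---
applications:
- name: data-flow-server
  path: spring-cloud-dataflow-server-2.0.1.RELEASE.jar
  env:
    SPRING_APPLICATION_JSON: |-
      {
        "spring.cloud.dataflow.task.platform.cloudfoundry.accounts": {
          "default": {
            "connection": {
              "url": "https://api.run.example.com",
              "org": "my-org",
              "space": "my-space",
              "domain": "apps.example.com",
              "username": "my-user",
              "password": "my-password",
              "skipSslValidation": false
            },
            "deployment": {
              "services": "my-mysql-service"
            }
          }
        }
      }

The account name default is what makes the "No Launcher found for the platform named 'default'" error go away, since the server registers one Task launcher per configured account.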

Just as complementary information:
I got the same error when launching on a Kubernetes platform (OpenShift) and resolved the problem by adding the following snippet to the Data Flow server's application.yaml:
spring:
  cloud:
    dataflow:
      task:
        platform:
          kubernetes:
            accounts:
              dev:
                namespace: devNamespace
                imagePullPolicy: Always
                entryPointStyle: exec
                limits:
                  cpu: 4
              qa:
                namespace: qaNamespace
                imagePullPolicy: IfNotPresent
                entryPointStyle: boot
                limits:
                  memory: 2048m
Reference: Spring Cloud Data Flow documentation

Related

Run Ambassador in local dev environment without Kubernetes

I am trying to run the Ambassador API gateway in my local dev environment to simulate what I'll end up with in production; the difference is that in prod my solution will be running in Kubernetes. To do so, I'm installing Ambassador into Docker Desktop and adding the required configuration to route requests to my microservices. Unfortunately, it did not work for me, and I'm getting the error below:
upstream connect error or disconnect/reset before headers. reset reason: connection failure
I assume that's due to an issue in the mapping file, which is as follows:
apiVersion: ambassador/v2
kind: Mapping
name: institutions_mapping
prefix: /ins/
service: localhost:44332
So what I'm basically trying to do is rewrite all requests coming to http://{ambassador_url}/ins to a service running locally in IIS Express (through Visual Studio) on port 44332.
What am I missing?
I think you may be better off using another of Ambassador Labs' tools, called Telepresence.
https://www.telepresence.io/
With Telepresence you can take the local service you have running on localhost and project it into your cluster to see how it performs. This way you don't need to spin up a local cluster, and you can get real-time feedback on how your service operates with the other services in the cluster.
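As a rough sketch of that workflow (assuming Telepresence 2 is installed and you have access to a remote cluster; the service name institutions and the named port http are hypothetical):

# Bridge your local machine into the cluster's network
telepresence connect
# List the services that are available for interception
telepresence list
# Route the cluster's traffic for the service to localhost:44332
telepresence intercept institutions --port 44332:http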

Kubernetes [error: no kind "CertificateSigningRequest"]

While trying to approve the certificate for RBAC in Kubernetes, I am getting an error.
I created a certificate signing request for Kubernetes, student-csr:
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: student-csr
spec:
  groups:
  - system:authenticated
  request: <encoded key>
  usages:
  - digital signature
  - key encipherment
  - client auth
Then I ran kubectl create -f signing-request.yaml and the output was certificatesigningrequest.certificates.k8s.io/student-csr created
And then kubectl get csr shows
NAME          AGE    SIGNERNAME                     REQUESTOR       CONDITION
student-csr   100s   kubernetes.io/legacy-unknown   minikube-user   Pending
So far so good. But the problem occurred when I tried to approve it with kubectl certificate approve student-csr:
No resources found
error: no kind "CertificateSigningRequest" is registered for version "certificates.k8s.io/v1" in scheme "k8s.io/kubectl/pkg/scheme/scheme.go:28"
I have no idea why. I tried to search, but there is nothing similar to this kind of error.
Tools I am using:
Minikube: v1.13.1
Kubernetes: v1.19.2
Docker: 19.03.12
macOS: Catalina (10.15.6)
*** Using minikube with minikube start --container-runtime=docker --vm-driver=virtualbox
Any kind of help much appreciated.
Thank you in advance.
I faced this issue while I was running kubectl v1.17 against a v1.19 cluster:
$ kubectl version --short
Client Version: v1.17.0
Server Version: v1.19.2
I fixed it by updating my kubectl to v1.19:
$ kubectl version --short
Client Version: v1.19.0
Server Version: v1.19.2
In the Kubernetes v1.19 release notes you can find the following changes:
The CertificateSigningRequest API is promoted to certificates.k8s.io/v1 with the following changes:
- spec.signerName is now required, and requests for kubernetes.io/legacy-unknown are not allowed to be created via the certificates.k8s.io/v1 API
- spec.usages is now required, may not contain duplicate values, and must only contain known usages
- status.conditions may not contain duplicate types
- status.conditions[*].status is now required
- status.certificate must be PEM-encoded, and contain only CERTIFICATE blocks (#91685, @liggitt) [SIG API Machinery, Architecture, Auth, CLI and Testing]
So the error you see:
no kind "CertificateSigningRequest" is registered for version "certificates.k8s.io/v1"
means that you should be using apiVersion: certificates.k8s.io/v1 instead of apiVersion: certificates.k8s.io/v1beta1.
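For example, the student-csr from the question, migrated to the v1 API, might look like the sketch below; this assumes the certificate is meant for client authentication, since under v1 spec.signerName is required and kubernetes.io/kube-apiserver-client is the built-in signer for client-auth certificates:

apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: student-csr
spec:
  groups:
  - system:authenticated
  request: <encoded key>
  # required in v1; assumed signer for a client certificate
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - digital signature
  - key encipherment
  - client auth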
In order to change your API versions you can use the kubectl convert command:
Convert config files between different API versions. Both YAML and
JSON formats are accepted.
The command takes filename, directory, or URL as input, and convert it
into format of version specified by --output-version flag. If target
version is not specified or not supported, convert to latest version.
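For example (note that in recent kubectl releases, convert is no longer built in and ships as the separate kubectl-convert plugin):

# Convert the v1beta1 manifest to certificates.k8s.io/v1
kubectl convert -f signing-request.yaml --output-version certificates.k8s.io/v1 > signing-request-v1.yaml

Since v1 requires spec.signerName, double-check the converted output and add a valid signer by hand if it is left unset.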
You might have skipped the cgroup driver configuration step when installing kubeadm.
Check out this resource: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#configuring-a-cgroup-driver
It seems you have two versions of the CSR API in play. Change your student-csr apiVersion to certificates.k8s.io/v1; it will work, I guess.
The certificates controller is not enabled by default in Minikube; there is an open issue: https://github.com/kubernetes/minikube/issues/1647
This is the reason why you can create your API object but cannot approve the certificate.
However, it may be possible to make it work using extra params:
https://github.com/kubernetes/minikube/issues/1647#issuecomment-311138886
I got the same issue with my minikube (v1.24.0). kubectl was not the cause of the error:
kubectl version --short
Client Version: v1.22.3
Server Version: v1.22.3
Got the same error as you mentioned:
error: unable to recognize "*****.yml": no matches for kind "CertificateSigningRequest" in version "certificates.k8s.io/v1beta1"
I solved the problem by changing the apiVersion and adding a signerName entry in my YAML file:
apiVersion: certificates.k8s.io/v1beta1
to
apiVersion: certificates.k8s.io/v1
The successfully applied final manifest is as below:
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: mycsr
spec:
  groups:
  - system:authenticated
  request: <BASE64_CSR>
  signerName: kubernetes.io/kube-apiserver
  usages:
  - digital signature
  - key encipherment
  - server auth
  - client auth
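With the v1 manifest in place, creating and approving the CSR might then look like this sketch (mycsr as above; the last step is one common way to retrieve the issued certificate):

kubectl apply -f mycsr.yaml
kubectl certificate approve mycsr
# Extract the issued certificate once the CSR is approved
# (GNU base64; use -D on macOS)
kubectl get csr mycsr -o jsonpath='{.status.certificate}' | base64 -d > mycsr.crt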

How to configure Kibana for Swisscom elasticsearch public cloud (CloudFoundry)

Note: This question is specific to the Elasticsearch service provided by Swisscom
Question: (a.k.a: tl;dr)
What configuration is required to get the official Kibana docker container to connect to a Swisscom Elasticsearch Service?
Background:
Up until about a year ago, the Swisscom public cloud offered a full ELK stack (Elasticsearch, Logstash, Kibana) in a single service offering. When this service was discontinued, Swisscom replaced it by offering just the Elasticsearch service and asked clients to set up their own Kibana and Logstash solutions via the provided CloudFoundry buildpacks (Kibana, Logstash). The migration recommendation was discussed here: https://ict.swisscom.ch/2018/04/building-the-elk-stack-on-our-new-elasticsearch/
More recently, the underlying OS (called "stack") that runs the applications on Swisscom's CloudFoundry-based PaaS offering has been upgraded. The aforementioned buildpacks are now outdated and have been declared deprecated by Swisscom. The suggestion now is to move to the generic Docker container provided by Elastic, as discussed here: https://github.com/swisscom/kibana-buildpack/issues/3
What I tried:
CloudFoundry generally works well with Docker containers, and the whole thing should be as straightforward as providing some valid configuration to the Docker container. My current manifest.yml for Kibana looks something like this, but the Kibana application ultimately fails to connect:
---
applications:
- name: kibana-test-example
  docker:
    image: docker.elastic.co/kibana/kibana:6.1.4
  memory: 4G
  disk_quota: 5G
  services:
  - elasticsearch-test-service
  env:
    SERVER_NAME: kibana-test
    ELASTICSEARCH_URL: https://abcdefghijk.elasticsearch.lyra-836.appcloud.swisscom.com
    ELASTICSEARCH_USERNAME: username_provided_by_elasticsearch_service
    ELASTICSEARCH_PASSWORD: password_provided_by_elasticsearch_service
    XPACK_MONITORING_ENABLED: true
Additional Info:
The Elasticsearch Service provided by Swisscom currently runs on version 6.1.3. As far as I'm aware it has x-pack installed.
What errors are you getting?
I played around with the configuration a bit and have seen different errors, most of which appear to be related to failing authentication against the Elasticsearch Service.
Here is some exemplary initial log output (seriously, though, you need a running Kibana just to be able to read that...)
2019-05-10T08:08:34.43+0200 [CELL/0] OUT Cell eda692ed-f4c3-4a5e-86aa-c0d1641b029f successfully created container for instance 385e5b7f-1570-46cd-532a-c5b4
2019-05-10T08:08:48.60+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:08:48Z","tags":["info","optimize"],"pid":6,"message":"Optimizing and caching bundles for graph, monitoring, apm, kibana, stateSessionStorageRedirect, timelion, login, logout, dashboardViewer and status_page. This may take a few minutes"}
2019-05-10T08:15:07.68+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["info","optimize"],"pid":6,"message":"Optimization of bundles for graph, monitoring, apm, kibana, stateSessionStorageRedirect, timelion, login, logout, dashboardViewer and status_page complete in 379.08 seconds"}
2019-05-10T08:15:07.77+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:kibana@6.1.4","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.82+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:elasticsearch@6.1.4","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.86+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:xpack_main@6.1.4","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.86+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:graph@6.1.4","info"],"pid":6,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.88+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:monitoring@6.1.4","info"],"pid":6,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"}
2019-05-10T08:15:07.89+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:xpack_main@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from yellow to red - Authentication Exception","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
2019-05-10T08:15:07.89+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:graph@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from yellow to red - Authentication Exception","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
2019-05-10T08:15:07.89+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:07Z","tags":["status","plugin:elasticsearch@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from yellow to red - Authentication Exception","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"}
2019-05-10T08:15:11.39+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:11Z","tags":["reporting","warning"],"pid":6,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"}
2019-05-10T08:15:11.39+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:11Z","tags":["status","plugin:reporting@6.1.4","error"],"pid":6,"state":"red","message":"Status changed from uninitialized to red - Authentication Exception","prevState":"uninitialized","prevMsg":"uninitialized"}
The actually relevant error message seems to be this:
2019-05-10T08:15:11.66+0200 [APP/PROC/WEB/0] OUT {"type":"log","@timestamp":"2019-05-10T06:15:11Z","tags":["license","warning","xpack"],"pid":6,"message":"License information from the X-Pack plugin could not be obtained from Elasticsearch for the [data] cluster. [security_exception] unable to authenticate user [ABCDEFGHIJKLMNOPQRST] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [ABCDEFGHIJKLMNOPQRST] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [ABCDEFGHIJKLMNOPQRST] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}"}
I have tried setting XPACK_SECURITY_ENABLED: false, as recommended elsewhere, as well as setting the actual SERVER_HOST, which seemed to make things worse.
I would very much appreciate a working example from someone using the existing Kibana docker images to connect to the Swisscom-provided Elasticsearch Service.
Could it be that you confused username and password? When I check my service key, the password comes before the username, which might have led to a copy-paste error on your side:
cf service-key myece mykey|grep kibana_system
"kibana_system_password": "aKvOpMVrXGCJ4PJht",
"kibana_system_username": "aksTxVNyLU4JWiQOE6V",
I tried pushing Kibana with your manifest.yml and it works perfectly in my case.
Swisscom has also updated the documentation on how to use Kibana and Logstash with Docker:
https://docs.developer.swisscom.com/service-offerings/kibana-docker.html
https://docs.developer.swisscom.com/service-offerings/logstash-docker.html

Wildfly Swarm: Environment specific configuration of Keycloak Backend

Given is a Java EE application on WildFly that uses Keycloak as its authentication backend, configured in project-stages.yml:
swarm:
  deployment:
    my.app.war:
      web:
        login-config:
          auth-method: KEYCLOAK
The application will be deployed to different environments using a GitLab CD pipeline. Therefore, Keycloak specifics must be configured per environment.
So far, the only working configuration I have found is adding a keycloak.json like this (the same file in every environment):
{
  "realm": "helsinki",
  "bearer-only": true,
  "auth-server-url": "http://localhost:8180/auth",
  "ssl-required": "external",
  "resource": "backend"
}
According to the WildFly Swarm documentation, it should be possible to configure Keycloak in project-stages.yml like this:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
        realm: keycloakrealmname
        bearer-only: true
        ssl-required: external
        resource: keycloakresource
        auth-server-url: http://localhost:8180/auth
But when I deploy the application, no configuration is read:
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) KeycloakServletException initialization
2018-03-08 06:29:03,540 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) using /WEB-INF/keycloak.json
2018-03-08 06:29:03,542 WARN [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) No adapter configuration. Keycloak is unconfigured and will deny all requests.
2018-03-08 06:29:03,545 DEBUG [org.keycloak.adapters.undertow.KeycloakServletExtension] (ServerService Thread Pool -- 12) Keycloak is using a per-deployment configuration.
If you take a look at the source of the above class, it looks like the only way around this is to provide a KeycloakConfigResolver. Does WildFly Swarm provide a resolver that reads the project-stages.yml?
How can I configure environment-specific auth-server-urls?
A workaround would be to have different keycloak.json files, but I would rather use the project-stages.yml.
I have a small WildFly Swarm project which configures Keycloak exclusively via project-defaults.yml here: https://github.com/Ladicek/swarm-test-suite/tree/master/wildfly/keycloak
From the snippets you post, the only thing that looks wrong is this:
swarm:
  keycloak:
    secure-deployments:
      my-deployment:
The my-deployment name needs to be the actual name of the deployment, same as what you have in
swarm:
  deployment:
    my.app.war:
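So, presumably, the two sections should line up on the same name, something like this sketch (the realm/resource values are carried over from your question):

swarm:
  deployment:
    my.app.war:
      web:
        login-config:
          auth-method: KEYCLOAK
  keycloak:
    secure-deployments:
      my.app.war:
        realm: keycloakrealmname
        bearer-only: true
        ssl-required: external
        resource: keycloakresource
        auth-server-url: http://localhost:8180/auth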
If you already have that, then I'm afraid I'd have to start speculating: which WildFly Swarm version do you use? Which Keycloak version?
You could also specify the swarm.keycloak.json.path property in your YAML:
swarm:
  keycloak:
    json:
      path: path-to-keycloak-config-files-folder/keycloak-prod.json
and you can dynamically select a YAML config file during startup of the application with the -Dswarm.project.stage option, for example:
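(A sketch; the stage name production and the file names are hypothetical.)

# Select the 'production' stage from project-stages.yml at startup
java -jar my-app-swarm.jar -Dswarm.project.stage=production
# Or point directly at an environment-specific Keycloak config file
java -jar my-app-swarm.jar -Dswarm.keycloak.json.path=config/keycloak-prod.json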
Further references:
cheat sheet: http://design.jboss.org/redhatdeveloper/marketing/wildfly_swarm_cheatsheet/cheat_sheet/images/wildfly_swarm_cheat_sheet_r1v1.pdf
using multiple swarm project stages (profiles) example: https://github.com/thorntail/thorntail/tree/master/testsuite/testsuite-project-stages/src/test/resources
https://docs.thorntail.io/2018.1.0/#_keycloak

Dataflow 1.2.0 YAML configuration changes

Yesterday I upgraded my development environment to Spring Cloud Dataflow 1.2.0, along with all of my sink/source app dependencies.
I have two main issues:
javaOpts: -Xmx128m is no longer being picked up, so locally deployed apps have the default Xmx value.
Here is the format of my previously working Dataflow YAML config.
See full here: https://pastebin.com/p1JmLnLJ
spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              deployer:
                local:
                  javaOpts: -Xmx128m
Kafka config options like ssl.truststore.location etc. are not being read correctly. Another Stack Overflow post indicated these must be escaped like this: "[ssl.truststore.location]" (see the sketch below). Is there some documented working YAML config or a list of breaking changes for 1.2.0? The file-based authentication block was also moved, but I was able to figure that one out.
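For reference, a sketch of what that escaping might look like in the Data Flow YAML, assuming the options go into the Kafka binder's configuration map (the paths and password are placeholders):

spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    configuration:
                      # brackets keep the dotted key from being split by relaxed binding
                      "[ssl.truststore.location]": /path/to/truststore.jks
                      "[ssl.truststore.password]": changeit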
Yes, it looks like a bug in the Spring Cloud Local Deployer's handling of the common application properties passed via args. Created https://github.com/spring-cloud/spring-cloud-deployer-local/issues/48 to track this.
