I'm using spring-cloud-dataflow with the spring-cloud-task module, but I'm having trouble launching a simple example in a container.
I wrote the tiny example from section 6.3 (writing the code) and then deployed it,
but when I try to execute it, it throws:
java.lang.IllegalArgumentException: Invalid TaskExecution, ID 1 not found
at org.springframework.util.Assert.notNull(Assert.java:134)
at org.springframework.cloud.task.listener.TaskLifecycleListener.doTaskStart(TaskLifecycleListener.java:200)
In my evaluation I used the Spring Boot example,
and to run it in SCDF I added @EnableTask and configured a SQL Server datasource, but it doesn't work.
I'm insisting on using Spring Cloud Data Flow because I've read that Spring Batch Admin is at end-of-life, but 2.0.0.BUILD-SNAPSHOT works
well and the tiny example works, as opposed to what happens in Spring Cloud Data Flow with the @EnableTask annotation.
It's probably a misunderstanding on my part, but could you please provide a tiny example, or point me to a URL?
Referencing https://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#configuration-rdbms, the datasource arguments have to be passed to the Data Flow server and the Data Flow shell (if you're using it) in order for Data Flow to persist the execution/task/step-related data in the desired datasource.
Example from the link for a MySQL datasource (something similar can be configured for SQL Server):
java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.0.0.BUILD-SNAPSHOT.jar \
--spring.datasource.url=jdbc:mysql:<db-info> \
--spring.datasource.username=<user> \
--spring.datasource.password=<password> \
--spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
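Also note that the launched task application itself must point at the same database as the Data Flow server; otherwise it cannot find the TaskExecution row that SCDF creates at launch time, which is exactly the "Invalid TaskExecution, ID 1 not found" error above. Since the question asks for a tiny example, a minimal task application might look like the sketch below (the class name and runner are illustrative):

import org.springframework.boot.CommandLineRunner;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.task.configuration.EnableTask;
import org.springframework.context.annotation.Bean;

// Minimal Spring Cloud Task application (illustrative sketch).
// @EnableTask registers the listener that records task executions
// in the TASK_EXECUTION table of the configured datasource.
@SpringBootApplication
@EnableTask
public class TinyTaskApplication {

    public static void main(String[] args) {
        SpringApplication.run(TinyTaskApplication.class, args);
    }

    // Runs once when the task is launched; the JVM then exits.
    @Bean
    public CommandLineRunner runner() {
        return args -> System.out.println("Hello from Spring Cloud Task");
    }
}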
This error:
Invalid TaskExecution, ID 1 not found
is generally about SCDF's datasource: SCDF cannot find the task execution table in its own database (not the application's database).
You might fix it by adding the database driver or by correcting the connection URL so that it points to SCDF's database.
This question below might help:
How to properly compile/package a Task for Spring Cloud Data Flow
I use a Spring Boot application with spring-rabbit (version 2.2.2). Since the nature of my application is very dynamic, the queues and bindings are declared dynamically using the RabbitAdmin.declareXXX methods, so they are not declared as Spring beans.
From my understanding (and testing), RabbitAdmin's functionality for auto-recovering the topology when the RabbitMQ server restarts applies only to exchanges/queues/bindings that were declared as Spring beans (am I correct?).
I tried to use the underlying Rabbit client's auto recovery feature using the following methods:
cachingConnectionFactory.getRabbitConnectionFactory().setAutomaticRecoveryEnabled(true)
cachingConnectionFactory.getRabbitConnectionFactory().setTopologyRecoveryEnabled(true)
However, after the RabbitMQ server restarts, the Spring application fails with:
one org.springframework.amqp.rabbit.connection.AutoRecoverConnectionNotCurrentlyOpenException
and multiple continuous com.rabbitmq.client.ShutdownSignalException: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no queue 'recovery-q1' in vhost '/'
and nothing gets recovered.
Note that in a test without Spring, where the queue is created directly through the channel, the queue is recovered properly along with its consumers.
Is there anything else I can configure to make this work?
Currently, Spring only recovers Declarable objects that are defined as beans in the application context.
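For example, dynamically created topology can be made recoverable by registering it as beans, e.g. with a Declarables container bean (a minimal sketch; the queue, exchange, and binding names are illustrative):

import org.springframework.amqp.core.Binding;
import org.springframework.amqp.core.BindingBuilder;
import org.springframework.amqp.core.Declarables;
import org.springframework.amqp.core.DirectExchange;
import org.springframework.amqp.core.Queue;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TopologyConfig {

    // Declaring the topology as beans lets RabbitAdmin re-declare it
    // automatically after the connection is re-established.
    @Bean
    public Declarables recoveryTopology() {
        Queue queue = new Queue("recovery-q1");
        DirectExchange exchange = new DirectExchange("recovery-ex");
        Binding binding = BindingBuilder.bind(queue).to(exchange).with("recovery");
        return new Declarables(queue, exchange, binding);
    }
}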
Based on your user name, I assume you opened this feature request: https://github.com/spring-projects/spring-amqp/issues/1365
Posting this here in case people come across this question.
I am trying to use spring-cloud-dataflow-rest-client v2.6.0 in an application to launch Spring Cloud tasks. I followed the instructions on this page, https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#appendix-identity-provider-azure, to secure the Spring Cloud Data Flow server using Azure AD. However, I am unable to get the setup that was provided for the Data Flow shell to work with the SCDF REST client. I know the shell internally uses the SCDF REST client, so I'm not sure why it won't work for me.
Which properties should I use if my application, which uses the SCDF REST client, is to launch tasks the way the shell does?
I tried the following properties, but I keep getting an invalid-scope error.
-Dspring.cloud.dataflow.client.authentication.client-id=yhas7wqh-2a5d-4795-babb-b6213f896b52
-Dspring.cloud.dataflow.client.authentication.client-secret=asjajd8hhsasajdassakja
-Dspring.cloud.dataflow.client.authentication.oauth2.client-registration-id=Batch-Launcher
-Dspring.cloud.dataflow.client.authentication.token-uri=https://login.microsoftonline.com/d8bb2fd3-e835-4d68-b9db-7402a9bf39f1/oauth2/v2.0/token
-Dspring.cloud.dataflow.client.authentication.scope=api://dataflow-server/dataflow.deploy,api://dataflow-server/dataflow.view,offline_access
-Dspring.cloud.dataflow.client.authentication.oauth2.username=abcddemo@afdemo12.onmicrosoft.com
-Dspring.cloud.dataflow.client.authentication.oauth2.password=abcdPwd
-Dspring.cloud.dataflow.client.authentication.basic.username=abcddemo@afdemo12.onmicrosoft.com
-Dspring.cloud.dataflow.client.authentication.basic.password=abcdPwd
The exception that I get:
Caused by: org.springframework.security.oauth2.core.OAuth2AuthorizationException: [invalid_scope] AADSTS70011: The provided request must include a 'scope' input parameter. The provided value for the input parameter 'scope' is not valid. The scope api://dataflow-server/dataflow.deploy offline_access is not valid.
Could someone from the SCDF team also update the Azure provider docs to include how one can use the SCDF REST client, like the shell does, to invoke the SCDF API?
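For anyone attempting something similar in the meantime, one option is to obtain the Azure AD token outside the client and hand it to the DataFlowTemplate yourself. A minimal sketch (the server URL and token source are assumptions, and the exact TaskOperations.launch signature should be checked against your client version):

import java.net.URI;
import org.springframework.cloud.dataflow.rest.client.DataFlowTemplate;
import org.springframework.web.client.RestTemplate;

public class ScdfClientFactory {

    public static DataFlowTemplate create(String serverUri, String accessToken) {
        // RestTemplate preconfigured with the message converters SCDF expects.
        RestTemplate restTemplate = DataFlowTemplate.getDefaultDataflowRestTemplate();

        // Attach the token (obtained from Azure AD out of band) to every request.
        restTemplate.getInterceptors().add((request, body, execution) -> {
            request.getHeaders().setBearerAuth(accessToken);
            return execution.execute(request, body);
        });

        return new DataFlowTemplate(URI.create(serverUri), restTemplate);
    }
}

The resulting template's taskOperations() then exposes the same launch operations the shell uses.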
I'm deploying ksqlDB in a Docker container, and I need to create some tables (if they don't exist) when the database starts. Is there a way to do that? Any examples?
As of version 0.11, you would need to have something that could query the server's REST endpoint to determine which tables exist, and then submit SQL to create any missing tables. This is obviously a little clunky.
I believe the soon-to-be-released 0.12 comes with CREATE OR REPLACE support for creating streams and tables. With this feature, all you'd need is a script with a few curl commands inside your Docker image that waits for the server to become available and then fires in a SQL script with your table definitions using CREATE OR REPLACE.
The 0.12 release also comes with IF NOT EXISTS syntax support for streams, tables, connectors, and types. So you can do:
CREATE STREAM IF NOT EXISTS FOO (ID INT) WITH (..);
Details of what to pass to the server can be found in the REST API docs.
Alternatively, you should be able to script sending in the commands using the CLI.
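As a rough illustration of the scripted approach (using Java's built-in HTTP client instead of curl; the host, port, topic, and statement are all illustrative):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class KsqlInit {
    public static void main(String[] args) throws Exception {
        // The 0.12 IF NOT EXISTS syntax from above; the WITH clause must match your setup.
        String body = "{\"ksql\": \"CREATE STREAM IF NOT EXISTS FOO (ID INT) "
                + "WITH (KAFKA_TOPIC='foo', VALUE_FORMAT='JSON', PARTITIONS=1);\", "
                + "\"streamsProperties\": {}}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://ksqldb-server:8088/ksql")) // adjust host/port
                .header("Content-Type", "application/vnd.ksql.v1+json; charset=utf-8")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}

In practice you would loop/retry until the server answers, then fire in the full set of statements.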
Every time I change the chaincode and deploy it, it returns a new chaincode ID and I have to run init again. But in a production environment we cannot do this; we just want to update the chaincode, and the historical data must be kept. I searched, and https://jira.hyperledger.org/browse/FAB-22 tells me Hyperledger does not currently support chaincode upgrade. So what can I do if I need this now? If I've misunderstood, please tell me. Thanks!
As you found in FAB-22, Fabric v0.5-0.6 has no support for chaincode "upgrade". The reason for this behavior is how Fabric saves information in the ledger.
When a chaincode calls the PutState method:
PutState(customKey string, value []byte) error
Fabric automatically adds the ChaincodeId to the key and saves the provided "value" under the name CHAINCODE_ID + customKey.
As a result, each chaincode has access only to its own variables. After an update, the chaincode receives a new ChaincodeId and a new area of visibility.
We found several workarounds to deal with this limitation.
Custom upgrade feature:
In your chaincode (v1) you can create a function "readAllVars" which loads all variables from the ledger using the "stub.RangeQueryState" method.
When the new version (v2) is deployed, you can make a cross-chaincode request to (v1) using "InvokeChaincode", read the previous state from "readAllVars", and then save everything in (v2)'s area of visibility.
DAO layer:
You can create a separate chaincode which is responsible for "read/write" operations. All versions will use this DAO as a proxy for all "PutState" and "GetState" requests. With this approach, all chaincode versions work in the same area of visibility, as sketched below. At the same time, the DAO layer becomes responsible for security and should guarantee that no other chaincode has access to private information.
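To illustrate the DAO idea only (this is not the Fabric 0.5-0.6 shim API, which is Go; the interface and names are purely conceptual):

// Conceptual contract of the DAO chaincode; versioned chaincodes call these
// operations via cross-chaincode invocation instead of PutState/GetState,
// so all state stays in the DAO chaincode's area of visibility.
public interface LedgerDao {

    // Store a value under a key owned by the DAO chaincode.
    void put(String key, byte[] value);

    // Read a value previously stored by any chaincode version.
    byte[] get(String key);
}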
I need to run a shell script (e.g. df) on a server, say Client. The call to this script should be made from another, independent Rails application, say Monitor, via a REST API, and the output should be returned in the response to the Monitor application.
This shell command should run on all application server instances of Client. Though I'm researching it myself, it would be quite helpful if anyone has done this before.
I need to get the following information from the Client servers to the Monitor application:
Disk space left on each Client server instance,
Processes running on each Client server instance,
The ability to terminate a non-responsive Client instance.
Thanks
A simple command can be executed via:
result = `df -h /`
But this does not fulfill the requirement to run on all application server instances of Client. For that, you need to call every instance independently.
Another approach would be to run your checks from a cron job and let the Client call Monitor. If cron is not suitable, you can create an ActiveJob on every client, collect the data, and call Monitor.
You should also look at Ruby libraries that provide the data you need.
For instance, sys/filesystem can provide data about your disk stats.