I have created my own stream source, processor, and sink. When I deploy the apps and create a stream, the properties are not being read from the application.yaml file in the resources folder.
I have tried both of the layouts below in the application.yaml file:
app:
  app-name:
    properties
and without the app: app-name prefix as well.
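For illustration, assuming a (made-up) app name my-processor and a property greeting, the two layouts I tried look roughly like this:

app:
  my-processor:
    greeting: hello

and, without the prefix, just:

greeting: hello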
I am trying to set up mass ingestion in IICS. This will be a streaming ingestion task. The source is REST V2.
According to the documentation, I can't provide an absolute path to the Swagger file for this connection. Instead, I need a hosted URL.
I tried hosting the Swagger file on a server that has the Informatica Cloud Secure Agent installed. When I create a connection, everything works.
But when I try to add a connection for mass ingestion, I get the following error:
Interestingly, I also tried hosting the file on a VM in Azure. When I access the file from the server in a web browser it works, and I can see the requests on the web server. But when I create the mass ingestion task and define the source, I still get the error and I don't see any requests for the Swagger file on the web server.
What is wrong?
Is this possible? Based on the documentation, it looks like imread does not support anything but local file paths. If it is possible, would anyone be so kind as to provide a code sample?
Cheers.
Here is the documentation:
The following remote services are well supported and tested against the main codebase:
Local or Network File System: file:// - the local file system, default in the absence of any protocol.
Hadoop File System: hdfs:// - Hadoop Distributed File System, for resilient, replicated files within a cluster. This uses PyArrow as the backend.
Amazon S3: s3:// - Amazon S3 remote binary store, often used with Amazon EC2, using the library s3fs.
Google Cloud Storage: gcs:// or gs:// - Google Cloud Storage, typically used with Google Compute resource using gcsfs.
Microsoft Azure Storage: adl://, abfs:// or az:// - Microsoft Azure Storage using adlfs.
HTTP(s): http:// or https:// for reading data directly from HTTP web servers.
Check the documentation above for more information.
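As a minimal sketch (not using imread directly, and assuming the image lives in S3 and s3fs is installed; the bucket and key below are placeholders), you can open the remote file with fsspec, which the protocols listed above are built on, and hand the resulting file object to an image reader such as imageio:

import fsspec
import imageio

# fsspec maps the protocol prefix (s3://, gcs://, https://, ...) to the matching
# backend library (s3fs, gcsfs, aiohttp, ...), so the same call works for any of
# the stores listed above.
with fsspec.open("s3://my-bucket/images/sample.png", mode="rb") as f:
    img = imageio.imread(f)

print(img.shape, img.dtype)

Switching to another store is then only a matter of changing the URL.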
My Spring Cloud Data Flow deleted the log files in the folder after I stopped it.
Why does SCDF do that, and how can I keep these log files?
You can customize the logging configuration in the logback config file and pass it as a configuration property to the SCDF server. Assuming you are trying this with the local Data Flow server, you can refer to this documentation for the logback configuration.
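For example (a sketch, assuming the local server and a custom logback file at ./logback-custom.xml; the path is a placeholder), the server can be pointed at your own logback configuration through the standard Spring Boot logging.config property, either as --logging.config=file:./logback-custom.xml on the command line or in the server's configuration:

logging:
  config: file:./logback-custom.xml

Inside that logback file you can define a file appender that writes the logs to a location of your choosing.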
I am currently running a Rails application and a Spring Boot configuration service on the same local network. Is it possible to configure Rails to use the config files provided by the Spring Boot service?
More specifically, I am looking to fetch the database connection and user data via the service and let Rails connect to a remote database.
The service provides these files via HTTP as JSON or YAML.
Thank you.
Edit: Solved it by using a bash script with wget to pull and assemble the config files manually, via container scripts that are executed before each deploy.
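A minimal sketch of that approach (host, application name and profile are placeholders, and it assumes the service exposes the usual Spring Cloud Config endpoint /{application}-{profile}.yml):

#!/usr/bin/env bash
# Pull the YAML the config service exposes over HTTP and drop it where Rails
# expects its database configuration, before the app is started.
set -euo pipefail

CONFIG_HOST="http://config-service.local:8888"
wget -qO config/database.yml "${CONFIG_HOST}/myrailsapp-production.yml"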
Yesterday I upgraded my development environment to Spring Cloud Dataflow 1.2.0 and all of my sink/source apps' dependencies.
I have two main issues:
javaOpts: -Xmx128m is no longer being picked up, so locally deployed apps have the default Xmx value.
Here is the format of my previously working Dataflow YAML config.
See full here: https://pastebin.com/p1JmLnLJ
spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              deployer:
                local:
                  javaOpts: -Xmx128m
Kafka config options like ssl.truststore.location etc. are not being read correctly. Another Stack Overflow post indicated these must be written like this: "[ssl.truststore.location]". Is there a documented working YAML config or a list of breaking changes for 1.2.0? The file-based authentication block was also moved, and I was able to figure that one out.
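For reference, this is roughly the shape I am trying for the Kafka options, using the bracket notation that the other post suggested (the truststore path and password are placeholders, and I am assuming the options belong under the Kafka binder's configuration block):

spring:
  cloud:
    dataflow:
      applicationProperties:
        stream:
          spring:
            cloud:
              stream:
                kafka:
                  binder:
                    configuration:
                      "[ssl.truststore.location]": /path/to/truststore.jks
                      "[ssl.truststore.password]": changeit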
Yes, it looks like a bug in the Spring Cloud Local Deployer's handling of the common application properties passed via args. I created https://github.com/spring-cloud/spring-cloud-deployer-local/issues/48 to track this.