How can I implement a ksqlDB init script?

I'm deploying KSQLDB in a docker container and I need to create some tables (if they don't exist) when the database starts. Is there a way to do that? Any examples?

As of version 0.11 you would need something that could query the server's REST endpoint to determine which tables exist, and then submit SQL to create any missing tables. This is obviously a little clunky.
I believe the soon-to-be-released 0.12 version comes with CREATE OR REPLACE support for creating streams and tables. With this feature, all you'd need inside your Docker image is a script with a few curl commands that waits for the server to become available and then submits a SQL script with your table definitions using CREATE OR REPLACE.
The 0.12 release also comes with IF NOT EXISTS syntax support for streams, tables, connectors and types. So you can do:
CREATE STREAM IF NOT EXISTS FOO (ID INT) WITH (..);
Details of what to pass to the server can be found in the REST API docs.
Alternatively, you should be able to script sending in the commands using the CLI.
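For illustration, a minimal init script along those lines might look like the following. It is only a sketch: the server URL and the stream definition are placeholders, while /info and /ksql are the ksqlDB server's standard REST endpoints.
#!/bin/bash
# init-ksqldb.sh — wait for the server, then submit idempotent DDL.
KSQL_URL="http://ksqldb-server:8088"

# Block until the server's /info endpoint responds
until curl -s -f -o /dev/null "$KSQL_URL/info"; do
  echo "Waiting for ksqlDB server..."
  sleep 5
done

# IF NOT EXISTS makes the script safe to re-run on every container start
curl -s -X POST "$KSQL_URL/ksql" \
  -H "Content-Type: application/vnd.ksql.v1+json" \
  -d @- <<'EOF'
{
  "ksql": "CREATE STREAM IF NOT EXISTS FOO (ID INT) WITH (KAFKA_TOPIC='foo', VALUE_FORMAT='JSON');",
  "streamsProperties": {}
}
EOF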

Related

spring-data-flow task example

I'm using spring-cloud-dataflow with the task cloud module, but I'm having trouble launching a simple example in a container.
I wrote the code following the tiny example in section 6.3 and then deployed it,
but when I try to execute it, it throws:
java.lang.IllegalArgumentException: Invalid TaskExecution, ID 1 not found
at org.springframework.util.Assert.notNull(Assert.java:134)
at org.springframework.cloud.task.listener.TaskLifecycleListener.doTaskStart(TaskLifecycleListener.java:200)
In my evaluation I used the Spring Boot example,
and to run it in SCDF I added @EnableTask and configured a SQL Server datasource, but it doesn't work.
I'm insisting on using Spring Cloud Data Flow because I've read that Spring Batch Admin is at end-of-life, but 2.0.0.BUILD-SNAPSHOT works
well and the tiny example runs there, as opposed to what happens in Spring Cloud Data Flow with the @EnableTask annotation.
It's probably a misunderstanding on my part, but could you please provide a tiny example, or point me to some URL?
Referencing https://docs.spring.io/spring-cloud-dataflow/docs/current-SNAPSHOT/reference/htmlsingle/#configuration-rdbms, datasource arguments have to be passed to the Data Flow server and the Data Flow shell (if used) in order for Spring Cloud Data Flow to persist the execution/task/step related data in the required datasource.
Example from the link for a MySQL datasource (a similar invocation for SQL Server is sketched after the snippet):
java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.0.0.BUILD-SNAPSHOT.jar \
--spring.datasource.url=jdbc:mysql:<db-info> \
--spring.datasource.username=<user> \
--spring.datasource.password=<password> \
--spring.datasource.driver-class-name=org.mariadb.jdbc.Driver
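For SQL Server, an analogous invocation might look like this. The JDBC URL details are placeholders; com.microsoft.sqlserver.jdbc.SQLServerDriver is the Microsoft JDBC driver class, whose jar must be on the server's classpath:
java -jar spring-cloud-dataflow-server-local/target/spring-cloud-dataflow-server-local-1.0.0.BUILD-SNAPSHOT.jar \
--spring.datasource.url='jdbc:sqlserver://<host>:1433;databaseName=<db>' \
--spring.datasource.username=<user> \
--spring.datasource.password=<password> \
--spring.datasource.driver-class-name=com.microsoft.sqlserver.jdbc.SQLServerDriver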
This error:
Invalid TaskExecution, ID 1 not found
can be about SCDF's own datasource: in general, it means SCDF cannot find the task execution table in its own database (not the application's database).
You might fix it by adding the database driver or correcting the connection URL so that it points to SCDF's database.
The question below might help:
How to properly compile/package a Task for Spring Cloud Data Flow

Run a shell script on one rails application from another rails application

I need to run a shell script (e.g. df) on a server, say Client. The call to this script should be made from another independent Rails application, say Monitor, via a REST API, and the output should be returned in the response to the Monitor application.
This shell command should run on all application server instances of Client. I'm researching this, but it would be quite helpful if anyone has done it before.
I need to get the following information from the Client servers to the Monitor application:
Disk space left on each Client server instance,
Processes running on each Client server instance,
The ability to terminate a non-responsive Client instance.
Thanks
A simple command can be executed via:
result = `df -h /`
But it does not fulfill the requirement to run on all application server instances of Client; for that you need to call every instance independently.
Another way would be to run your checks from a cron job and let the Client call Monitor. If cron is not suitable, you can create an ActiveJob on every Client, collect the data and call Monitor.
You should also look at Ruby libraries that provide the data you need.
For instance, sys-filesystem can provide data about your disk stats, as in the sketch below.
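As an illustration, a minimal endpoint on each Client instance might look like this. The controller and route names are made up; sys-filesystem is the gem mentioned above:
# app/controllers/status_controller.rb — a sketch, not a drop-in implementation
require 'socket'
require 'sys/filesystem'

class StatusController < ApplicationController
  # GET /status — returns disk and process info for this instance as JSON
  def show
    stat = Sys::Filesystem.stat('/')
    render json: {
      host: Socket.gethostname,
      disk_mb_available: stat.block_size * stat.blocks_available / 1024 / 1024,
      process_count: `ps aux`.lines.count - 1  # minus the ps header line
    }
  end
end
Monitor can then poll GET /status on each Client instance and aggregate the responses.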

How to concurrently load multiple ttl files as separate instances in jena-fuseki server

I am using the Apache Jena Fuseki server to load data in .ttl format and then query it. But the problem is that I am not able to serve multiple datasets simultaneously.
I am starting the server using the following command.
./fuseki-server --update --mem /ds
The server version I am using is 1.1.1. I load the data like this:
/home/user/jena-fuseki-1.1.1/./s-put http://192.168.1.38:3030/ds/data default /home/user/data.ttl
I was wondering: if we change the default option in the s-put command, is there some other option that serves concurrent data as separate instances?
I have a REST API from which multiple users can load data and run SPARQL queries on top of it. But each time a new user loads data, the server takes the new data and the previous data is gone.
I want each user's own data to be maintained by the server. Is there some mistake in the way I am loading the data?
To add data, not replace it, use POST and the s-post command. HTTP PUT means "replace", HTTP POST means "append" (which for RDF just means "add").
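For example (the graph URI is illustrative; giving each user their own named graph keeps their data separate within the dataset):
# Append to the default graph instead of replacing it
./s-post http://192.168.1.38:3030/ds/data default /home/user/data.ttl
# Or load each user's file into a distinct named graph
./s-put http://192.168.1.38:3030/ds/data http://example.org/graphs/user1 /home/user/user1.ttl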
PS Try Fuseki 2.3.0

HTTP Server in iOS to list files in the Documents directory

I am trying to create an HTTP server inside my iOS application, to develop something like the Xender application. Right now I have succeeded in setting up an HTTP server inside my application and hosting an HTML file there that can be loaded on another device/system using the IP and port.
But I want to link that HTML to my application's database to populate data in the HTML file, and then make it dynamic so that it can be opened from another device or system.
Ultimately, I need to query the application's SQLite database from the HTML file. Is there any way to do such a thing?
Can I connect SQLite to the HTML frontend? In web apps these things can be done with a server-side scripting language like PHP connecting to a database like MySQL. But here my case is HTML and SQLite.
EDIT
I found Is it possible to access an SQLite database from JavaScript?, but that is all about client-side local storage, while I think in my case the SQLite database is on the server side.
You have to create template HTML files and provide a set of variables for them. Then, when the file is requested from your server, you load it into memory.
Now you do some regex magic to get the query parameters, do your SQL work, replace the corresponding variables in your HTML string, and finally serve it to the client.
You would need to define your own simple "scripting" language that tells your application what data is requested and where to output any returned data.
I think this is quite hard work, and you should possibly look for a better solution that has probably already been built by others.
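A minimal sketch of the templating idea in Swift, assuming the third-party GCDWebServer library, a bundled users.html template containing a {{ROWS}} placeholder, and a stubbed-out SQLite query (all names here are illustrative):
import GCDWebServer

let server = GCDWebServer()

server.addHandler(forMethod: "GET", path: "/users", request: GCDWebServerRequest.self) { request in
    // Load the bundled HTML template into memory
    guard let url = Bundle.main.url(forResource: "users", withExtension: "html"),
          let template = try? String(contentsOf: url) else {
        return GCDWebServerResponse(statusCode: 500)
    }
    // Here you would query SQLite (e.g. via the sqlite3 C API or FMDB)
    // and render the result rows as HTML; this single row is a stub.
    let rows = "<tr><td>Alice</td></tr>"
    // Substitute the template variable and serve the finished page
    let html = template.replacingOccurrences(of: "{{ROWS}}", with: rows)
    return GCDWebServerDataResponse(html: html)
}

server.start(withPort: 8080, bonjourName: nil)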
EDIT
You could use Node.js and this interpreter, but it's not maintained anymore. There might be similar projects, though.
EDIT II
I've found neu.Node, which sounds quite promising. There hasn't been any activity for 4 months, but the project seems well organized and documented.

Any way a db change can trigger a method call in Rails?

Is there any way a database change (say, an insert statement) can trigger a call to a Ruby method in my app? I know about observers, but this is a somewhat complicated situation because the database is updated by a Java application.
Note, both the Rails and the Java app connect to the same database.
1. Polling the DB from the Rails app at regular time intervals (a sketch of this option follows the list).
2. Introducing a table trigger which runs PL/Ruby, PL/* or whatever to ping a command-line tool, REST endpoint or web service of the Rails app.
3. Having the Java app 'ping' the Rails app (via REST, SOAP, etc.) after the DB change.
In cases 2 and 3 the ping event can carry some more information, e.g. the row id.
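A minimal sketch of option 1, assuming an ActiveRecord model named Order with an auto-incrementing id (the model and the handler method are made-up names):
# Poll for newly inserted rows and hand each one to a Ruby method.
last_seen_id = Order.maximum(:id) || 0

loop do
  Order.where("id > ?", last_seen_id).find_each do |order|
    handle_new_order(order)   # the method you want the insert to trigger
    last_seen_id = order.id
  end
  sleep 10                    # polling interval in seconds
end
In a real app this loop would live in a background job or rake task rather than in the request cycle.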
