Can I get any examples of creating stored procedures in DynamoDB? I would like to read data from a Kafka topic and write it to DynamoDB. If anyone has done an example of a stored procedure in DynamoDB, with or without Kafka, let me know.
Stored procedures (i.e. something similar to Oracle PL/SQL) are not available in AWS DynamoDB at the moment.
Stored procedures are not available.
You can probably achieve good performance by using AWS Lambda to do more complex DB operations; in particular, this approach can save you the round trips required if you are doing dependent operations on multiple tables, and it should be very fast since it runs inside AWS.
P.S. This is somewhat similar to the approach MongoDB has taken with JS functions stored on the DB server (although in the case of AWS it is better to keep this separated).
You can use your own code in AWS Lambda to achieve that.
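To make that concrete for the Kafka part of the question, here is a minimal sketch in plain Java using the Kafka consumer client and the AWS SDK v2. The broker address, topic, table name, and attribute names are all invented for illustration; the same putItem logic would sit inside a Lambda handler if you go that route.

import java.time.Duration;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;

public class KafkaToDynamo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // assumed broker
        props.put("group.id", "dynamo-writer");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        DynamoDbClient dynamo = DynamoDbClient.create();
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // assumed topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> rec : records) {
                    // One item per Kafka record; the "Events" table and its
                    // "id" key are made up, and string keys are assumed non-null.
                    dynamo.putItem(b -> b.tableName("Events").item(Map.of(
                            "id", AttributeValue.builder().s(rec.key()).build(),
                            "payload", AttributeValue.builder().s(rec.value()).build())));
                }
            }
        }
    }
}

For higher throughput you would batch the writes with batchWriteItem instead of issuing one putItem per record.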
Related
I'm currently working on a new Java application which uses an embedded Neo4j database as its data store. Eventually we'll be deploying to a cloud host which has no persistent data storage available - we're fine while the app is running but as soon as it stops we lose access to anything written to disk.
Therefore I'm trying to come up with a means of persisting data across an application restart. We have the option of capturing any change commands as they come into our application and writing them off somewhere, but that means retaining a lifetime of changes and applying them in order as an application node comes back up. Is there any functionality in Neo4j or SDN that we could leverage to capture changes at the Neo4j level and write them off to an AWS S3 store or the like? I have had a look at Neo4j clustering, but I don't think that will work either at a technical level (limited protocol support on our cloud platform) or given the cost of an Enterprise licence.
Any assistance would be gratefully accepted...
If you have an embedded Neo4j, you should know where in your code you are performing an update/create/delete query in Neo, no?
To respond to your question, Neo4j has a TransactionEventHandler (https://neo4j.com/docs/java-reference/current/javadocs/org/neo4j/graphdb/event/TransactionEventHandler.html) that captures every transaction and tells you which nodes/relationships have been added, updated, or deleted.
In fact it's the way to implement triggers in Neo4j.
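For illustration, a minimal sketch of such a handler in the embedded Java API; the class name and the println logging are mine, and registration happens through GraphDatabaseService.registerTransactionEventHandler:

import org.neo4j.graphdb.GraphDatabaseService;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.event.TransactionData;
import org.neo4j.graphdb.event.TransactionEventHandler;

public class ChangeCaptureHandler implements TransactionEventHandler<Void> {

    @Override
    public Void beforeCommit(TransactionData data) {
        return null; // nothing to veto here
    }

    @Override
    public void afterCommit(TransactionData data, Void state) {
        // Called once the transaction is durable; a good place to ship
        // the change set off to S3 or an external change log.
        for (Node node : data.createdNodes()) {
            System.out.println("created node " + node.getId());
        }
        for (Node node : data.deletedNodes()) {
            System.out.println("deleted node " + node.getId());
        }
    }

    @Override
    public void afterRollback(TransactionData data, Void state) {
        // rolled-back transactions never reach afterCommit
    }

    public static void register(GraphDatabaseService db) {
        db.registerTransactionEventHandler(new ChangeCaptureHandler());
    }
}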
But in your case I would consider the following:
use another cloud provider that allows you to have persistent storage
if that's not possible, implement a hook on application shutdown that copies the graph.db folder to a storage service (and do the opposite on startup); see the sketch after this list
use Neo4j as a remote server, and install it on a cloud provider that offers storage.
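A rough sketch of the shutdown-hook idea, assuming the AWS SDK v2 for the S3 upload; the bucket and directory are placeholders, and the database must be cleanly shut down before the copy so the store files are consistent:

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;

public class GraphDbBackupHook {
    public static void install(Path graphDbDir, String bucket) {
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // The store must already be cleanly shut down at this point,
            // otherwise the copied files may be inconsistent.
            S3Client s3 = S3Client.create();
            try (Stream<Path> files = Files.walk(graphDbDir)) {
                files.filter(Files::isRegularFile).forEach(f -> s3.putObject(
                        req -> req.bucket(bucket)
                                  .key("graph.db/" + graphDbDir.relativize(f)),
                        RequestBody.fromFile(f)));
            } catch (Exception e) {
                e.printStackTrace(); // best effort; nowhere else to report at shutdown
            }
        }));
    }
}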
Does the DBMS save the compiled queries from prepared statements in JDBC in the form of stored procedures on the DBMS server? I thought that a prepared statement isn't a concept in the DBMS but in JDBC, so I was wondering how it is implemented on the DBMS server side.
My question comes from "Why do parameterized queries allow for moving user data out of the string to be interpreted?"
I read "Difference Between Stored Procedures and Prepared Statements..?", but didn't find my answer.
Thanks.
I am interested in PostgreSQL, MySQL, and SQL Server, in that order.
No, prepared statements are not implemented as stored procedures in any RDBMS.
Prepared statements are parsed and saved on the server side so they can be executed multiple times with different parameter values, but they are not saved in the form of a stored procedure. They are saved in some implementation-dependent manner, for example as some kind of in-memory object, totally internal to the code of the database server. They are not callable like a stored procedure.
Re your comment:
Consider MySQL for example.
MySQL in the very early days did not support prepared statements, so the MySQL JDBC driver has an option to "emulate" prepared statements. The idea of emulation mode is that the SQL query string is saved in the JDBC client when you create a PreparedStatement; the SQL is not yet sent to the database server. Then, when you bind parameters and call execute(), the driver copies the parameter values into the SQL string and sends the final result to the server.
I don't know whether a similar feature exists in other brands of JDBC driver.
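For completeness, a minimal JDBC sketch (the table, credentials, and URL are made up); with MySQL Connector/J the useServerPrepStmts connection property controls whether you get real server-side prepares or the client-side emulation described above:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class PreparedStatementDemo {
    public static void main(String[] args) throws Exception {
        // useServerPrepStmts=true asks Connector/J for real server-side
        // prepares; with the default the driver emulates them client-side.
        String url = "jdbc:mysql://localhost:3306/testdb?useServerPrepStmts=true";
        try (Connection conn = DriverManager.getConnection(url, "user", "password");
             PreparedStatement ps = conn.prepareStatement(
                     "SELECT id, name FROM accounts WHERE name = ?")) {
            ps.setString(1, "O'Brien"); // bound safely, never spliced into the SQL string
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getInt("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}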
I am setting up Greenplum for the first time. I am following the documentation. I want to set up a connection from SQL to the Greenplum database, and I am currently figuring out the best way to achieve this. I came across gpfdist and gpload.
How are the two different? Both use external tables, both work on the segment nodes, and both are used for parallel loading. So is there any advantage to using one over the other?
Answering your question about "I want to set up a connection from SQL to the Greenplum database"...
It's ambiguous which SQL database you are referring to.
Also, there are no direct connectivity drivers available to connect a non-Greenplum database to a Greenplum database.
However, if you want to migrate data from Oracle to Greenplum, you can use Informatica's fastclone tool.
To answer the second part of your question regarding gpfdist and gpload: GPFDIST is a file-distribution process which runs on a host system and serves files in parallel to many segments. When initialising an external table to read from or write to a file, you need to specify which protocol will serve the file; in your case it will be GPFDIST. There are other protocols too, like FILE, GPHDFS, and HTTP.
GPLOAD is a wrapper utility that makes your work easier by automatically creating gpfdist processes and external tables.
Also be aware that GPLOAD can only create readable external tables.
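As a concrete sketch of the gpfdist flow (host names, table, and file pattern are invented): Greenplum speaks the PostgreSQL wire protocol, so the stock PostgreSQL JDBC driver is enough to run the DDL and the load:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class GpfdistExternalTableDemo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://gp-master:5432/analytics";
        try (Connection conn = DriverManager.getConnection(url, "gpadmin", "secret");
             Statement st = conn.createStatement()) {
            // The external table only describes the data; a gpfdist process
            // must already be running on etl-host, e.g.:
            //   gpfdist -d /data/landing -p 8081
            st.execute("CREATE EXTERNAL TABLE ext_sales (id int, amount numeric) "
                     + "LOCATION ('gpfdist://etl-host:8081/sales*.csv') "
                     + "FORMAT 'CSV' (HEADER)");
            // Every segment pulls its share of the files directly from
            // gpfdist, which is what makes the load parallel.
            st.execute("INSERT INTO sales SELECT * FROM ext_sales");
        }
    }
}

gpload automates exactly these steps (starting gpfdist, creating the external table, running the INSERT) from a YAML control file.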
gpfdist and gpload do essentially the same job. With gpfdist you do everything manually, while with gpload you can automate the activities by making entries in a config (YAML) file.
GPLOAD is a wrapper around GPFDIST, so when you load data via gpload it will internally use gpfdist.
If you want to load/migrate data from any other RDBMS to Greenplum using an ETL or migration tool, it will use the normal COPY command. If you enable gpload (nowadays the latest versions of most ETL and migration tools support a gpload feature when you migrate/load data to Greenplum), it will load the data in parallel fashion by using gpfdist internally.
I am new to stored procedures, Informix, and UCCX. I am working on a project to consolidate reporting into one BI tool, and it appears there are several UCCX stored procedures that could be great time savers for bringing UCCX historical reporting into our BI tools. Can anyone offer tips on how to query stored procedures for Informix via RazorSQL?
You are on the right track with your syntax. For example, I can call a different stored procedure by executing the following in RazorSQL:
execute procedure sp_agent_state_detail('2016-05-31 05:00:00','2016-05-31 05:59:59','0',null,'David Bowie',null,null)
The error you get is self-explanatory -- there is no calls_handled table. I suspect that the gettotalcalls() stored procedure is meant to be used in conjunction with some other code, perhaps another stored procedure, that creates that table.
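If you later want to call the same procedure from your BI code rather than from RazorSQL, a JDBC sketch along these lines should work; the URL, port, server name, and credentials are assumptions (db_cra is, as far as I know, the UCCX historical reporting database):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class UccxProcedureDemo {
    public static void main(String[] args) throws Exception {
        // Informix JDBC URL; host, port, and server name are placeholders.
        String url = "jdbc:informix-sqli://uccx-host:1504/db_cra:INFORMIXSERVER=uccx_uccx";
        try (Connection conn = DriverManager.getConnection(url, "hruser", "secret");
             PreparedStatement ps = conn.prepareStatement(
                     // Same EXECUTE PROCEDURE form as the RazorSQL example
                     // above, but with bound parameters instead of literals.
                     "EXECUTE PROCEDURE sp_agent_state_detail(?,?,?,?,?,?,?)")) {
            ps.setString(1, "2016-05-31 05:00:00");
            ps.setString(2, "2016-05-31 05:59:59");
            ps.setString(3, "0");
            ps.setNull(4, java.sql.Types.VARCHAR);
            ps.setString(5, "David Bowie");
            ps.setNull(6, java.sql.Types.VARCHAR);
            ps.setNull(7, java.sql.Types.VARCHAR);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
        }
    }
}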
I have streaming data coming into my consumer app that I ultimately want to show up in Hive/Impala. One way would be to use Hive-based APIs to insert the updates in batches into the Hive table.
The alternate approach is to write the data directly into HDFS as Avro/Parquet files and let Hive detect the new data and pick it up.
I tried both approaches in my dev environment, and the 'only' drawbacks I noticed were the high latency of writing to Hive and/or the failure conditions I need to account for in my code.
Is there an architectural design pattern/best practices to follow?
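To make the second approach concrete, here is a rough sketch of the write-files-then-register flow; the schema, paths, and table name are invented, and it assumes an external, partitioned Hive table located at /warehouse/events:

import org.apache.avro.Schema;
import org.apache.avro.file.DataFileWriter;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericDatumWriter;
import org.apache.avro.generic.GenericRecord;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAvroSink {
    public static void main(String[] args) throws Exception {
        // Invented two-field schema for illustration.
        Schema schema = new Schema.Parser().parse(
                "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
              + "{\"name\":\"id\",\"type\":\"long\"},"
              + "{\"name\":\"payload\",\"type\":\"string\"}]}");

        FileSystem fs = FileSystem.get(new Configuration());
        Path file = new Path("/warehouse/events/dt=2017-01-01/batch-0001.avro");

        try (DataFileWriter<GenericRecord> writer =
                     new DataFileWriter<>(new GenericDatumWriter<GenericRecord>(schema))) {
            writer.create(schema, fs.create(file));
            GenericRecord rec = new GenericData.Record(schema);
            rec.put("id", 1L);
            rec.put("payload", "hello");
            writer.append(rec); // in real use, append a whole micro-batch here
        }
        // Hive/Impala won't see the new partition until it is registered, e.g.
        //   ALTER TABLE events ADD PARTITION (dt='2017-01-01');
        // (or MSCK REPAIR TABLE), plus REFRESH events for Impala.
    }
}

This keeps the hot write path out of Hive entirely; the latency you saw then moves into how often you roll files and register partitions.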