Lua database insert - prepared statement

require "luasql.mysql"
instance:name(profile:id());
env = assert (luasql.mysql())
con = assert (env:connect("fxcm", "root", "admin"))
con:execute([[INSERT INTO pet values('swaroop',"12")]]);
I would like to use a prepared statement like in Java. I found some references, such as conn:prepare(statement), at
https://realtimelogic.com/ba/doc/en/lua/luasql.html.
But I have no clue how to construct the statement. Please help me.

https://realtimelogic.com/ba/doc/en/lua/luasql.html documents Barracuda's module, not LuaSQL from the Kepler project; I could not find its source on that server.
The original LuaSQL does not support prepared queries (yet?).
I use an ODBC library to do that:
local odbc = require "odbc.dba"
local cnn = odbc.Connect{
  Driver = '{MySQL ODBC 5.2 ANSI Driver}';
  db  = 'test';
  uid = 'root';
};
local stmt = cnn:prepare"INSERT INTO pet values(:NAME, :AGE)"
stmt:exec{NAME = "swaroop", AGE = 12}
You can also check out the LuaDBI library for native MySQL support.
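For example, a minimal sketch with LuaDBI (the connection parameters below are placeholders taken from the question, and the host/port are assumptions; not tested against a real server):

local DBI = require "DBI"
-- DBI.Connect(driver, database, user, password, host, port)
local dbh = assert(DBI.Connect("MySQL", "fxcm", "root", "admin", "localhost", 3306))
local sth = assert(dbh:prepare("INSERT INTO pet VALUES(?, ?)"))
assert(sth:execute("swaroop", 12))  -- positional parameters bind to the ? placeholders
dbh:commit()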

Related

ZetaSQL - Creating a simple catalog with tables and columns using local service

We are using a Python client binding for ZetaSQL GRPC local service in our application to analyze statements and extract referenced tables and output columns.
It is possible to extract referenced tables using the following simplified Python code and the local service:
import json
import zetasql.local_service as zql
from google.protobuf.json_format import MessageToJson

conn = zql.connect()
language_options = conn.GetLanguageOptions(
    zql.pb2.LanguageOptionsRequest(maximum_features=True)
)
# Used to allow the ZetaSQL parser to parse `CREATE TABLE AS` statements
language_options.supported_statement_kinds.pop()
req = zql.pb2.ExtractTableNamesFromStatementRequest(
    sql_statement=sql, options=language_options
)
res = conn.ExtractTableNamesFromStatement(req)
return json.loads(MessageToJson(res))
However, from what I see here, the local service doesn't expose the full functionality of the Java client, mainly creating a simple catalog with tables and columns so that arbitrary SQL statements can be analyzed. Setting analyzer options also doesn't seem to be possible.
Is it possible to analyze SQL statements using ZetaSQL with only the local service? If not, what would be an alternative approach for extracting output columns?

Perl DBI in PL/Perl in PostgreSQL

Can I use DBI in a PL/Perl function created in PostgreSQL to select from any foreign database?
I'm getting the error: Unable to load DBI.pm into plperl
(I know that there are Oracle foreign data wrappers, but I just need to take the result set of a SELECT statement fired against Oracle, MSSQL, or PG and store it in Postgres.)
Here is my function (just with the connect string at the moment):
CREATE OR REPLACE FUNCTION sel_ora()
RETURNS VOID AS $$
    use DBI;
    my $db = DBI->connect( "dbi:Oracle:DBKUNDEN", "stadl", "sysadm" )
        || die( $DBI::errstr . "\n" );
$$ LANGUAGE plperl;
Yes, you can use DBI from within plperl.
Note that, for security reasons, plperl restricts which Perl modules can be used. This is intended for multi-user databases where your postgres users are not trusted.
The solution in plperl is to add a line such as this to your postgresql.conf file:
plperl.on_init = 'use DBI;'
Then DBI will be available within your plperl functions. See docs: https://www.postgresql.org/docs/9.5/plperl-under-the-hood.html
Alternatively, if this security consideration does not apply in your situation, then you can use plperlu (u = unrestricted) instead of plperl. Then you can use any perl module directly from your plperlu code.
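For illustration, a minimal sketch of the question's function declared with plperlu instead of plperl (connection details unchanged from the question; add your own queries where indicated):

CREATE OR REPLACE FUNCTION sel_ora()
RETURNS VOID AS $$
    use DBI;
    my $db = DBI->connect( "dbi:Oracle:DBKUNDEN", "stadl", "sysadm" )
        || die( $DBI::errstr . "\n" );
    # ... run SELECTs against $db and store the results here ...
    $db->disconnect;
$$ LANGUAGE plperlu;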

Equivalent of Rails console for Node.js

I am trying out the Node.js Express framework and am looking for a plugin that lets me interact with my models via a console, similar to the Rails console. Is there such a thing in the Node.js world?
If not, how can I interact with my Node.js models and data, for example manually adding/removing objects and testing methods on data?
Create your own REPL by making a js file (e.g. console.js) with the following lines/components:
Require node's built-in repl: var repl = require("repl");
Load in all your key variables like db, any libraries you swear by, etc.
Load the repl by using var replServer = repl.start({});
Attach the repl to your key variables with replServer.context.<your_variable_names_here> = <your_variable_names_here>. This makes the variable available/usable in the REPL (node console).
For example: If you have the following line in your node app:
var db = require('./models/db')
Add the following lines to your console.js
var db = require('./models/db');
replServer.context.db = db;
Run your console with the command node console.js
Your console.js file should look something like this:
var repl = require("repl");
var epa = require("epa");
var db = require("db");
// connect to database
db.connect(epa.mongo, function(err){
if (err){ throw err; }
// open the repl session
var replServer = repl.start({});
// attach modules to the repl context
replServer.context.epa = epa;
replServer.context.db = db;
});
You can even customize your prompt like this:
var replServer = repl.start({
  prompt: "Node Console > ",
});
For the full setup and more details, check out:
http://derickbailey.com/2014/07/02/build-your-own-app-specific-repl-for-your-nodejs-app/
For the full list of options you can pass the repl like prompt, color, etc: https://nodejs.org/api/repl.html#repl_repl_start_options
Thank you to Derick Bailey for this info.
UPDATE:
GavinBelson has a great recommendation for running with sequelize ORM (or anything that requires promise handling in the repl).
I am now running sequelize as well, and for my node console I'm adding the --experimental-repl-await flag.
It's a lot to type in every time, so I highly suggest adding:
"console": "node --experimental-repl-await ./console.js"
to the scripts section in your package.json so you can just run:
npm run console
and not have to type the whole thing out.
Then you can handle promises without getting errors, like this:
const product = await Product.findOne({ where: { id: 1 } });
I am not very experienced with Node, but you can enter node on the command line to get to the Node console. I then required the models manually.
Here is the way to do it, with SQL databases:
Install and use Sequelize; it is Node's ORM answer to Active Record in Rails. It even has a CLI for scaffolding models and migrations.
node --experimental-repl-await
> models = require('./models');
> User = models.User; //however you load the model in your actual app this may vary
> await User.findAll(); //use await, then any sequelize calls here
TLDR
This gives you access to all of your models, just as you would have in Rails Active Record. Sequelize takes a bit of getting used to, but in many ways it is actually more flexible than Active Record while still having the same features.
Sequelize uses promises, so to run these properly in the REPL you will want to use the --experimental-repl-await flag when running node. Otherwise, you can get bluebird promise errors.
If you don't want to type out the require('./models') step, you can use console.js - a setup file for REPL at the root of your directory - to preload this. However, I find it easier to just type this one line out in REPL
It's simple: add a REPL to your program
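For example, a rough sketch of what that can look like (this assumes you already have a loaded models module; the path and names are placeholders):

// somewhere in your app's startup code
const repl = require("repl");
const models = require("./models");  // placeholder path

const replServer = repl.start({ prompt: "app > " });
replServer.context.models = models;  // `models` is now usable inside the REPL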
This may not fully answer your question, but to clarify, node.js is much lower-level than Rails, and as such doesn't prescribe tools and data models like Rails. It's more of a platform than a framework.
If you are looking for a more Rails-like experience, you may want to look at a more 'full-featured' framework built on top of node.js, such as Meteor, etc.

Reading docs in irb

One thing I miss about IPython is its ? operator, which digs up the docs for a particular function.
I know Ruby has a similar command-line tool, but it is extremely inconvenient to call it while I am in irb.
Does Ruby/irb have anything similar?
Pry is a Ruby version of IPython. It supports the ? command to look up documentation on methods, but uses a slightly different syntax:
pry(main)> ? File.dirname
From: file.c in Ruby Core (C Method):
Number of lines: 6
visibility: public
signature: dirname()
Returns all components of the filename given in file_name
except the last one. The filename must be formed using forward
slashes (/) regardless of the separator used on the
local file system.
File.dirname("/home/gumby/work/ruby.rb") #=> "/home/gumby/work"
You can also look up source code with the $ command:
pry(main)> $ File.link
From: file.c in Ruby Core (C Method):
Number of lines: 14
static VALUE
rb_file_s_link(VALUE klass, VALUE from, VALUE to)
{
    rb_secure(2);
    FilePathValue(from);
    FilePathValue(to);
    from = rb_str_encode_ospath(from);
    to = rb_str_encode_ospath(to);

    if (link(StringValueCStr(from), StringValueCStr(to)) < 0) {
        sys_fail2(from, to);
    }
    return INT2FIX(0);
}
See http://pry.github.com for more information :)
You can start with
irb(main):001:0> `ri Object`
although the output of this is less than readable; you'd need to filter out some metacharacters.
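(A rough, untested sketch of one way to do that filtering, stripping any ANSI escape sequences ri may emit:)

irb(main):002:0> puts `ri Object`.gsub(/\e\[[\d;]*m/, "")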
In fact, someone already made a gem for it
gem install ori
Then in irb
irb(main):001:0> require 'ori'
=> true
irb(main):002:0> Object.ri
Looking up topics [Object] o
= Object < BasicObject
------------------------------------------------------------------------------
= Includes:
Java (from gem activesupport-3.0.9)
(from gem activesupport-3.0.9) [...]
No, it doesn't. Python has docstrings:
def my_method(arg1, arg2):
    """ What's inside this string will be made available as the __doc__ attribute """
    # some code
So, when ? is called from IPython, it probably reads the __doc__ attribute of the object. Ruby doesn't have this.
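(For example, a quick sketch of what IPython has access to, using the hypothetical my_method above:)

>>> my_method.__doc__
" What's inside this string will be made available as the __doc__ attribute "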

Connect to MySQL from Microsoft Data Application Block

I am using the Data Access Application Block for the majority of my data access, specifically using the SqlHelper class to call the ExecuteReader, ExecuteNonQuery, and similar methods, passing the connection string with each database call.
How can I modify this to enable connections to a MySQL database as well?
If you've got the Enterprise Library installed and already know how to connect to SQL Server databases, connecting to MySQL databases is not any harder.
One way to do it is to use ODBC. This is what I did:
Go to MySQL.com and download the latest MySQL ODBC connector. As I write this it's 5.1.5. I used the 64-bit version, as I have 64-bit Vista.
Install the ODBC Connector. I chose to use the no-installer version. I just unzipped it and ran Install.bat at an administrator's command prompt. The MSI version probably works fine, but I did it this way back when I installed the 3.51 connector.
Verify the installation by opening your ODBC control panel and checking the Drivers tab. You should see the MySQL ODBC 5.1 Driver listed there. It seems to even co-exist peacefully with the older 3.51 version if you already have that. Additionally it coexists peacefully with the .NET connector if that is installed too.
At this point you will be doing what you've done to connect to a SQL Server database. All you need to know is what to use for a connection string.
Here's what mine looks like:
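(Roughly, it is a standard connectionStrings entry that points at the ODBC provider; the server, database, and credentials below are placeholders, not the original values:)

<connectionStrings>
  <add name="MySqlDatabaseTest"
       providerName="System.Data.Odbc"
       connectionString="Driver={MySQL ODBC 5.1 Driver};Server=localhost;Database=test;Uid=root;Pwd=secret;" />
</connectionStrings>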
Of course you can set "name" to whatever you want.
If this is your only database, you can set it up as the defaultDatabase like this:
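(Again a rough sketch, assuming the Enterprise Library dataConfiguration section is declared in your config file:)

<dataConfiguration defaultDatabase="MySqlDatabaseTest" />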
Access your data in your code like you always do! Here's a plain text sql example:
public List<Contact> Contact_SelectAll()
{
List<Contact> contactList = new List<Contact>();
Database db = DatabaseFactory.CreateDatabase("MySqlDatabaseTest");
DbCommand dbCommand = db.GetSqlStringCommand("select * from Contact");
using (IDataReader dataReader = db.ExecuteReader(dbCommand))
{
while (dataReader.Read())
{
Contact contact = new Contact();
contact.ID = (int) dataReader["ContactID"];
client.FirstName = dataReader["ContactFName"].ToString();
client.LastName = dataReader["ContactLName"].ToString();
clientList.Add(client);
}
}
return clientList;
}
Another way to do it is to build and use a MySql provider. This guy did that.
I learned how to do this by adapting these instructions for connecting to Access.
Oh, and here are some more MySql Connection String samples.
