Does anyone know how to get the output after executing a stored function?
Thank you.
Not sure what language you're using, and not sure what output you're looking for, but in C#/ADO.NET, you can grab select query output into a DataSet by doing something like this:
SqlConnection sqlConnection = new SqlConnection(
    @"server=localhost\SQLEXPRESS;Integrated Security=SSPI;database=Northwind");
SqlDataAdapter sqlDataAdapter = new SqlDataAdapter("[MyStoredProc]", sqlConnection);
sqlDataAdapter.SelectCommand.CommandType = CommandType.StoredProcedure;
// Whatever selects your stored proc does will become tables in the DataSet
DataSet northwindDataSet = new DataSet("Northwind");
sqlConnection.Open();
sqlDataAdapter.Fill(northwindDataSet);
sqlConnection.Close();
// Data is now available in northwindDataSet.Tables[0], etc., depending on how many selects your proc ran
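For comparison, here is a minimal sketch of the same "fill a result set" idea in Python with the stdlib sqlite3 module (SQLite has no stored procedures, so a plain SELECT stands in for the proc call; the `products` table and its contents are made up for illustration):

```python
import sqlite3

# In-memory database standing in for the real server; "products" is a made-up table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO products VALUES (?, ?)", [(1, "Chai"), (2, "Chang")])

# Rough equivalent of filling a DataSet: fetch all rows of the query into a list
cursor = conn.execute("SELECT id, name FROM products ORDER BY id")
rows = cursor.fetchall()

print(rows)  # [(1, 'Chai'), (2, 'Chang')]
conn.close()
```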
Assuming you want the value of an OUTPUT parameter, in T-SQL you'd do something like this:
CREATE PROC pTestProc (@in int, @out int OUTPUT)
AS
SET @out = @in
SELECT 'Done'
RETURN 1
GO
DECLARE @Output INT
EXEC pTestProc 46, @Output OUTPUT
SELECT @Output
-Edoode
I tested anomaly detection using Deeplearning4j; everything works fine except that I am not able to preserve the VehicleID while training. What is the best approach in such a scenario?
Please look at the following snippet of code. SparkTransformExecutor returns an RDD, and InMemorySequence takes a list; when I collect the list from the RDD, indexing is not guaranteed.
val records:JavaRDD[util.List[util.List[Writable]]] = SparkTransformExecutor
.executeToSequence(.....)
val split = records.randomSplit(Array[Double](0.7,0.3))
val testSequences = split(1)
//in memory sequence reader
val testRR = new InMemorySequenceRecordReader(testSequences.collect().toList)
val testIter = new RecordReaderMultiDataSetIterator.Builder(batchSize)
.addSequenceReader("records", testRR)
.addInput("records")
.build()
Typically you track training examples by index in a dataset: keep track of which vehicle each index corresponds to, alongside training. There are a number of ways to do that.
In DL4J, we typically keep the data raw and use record readers + transform processes for the training data. If you use a record reader on raw data (pick one for your dataset; it could be CSV or even video) and use a RecordReaderDataSetIterator like here:
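As a language-neutral sketch of what "track by index" means (plain Python with hypothetical data, not DL4J API): keep the IDs in a parallel list and carry the indices through any shuffle or split, so each example can always be mapped back to its vehicle:

```python
import random

# Hypothetical data: features per example, plus a parallel list of vehicle IDs
features = [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]]
vehicle_ids = ["V1", "V2", "V3", "V4"]

# Shuffle/split *indices*, not the data, so the ID mapping survives
indices = list(range(len(features)))
random.seed(42)
random.shuffle(indices)
split = int(0.7 * len(indices))
train_idx, test_idx = indices[:split], indices[split:]

# Any test example can still be traced back to its vehicle
test_vehicles = [vehicle_ids[i] for i in test_idx]
```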
```java
RecordReader recordReader = new CSVRecordReader(0, ',');
recordReader.initialize(new FileSplit(new ClassPathResource("iris.txt").getFile()));
int labelIndex = 4;
int numClasses = 3;
int batchSize = 150;
RecordReaderDataSetIterator iterator = new RecordReaderDataSetIterator(recordReader,batchSize,labelIndex,numClasses);
iterator.setCollectMetaData(true); //Instruct the iterator to collect metadata, and store it in the DataSet objects
DataSet allData = iterator.next();
allData.shuffle();
SplitTestAndTrain testAndTrain = allData.splitTestAndTrain(0.65);  //Use 65% of data for training
DataSet trainingData = testAndTrain.getTrain();
DataSet testData = testAndTrain.getTest();
```
(Complete code here):
https://github.com/deeplearning4j/dl4j-examples/blob/master/dl4j-examples/src/main/java/org/deeplearning4j/examples/dataexamples/CSVExampleEvaluationMetaData.java
Alongside this you use TransformProcess:
```java
//Let's define the schema of the data that we want to import
//The order in which columns are defined here should match the
//order in which they appear in the input data
Schema inputDataSchema = new Schema.Builder()
//We can define a single column
.addColumnString("DateTimeString")
....
.build();
//At each step, we identify columns by the name we gave them in the
//input data schema, above
TransformProcess tp = new TransformProcess.Builder(inputDataSchema)
//your transforms go here
.build();
```
Complete example below:
https://github.com/deeplearning4j/dl4j-examples/blob/6967b2ec2d51b0d19b5d6437763a2936ca922a0a/datavec-examples/src/main/java/org/datavec/transform/basic/BasicDataVecExampleLocal.java
If you use these things, you can keep the data as is but still have a complete, customizable data pipeline. There are a lot of ways to do it; just keep in mind that you start with the vehicle id, so it doesn't have to disappear.
I have constructed a simple Xamarin iOS application to test the problem.
Let's say I create an SQLite table test with two "numeric" columns.
I declare the first one, named Val1, as real(25,10), and the second one, named Val2, as NUMERIC(25,10), by running the following code:
public void CreateTable()
{
    SqliteCommand SQLcommand = this.Connection.CreateCommand();
    SQLcommand.CommandText = "CREATE TABLE IF NOT EXISTS test ( Val1 real(25, 10) NOT NULL COLLATE BINARY DEFAULT (0.0), Val2 Numeric(25, 10) NOT NULL COLLATE BINARY DEFAULT (0.0));";
    SQLcommand.ExecuteNonQuery();
    SQLcommand.Dispose();
}
I insert the number 2300450,25 into both of them.
When I try to read it from the database, I get:
Value1 -> 2300450
Value2 -> 2300450,25
If I insert the number 23004,25, all is good:
Value1 -> 23004,25
Value2 -> 23004,25
The values are inserted using the following code:
public void InsertData(object value)
{
    SqliteCommand cmd = this.Connection.CreateCommand();
    cmd.CommandText = "insert into test (Val1, Val2) values (@val1, @val2)";
    cmd.Parameters.AddWithValue("@val1", value);
    cmd.Parameters.AddWithValue("@val2", value);
    cmd.ExecuteNonQuery();
    cmd.Dispose();
}
The data is loaded into a DataTable using the following code:
public DataTable LoadQueryDataTable(string a_Command)
{
    SqliteCommand cmd = this.Connection.CreateCommand();
    cmd.CommandText = a_Command;
    cmd.CommandType = CommandType.Text;
    SqliteDataAdapter dat = new SqliteDataAdapter();
    dat.SelectCommand = cmd;
    DataTable dt = new DataTable();
    dat.Fill(dt);
    return dt;
}
I have run similar tests in a standard Windows Forms project and I see no problem there, so I think there is a problem in the Mono.Data.Sqlite implementation.
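As a quick cross-check that the engine itself is not at fault, the stdlib sqlite3 module in Python round-trips 2300450.25 through the same table definition without truncation (which points at the binding rather than SQLite):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test (Val1 real(25, 10) NOT NULL DEFAULT (0.0), "
             "Val2 Numeric(25, 10) NOT NULL DEFAULT (0.0))")
conn.execute("INSERT INTO test (Val1, Val2) VALUES (?, ?)", (2300450.25, 2300450.25))
row = conn.execute("SELECT Val1, Val2 FROM test").fetchone()
print(row)  # (2300450.25, 2300450.25) -- no truncation from SQLite itself
conn.close()
```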
I would use the NUMERIC declaration; however, it has another problem. According to the SQLite documentation, NUMERIC is a type affinity:
In order to maximize compatibility between SQLite and other database
engines, SQLite supports the concept of "type affinity" on columns.
The type affinity of a column is the recommended type for data stored
in that column. The important idea here is that the type is
recommended, not required. Any column can still store any type of
data. It is just that some columns, given the choice, will prefer to
use one storage class over another. The preferred storage class for a
column is called its "affinity".
Therefore, NUMERIC cannot be used, because in some cases it converts all data in the column into integers:
A column with NUMERIC affinity may contain values using all five
storage classes. When text data is inserted into a NUMERIC column, the
storage class of the text is converted to INTEGER or REAL (in order of
preference)
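The affinity behavior quoted above is easy to observe from Python's stdlib sqlite3 module: text inserted into a NUMERIC column that looks like an integer is stored as INTEGER, while a fractional value stays REAL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n NUMERIC)")
# Text that looks numeric is converted by NUMERIC affinity on insert
conn.execute("INSERT INTO t VALUES ('42')")    # stored as INTEGER
conn.execute("INSERT INTO t VALUES ('42.5')")  # stored as REAL
types = [r[0] for r in conn.execute("SELECT typeof(n) FROM t")]
print(types)  # ['integer', 'real']
conn.close()
```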
I'm seeing some genuinely bizarre behavior with ActiveRecord as it relates to assignment. I have an ActiveRecord model named Venue that includes the measurements of the Venue, all integers less than 1K. We add Venues via an XML feed. On the model itself, I have a Venue.from_xml_feed method that takes the XML, parses it, and creates Venues.
The problem comes from the measurements. Using Nokogiri, I'm parsing out the measurements like so:
elems = xml.xpath("//*[@id]")
elems.each do |node|
distance = node.css("distances")
rs = distance.attr("rs")
# get the rest of the sides
# using new instead of create to print right_side, behavior is the same
venue = Venue.new right_side: rs # etc
venue.save
puts venue.right_side
end
The problem is that venue.right_side ALWAYS evaluates to nil, even though distance.attr("rs") contains a legal value, say 400. So this code:
rs = distance.attr("rs")
puts rs
Venue.new right_side: rs
Will print 400, then save rs as nil. If I try any type of type conversion, like so:
content = distance.attr("rs").content
str = content.to_s
int = Integer(str)
puts "Is int an Integer? #{int.is_a? Integer}"
Venue.new right_side: int
It will print Is int an Integer? true, then again save Venue.right_side as nil.
However, if I just explicitly create a random integer like so:
int = 400
Venue.new right_side: int
It will save Venue.right_side as 400. Can anyone tell me what's going on with this?
Well, you failed to include the prerequisite sample XML to confirm this, so you get a fairly generic answer.
In your code you're using:
distance = node.css("distances")
rs = distance.attr("rs")
css doesn't return what you think it does. It returns a NodeSet, which is similar to an Array. When you try to use attr on a NodeSet, you're going to set the value, not retrieve it. From the documentation:
#attr(key, value = nil, &blk) ⇒ Object (also: #set, #attribute)
Set the attribute key to value or the return value of blk on all Node objects in the NodeSet.
Because you're not using a value, the resulting action is to remove the attribute from the tag, which will then return nil and Ruby will assign nil to rs.
If you want to get the attribute of a node, you need to point to the node itself, so use at, or at_css, either of which returns a Node. Once you have the node, you can use attribute to retrieve the value, or use the [] shortcut similar to this untested code:
rs = node.at('distances')['rs']
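The same pitfall exists outside Nokogiri: a query that returns a collection is not a node. For comparison, in Python's stdlib xml.etree, findall returns a plain list (with no attribute accessors), while find returns the single element whose attribute you can read (the XML here is hypothetical, standing in for the feed in the question):

```python
import xml.etree.ElementTree as ET

# Hypothetical XML resembling the feed in the question
xml = '<venue><distances rs="400"/></venue>'
root = ET.fromstring(xml)

nodes = root.findall("distances")  # a list -- you cannot read attributes off it
node = root.find("distances")      # a single Element

rs = node.get("rs")
print(rs)  # '400'
```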
Again though, because you didn't supply XML it's not possible to tell what else you might be trying to do, or whether this code is entirely accurate.
I have a one2many which stores some data.
In Python, when I need to update the object with the .write method, the new data is stored but the old data remains there.
How can I empty the one2many before using the .write method?
Maybe using .browse and .search? Please help!
It would be great if you had posted some example of what you are trying to do. Anyway, you have two solutions:
use unlink()
understand how the write() ORM method works on one2many fields.
Let take the example of account.invoice and account.invoice.line.
The first approach - unlink():
def delete_lines(self, cr, uid, ids, context=None):
    invoice_pool = self.pool.get('account.invoice')
    line_pool = self.pool.get('account.invoice.line')
    for invoice in invoice_pool.browse(cr, uid, ids, context=context):
        line_ids = [line.id for line in invoice.invoice_line]
        line_pool.unlink(cr, uid, line_ids, context=context)
The second approach - write()
Looking at the OpenERP docs (https://doc.openerp.com/6.0/developer/2_5_Objects_Fields_Methods/methods/#osv.osv.osv.write):
write(cr, user, ids, vals, context=None)
...
Note: The type of field values to pass in vals for relationship fields is specific:
For a one2many field, a list of tuples is expected. Here is the list of tuples that are accepted, with the corresponding semantics:
(2, ID) remove and delete the linked record with id = ID (calls unlink on ID, that will delete the object completely, and the link to it as well)
So for the vals parameter we need a list of tuples in the following format:
[
(2, line1_id),
(2, line2_id),
(2, line3_id),
...
]
The following code illustrates the use of the write() method.
def delete_lines(self, cr, uid, ids, context=None):
    invoice_pool = self.pool.get('account.invoice')
    for invoice in invoice_pool.browse(cr, uid, ids, context=context):
        vals = [(2, line.id) for line in invoice.invoice_line]
        invoice_pool.write(cr, uid, [invoice.id], {'invoice_line': vals}, context=context)
I didn't test the examples so let me know if they do the job.
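Just to illustrate the shape of the vals argument on its own (plain Python with hypothetical line ids, no OpenERP needed):

```python
# Hypothetical line ids read off the invoice's one2many field
line_ids = [101, 102, 103]

# (2, ID) tells the ORM to unlink (delete) each linked record
vals = {"invoice_line": [(2, line_id) for line_id in line_ids]}
print(vals)  # {'invoice_line': [(2, 101), (2, 102), (2, 103)]}
```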
Here is how I solved it:
my_object = self.pool.get('my.main.object')
props = self.pool.get('table.related')
prop_ids = props.search(cr, uid, [('id_1', '=', id_2)])
del_a = []
for p_id in prop_ids:
    del_a.append((2, p_id))
my_object.write(cr, uid, line_id, {'one2many_field': del_a}, context=context)
Where:
del_a.append((2, p_id)) builds the list of tuples with code 2 (delete),
and my_object is where I need to make the changes.
I have the following code:
Dim cn As Object
Dim rs As Object
Dim strSql As String
Dim strConnection As String
Dim AppPath As String
Set cn = CreateObject("ADODB.Connection")
AppPath = Application.ActiveWorkbook.Path
Set rs = CreateObject("ADODB.RecordSet")
strConnection = "Provider=Microsoft.ACE.OLEDB.12.0;" & _
"Data Source=" & AppPath & "\Masterlist_Current_copy.accdb;"
strSql = "SELECT [Neptune Number],[Description],[Manufacturer],[Manufacturer P/N] FROM [All Components];"
cn.Open strConnection
Set rs = cn.Execute(strSql)
'Need Code here to get Info out of recordset
I am trying to get information out of the recordset that has the query result being dumped into it. I'm trying to figure out how to query the recordset and get the number of rows with a specific value in the "Neptune Number" field. I will then insert the correct number of rows into the worksheet I'm modifying. After that I need to get the data for that value and insert it into the worksheet.
Note: I don't care if recordset, datatable or anything else is used I simply need to be able to do what I described above. Please show code.
The easiest way to get where I think you are asking to go is to modify your SQL statement, changing
strSql = "SELECT [Neptune Number],[Description],[Manufacturer],[Manufacturer P/N] FROM [All Components];"
to
strSql = "SELECT [Neptune Number],[Description],[Manufacturer],[Manufacturer P/N] FROM [All Components] WHERE [Neptune Number] = 'Specific Value' ;"
This will force the SQL query to return only the records you need. The .Find method can be used for filtering the recordset, but I have avoided it here because it is cleaner to just ask the database for only the information you want.
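The same filter-at-the-source idea in a minimal Python sqlite3 sketch (the components table and its rows are made up, standing in for [All Components]), including a COUNT(*) if you want the row count before inserting rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute('CREATE TABLE components ("Neptune Number" TEXT, Description TEXT)')
conn.executemany("INSERT INTO components VALUES (?, ?)",
                 [("N-100", "Valve"), ("N-100", "Pump"), ("N-200", "Seal")])

# Let the database do the filtering, and the counting
count = conn.execute('SELECT COUNT(*) FROM components WHERE "Neptune Number" = ?',
                     ("N-100",)).fetchone()[0]
rows = conn.execute('SELECT * FROM components WHERE "Neptune Number" = ?',
                    ("N-100",)).fetchall()
print(count)  # 2
conn.close()
```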
To process the recordset, you can use the following:
With rs
    'Will skip further processing if no records returned
    If Not (.BOF And .EOF) Then
        'Assuming you do not need the headers
        'Loop through the recordset
        Do While Not .EOF
            'Insert a row for this record, then fill it in
            Rows(Row & ":" & Row).Insert
            For i = 0 To .Fields.Count - 1
                'Assuming the active sheet is where you want the data
                Cells(Row, i + colOffset) = .Fields(i).Value
            Next
            Row = Row + 1
            .MoveNext
        Loop
    End If
End With
Where Row is the starting point of your data and colOffset is the starting column of your data. Note that this code does not do things in the exact order you specified in your question (I am inserting rows as needed instead of calculating the number of records up front).
I have avoided using .RecordCount because I find that, depending on the database used, it will not return a correct record count.