I am adding records to a sqflite table in Flutter from a JSON API.
The structure of my API is like this:
{ "procesos": [
{
"id_registro": "43",
"server_id": "478",
"usuario_id": "131",
"categoria": "342",
"nueva_cat": "9046340",
"estatus_id": "8",
},
{...}
]
}
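For context, each element of list is a model object whose fields mirror the API keys. A minimal sketch of what I assume that class looks like (the class name Proceso and the field names are my shorthand, not the real code):

// Hypothetical model class; the real field names come from the JSON above.
class Proceso {
  final String idRegistro;
  final String serverId;
  final String usuarioId;
  final String categoria;
  final String nuevaCat;
  final String estatusId;

  Proceso.fromJson(Map<String, dynamic> json)
      : idRegistro = json['id_registro'],
        serverId = json['server_id'],
        usuarioId = json['usuario_id'],
        categoria = json['categoria'],
        nuevaCat = json['nueva_cat'],
        estatusId = json['estatus_id'];
}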
My method to insert the records from the API into the table:
for (int i = 0; i < list.length; i++) {
  Map<String, dynamic> estructura = {
    "id_registro": list[i].idRegistro,
    "server_id": list[i].serverId,
    "usuario_id": list[i].usuarioId,
    "categoria": list[i].categoria,
    "nueva_cat": list[i].nuevaCat,
    "estatus_id": list[i].estatusId
  };
  final db = await SQLHelper.db();
  await db.insert('procesos', estructura,
      conflictAlgorithm: ConflictAlgorithm.replace);
}
The structure of the table in which I add the records is like this:

id (autoincrement, not null) | id_registro | server_id | usuario_id
-----------------------------+-------------+-----------+-----------
1                            | 43          | 478       | 131
But when I reopen the screen, the method is executed again and the records are duplicated; the only thing that changes is the id:

id (autoincrement, not null) | id_registro | server_id | usuario_id
-----------------------------+-------------+-----------+-----------
1                            | 43          | 478       | 131
2                            | 43          | 478       | 131
I am getting information from the table with these lines:
final db = await SQLHelper.db();
int rowId = 0;
var queryResult = await db.query('procesos',
    columns: ['id', 'id_registro', 'server_id', 'usuario_id'],
    where: 'id_nuevo != ?',
    whereArgs: [rowId]);
Now the main question: how can I compare the id_registro values already in the table with the ones coming from the JSON, so that the records are not duplicated?
Any help or comment is welcome. Greetings.
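A minimal sketch of one way this could work (untested; it assumes a model object with fields like idRegistro and serverId, and that id_registro uniquely identifies an API record): query the table for the incoming id_registro first and only insert the rows that are missing.

// Untested sketch: skip rows whose id_registro is already stored.
Future<void> insertProcesos(List<Proceso> list) async {
  final db = await SQLHelper.db();
  for (final p in list) {
    final existing = await db.query('procesos',
        where: 'id_registro = ?', whereArgs: [p.idRegistro]);
    if (existing.isEmpty) {
      await db.insert('procesos', {
        "id_registro": p.idRegistro,
        "server_id": p.serverId,
        "usuario_id": p.usuarioId,
        "categoria": p.categoria,
        "nueva_cat": p.nuevaCat,
        "estatus_id": p.estatusId,
      });
    }
  }
}

Alternatively, declaring id_registro with a UNIQUE constraint in the table definition would let ConflictAlgorithm.replace overwrite the existing row instead of appending a new one.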
This is in a C# ASP.NET MVC 5 web application. jQuery version 1.10.2. DataTables jQuery plugin version 1.10.21.
A page in the web application uses a DataTable. The DataTable is configured for server-side processing mode. So, when it receives a response from the server, the data for a row in the table is an object like the following example. (It is not proper JSON syntax; I am trying to represent what I see in the watch window of the browser's debugger.) That is, each row has a string name and an array of grade objects.
row: {...}
    Id: 42
    Name: "Fred"
    Grades: (3) [...]
        0: {...}
            Id: 101
            Name: "Quiz 1"
            Value: "A"
        1: {...}
            Id: 102
            Name: "Homework 2"
            Value: "B"
        2: {...}
            Id: 103
            Name: "Exam 3"
            Value: "C"
        length: 3
In the DataTable, I want four columns: one for the name and three for the grades, like the following example.

Name   Quiz 1   Homework 2   Exam 3
====================================
Fred   A        B            C
My problem is that I cannot determine the correct notation for the data source of each grade column, so that when a cell in such a column is rendered, the callback function receives an object containing the data for that grade. The following is what I have tried in the CSHTML file for the page. (I provide the number of grades to the page via a ViewBag property.)
...
<table id="gradebook-table" class="table">
...
</table>
...
<script>
    $( document ).ready( onPageReady );

    function onPageReady()
    {
        var options = {};
        options.serverSide = true;
        options.ajax =
        {
            'url': '@Url.Content( "~/gradebook/load" )',
            'type': 'POST',
        };

        var c = 0;
        options.columnDefs = [
            { targets: c++, data: "Id", visible: false, searchable: false },
            { targets: c++, data: "Name" },
        ];

        for ( var i = 0; i < @ViewBag.GradeCount; i++ )
        {
            var def = {};
            def.targets = c++;
            def.data = "Grades[" + i + "]";
            def.render = renderGradeCell;
            options.columnDefs.push( def );
        }

        $( '#gradebook-table' ).DataTable( options );
    }

    function renderGradeCell( data, type, row, meta )
    {
        if ( type === 'display' )
        {
            // I expect data to be an object containing grade properties.
            return '<span>' + data.Value + '</span>';
        }
        return data;
    }
</script>
When the data source for a grade column is "Grades[" + i + "]", the data passed to the renderGradeCell() function is not the object I expected, but a string like the one below. It is just "[object Object]0" repeated for as many items as there are in the Grades array in the row's data.
"[object Object]0[object Object]0[object Object]0"
I changed the data source for a grade column to just "Grades[]". But then, the data that the renderGradeCell() function is given is the entire Grades array for that row.
Any suggestions are appreciated. Thanks.
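One variation that might be worth a look (an untested sketch, not verified against this exact setup): a DataTables column's data option can also be a function that receives the full row object, so the loop could hand the grade object to the render callback directly. The extra immediately-invoked wrapper only exists to capture the current value of i:

for ( var i = 0; i < @ViewBag.GradeCount; i++ )
{
    var def = {};
    def.targets = c++;
    // Function data source: return the grade object for this column,
    // so renderGradeCell() receives it as its 'data' argument.
    def.data = ( function ( index )
    {
        return function ( row, type, set, meta )
        {
            return row.Grades[ index ];
        };
    } )( i );
    def.render = renderGradeCell;
    options.columnDefs.push( def );
}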
I have two tables, and I want to display the records of the second table. Please help me with this. Let me provide an example.
[TABLE1]

LocationID | LoginID | Location_Name
-----------+---------+--------------
1          | 101     | A
2          | 102     | B
3          | 103     | C

[TABLE2]

ID | LoginID | No_Of_Item | Location | ToLocation
---+---------+------------+----------+-----------
1  | 101     | 5          | 1        | 2
2  | 102     | 6          | 2        | 3
3  | 103     | 7          | 1        | 3
This is my database. Now I want to show the [TABLE2] records with the location names, but I am unable to do that. Please help me with this LINQ query.
This is my code.
public IQueryable<StockTransferViewModel> GetAllStockTransferDetailByLoginId(string LoginId)
{
    var StockList = (from aspuser in context.AspNetUsers
                     join cus in context.Customers on aspuser.Id equals cus.LoginID
                     join transfer in context.StockTransfers on aspuser.Id equals transfer.LoginID
                     where transfer.LoginID == LoginId
                     select new StockTransferViewModel
                     {
                         ID = transfer.ID,
                         LoginID = transfer.LoginID,
                         Date_Of_Transfer = transfer.Date_Of_Transfer,
                         No_Of_Sku = transfer.No_Of_SKU,
                         FromLocationName = transfer.Location,
                         ToLocation = transfer.ToLocation,
                     }).AsQueryable();
    return StockList;
}
If TABLE1 is named Locations, you'll likely want to add the following lines after the line starting with join transfer in:
join fromLocation in context.Locations on transfer.Location equals fromLocation.LocationID
join toLocation in context.Locations on transfer.ToLocation equals toLocation.LocationID
Then in your select statement, you'll want:
FromLocationName = fromLocation.Location_Name,
ToLocation = toLocation.Location_Name,
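Putting the original method and these changes together, the query would look roughly like this (a sketch; it assumes TABLE1 is mapped as context.Locations with the LocationID and Location_Name columns shown above):

public IQueryable<StockTransferViewModel> GetAllStockTransferDetailByLoginId(string LoginId)
{
    var StockList = (from aspuser in context.AspNetUsers
                     join cus in context.Customers on aspuser.Id equals cus.LoginID
                     join transfer in context.StockTransfers on aspuser.Id equals transfer.LoginID
                     // Join the locations table twice: once for the source, once for the destination.
                     join fromLocation in context.Locations on transfer.Location equals fromLocation.LocationID
                     join toLocation in context.Locations on transfer.ToLocation equals toLocation.LocationID
                     where transfer.LoginID == LoginId
                     select new StockTransferViewModel
                     {
                         ID = transfer.ID,
                         LoginID = transfer.LoginID,
                         Date_Of_Transfer = transfer.Date_Of_Transfer,
                         No_Of_Sku = transfer.No_Of_SKU,
                         FromLocationName = fromLocation.Location_Name,
                         ToLocation = toLocation.Location_Name,
                     }).AsQueryable();
    return StockList;
}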
I have a JSON document like the one below:

{
  "list": [
    {
      "notificationId": 123,
      "userId": 444
    },
    {
      "notificationId": 456,
      "userId": 789
    }
  ]
}
I need to write a Postgres procedure which iterates through the list and performs either an update or an insert, depending on whether the notification id is already present in the DB.
I have a notification table which has notificationid and userid as columns.
Can anyone please tell me how to do this using the Postgres JSON operators?
Try this query:
SELECT *
FROM yourTable
WHERE col -> 'list' @> '[{"notificationId":123}]';
You may replace the value 123 with whatever notificationId you want to search for. Follow the link below for a demo showing that this logic works:
Demo
Assuming you have a unique constraint on notificationid (e.g. because it's the primary key), there is no need for a stored function or a loop:
with data (j) as (
values ('
{
"list": [{
"notificationId": 123,
"userId": 444
},
{
"notificationId": 456,
"userId": 789
}
]
}'::jsonb)
)
insert into notification (notificationid, userid)
select (e.r ->> 'notificationId')::int, (e.r ->> 'userId')::int
from data d, jsonb_array_elements(d.j -> 'list') as e(r)
on conflict (notificationid) do update
set userid = excluded.userid;
The first step in that statement is to turn the array into a list of rows; this is what
select e.*
from data d, jsonb_array_elements(d.j -> 'list') as e(r)
does. Given your sample JSON, this returns two rows with a JSON value in each:
r
--------------------------------------
{"userId": 444, "notificationId": 123}
{"userId": 789, "notificationId": 456}
This is then split into two integer columns:
select (e.r ->> 'notificationId')::int, (e.r ->> 'userId')::int
from data d, jsonb_array_elements(d.j -> 'list') as e(r)
So we get:
int4 | int4
-----+-----
123 | 444
456 | 789
And this result is used as the input for an INSERT statement.
The on conflict clause then does an insert or an update, depending on whether a row identified by the column notificationid, which has to have a unique index, is already present.
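If notificationid is not already the primary key, that prerequisite can be added with a plain constraint, along these lines (a sketch; the constraint name is arbitrary):

-- on conflict (notificationid) requires a unique constraint or unique index on that column
alter table notification
  add constraint notification_notificationid_key unique (notificationid);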
Meanwhile, I tried this:
CREATE OR REPLACE FUNCTION insert_update_notifications(notification_ids jsonb) RETURNS void AS
$$
DECLARE
    allNotificationIds text[];
    indJson jsonb;
    notIdCount int;
    i json;
BEGIN
    FOR i IN SELECT * FROM jsonb_array_elements(notification_ids)
    LOOP
        select into notIdCount count(notification_id)
        from notification_table
        where notification_id = i ->> 'notificationId';

        IF (notIdCount = 0) THEN
            insert into notification_table(notification_id, userid)
            values (i ->> 'notificationId', i ->> 'userId');
        ELSE
            update notification_table
            set userid = i ->> 'userId'
            where notification_id = i ->> 'notificationId';
        END IF;
    END LOOP;
END;
$$
language plpgsql;
select * from insert_update_notifications('[{
"notificationId": "123",
"userId": "444"
},
{
"notificationId": "456",
"userId": "789"
}
]');
It works. Please review it.
The query below takes a long time to create a temporary table, even though it only returns 228000 distinct records.
DECLARE todate,fromdate DATETIME;
SET fromdate=DATE_SUB(UTC_TIMESTAMP(),INTERVAL 2 DAY);
SET todate=DATE_ADD(UTC_TIMESTAMP(),INTERVAL 14 DAY);
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
DROP TEMPORARY TABLE IF EXISTS tempabc;
SET max_heap_table_size = 1024*1024*1024;
CREATE TEMPORARY TABLE IF NOT EXISTS tempabc
-- (index using BTREE(id))
ENGINE=MEMORY
AS
(
SELECT SQL_NO_CACHE DISTINCT id
FROM abc
WHERE StartTime BETWEEN fromdate AND todate
);
I already created an index on the StartTime column, but it still takes 20 seconds to create the table. Kindly help me reduce the creation time.
More info:
I changed my query: earlier I was using the tempabc temporary table to get my output; now I am using an IN clause instead of the temporary table, and it takes 12 seconds to execute, which is still more than the expected time.
Earlier (taking 20-30 sec):
DECLARE todate,fromdate DATETIME;
SET fromdate=DATE_SUB(UTC_TIMESTAMP(),INTERVAL 2 DAY);
SET todate=DATE_ADD(UTC_TIMESTAMP(),INTERVAL 14 DAY);
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
DROP TEMPORARY TABLE IF EXISTS tempabc;
SET max_heap_table_size = 1024*1024*1024;
CREATE TEMPORARY TABLE IF NOT EXISTS tempabc
-- (index using BTREE(id))
ENGINE=MEMORY
AS
(
SELECT SQL_NO_CACHE DISTINCT id
FROM abc
WHERE StartTime BETWEEN fromdate AND todate
);
SELECT DISTINCT p.xyzID
FROM tempabc s
JOIN xyz_tab p ON p.xyzID=s.ID AND IFNULL(IsGeneric,0)=0;
Now (taking 12-14 sec):
DECLARE todate,fromdate Timestamp;
SET fromdate=DATE_SUB(UTC_TIMESTAMP(),INTERVAL 2 DAY);
SET todate=DATE_ADD(UTC_TIMESTAMP(),INTERVAL 14 DAY);
SELECT p.xyzID
FROM xyz_tab p
WHERE id IN (
    SELECT DISTINCT id
    FROM abc
    WHERE StartTime BETWEEN fromdate AND todate)
  AND IFNULL(IsGeneric,0) = 0
GROUP BY p.xyzID;
But we need to achieve 3-5 sec of execution time.
This is my explain output.
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: abc
partitions: NULL
type: index
possible_keys: ix_starttime_id,IDX_Start_time,IX_id_starttime,IX_id_starttime_prgsvcid
key: IX_id_starttime
key_len: 163
ref: NULL
rows: 18779876
filtered: 1.27
Extra: Using where; Using index; Using temporary; Using filesort; LooseScan
*************************** 2. row ***************************
id: 1
select_type: SIMPLE
table: p
partitions: NULL
type: eq_ref
possible_keys: PRIMARY,IX_seriesid
key: PRIMARY
key_len: 152
ref: onconnectdb.abc.ID
rows: 1
filtered: 100.00
Extra: Using where
EXPLAIN in JSON format:
EXPLAIN: {
"query_block": {
"select_id": 1,
"cost_info": {
"query_cost": "10139148.44"
},
"grouping_operation": {
"using_temporary_table": true,
"using_filesort": true,
"cost_info": {
"sort_cost": "1.00"
},
"nested_loop": [
{
"table": {
"table_name": "abc",
"access_type": "index",
"possible_keys": [
"ix_starttime_tmsid",
"IDX_Start_time",
"IX_id_starttime",
"IX_id_starttime_prgsvcid"
],
"key": "IX_id_starttime",
"used_key_parts": [
"ID",
"StartTime",
"EndTime"
],
"key_length": "163",
"rows_examined_per_scan": 19280092,
"rows_produced_per_join": 264059,
"filtered": "1.37",
"using_index": true,
"loosescan": true,
"cost_info": {
"read_cost": "393472.45",
"eval_cost": "52812.00",
"prefix_cost": "446284.45",
"data_read_per_join": "2G"
},
"used_columns": [
"ID",
"StartTime"
],
"attached_condition": "(`onconnectdb`.`abc`.`StartTime` between <cache>(fromdate#1) and <cache>(todate#0))"
}
},
{
"table": {
"table_name": "p",
"access_type": "eq_ref",
"possible_keys": [
"PRIMARY",
"IX_seriesid"
],
"key": "PRIMARY",
"used_key_parts": [
"ID"
],
"key_length": "152",
"ref": [
"onconnectdb.abc.ID"
],
"rows_examined_per_scan": 1,
"rows_produced_per_join": 1,
"filtered": "100.00",
"cost_info": {
"read_cost": "9640051.00",
"eval_cost": "0.20",
"prefix_cost": "10139147.44",
"data_read_per_join": "2K"
},
"used_columns": [
"ID",
"xyzID",
"IsGeneric"
],
"attached_condition": "(ifnull(`onconnectdb`.`p`.`IsGeneric`,0) = 0)"
}
}
]
}
}
}
Please suggest.
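One experiment that could be worth trying (a sketch only, not verified against this schema): the EXPLAIN above shows the optimizer walking the IX_id_starttime index over roughly 19 million rows (LooseScan) instead of doing a range scan on StartTime, so forcing ix_starttime_id from possible_keys (which, judging by its name, covers StartTime and id) might change the plan:

SELECT p.xyzID
FROM xyz_tab p
WHERE id IN (
    SELECT DISTINCT id
    FROM abc FORCE INDEX (ix_starttime_id)   -- index name taken from possible_keys above
    WHERE StartTime BETWEEN fromdate AND todate)
  AND IFNULL(IsGeneric,0) = 0
GROUP BY p.xyzID;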
I'm trying to use the Aerospike bulk loader to seed a cluster with data from a tab-separated file.
The source data looks like this:
set       key   segments
segment   123   10,20,30,40,50
segment   234   40,50,60,70
The third column, 'segments', contains a comma separated list of integers.
I created a JSON template:
{
  "version" : "1.0",
  "input_type" : "csv",
  "csv_style": { "delimiter": " " , "n_columns_datafile": 3, "ignore_first_line": true},
  "key": { "column_name": "key", "type": "integer" },
  "set": { "column_name": "set", "type": "string" },
  "binlist": [
    { "name": "segments",
      "value": { "column_name": "segments", "type": "list" }
    }
  ]
}
... and ran the loader:
java -cp aerospike-load-1.1-jar-with-dependencies.jar com.aerospike.load.AerospikeLoad -c template.json data.tsv
When I query the records in aql, they seem to be a list of strings:
aql> select * from test
+--------------------------------+
| segments |
+--------------------------------+
| ["10", "20", "30", "40", "50"] |
| ["40", "50", "60", "70"] |
+--------------------------------+
The data I'm trying to store is a list of integers. Is there an easy way to convert the objects stored in this bin to a list of integers (possibly a Lua UDF) or perhaps there's a tweak that can be made to the bulk loader template?
Update:
I attempted to solve this by creating a Lua UDF to convert the list from strings to integers:
function convert_segment_list_to_integers(rec)
  for i=1, table.maxn(rec['segments']) do
    rec['segments'][i] = math.floor(tonumber(rec['segments'][i]))
  end
  aerospike:update(rec)
end
... registered it:
aql> register module 'convert_segment_list_to_integers.lua'
... and then tried executing against my set:
aql> execute convert_segment_list_to_integers.convert_segment_list_to_integers() on test.segment
I enabled some more verbose logging and noticed that the UDF is throwing an error. Apparently, it's expecting a table and it was passed userdata:
Dec 04 2015 23:23:34 GMT: DEBUG (udf): (udf_rw.c:send_result:527) FAILURE when calling convert_segment_list_to_integers convert_segment_list_to_integers ...rospike/usr/udf/lua/convert_segment_list_to_integers.lua:2: bad argument #1 to 'maxn' (table expected, got userdata)
Dec 04 2015 23:23:34 GMT: DEBUG (udf): (udf_rw.c:send_udf_failure:407) Non-special LDT or General UDF Error(...rospike/usr/udf/lua/convert_segment_list_to_integers.lua:2: bad argument #1 to 'maxn' (table expected, got userdata))
It seems that maxn isn't an applicable method to a userdata object.
Can you see what needs to be done to fix this?
To convert your lists with string values to lists of integer values, you can run the following record UDF:
function convert_segment_list_to_integers(rec)
  local list_with_ints = list()
  for value in list.iterator(rec['segments']) do
    local int_value = math.floor(tonumber(value))
    list.append(list_with_ints, int_value)
  end
  rec['segments'] = list_with_ints
  aerospike:update(rec)
end
When you edit your existing lua module, make sure to re-run register module 'convert_segment_list_to_integers.lua'.
The cause of this issue is within the aerospike-loader tool: it will always assume/enforce strings, as you can see in the following Java code:
case LIST:
    /*
     * Assumptions
     * 1. Items are separated by a colon ','
     * 2. Item value will be a string
     * 3. List will be in double quotes
     *
     * No support for nested maps or nested lists
     */
    List<String> list = new ArrayList<String>();
    String[] listValues = binRawText.split(Constants.LIST_DELEMITER, -1);
    if (listValues.length > 0) {
        for (String value : listValues) {
            list.add(value.trim());
        }
        bin = Bin.asList(binColumn.getBinNameHeader(), list);
    } else {
        bin = null;
        log.error("Error: Cannot parse to a list: " + binRawText);
    }
    break;
Source on Github: http://git.io/vRAQW
If you prefer, you can modify this code and re-compile to always assume integer list values. Change lines 266 and 270 to something like this (untested):
List<Integer> list = new ArrayList<Integer>();
list.add(Integer.parseInt(value.trim()));
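Put together, the patched block might look roughly like this (an untested sketch that just combines the original snippet with the two changed lines):

case LIST:
    // Untested sketch: parse each comma-separated item as an integer
    // instead of storing it as a string.
    List<Integer> list = new ArrayList<Integer>();
    String[] listValues = binRawText.split(Constants.LIST_DELEMITER, -1);
    if (listValues.length > 0) {
        for (String value : listValues) {
            list.add(Integer.parseInt(value.trim()));
        }
        bin = Bin.asList(binColumn.getBinNameHeader(), list);
    } else {
        bin = null;
        log.error("Error: Cannot parse to a list: " + binRawText);
    }
    break;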