Let's say I have an Avro file that was written as a record with one column named c1.
When I read this file with the schema it was written with, I get all the data from c1.
Now, is it possible to read that same file with another schema that has, for example, three columns: one column named c0 with a default value of null, one column named c1 that returns all the values as with the first schema, and one column named c2 with an alias of c1 that also returns all the values of c1?
This is probably implementation-dependent. If using JavaScript is an option, avsc will let you do what you want. For example, if you write an Avro file using your first schema...
const avro = require('avsc');

const writerSchema = {
  type: 'record',
  name: 'Foo',
  fields: [{name: 'c1', type: 'int'}]
};

const encoder = avro.createFileEncoder('data.avro', writerSchema);

// Write a little data.
encoder.write({c1: 123});
encoder.write({c1: 45});
encoder.end({c1: 6789});
...then you can read it with the second schema simply as...
const readerSchema = {
  type: 'record',
  name: 'Foo',
  fields: [
    {name: 'c0', type: ['null', 'int'], 'default': null},
    {name: 'c1', type: 'int'},
    {name: 'c2', aliases: ['c1'], type: 'int'},
  ]
};

// Decode the file and print out its data.
avro.createFileDecoder('data.avro', {readerSchema})
  .on('data', (val) => { console.log(val); });
...and the output will have the form you expect:
Foo { c0: null, c1: 123, c2: 123 }
Foo { c0: null, c1: 45, c2: 45 }
Foo { c0: null, c1: 6789, c2: 6789 }
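The same reader/writer schema resolution also works outside of container files. Here is a small sketch using avsc's in-memory type API, based on my reading of the docs — double-check the calls against the avsc version you're using:
const avro = require('avsc');

// Build types from the two schemas above.
const writerType = avro.Type.forSchema(writerSchema);
const readerType = avro.Type.forSchema(readerSchema);

// A resolver lets the reader type decode buffers produced by the writer type,
// applying defaults and aliases along the way.
const resolver = readerType.createResolver(writerType);

const buf = writerType.toBuffer({c1: 123});
console.log(readerType.fromBuffer(buf, resolver)); // -> Foo { c0: null, c1: 123, c2: 123 }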
Full example:
https://jsfiddle.net/gbeatty/4byg0p2t/
data: {
    table: 'datatable',
    startRow: 0,
    endRow: 6,
    startColumn: 0,
    endColumn: 3,
    parsed: function (columns) {
        columns.forEach(column => {
            column.splice(1, 2);
        });
    }
},
What I'd like the chart to reference is only column 0 ("Year") and column 3 ("Group C"), while keeping the entire table displayed below. The challenge is disregarding the two columns in the middle.
I am trying the parsed option, but it seems the rows and columns get mixed up. I even tried setting the switchRowsAndColumns value to true. (https://api.highcharts.com/highcharts/data.seriesMapping)
You can also use the complete function to modify your data.
Example code based on your config:
complete: function(options) {
    // Keep only the third parsed series (index 2); the first table column
    // ("Year") is already used for the x-axis categories.
    let series = [];
    series.push(options.series[2]);
    options.series = series;
}
Demo:
https://jsfiddle.net/BlackLabel/tfs4ubcL/
API Reference:
https://api.highcharts.com/highcharts/data.complete
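If you prefer to keep the parsed approach from your question, splicing the columns array itself (rather than splicing rows out of each column) should drop the two middle columns. A rough, untested sketch of that idea:
data: {
    table: 'datatable',
    startRow: 0,
    endRow: 6,
    startColumn: 0,
    endColumn: 3,
    parsed: function (columns) {
        // Remove the two middle columns, keeping column 0 ("Year")
        // and the former column 3 ("Group C").
        columns.splice(1, 2);
    }
},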
I have the following space schema in Tarantool:
box.schema.space.create('customer')
format = {
    {name = 'id', type = 'string'},
    {name = 'last_name', type = 'string'},
}
box.space.customer:format(format)
box.space.customer:create_index('id', {parts = {{field = 'id', is_nullable = false}}})
box.space.customer:replace({'1', 'Ivanov'})
I want to add a new field first_name to this space. How can I do that?
Before answering the question, we should discuss the format method.
format is a space option that lets you get a value from a tuple by name. A tuple is really just a "list" of values, and any field can be accessed by its field number.
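A quick illustration, using the customer space from the question (field numbers always work; names work because a format has been set):
t = box.space.customer:get('1')
t[2]          -- by field number -> 'Ivanov'
t.last_name   -- by name, thanks to the format -> 'Ivanov'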
What's next? Say you have a simple schema:
box.schema.space.create('customer')
box.space.customer:format(format) -- the two-field format defined above
box.space.customer:create_index('id', {parts = {{field = 'id', is_nullable = false}}})
box.space.customer:replace({'1', 'Ivanov'})
Let's define a new format that has a third field, first_name:
new_format = {
    {name = 'id', type = 'string'},
    {name = 'last_name', type = 'string'},
    {name = 'first_name', type = 'string'},
}
box.space.customer:format(new_format) -- error: our tuples have only two fields

tarantool> box.space.customer:format(new_format)
---
- error: Tuple field 3 required by space format is missing
...
There are two ways to fix it.
Add the new field to the end of the existing tuples with a default value:
box.space.customer:update({'1'}, {{'=', 3, 'Ivan'}})
box.space.customer:format(new_format) -- OK
Define the new field as nullable:
new_format = {
    {name = 'id', type = 'string'},
    {name = 'last_name', type = 'string'},
    {name = 'first_name', type = 'string', is_nullable = true},
}
box.space.customer:format(new_format) -- OK: absence of the third value is acceptable
You can choose whichever of the two variants described above suits you.
I'll just add a couple of notes:
You can't set a value past an absent field (e.g. if a tuple has the first and second values, you have to set the third before you can set the fourth):
tarantool> box.tuple.new({'1', 'Ivanov'}):update({{'=', 4, 'value'}})
---
- error: Field 4 was not found in the tuple
...
tarantool> box.tuple.new({'1', 'Ivanov'}):update({{'=', 3, box.NULL}, {'=', 4, 'value'}})
---
- ['1', 'Ivanov', null, 'value']
...
Filling in the field with the default value can be quite a long operation if you have a lot of data, so please be careful when applying any migrations.
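For reference, a minimal backfill sketch for the first variant (set a default, then apply the three-field format), assuming the space and index names from above; this is fine for small spaces, but for large ones you would batch the updates and mind iterator semantics:
-- Give every existing tuple a third field before switching the format.
for _, t in box.space.customer.index.id:pairs() do
    if t[3] == nil then
        box.space.customer:update({t[1]}, {{'=', 3, ''}})
    end
end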
Read more about the format method in the documentation.
I have a JSON document like the one below:
{
  "list": [
    {
      "notificationId": 123,
      "userId": 444
    },
    {
      "notificationId": 456,
      "userId": 789
    }
  ]
}
I need to write a Postgres procedure that iterates through the list and performs either an update or an insert, depending on whether the notification id is already present in the DB.
I have a notification table which has notificationid and userid as columns.
Can anyone please tell me how to do this using the Postgres JSON operators?
Try this query:
SELECT *
FROM yourTable
WHERE col -> 'list' @> '[{"notificationId": 123}]';
You may replace the value 123 with whatever notificationId you want to search for. Follow the link below for a demo showing that this logic works:
Demo
Assuming you have a unique constraint on notificationid (e.g. because it's the primary key), there is no need for a stored function or loop:
with data (j) as (
  values ('{
     "list": [
       {"notificationId": 123, "userId": 444},
       {"notificationId": 456, "userId": 789}
     ]
   }'::jsonb)
)
insert into notification (notificationid, userid)
select (e.r ->> 'notificationId')::int, (e.r ->> 'userId')::int
from data d, jsonb_array_elements(d.j -> 'list') as e(r)
on conflict (notificationid) do update
  set userid = excluded.userid;
The first step in that statement is to turn the array into a list of rows; this is what
select e.*
from data d, jsonb_array_elements(d.j -> 'list') as e(r)
does. Given your sample JSON, this returns two rows with a JSON value in each:
r
--------------------------------------
{"userId": 444, "notificationId": 123}
{"userId": 789, "notificationId": 456}
This is then split into two integer columns:
select (e.r ->> 'notificationId')::int, (e.r ->> 'userId')::int
from data d, jsonb_array_elements(d.j -> 'list') as e(r)
So we get:
int4 | int4
-----+-----
123 | 444
456 | 789
And this result is used as the input for an INSERT statement.
The on conflict clause then does an insert or an update, depending on the presence of the row identified by the column notificationid, which has to have a unique index.
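For reference, here is a hypothetical table definition this statement would work against; the primary key on notificationid provides exactly the unique constraint that on conflict relies on:
-- Hypothetical table matching the statement above.
create table notification (
    notificationid int primary key,
    userid         int
);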
Meanwhile, I tried this:
CREATE OR REPLACE FUNCTION insert_update_notifications(notification_ids jsonb) RETURNS void AS
$$
DECLARE
    allNotificationIds text[];
    indJson jsonb;
    notIdCount int;
    i json;
BEGIN
    FOR i IN SELECT * FROM jsonb_array_elements(notification_ids)
    LOOP
        select into notIdCount count(notification_id)
        from notification_table
        where notification_id = i ->> 'notificationId';

        IF (notIdCount = 0) THEN
            insert into notification_table(notification_id, userid)
            values (i ->> 'notificationId', i ->> 'userId');
        ELSE
            update notification_table
            set userid = i ->> 'userId'
            where notification_id = i ->> 'notificationId';
        END IF;
    END LOOP;
END;
$$
language plpgsql;
select * from insert_update_notifications('[
  {"notificationId": "123", "userId": "444"},
  {"notificationId": "456", "userId": "789"}
]');
It works. Please review this.
I'm trying to use alasql to export a set of HTML tables into an Excel document.
The documentation has code that looks similar to this:
var data1 = alasql('SELECT * FROM HTML("#dev-table",{headers:false})');
var data2 = alasql('SELECT * FROM HTML("#dev2-table",{headers:false})');
var data3 = alasql('SELECT * FROM HTML("#dev3-table",{headers:false})');
//var data4 = alasql('SELECT * FROM HTML("#dev2-table",{headers:true})');
var data = data1.concat(data2, data3);
alasql('SELECT * INTO XLS("data.xls",{headers:false}) FROM ?', [data]);
The problem is that this code concatenates data1, data2, and data3 so that all of the data is printed in the same column. This is not the result I want: I want data1 to go into column "A" and data2 to go into column "B".
I've looked through the documentation and am unsure how to get the desired result. I'm aware of the existence of "options" that include fields for specifying columns based on the data itself, but none of those examples are what I want. If this is not possible using alasql, I'm willing to use a different library or framework for this.
Based on this JSFiddle, http://jsfiddle.net/95j0txwx/7/
$scope.items = [{
    name: "John Smith",
    email: "j.smith@example.com",
    dob: "1985-10-10"
}, {
    name: "Jane Smith",
    email: "jane.smith@example.com",
    dob: "1988-12-22"
},
...
I would guess that your data is not formatted correctly to be inserted.
EDIT: The JSFiddle is from the documentation: https://github.com/agershun/alasql/wiki/XLSX
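If that's the case, one way around it might be to zip the three result sets together row by row before exporting, so that each source table contributes its own column instead of extra rows. A rough, untested sketch; the exact property access depends on the row objects HTML() returns for your tables (log data1[0] to check), and the A/B/C names are just placeholders:
var rowCount = Math.max(data1.length, data2.length, data3.length);
var merged = [];
for (var i = 0; i < rowCount; i++) {
    merged.push({
        A: data1[i] ? data1[i][0] : null, // first cell of the row from #dev-table
        B: data2[i] ? data2[i][0] : null, // first cell of the row from #dev2-table
        C: data3[i] ? data3[i][0] : null  // first cell of the row from #dev3-table
    });
}
alasql('SELECT * INTO XLS("data.xls", {headers: false}) FROM ?', [merged]);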
I am using the Grails Excel import plugin to import an Excel file.
static Map propertyConfigurationMap = [
    name: ([expectedType: ExcelImportService.PROPERTY_TYPE_STRING, defaultValue: null]),
    age:  ([expectedType: ExcelImportService.PROPERTY_TYPE_INT, defaultValue: 0])
]
static Map CONFIG_USER_COLUMN_MAP = [
    sheet: 'Sheet1',
    startRow: 1,
    columnMap: [
        // Col, Map-Key
        'A': 'name',
        'B': 'age',
    ]
]
I am able to retrieve the array list by using the code snippet:
def usersList = excelImportService.columns(workbook, CONFIG_USER_COLUMN_MAP)
which results in
[[name: Mark, age: 25], [name: Jhon, age: 46], [name: Anil, age: 62], [name: Steve, age: 32]]
I'm also able to read each record, say [name: Mark, age: 25], by using usersList.get(0).
How do I read each column value?
I know I can read it with something like this:
String[] row = usersList.get(0)
for (String s : row)
    println s
I wonder whether there is anything the plugin supports so that I can read a column value directly, rather than manipulating the result to get what I want.
Your usersList is basically a List<Map<String, Object>> (list of maps). You can read a column using the name you gave it in the config. In your example, you named column A name and column B age. So using your iteration example as a basis, you can read each column like this:
Map row = usersList.get(0)
for (Map.Entry entry : row) {
    println entry.value
}
Groovy makes this easier to do with Object.each(Closure):
row.each { key, value ->
    println value
}
If you want to read a specific column value, here are a few ways to do it:
println row.name // One
println row['name'] // Two
println row.getAt('name') // Three
Hint: These all end up calling row.getAt('name')
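And if by "column" you mean all the values for one key across every row, plain Groovy collection operators on usersList get you that directly (not a plugin feature, just standard operations on the list of maps it returns):
def names = usersList*.name              // e.g. [Mark, Jhon, Anil, Steve]
def ages  = usersList.collect { it.age } // e.g. [25, 46, 62, 32]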