I am trying to import an MS Excel 2007 sheet using the excel-import plugin. It was simple to integrate with my project and I found it working as expected, until I noticed that the number values in the cells are populated as real numbers in exponential notation.
For example, take a cell that contains the value 9062831150099, i.e.
A
--------------------
Registration Number
--------------------
9062831150099

It is populated as: [RegNum: 9.062831150099E12]
Could anyone suggest how I can change this representation back to its original format while keeping its type as a number?
I missed it at the first attempt, but I figured out how to achieve it:
When invoking the key methods (the ones that process the cell maps and column maps), for example List bookParamsList = excelImportService.columns(workbook, CONFIG_BOOK_COLUMN_MAP) or Map bookParams = excelImportService.cells(workbook, CONFIG_BOOK_CELL_MAP), there is also the ability to pass in configuration information, i.e. to specify what to do if a cell is empty (to provide a default value), to make sure a cell is of the expected type, etc.
So in my case I created a configuration Map for the properties of the object and passed it to the above methods, i.e.
class UploadExcelDataController {

    def excelImportService

    static Map CONFIG_BOOK_COLUMN_MAP = [
        sheet: 'Sheet1',
        startRow: 2,
        columnMap: [
            'A': 'registrationNumber',
            'B': 'title',
            'C': 'author',
            'D': 'numSold',
        ]
    ]

    // Per-property configuration: read registrationNumber as an integer
    // (ExpectedPropertyType is provided by the excel-import plugin)
    static Map configBookPropertyMap = [
        registrationNumber: [expectedType: ExpectedPropertyType.IntType, defaultValue: 0]
    ]

    def uploadFile() {
        ...
        List bookParamsList = excelImportService.columns(workbook, CONFIG_BOOK_COLUMN_MAP, configBookPropertyMap)
        ...
    }
}
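For context, the exponential form comes from the numeric cell type: the plugin (presumably via Apache POI underneath) hands numeric cells back as Double, and a Double of that magnitude is rendered in scientific notation. If the property map were not an option, a manual conversion would also work; a minimal Groovy sketch, assuming the raw value arrives as a Double:

// Hypothetical raw value, as a numeric cell comes back from the plugin
Double raw = 9.062831150099E12
// Convert back to an integral type to get the plain representation
Long registrationNumber = raw.longValue()
assert registrationNumber.toString() == '9062831150099'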
In my process, I have a variable that is an array of objects, similar to the following:
"llista-finques" : [
{
"FIN_ID": "H10",
"FIN_NOMBRE": "PLUTO VIVIENDAS",
"FIN_PROPIETARIO": "H10",
"FIN_LINIA_NEGOCIO": "Horizontal"
},
{
"FIN_ID": "H11",
"FIN_NOMBRE": "PLUTO PARKING",
"FIN_PROPIETARIO": "H11",
"FIN_LINIA_NEGOCIO": "Horizontal"
},
{
"FIN_ID": "H12",
"FIN_NOMBRE": "PINTO VIVENDES",
"FIN_PROPIETARIO": "H12",
"FIN_LINIA_NEGOCIO": "Horizontal"
},
{
"FIN_ID": "H16",
"FIN_NOMBRE": "ZURUBANDO",
"FIN_PROPIETARIO": "H16",
"FIN_LINIA_NEGOCIO": "Horizontal"
} ......
I am trying to create a Calculated Property in one of my forms that needs to create a subset of this array by filtering on an object property. In order to do so, I was hoping to use the following JavaScript for the calculated field:
return this.llista-finques.filter(finca => {return finca.FIN_PROPIETARIO === this.Id_client});
For some reason this code produces no result, and after many tests I have arrived at the conclusion that the variable "this.llista-finques" is simply not accessible from the script, although it is available in the process data.
If I change the Calculated Property script to simply return the value of the variable, as below:
return this.llista-finques;
or even something that should simply return a string:
return this.llista-finques[0].FIN_ID
the calculated property produces no result.
If I do exactly the same with any of the other process variables that are not arrays of objects, the calculated property seems to work correctly.
All the testing I have done uses the screen preview debugging tools of ProcessMaker 4.
Is there a limitation on the kind of variables I can use for calculated properties? Is this a ProcessMaker bug?
This is embarrassing... after testing and testing and testing, I figured out that the problem was due to the name of the variable I was using. You can't use a name containing the '-' character.
Once I corrected the variable name it all worked as expected.
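For reference, the reason the original name fails: a JavaScript identifier cannot contain '-', so this.llista-finques is parsed as the subtraction (this.llista) - finques rather than a property access, which is why the script silently produced nothing. If renaming had not been an option, bracket notation is the usual way to reach such a property; a minimal sketch (not verified inside ProcessMaker's calculated-property sandbox):

// Bracket notation reaches a property whose name contains '-'
return this['llista-finques'].filter(finca => finca.FIN_PROPIETARIO === this.Id_client);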
I am testing some network packets of my organisation's product. We already have custom plugins. I am trying to add some more fields to those existing plugins (like converting a 2-byte code to a string and assigning it to a field).
Thank you in advance for reading my query.
--edit
Wireshark version: 2.4.5 (the organization's plugins don't work on the latest Wireshark release)
--edit
Problem statement:
I am able to add the field and show its value, but the field name is not displayed as defined.
I cannot share the entire .lua file, but I will try to explain what I did:
Below is the image where I have a field aprint.type; this is a two-byte field. In the .lua file, for display purposes it is appended with the corresponding description using a custom function int_to_enum.
I want to add one more ProtoField, aprint.typetext, which will show the text.
What I did:
Added a ProtoField: f_apr_msg_type_txt = ProtoField.string("aprint.typetxt", "aprint_type_text") (I also tried f_apr_msg_type_txt = ProtoField.string("aprint.typetxt", "aprint_type_text", FT_STRING)).
Below the code where the aprint.type subtree is shown, I added my required field with subtree:add(f_apr_msg_type_txt, msg_type_string) (below is an image of the code extract).
I am able to see the text, but the field name is shown as Wireshark Lua text (_ws.lua.text).
Normally, displaying strings based on numeric values is accomplished with a value string lookup, so you'd have something like this:
local aprint_type_vals = {
    [1] = "Foo",
    [2] = "Bar",
    [9] = "State alarm"
}
f_apr_msg_type = ProtoField.uint16("aprint.type", "Type", base.DEC, aprint_type_vals)
f_apr_msg_type_txt = ProtoField.string("aprint.typetxt","aprint_type_text", base.ASCII)
... then
local msg_type = tvb(offset, 2):le_uint()
subtree:add_le(f_apr_msg_type, tvb(offset, 2))
subtree:add(f_apr_msg_type_txt, tvb(offset, 2), (aprint_type_vals[msg_type] or "Unknown"))
--[[
Alternatively:
subtree:add(f_apr_msg_type_txt, tvb(offset, 2)):set_text("aprint_type_text: " .. (aprint_type_vals[msg_type] or "Unknown"))
--]]
I'm also not sure why you need the extra field with only the text when the text is already displayed with the existing field, but that's basically how you'd do it.
Is it possible to use FieldValue.increment(_:) on Map field values?
For instance, if you have a field called foo whose value is a Map type, and inside foo is a key-value pair bar: Int, can you increment bar without using the method in question?
A similar question was asked for JS, and the answer was to use FieldValue.increment(_:) with dot notation, like one would use for JS Object types, e.g. points.total. I've tried this on iOS, but instead of incrementing bar, it creates a field named foo.bar of type Number.
If possible, it would be great to know how.
If you want to be able to update a document that looks like this:
$docId (document)
  |
  --- foo (map)
        |
        --- bar: 1
You cannot do it using the . (dot) notation:
docRef.updateData([
"foo.bar": FieldValue.increment(Int64(1))
])
Because you'll get as a result something that looks like this:
$docId (document)
  |
  --- foo.bar: 1
So if you want to increment the value of bar, you have to use two maps, one to perform the increment operation and one to perform the actual update in Firestore:
let increment = [
    "bar": FieldValue.increment(Int64(1))
]
let update = [
    "foo": increment
]
docRef.updateData(update) { err in
    if let err = err {
        print("Error updating document: \(err)")
    } else {
        print("Document successfully updated")
    }
}
I'm not sure if this is the appropriate answer, but since I had a similar problem it might help you; mine was not related to iOS.
If the field's name starts with a number, e.g. '1test', the map update won't work and the result will be the one you have.
I hope I was helpful.
I am creating a SAPUI5 application. This application is connected to a backend SAP system via OData. In the SAPUI5 application I use a smart chart control. Out of the box the smart chart lets the user create filters for the underlying data. This works fine - except if you try to use multiple 'not equals' for one property. Is there a way to accomplish this?
I found out that all properties within an 'and_expression' (including nested or_expressions) must have a unique name.
The reason why two parameters with the same property don't get parsed into the select options:
/IWCOR/CL_ODATA_EXPR_UTILS=>GET_FILTER_SELECT_OPTIONS takes the expression you pass and parses it into a table of select options.
The select option table returned is of type /IWCOR/IF_ODATA_TYPES=>EDM_SELECT_OPTION_T which is a HASHED TABLE .. WITH UNIQUE KEY property.
From: https://archive.sap.com/discussions/thread/3170195
The problem is that you cannot combine NE terms with OR, because both parameters after the NE should not be shown in the result set.
So in the end, it_filter_select_options is empty and only iv_filter_string is filled.
Is there a manual way of dealing with this problem (evaluating iv_filter_string) to handle multiple NE terms?
This would be an example request:
XYZ/SmartChartSet?$filter=(Category%20ne%20%27Smartphone%27%20and%20Category%20ne%20%27Notebook%27)%20and%20Purchaser%20eq%20%27CompanyABC%27%20and%20BuyDate%20eq%20datetime%272018-10-12T02%3a00%3a00%27&$inlinecount=allpages
Basically, I want this to exclude items with the categories 'Notebook' and 'Smartphone' from the result set that I retrieve from the backend.
If there is a bug inside /iwcor/cl_odata_expr_utils=>get_filter_select_options that makes it unable to handle multiple NE filters on the same component, and you cannot wait for an OSS note, I would suggest wrapping it inside a new static method that implements the following logic (a rough sketch of steps 1 and 2 is shown after the list); if you get stuck with the ABAP implementation, I will try to at least partially implement it when I get time:
1. Get all instances of <COMPONENT> ne '<VALUE>' inside a ( ) (using a regex).
2. Replace each <COMPONENT> with <COMPONENT>_<i>, so there will be ( <COMPONENT>_1 ne '<VALUE_1>' and <COMPONENT>_2 ne '<VALUE_2>' and ... <COMPONENT>_<n> ne '<VALUE_n>' ).
3. Call /iwcor/cl_odata_expr_utils=>get_filter_select_options with the modified query.
4. Modify the rt_select_options result by changing <COMPONENT>_<i> back to <COMPONENT>.
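A rough ABAP sketch of steps 1 and 2 for a single component, assuming the raw $filter string (iv_filter_string above) is at hand; the component name 'Category' and all variable names are placeholders, not framework objects:

" Number every 'Category ne' term so each property name becomes unique
DATA(lv_filter) = iv_filter_string.
DATA lv_index TYPE i.
DATA lv_new   TYPE string.
DO.
  lv_index = lv_index + 1.
  lv_new = |Category_{ lv_index } ne|.
  REPLACE FIRST OCCURRENCE OF 'Category ne' IN lv_filter WITH lv_new.
  IF sy-subrc <> 0.
    EXIT.
  ENDIF.
ENDDO.
" lv_filter now reads ( Category_1 ne '...' and Category_2 ne '...' ) ...
" and can be passed to get_filter_select_options (step 3); afterwards the
" Category_<i> rows in rt_select_options are renamed back to Category (step 4).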
I can't find the source, but I recall that multiple "ne" terms aren't supported. Isn't that the same thing that happens when you use multiple negatives in SE16, where a warning is displayed?
I found this extract for Business ByDesign:
Excluding two values using the OR operator (for example: $filter=CACCDOCTYPE ne '1000' or CACCDOCTYPE ne '4000') is not possible.
The workaround I see is to select the categories you actively want in the UI5 app, not the ones you don't.
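A hedged sketch of that idea in plain UI5 JavaScript (the category values are made up, and a SmartChart normally builds its filters itself, so this only illustrates the OR-combined 'eq' approach):

// Instead of "Category ne 'Smartphone' and Category ne 'Notebook'",
// request the categories you do want, combined with OR:
var oCategoryFilter = new sap.ui.model.Filter({
    filters: [
        new sap.ui.model.Filter("Category", sap.ui.model.FilterOperator.EQ, "Tablet"),
        new sap.ui.model.Filter("Category", sap.ui.model.FilterOperator.EQ, "Monitor")
    ],
    and: false
});
// e.g. applied to a list binding: oBinding.filter([oCategoryFilter]);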
I can also confirm that the code snippet I've been using for filtering for a long time has the same problem...
* <SIGNATURE>---------------------------------------------------------------------------------------+
* | Instance Public Method ZCL_MGW_ABS_DATA->FILTERING
* +-------------------------------------------------------------------------------------------------+
* | [--->] IO_TECH_REQUEST_CONTEXT TYPE REF TO /IWBEP/IF_MGW_REQ_ENTITYSET
* | [<-->] CR_ENTITYSET TYPE REF TO DATA
* | [!CX!] /IWBEP/CX_MGW_BUSI_EXCEPTION
* | [!CX!] /IWBEP/CX_MGW_TECH_EXCEPTION
* +--------------------------------------------------------------------------------------</SIGNATURE>
METHOD filtering.

  FIELD-SYMBOLS <lt_entityset> TYPE STANDARD TABLE.
  ASSIGN cr_entityset->* TO <lt_entityset>.

  CHECK: cr_entityset IS BOUND,
         <lt_entityset> IS ASSIGNED.

  DATA(lo_filter) = io_tech_request_context->get_filter( ).

  /iwbep/cl_mgw_data_util=>filtering(
    EXPORTING it_select_options = lo_filter->get_filter_select_options( )
    CHANGING  ct_data           = <lt_entityset> ).

ENDMETHOD.
I'm using a Grails plugin to work with ElasticSearch over MySQL. I have a domain column mapped in my domain class as follows:
String updateHistoryJSON
(...)
static mapping = {
updateHistoryJSON type: 'text', column: 'update_history'
}
In MySQL, this basically maps to a TEXT column whose purpose is to store JSON content.
So, in both the DB and the ElasticSearch index, I have 2 instances:
- instance 1 has updateHistoryJSON = '{"zip":null,"street":null,"name":null,"categories":[],"city":null}'
- instance 2 has updateHistoryJSON = '{}'
Now, what I need is an ElasticSearch query that returns only instance 2.
I've been doing a closure like this, using Groovy DSL:
{
    bool {
        must_not = term(updateHistoryJSON: "{}")
        minimum_should_match = 1
    }
}
And ElasticSearch seems to ignore it; it keeps bringing back both instances.
On the other hand, if I use a filter like "missing":{"field":"updateHistoryJSON"}, it gives back no documents. The same goes for "exists": {"field":"updateHistoryJSON"}.
Any idea what I am doing wrong here?
I'm still not sure what the problem was, but at least I found a workaround.
Since the search based on the updateHistoryJSON contents was not working, I decided to use a script that searches based on the size of the updateHistoryJSON contents; meaning, instead of looking for documents that have a non-empty JSON, I just look for documents whose updateHistoryJSON size is greater than 2 ({} == size 2).
The closure I used is like this:
{
    script = {
        script = "doc['updateHistoryJSON'].size() > 2"
    }
}