Omniture custom link tracking - how to track multiple events

Which property is the right one for passing tracking events when using Omniture custom link tracking?
Currently I have these three properties:
s.linkTrackVars = 'events,prop55';
s.events = ['event12','some other event'];
s.linkTrackEvents = 'event12';
but I'm not sure this is the correct way. Should s.events also be passed to s.linkTrackEvents, like:
s.linkTrackEvents = s.events;
I'm implementing Omniture for a customer, so I don't have access to the Omniture analytics tool.
Any suggestions?

linkTrackVars should be a string value and expects a comma-delimited list (no spaces) of each variable you want to track, with no object namespace prefix. This includes the events variable if you are tracking events.
linkTrackEvents should be a string value and expects a comma-delimited list (no spaces) of each event you want to track. This should only be the base event itself, not the serialization or custom numeric values that you may append to events. For example, if you have s.events='event1:12345,event2=23'; you should only have s.linkTrackEvents='event1,event2';
events should be a string value and expects a comma-delimited list (no spaces) of each event you want to track.
Note: I noticed you have events as an array. Fairly often I see clients do this (and also with linkTrackVars and linkTrackEvents), and then later on within the code (usually within s_doPlugins) convert it to a string (e.g. s.events=s.events.join();). It makes it easier to .push() values to it based on whatever logic you have, and this is fine. But to be clear, the official syntax is a comma-delimited string, not an array, so if you use an array, you need to ensure it is converted to a comma-delimited string before the s.t or s.tl call. As an alternative, there is an s.apl plugin that handles appending values to the string, even ensuring each value is unique in the string.
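For illustration, a minimal sketch of that array-to-string pattern (the event name is taken from the question; s_doPlugins is the standard plugins hook):
// Build events up as an array wherever convenient...
s.events = s.events || [];
s.events.push('event12');
// ...then flatten it to the official comma-delimited string before any s.t or s.tl call.
function s_doPlugins(s) {
  if (s.events && s.events.join) {
    s.events = s.events.join(',');
  }
}
s.doPlugins = s_doPlugins;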
Examples:
Track event1,event2,prop55
s.prop55='some value';
s.events = 'event1,event2';
s.linkTrackEvents = 'event1,event2';
s.linkTrackVars = 'events,prop55';
Track event1 (serialized), event2, prop55
s.prop55='some value';
s.events = 'event1:12345,event2';
s.linkTrackEvents = 'event1,event2';
s.linkTrackVars = 'events,prop55';
Track event1 (custom increment), event2, prop55
s.prop55='some value';
s.events = 'event1=5,event2';
s.linkTrackEvents = 'event1,event2';
s.linkTrackVars = 'events,prop55';
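In all three cases the variables must be populated immediately before the custom link call itself, e.g. (the link name is a placeholder):
// 'o' designates a generic custom link; the third argument is the link name reported.
s.tl(this, 'o', 'my custom link');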

Related

Different text used for sorting and filtering

In an app I use jQuery tablesorter with the filter widget: https://mottie.github.io/tablesorter/docs/example-widget-filter.html
I have two main features:
- filtering (the widget)
- sorting (default feature)
Both of these features use the textExtraction() function:
https://mottie.github.io/tablesorter/docs/#textextraction
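For illustration, this is the kind of override involved (a sketch; the data-sort-value attribute is hypothetical markup, not something prescribed by the widget):
// Hypothetical cell: <td data-sort-value="2020-04-01">1er avril 2020</td>
$('table').tablesorter({
  textExtraction: function (node) {
    // Use the machine-readable value when present, otherwise the visible text.
    return node.getAttribute('data-sort-value') || node.textContent;
  }
});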
My problem is the following:
for sorting, I would like to use the machine-readable form of a date, i.e. "2020-04-01";
for filtering, I would like to use the human-readable form (in French: "1er avril 2020").
How can I deal with this?
You might need to use a date library like sugar or date.js - check out this demo: https://mottie.github.io/tablesorter/docs/example-parsers-dates.html. That demo uses the parser to convert the filter input into a normalized date that will match the date in the column. You would also need to add a filter-parsed class name to the column (ref).
I found the answer: I need to use a hook which modifies the value parsed for filtering.
$.tablesorter.filter.types.start = function (config, data) {
  // Filter on the cell's visible (human-readable) text instead of the extracted sort text.
  data.exact = data.$cells[data.index];
  data.exact = data.exact.innerText;
  data.iExact = data.exact.toLowerCase();
  return null; // null lets the remaining built-in filter types run
};
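For completeness, the hook just needs to be assigned before the table is initialized with the filter widget; a minimal init (a sketch assuming default widget options) would be:
// Assign $.tablesorter.filter.types.start first (as above), then:
$(function () {
  $('table').tablesorter({
    widgets: ['filter']
  });
});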

How do you strip RTF formatting and get the actual string value using DXL in DOORS?

I am trying to get the values in the "ID" column of DOORS, and I am currently doing this:
string ostr=richtext_identifier(o)
When I try to print ostr, in some modules I get just the ID (which is what I want). But in other modules I get values like "{\rtf1\ansi\ansicpg1256\deff0\nouicompat{\fonttbl{\f0\fnil\fcharset0 Times New Roman;}{\f1\froman\fcharset0 Times New Roman;}} {*\generator Riched20 10.0.17134}\viewkind4\uc1 \pard\f0\fs20\lang1033 SS_\f1\fs24 100\par } ". This is the RTF value, and I am wondering what the best way is to strip this formatting and get just the value.
Perhaps there is another way to go about this that I am not thinking of. Any help would be appreciated.
The ID column of DOORS is actually a composite: DOORS builds it out of the module-level attribute 'Prefix' and the object-level attribute 'Absolute Number'.
If you wish to grab this value in the future, I would do the following (using your variables):
string ostr = ( module ( o ) )."Prefix" o."Absolute Number" ""
This is opposed to the following, which (despite appearing as a valid attribute in the insert-column dialog) WILL NOT WORK:
string ostr = o."Object Identifier" ""
Hope this helps!
Comment response: You should not need the module name for the code to work. I tested the following successfully on DOORS 9.6.1.10:
Object o = current
string ostr = ( module ( o ) )."Prefix" o."Absolute Number" ""
print ostr
Another solution is to use the identifier function, which takes an Object as input parameter, and returns the identifier as a plain (not RTF) string:
Declaration
string identifier(Object o)
Operation
Returns the identifier, which is a combination of absolute number and module prefix, of object o as a string.
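As a minimal sketch using the question's variables (assuming o references a valid object):
Object o = current
// identifier() combines the module prefix and absolute number into a plain string - no RTF.
string ostr = identifier(o)
print ostr "\n"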
The optimal solution somewhat depends on your underlying requirement for retrieving the object ID.

How to exclude multiple values in OData call?

I am creating a SAPUI5 application. This application is connected to a backend SAP system via OData. In the SAPUI5 application I use a smart chart control. Out of the box the smart chart lets the user create filters for the underlying data. This works fine - except if you try to use multiple 'not equals' for one property. Is there a way to accomplish this?
I found out that all properties within an 'and_expression' (including nested or_expressions) must have unique names.
The reason why two parameters with the same property don't get parsed into the select options:
/IWCOR/CL_ODATA_EXPR_UTILS=>GET_FILTER_SELECT_OPTIONS takes the expression you pass and parses it into a table of select options.
The select option table returned is of type /IWCOR/IF_ODATA_TYPES=>EDM_SELECT_OPTION_T which is a HASHED TABLE .. WITH UNIQUE KEY property.
From: https://archive.sap.com/discussions/thread/3170195
The problem is that you cannot combine NE terms with OR, because both parameters after the NE should not be shown in the result set.
So in the end it_filter_select_options is empty and only iv_filter_string is filled.
Is there a manual way around this problem (evaluating iv_filter_string) to handle multiple NE terms?
This would be an example request:
XYZ/SmartChartSet?$filter=(Category%20ne%20%27Smartphone%27%20and%20Category%20ne%20%27Notebook%27)%20and%20Purchaser%20eq%20%27CompanyABC%27%20and%20BuyDate%20eq%20datetime%272018-10-12T02%3a00%3a00%27&$inlinecount=allpages
Normally I want this to exclude items with the categories 'Notebook' and 'Smartphone' from the result set that I retrieve from the backend.
If there is a bug inside /iwcor/cl_odata_expr_utils=>get_filter_select_options which makes it unable to handle multiple NE filters on the same component, and you cannot wait for an OSS note, I would suggest wrapping it in a new static method that implements the following logic (a rough sketch follows the list; if you get stuck with the ABAP implementation I will try to at least partially implement it when I get time):
Get all instances of <COMPONENT> ne '<VALUE>' inside a () (using REGEX).
Replace each <COMPONENT> with <COMPONENT>_<i> so there will be ( <COMPONENT>_1 ne '<VALUE_1>' and <COMPONENT>_2 ne '<VALUE_2>' and... <COMPONENT>_<n> ne '<VALUE_n>' ).
Call /iwcor/cl_odata_expr_utils=>get_filter_select_options with the modified query.
Modify the rt_select_options result by changing COMPONENT_<i> to <COMPONENT> again.
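A rough, untested ABAP sketch of that wrapper (the importing parameters, the functional call to get_filter_select_options, and the row structure are assumptions based on the description above):
METHOD get_filter_select_options_ne.
  " Hypothetical signature: iv_filter_string TYPE string, iv_property TYPE string,
  " rt_select_options as a STANDARD table (the hashed table's unique key would reject duplicates).
  DATA(lv_filter) = iv_filter_string.
  DATA(lv_index) = 0.
  " Steps 1+2: suffix every "<property> ne" with a running index so the parser
  " no longer sees duplicate property names.
  WHILE contains( val = lv_filter regex = iv_property && ` ne ` ).
    lv_index = lv_index + 1.
    REPLACE FIRST OCCURRENCE OF REGEX iv_property && ` ne `
      IN lv_filter WITH |{ iv_property }_{ lv_index } ne |.
  ENDWHILE.
  " Step 3: parse the modified filter string (parameter name assumed).
  DATA(lt_options) = /iwcor/cl_odata_expr_utils=>get_filter_select_options( lv_filter ).
  " Step 4: restore the original property name on each returned select option.
  DATA ls_option LIKE LINE OF lt_options.
  LOOP AT lt_options INTO ls_option.
    ls_option-property = to_upper( iv_property ). " assuming uppercase property names
    APPEND ls_option TO rt_select_options.
  ENDLOOP.
ENDMETHOD.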
I can't find the source, but I recall that multiple "ne" operators aren't supported. Isn't that the same thing that happens when you use multiple negatives in SE16, where a warning is displayed?
I found this extract for Business ByDesign:
Excluding two values using the OR operator (for example: $filter=CACCDOCTYPE ne ‘1000’ or CACCDOCTYPE ne ‘4000’) is not possible.
The workaround I see is to select the categories you actively want in the UI5 app, rather than excluding the ones you don't.
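Applied to the example request above, that would mean something like the following, where the two eq values are placeholders for whatever categories should remain:
XYZ/SmartChartSet?$filter=(Category eq 'Tablet' or Category eq 'Monitor') and Purchaser eq 'CompanyABC' and BuyDate eq datetime'2018-10-12T02:00:00'&$inlinecount=allpages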
I can also confirm that the code snippet I've used for a long time for filtering has the same problem...
* <SIGNATURE>---------------------------------------------------------------------------------------+
* | Instance Public Method ZCL_MGW_ABS_DATA->FILTERING
* +-------------------------------------------------------------------------------------------------+
* | [--->] IO_TECH_REQUEST_CONTEXT TYPE REF TO /IWBEP/IF_MGW_REQ_ENTITYSET
* | [<-->] CR_ENTITYSET TYPE REF TO DATA
* | [!CX!] /IWBEP/CX_MGW_BUSI_EXCEPTION
* | [!CX!] /IWBEP/CX_MGW_TECH_EXCEPTION
* +--------------------------------------------------------------------------------------</SIGNATURE>
METHOD FILTERING.
  FIELD-SYMBOLS <lt_entityset> TYPE STANDARD TABLE.
  ASSIGN cr_entityset->* TO <lt_entityset>.
  CHECK: cr_entityset IS BOUND,
         <lt_entityset> IS ASSIGNED.
  DATA(lo_filter) = io_tech_request_context->get_filter( ).
  /iwbep/cl_mgw_data_util=>filtering(
    exporting it_select_options = lo_filter->get_filter_select_options( )
    changing  ct_data           = <lt_entityset> ).
ENDMETHOD.

Wireshark: display filters vs nested dissectors

I have an application that sends JSON objects over AMQP, and I want to inspect the network traffic with Wireshark. The AMQP dissector gives the payload as a series of bytes in the field amqp.payload, but I'd like to extract and filter on specific fields in the JSON object, so I'm trying to write a plugin in Lua for that.
Wireshark already has a dissector for JSON, so I was hoping to piggy-back on that, and not have to deal with JSON parsing myself.
Here is my code:
local amqp_json_p = Proto("amqp_json", "AMQP JSON payload")
local amqp_json_result = ProtoField.string("amqp_json.result", "Result")
amqp_json_p.fields = { amqp_json_result }
register_postdissector(amqp_json_p)
local amqp_payload_f = Field.new("amqp.payload")
local json_dissector = Dissector.get("json")
local json_member_f = Field.new("json.member")
local json_string_f = Field.new("json.value.string")
function amqp_json_p.dissector(tvb, pinfo, tree)
  local amqp_payload = amqp_payload_f()
  if amqp_payload then
    local payload_tvbrange = amqp_payload.range
    if payload_tvbrange:range(0,1):string() == "{" then
      json_dissector(payload_tvbrange:tvb(), pinfo, tree)
      -- So far so good. Let's look at what the JSON dissector came up with.
      local members = { json_member_f() }
      local strings = { json_string_f() }
      local subtree = tree:add(amqp_json_p)
      for k, member in pairs(members) do
        if member.display == 'result' then
          for _, s in ipairs(strings) do
            -- Find the string value inside this member
            if not (s < member) and (s <= member) then
              subtree:add(amqp_json_result, s.range)
              break
            end
          end
        end
      end
    end
  end
end
(To start with, I'm just looking at the result field, and the payload I'm testing with is {"result":"ok"}.)
It gets me halfway there. The following shows up in the packet dissection, whereas without my plugin I only get the AMQP section:
Advanced Message Queueing Protocol
    Type: Content body (3)
    Channel: 1
    Length: 15
    Payload: 7b22726573756c74223a226f6b227d
JavaScript Object Notation
    Object
        Member Key: result
            String value: ok
            Key: result
AMQP JSON payload
    Result: "ok"
Now I want to be able to use these new fields as display filters, and also to add them as columns in Wireshark. The following work for both:
json (shows up as Yes when added as a column)
json.value.string (I can also filter with json.value.string == "ok")
amqp_json
But amqp_json.result doesn't work: if I use it as a display filter, Wireshark doesn't show any packets, and if I use it as a column, the column is empty.
Why does it behave differently for json.value.string and amqp_json.result? And how can I achieve what I want? (It seems like I do need a custom dissector, as with json.value.string I can only filter on any member having a certain value, not necessarily result.)
I found a thread on the wireshark-dev mailing list ("Lua post-dissector not getting field values", 2009-09-17, 2009-09-22, 2009-09-23), that points to the interesting_hfids hash table, but it seems like the code has changed a lot since then.
If you'd like to try this, here is my PCAP file, base64-encoded, containing a single packet:
1MOyoQIABAAAAAAAAAAAAAAABAAAAAAAjBi1WfYOCgBjAAAAYwAAAB4AAABgBMEqADcGQA
AAAAAAAAAAAAAAAAAAAAEAAAAAAAAAAAAAAAAAAAAB/tcWKO232y46mkSqgBgxtgA/AAAB
AQgKRjDNvkYwzb4DAAEAAAAPeyJyZXN1bHQiOiJvayJ9zg==
Decode with base64 -d (on Linux) or base64 -D (on OSX).
It turns out I shouldn't have tried to compare the display property of the json.member field. Sometimes it gets set by the JSON dissector, and sometimes it just stays as Member.
The proper solution would involve checking the value of the json.key field (a sketch follows), but since the key I'm looking for presumably would never get escaped, I can get away with looking for the string literal in the range property of the member field.
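For reference, the json.key variant would look something like this (an untested sketch; it assumes the key's range is contained in the member's range, the same containment trick used above for the string value):
local json_key_f = Field.new("json.key")
-- Inside the dissector, instead of comparing member.display:
for _, key in ipairs({ json_key_f() }) do
  -- Is this key inside the current member, and is its value 'result'?
  if not (key < member) and (key <= member) and key.display == 'result' then
    -- this is the member we want
  end
end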
So instead of:
if member.display == 'result' then
I have:
if member.range:range(1, 6):string() == 'result' then
and now both filtering and columns work.

How to remove non-ASCII characters from MQ messages with ESQL

CONCLUSION:
For some reason the flow wouldn't let me convert the incoming message to a BLOB by changing the Message Domain property of the Input Node, so I added a Reset Content Descriptor node before the Compute Node with the code from the accepted answer. On the line that parses the XML and creates the XMLNSC child for the message I was getting a 'CHARACTER:Invalid wire format received' error, so I took that line out and added another Reset Content Descriptor node after the Compute Node instead. Now it parses and replaces the Unicode characters with spaces, so it no longer crashes.
Here is the code for the added Compute Node:
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
  DECLARE NonPrintable BLOB X'0001020304050607080B0C0E0F101112131415161718191A1B1C1D1E1F7F808182838485868788898A8B8C8D8E8F909192939495969798999A9B9C9D9E9FA0A1A2A3A4A5A6A7A8A9AAABACADAEAFB0B1B2B3B4B5B6B7B8B9BABBBCBDBEBFC0C1C2C3C4C5C6C7C8C9CACBCCCDCECFD0D1D2D3D4D5D6D7D8D9DADBDCDDDEDFE0E1E2E3E4E5E6E7E8E9EAEBECEDEEEFF1F2F3F4F5F6F7F8F9FAFBFCFDFEFF';
  DECLARE Printable BLOB X'20202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020202020';
  DECLARE Fixed BLOB TRANSLATE(InputRoot.BLOB.BLOB, NonPrintable, Printable);
  SET OutputRoot = InputRoot;
  SET OutputRoot.BLOB.BLOB = Fixed;
  RETURN TRUE;
END;
UPDATE:
The message is being parsed as XML using XMLNSC. Thought that would cause a problem, but it does not appear to be.
Now I'm using PHP. I've created a node to plug into the legacy flow. Here's the relevant code:
class fixIncompetence {
    function evaluate ($output_assembly, $input_assembly) {
        $output_assembly->MRM  = $input_assembly->MRM;
        $output_assembly->MQMD = $input_assembly->MQMD;
        $tmp = htmlentities($input_assembly->MRM->VALUE_TO_FIX, ENT_HTML5|ENT_SUBSTITUTE, 'UTF-8');
        if (!empty($tmp)) {
            $output_assembly->MRM->VALUE_TO_FIX = $tmp;
        }
        // Ensure there are no null MRM fields. MessageBroker is strict.
        foreach ($output_assembly->MRM as $key => $val) {
            if (empty($val)) {
                $output_assembly->MRM->$key = '';
            }
        }
    }
}
Right now I'm getting a vague error about read-only messages, but before that it wasn't working either.
Original Question:
For some reason I am unable to impress upon the senders of our MQ messages that smart quotes, en dashes, em dashes, and such crash our XML parser.
I managed to make a working solution with SQL queries, but it wasted too many resources. Here's the last thing I tried, but it didn't work either:
CREATE FUNCTION CLEAN(IN STR CHAR) RETURNS CHAR BEGIN
  SET STR = REPLACE('–',STR,'&ndash;');
  SET STR = REPLACE('—',STR,'&mdash;');
  SET STR = REPLACE('·',STR,'&middot;');
  SET STR = REPLACE('“',STR,'&ldquo;');
  SET STR = REPLACE('”',STR,'&rdquo;');
  SET STR = REPLACE('‘',STR,'&lsquo;');
  SET STR = REPLACE('’',STR,'&rsquo;');
  SET STR = REPLACE('•',STR,'&bull;');
  SET STR = REPLACE('°',STR,'&deg;');
  RETURN STR;
END;
As you can see I'm not very good at this. I have tried reading about
various ESQL string functions without much success.
So in ESQL you can use the TRANSLATE function.
The following is a snippet I use to clean up a BLOB containing non-ASCII low hex values so that it can then be cast into a usable character string.
You should be able to modify it to change your undesired characters into something more benign. Basically, each hex value in NonPrintable gets translated into its positional equivalent in Printable, in this case always a full stop, i.e. X'2E' in ASCII. You'll need to make your BLOBs long enough to cover the desired range of hex values.
DECLARE NonPrintable BLOB X'000102030405060708090A0B0C0D0E0F101112131415161718191A1B1C1D1E1F202122232425262728292A2B2C2D2E2F303132333435363738393A3B3C3D3E3F';
DECLARE Printable BLOB X'2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E2E';
SET WorkBlob = TRANSLATE(WorkBlob, NonPrintable, Printable);
BTW, if messages with invalid characters only come in every now and then, I'd probably specify BLOB on the input node and then use something similar to the following to invoke the XMLNSC parser:
CREATE LASTCHILD OF OutputRoot DOMAIN 'XMLNSC'
PARSE(InputRoot.BLOB.BLOB CCSID InputRoot.Properties.CodedCharSetId ENCODING InputRoot.Properties.Encoding);
With the exception terminal wired up, you can then correct the BLOBs of any messages containing parser-breaking invalid characters before attempting to reparse.
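Putting those two pieces together, a correct-and-reparse Compute Node might look something like this (an untested sketch; NonPrintable and Printable are the BLOB constants declared in the TRANSLATE snippet above, and the input node is assumed to be in the BLOB domain):
CREATE FUNCTION Main() RETURNS BOOLEAN
BEGIN
  -- Translate the offending bytes first, then reparse the corrected bit stream as XML.
  DECLARE Fixed BLOB TRANSLATE(InputRoot.BLOB.BLOB, NonPrintable, Printable);
  SET OutputRoot.Properties = InputRoot.Properties;
  SET OutputRoot.MQMD = InputRoot.MQMD;
  CREATE LASTCHILD OF OutputRoot DOMAIN 'XMLNSC'
    PARSE(Fixed CCSID InputRoot.Properties.CodedCharSetId ENCODING InputRoot.Properties.Encoding);
  RETURN TRUE;
END;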
Finally, my best wishes; I've had a number of battles over the years with being forced to correct invalid message content in the "Integration Layer" - after all, that's what it's meant to do.
