I am running a simple insert query inside a stored procedure, using to_timestamp_ntz("column value") along with other columns.
This works fine when I run it in the Snowflake UI, logged in with my account.
It also works fine when I call it from Python scripts in my Visual Studio instance.
The same stored procedure fails when it is being called by a scheduled task.
I am wondering whether it has something to do with the user's timezone of 'System' vs. my own time zone.
Execution error in store procedure LOAD_Data(): Failed to cast variant
value "2019-11-27T13:42:03.221Z" to TIMESTAMP_NTZ At
Statement.execute, line 24 position 57
I tried providing the timezone as a session parameter in the task and in the stored procedure, but that does not seem to address the issue. Any ideas?
I'm guessing (since you didn't include the SQL statement that causes the error) that you are trying to bind a Date object when creating a Statement object. That won't work.
The only parameters you can bind are numbers, strings, null, and the special SfDate object that you can only get from a result set (to my knowledge). Most other parameters must be converted to strings using mydate.toJSON(), JSON.stringify(myobj), etc., before binding, e.g.:
var stmt = snowflake.createStatement(
  { sqlText: `SELECT :1::TIMESTAMP_LTZ NOW`, binds: [(new Date).toJSON()] }
);
Errors involving Date objects can be misleading, because the Date object causing the error may be converted to a string and displayed as such in the error message.
I found the issue:
my task was created from a copy-paste similar to this:
CREATE TASK TASK_LOAD_an_sp
  WAREHOUSE = COMPUTE_WH
  TIMEZONE = 'US/Eastern'
  SCHEDULE = 'USING CRON 0/30 * * * * America/New_York'
  TIMESTAMP_INPUT_FORMAT = 'YYYY-MM-DD HH24'
AS
  CALL LOAD_an_sp();
The TIMESTAMP_INPUT_FORMAT parameter was causing this: with that format set for the task's session, the ISO-8601 value "2019-11-27T13:42:03.221Z" could no longer be cast to TIMESTAMP_NTZ.
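For reference, a sketch of the same task with that parameter removed (assuming nothing else relies on a non-default timestamp input format):

CREATE TASK TASK_LOAD_an_sp
  WAREHOUSE = COMPUTE_WH
  TIMEZONE = 'US/Eastern'
  SCHEDULE = 'USING CRON 0/30 * * * * America/New_York'
  -- TIMESTAMP_INPUT_FORMAT omitted so the default 'AUTO' parsing applies
AS
  CALL LOAD_an_sp();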
I'm new to Erlang and am running into an error with records in one of my modules. I'm modeling ships inside a shipping_state, and I want to write a simple function that prints the ship id, name, and container cap of a given ship, based on its ID. I used list:keyfind, as I believe it will help, but perhaps I am not using it correctly. I have a .hrl file that contains the record declarations and a .erl file with the function and the initialization of my #shipping_state.
shipping.erl:
-module(shipping).
-compile(export_all).
-include_lib("./shipping.hrl").
get_ship(Shipping_State, Ship_ID) ->
    {id, name, containercap} = list:keyfind(Ship_ID, 1, Shipping_State#shipping_state.ships).
shipping.hrl:
-record(ship, {id, name, container_cap}).
-record(container, {id, weight}).
-record(shipping_state,
{
ships = [],
containers = [],
ports = [],
ship_locations = [],
ship_inventory = maps:new(),
port_inventory = maps:new()
}
).
-record(port, {id, name, docks = [], container_cap}).
Result:
shipping:get_ship(shipping:init(),1).
** exception error: {badrecord,shipping_state}
in function shipping:get_ship/2 (shipping.erl, line 18)
I'd like to think that keyfind should work and that something is simply wrong with the syntax of the tuple {id, name, containercap}, but if I need to completely rethink how to approach this problem, any assistance would be greatly appreciated.
Edit:
I've modified my code to follow Alexey's suggestions; however, I still get the same error. Any further insights?
get_ship(Shipping_State, Ship_ID) ->
    {ship, Id, Name, Containercap} = list:keyfind(Ship_ID, 2, Shipping_State#shipping_state.ships),
    io:format("id = ~w, name = ~s, container cap = ~w", [Id, Name, Containercap]).
See Internal Representation of Records: #ship{id=1,name="Santa Maria",container_cap=20} becomes {ship, 1, "Santa Maria", 20}, so the id is the 2nd element, not the first one.
{id, name, containercap} = ...
should be
#ship{id=Id, ...} = ...
or
{ship, Id, Name, Containercap} = ...
Your current code would only succeed if keyfind returned a tuple of 3 atoms.
The error {badrecord,shipping_state} is telling you that the code of get_ship expects its first argument to be a #shipping_state, but you pass {ok, #shipping_state{...}} (the result of init).
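A minimal call-site sketch, assuming init/0 returns an {ok, State} tuple as described:

%% unpack the {ok, State} result before handing the state to get_ship/2
{ok, State} = shipping:init(),
shipping:get_ship(State, 1).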
Records were added to the Erlang language because dealing with tuples fields by number was error-prone, especially as code changed during development and tuple fields were added, changed, or dropped. Don't use numbers to identify record fields, and don't treat records using their underlying tuple representation, as both work against the purpose of records and are unnecessary.
In your code, rather than using record field numbers with lists:keyfind/3, use the record names themselves. I've revised your get_ship/2 function to do this:
get_ship(Shipping_State, Ship_ID) ->
    #ship{id=ID, name=Name, container_cap=ContainerCap} = lists:keyfind(Ship_ID, #ship.id, Shipping_State#shipping_state.ships),
    io:format("id = ~w, name = ~s, container cap = ~w~n", [ID, Name, ContainerCap]).
The syntax #<record_name>.<record_field_name> provides the underlying record field number. In the lists:keyfind/3 call above, #ship.id provides the field number for the id field of the ship record. This will continue to work correctly even if you add fields to the record, and unlike a raw number it will cause a compilation error should you decide to drop that field from the record at some point.
If you load your record definitions into your shell using the rr command, you can see that #ship.id returns the expected field number:
1> rr("shipping.hrl").
[container,port,ship,shipping_state]
2> #ship.id.
2
With the additional repairs to your function above to handle the returned record correctly, it now works as expected, as this shell session shows:
3> {ok, ShippingState} = shipping:init().
{ok,{shipping_state,[{ship,1,"Santa Maria",20},
{ship,2,"Nina",20},
{ship,3,"Pinta",20},
{ship,4,"SS Minnow",20},
{ship,5,"Sir Leaks-A-Lot",20}],
[{container,1,200},
...
4> shipping:get_ship(ShippingState, 1).
id = 1, name = Santa Maria, container cap = 20
ok
Alexey's answer addresses your question, in particular the 3rd point. I just want to suggest an improvement to your keyfind call. You need to pass the tuple index to it, but you can use record syntax to get that index without hard-coding it, like this:
lists:keyfind(Ship_ID, #ship.id, Shipping_State#shipping_state.ships),
#ship.id returns the index of the id field, in this case 2. This makes it easier to read the code - no need to wonder what the constant 2 is for. Also, if for whatever reason you change the order of fields in the ship record, this code will still compile and do the right thing.
My problem is the following:
I have one report called Y5000112.
My colleagues always execute it manually once with selection screen variant V1 and then execute it a second time with variant V2 adding the results of the first execution to the selection.
Those results in this case are PERNR.
My goal:
Automate this - execute that query twice with one click and automatically fill the PERNR selection of the second execution with the PERNR results of the first execution.
I found out how to trigger one report execution after another and how to set each to a certain variant, and I got this far. [EDIT] After the first answer I got a bit further, but I still have no idea how to loop through my results and put them into the next report SUBMIT:
DATA: t_list TYPE TABLE OF abaplist.
* lt_seltab TYPE TABLE OF rsparams,
* ls_selline LIKE LINE OF lt_seltab.
SUBMIT Y5000114
USING SELECTION-SET 'MA OPLAN TEST'
EXPORTING LIST TO MEMORY
AND RETURN.
CALL FUNCTION 'LIST_FROM_MEMORY'
TABLES
listobject = t_list
EXCEPTIONS
not_found = 1
OTHERS = 2.
IF sy-subrc <> 0.
WRITE 'Unable to get list from memory'.
ELSE.
* I want to fill ls_seltab here with all pernr (table pa0020) but I haven't got a clue how to do this
* LOOP AT t_list.
* WRITE /t_list.
* ENDLOOP.
SUBMIT Y5000114
* WITH-SELECTION-TABLE ls_seltab
USING SELECTION-SET 'MA OPLAN TEST2'
AND RETURN.
ENDIF.
P.S.
I'm not very familiar with ABAP, so if I didn't provide enough information just let me know in the comments and I'll try to find out whatever you need to know in order to solve this.
Here's my imaginary JS-Code that can express very generally what I'm trying to accomplish.
function submitAndReturnExport(Reportname,VariantName,OptionalPernrSelection)
{...return resultObject;}
var t_list = submitAndReturnExport("Y5000114","MA OPLAN TEST");
var pernrArr = [];
for (var i in t_list)
{
pernrArr.push(t_list[i]["pernr"]);
}
submitAndReturnExport("Y5000114","MA OPLAN TEST2",pernrArr);
It's not as easy as it's supposed to be, so there won't be any one-line snippet. There is no standard way of getting results from a report. Try the EXPORTING LIST TO MEMORY clause, but consider that the report may need to be adapted:
SUBMIT [report_name]
WITH SELECTION-TABLE [rspar_tab]
EXPORTING LIST TO MEMORY
AND RETURN.
The result of the above statement should be read from memory and adapted for output:
CALL FUNCTION 'LIST_FROM_MEMORY'
  TABLES
    listobject = t_list
  EXCEPTIONS
    not_found  = 1
    OTHERS     = 2.
IF sy-subrc <> 0.
  MESSAGE 'Unable to get list from memory' TYPE 'E'.
ENDIF.

CALL FUNCTION 'WRITE_LIST'
  TABLES
    listobject = t_list
  EXCEPTIONS
    empty_list = 1
    OTHERS     = 2.
IF sy-subrc <> 0.
  MESSAGE 'Unable to write list' TYPE 'E'.
ENDIF.
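Once the PERNR values have been extracted from the list output into an internal table, they could be fed to the second run via a selection table. A rough sketch, assuming a table lt_pernr already holds the extracted personnel numbers and that the select-option on Y5000114's selection screen is called PERNR (both names are assumptions):

TYPES ty_pernr TYPE n LENGTH 8.            " personnel number (NUMC 8)

DATA: lt_seltab  TYPE TABLE OF rsparams,
      ls_selline TYPE rsparams,
      lt_pernr   TYPE TABLE OF ty_pernr,   " assumed: filled with the PERNRs from the first run
      lv_pernr   TYPE ty_pernr.

LOOP AT lt_pernr INTO lv_pernr.
  CLEAR ls_selline.
  ls_selline-selname = 'PERNR'.  " assumed name of the select-option in Y5000114
  ls_selline-kind    = 'S'.      " S = select-option
  ls_selline-sign    = 'I'.
  ls_selline-option  = 'EQ'.
  ls_selline-low     = lv_pernr.
  APPEND ls_selline TO lt_seltab.
ENDLOOP.

SUBMIT y5000114
  WITH SELECTION-TABLE lt_seltab
  USING SELECTION-SET 'MA OPLAN TEST2'
  AND RETURN.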
Another (and, IMHO, more efficient) approach is to gain access to the resulting grid via the class cl_salv_bs_runtime_info. See the example here
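A rough sketch of that approach, assuming Y5000114 presents its results through SALV/ALV (otherwise there is nothing for the class to intercept):

DATA lr_data TYPE REF TO data.
FIELD-SYMBOLS <lt_result> TYPE ANY TABLE.

" suppress the list display and ask the runtime to keep the result data
cl_salv_bs_runtime_info=>set( display  = abap_false
                              metadata = abap_false
                              data     = abap_true ).

SUBMIT y5000114
  USING SELECTION-SET 'MA OPLAN TEST'
  AND RETURN.

TRY.
    " fetch a reference to the grid's data table
    cl_salv_bs_runtime_info=>get_data_ref( IMPORTING r_data = lr_data ).
    ASSIGN lr_data->* TO <lt_result>.
  CATCH cx_salv_bs_sc_runtime_info.
    MESSAGE 'Unable to retrieve ALV data' TYPE 'E'.
ENDTRY.

cl_salv_bs_runtime_info=>clear_all( ).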
P.S. Executing the same report with mutually dependent parameters (output parameters of the 1st run = input parameters for the 2nd) is definitely bad design, and those manipulations should be done internally. If it were up to me, I'd rethink the whole architecture of the report.
What would be the best way to capture the inner text in the following case?
inner_text = any*;
tag_cdata = '<![CDATA[' inner_text >cdata_start %cdata_end ']]>';
The problem is that the cdata_end action seems to fire several times, because inner_text can also match ].
I found the solution. You need to handle non-determinism. It wasn't clear initially, but the correct solution is something like this:
inner_text = any*;
tag_cdata = '<![CDATA[' inner_text >text_begin %text_end ']]>' %cdata_end;
action text_begin {
text_begin_at = p;
}
action text_end {
text_end_at = p;
}
action cdata_end {
delegate.cdata(data.byteslice(text_begin_at, text_end_at-text_begin_at))
}
Essentially, you wait until you are sure you parsed a complete CDATA tag before firing the callback, using information you previously captured.
In addition, I found that some forms of non-determinism in Ragel need to be explicitly handled using priorities. While this seems a bit ugly, it is the only solution in some cases.
When dealing with a pattern such as (a+ >a_begin %a_end | b)* you will find that the events are called for every single a encountered, rather than at the longest sub-sequence. This ambiguity can, in some cases, be solved using the longest-match Kleene star **, which prefers to keep matching the existing pattern rather than wrapping around.
What was surprising to me is that this actually modifies the way events are called, too. As an example, this produces a machine which is unable to buffer more than one character at a time when invoking callbacks:
%%{
machine example;
action a_begin {}
action a_end {}
main := ('a'+ >a_begin %a_end | 'b')*;
}%%
You'll notice that the resulting machine calls a_begin and a_end for every single 'a' it consumes.
In contrast, we can make the inner loop and event handling greedy:
%%{
machine example;
action a_begin {}
action a_end {}
main := ('a'+ >a_begin %a_end | 'b')**;
}%%
which produces a machine that fires a_begin once at the start of a run of 'a' characters and a_end once at its end, rather than for every character.
Which property is the right one for passing tracking events when using Omniture custom link tracking?
Currently I have these three properties:
s.linkTrackVars = 'events,prop55';
s.events = ['event12','some other event'];
s.linkTrackEvents = 'event12';
but I'm not sure whether this is the correct way. Should s.events also be passed to s.linkTrackEvents, like:
s.linkTrackEvents = s.events;
I'm implementing Omniture for a customer, so I don't have access to the Omniture analytics tool.
Any suggestions?
linkTrackVars should be a string value and expects a comma delimited list (no spaces) of each variable you want to track, with no object namespace prefix. This includes the events variable if you are tracking events.
linkTrackEvents should be a string value and expects a comma delimited list (no spaces) of each event you want to track. This should only be the base event itself, not serialization or custom numeric values that you may pop in events. For example, if you have s.events='event1:12345,event2=23'; you should only have s.linkTrackEvents='event1,event2';
events should be a string value and expects a comma delimited list (no spaces) of each event you want to track.
Note: I noticed you have events as an array. Fairly often I see clients do this (and also with linkTrackVars and linkTrackEvents), and then later on within the code (usually within s_doPlugins) have code that converts it to string (e.g. s.events=s.events.join();). It makes it easier to .push() values to it based on whatever logic you have, and this is fine, but to be clear, the official syntax is a comma delimited string, not array, so if you do it as an array, you need to ensure it is converted to a comma delimited string before the s.t or s.tl call. As an alternative, there is an s.apl plugin that handles appending values to the string, even ensuring it is unique in the string.
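For instance, a sketch of that conversion inside s_doPlugins, assuming you build events up as an array elsewhere in your code:

function s_doPlugins(s) {
  // collapse an events array into the comma-delimited string expected
  // by the s.t()/s.tl() calls before the beacon fires
  if (s.events instanceof Array) {
    s.events = s.events.join(',');
  }
}
s.doPlugins = s_doPlugins;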
Examples:
Track event1,event2,prop55
s.prop55='some value';
s.events = 'event1,event2';
s.linkTrackEvents = 'event1,event2';
s.linkTrackVars = 'events,prop55';
Track event1 (serialized), event2, prop55
s.prop55='some value';
s.events = 'event1:12345,event2';
s.linkTrackEvents = 'event1,event2';
s.linkTrackVars = 'events,prop55';
Track event1 (custom increment), event2, prop55
s.prop55='some value';
s.events = 'event1=5,event2';
s.linkTrackEvents = 'event1,event2';
s.linkTrackVars = 'events,prop55';
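For completeness, here is a hedged sketch of how these variables combine with the actual custom link call; the link name 'my custom link' is just illustrative:

s.prop55 = 'some value';
s.events = 'event1,event2';
s.linkTrackEvents = 'event1,event2';
s.linkTrackVars = 'events,prop55';
// 'o' designates a custom link; the last argument is the link name shown in reporting
s.tl(true, 'o', 'my custom link');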