FileHelpers - complex record layout assistance - EDI

We are attempting to use FileHelpers for some file interface projects, specifically dealing with EDI. The sample definition below makes use of the EDI 858 specification. Like most interface specs, each vendor has their own flavor; you can find a sample 858 EDI spec here.
Our vendor has its own flavor; this is the record definition we are currently using, carried over from before the conversion:
H1_Record_type As String ' * 8 EDI858FF
H2_TRANS_DATE As String ' * 8 yyyyMMdd
H3_Trans_type As String ' * 2 00 = New, 01 = Cancel Carrier, 03 = Delete, 04 = Change
H4_Pay_type As String ' * 2 PP = Prepaid, CC = Collect, TP = Third Party (our default is PP)
H5_Load As String ' * 30 Authorization
H6_Load_Ready_date As String ' * 12 yyyyMMddHHmm
H7_Commodity As String ' * 10
H8_Customer_HQ_ID As String ' * 20 hard coded '0000000001'
H9_Customer As String
H10_Mill_ID As String ' * 20 Shipping Perdue Location or Destination Perdue Location
H11_Tender_Method As String ' * 10 blank for now
H12_Carrier_ID As String ' * 10 blank for now
H13_Weight As String ' * 7 estimated total load weight
H14_Weight_Qualifier As String ' * 2 blank for now
H15_Total_Miles As String ' * 7 zero for now
H16_Total_quantity As String ' * 7 blank for now
H17_Weight_Unit As String ' * 1 L = pounds, K = Kilograms, L=default
H18_REF As String ' * 3 REF
HR1_Ref_qualifier As String ' * 3 CO = Deliv Dest ref., P8 = Deliv Orig Ref.
HR2_Ref_value As String ' * 80
H19_END_REF As String ' * 7 END REF
H20_SPEC_INSTRUCTION As String ' * 16 SPEC INSTRUCTION
HS1_Qualifier As String ' * 3 1 = Credit Hold, 7T = Export Order, HM = Hazmat, L = Load ready for pickup
' PTS = Tarp, RO = Hot Load, TAR = Full Tarp, WTV = Weight Verification, DDN = Driver needs TWIC
' PR = Prohibited
H21_END_SPEC As String ' * 20 END SPEC INSTRUCTION
H22_NOTE As String ' * 4 NOTE
HN1_Note_Qualifier As String ' * 3 SPH = Special Handling, PRI = Load Priority, DEL = Transit Days
HN2_Note As String ' * 80
H23_END_NOTE As String ' * 8 END NOTE
H24_EQUIPMENT As String ' * 9 EQUIPMENT
H25_END As String ' * 13 END EQUIPMENT
H26_LOAD_CONTACT As String ' * 12 LOAD CONTACT
LC1_Contact_Name As String ' * 60
LC2_Contact_type As String ' * 2 EM= E-mail, FX = Fax, TE = Telephone
LC3_Contact_Data As String ' * 80
H27_END_LOAD_CONTACT As String ' * 16 END LOAD CONTACT
H28_STOP As String ' * 4 STOP There will always be 2 - CL and CU
S1_Stop_Number As String ' * 2
S2_Stop_Reason As String ' * 2 CL = Complete Load, CU = Complete Unload (one of each required for every load)
S3_LOCATION As String ' * 8 LOCATION
SL1_Location_ID As String ' * 20
SL2_Location_Name As String ' * 60
SL3_Location_Add1 As String ' * 55
SL4_Location_Add2 As String ' * 55
SL5_Location_City As String ' * 30
SL6_Location_State As String ' * 2
SL7_Location_Zip As String ' * 10 (use only 5 digits)
SL8_Location_Country As String ' * 3 USA, CAN, MEX
S4_END_LOCATION As String ' * 12 END LOCATION
S5_STOP_DATE As String ' * 9 STOP DATE
SD1_Date_Qualifier As String ' * 3 37 = No earlier than, 38 = No later than, 10 = Expected arrival time, 35 = Actual arrival
' 11 = Actual departure
SD2_Date_Time As String ' * 12 yyyyMMddHHmm
S6_END_STOP_DATE As String ' * 13 END STOP DATE
S7_STOP_REF As String ' * 8 STOP REF
SR1_Reference_Qualifier As String ' * 3 72 = Transit Time, DK = Dock Number, OQ = Order Number
SR2_Reference_Value As String ' * 80
S8_END_STOP_REF As String ' * 12 END STOP REF
H29_END_STOP As String ' * 8 END STOP
H30_ORDER As String ' * 5 ORDER
O1_Order_Number As String ' * 80
H31_END_ORDER As String ' * 9 END ORDER
This is a sample message; it would normally be one long line:
EDI858FF~20140611~04~PP~1266010982~201406060700~CANOLA
ML~0000000001~Business Name~RICHLAND~~~60000~~0~~L~REF~SA~Customer
Name~END REF~STOP~01~CL~LOCATION~RICHLAND~~~~~~~~END LOCATION~STOP
DATE~37~201406060000~END STOP DATE~STOP REF~OQ~5568~END STOP REF~END
STOP~STOP~02~CU~LOCATION~261450~~~~~~~~END LOCATION~STOP
DATE~37~201406060000~END STOP DATE~STOP REF~OQ~5568~END STOP REF~END
STOP~ORDER~5568~END ORDER
I really think this may be too complex a task for FileHelpers, but I wanted to put it out to the community to see if you could help.
As you can see, the file is mostly tilde-delimited; however, certain fields in the definition also act as delimiters. For instance, REF and STOP both contain additional information that can be many records deep, and you could have multiple STOP definitions, 1 to 999 location records. I am really thinking that this is just too much for FileHelpers...
If you were to attempt this configuration with FileHelpers, where would you start, and how would you handle all the child fields?

FileHelpers is not the ideal tool for such a complex format. These are the reasons:
1. It does not have enough flexibility with regard to identifying and reporting errors.
2. It has difficulties with escaped characters (for example, when a tilde appears within a field and needs to be escaped). I'm not familiar enough with the EDI format to know whether this is likely to be an issue.
3. The master/detail tools provided by FileHelpers are insufficient.
For (1), you don't mention whether you are importing or exporting EDI, or both. If you are only exporting EDI, error reporting would not be necessary.
For (2), you could work around the escaping problems by providing your own custom converters for the problem fields, but you might find that every single field ends up needing a custom converter.
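For illustration, a converter might look roughly like this (a sketch only; FileHelpers' ConverterBase and FieldConverter are real, but the assumption that an embedded tilde is escaped as \~ is mine, not something from the spec):
using FileHelpers;

// Sketch: un-escapes "\~" on import and re-escapes it on export.
// The "\~" escape convention is an assumption, not part of the vendor spec.
public class TildeConverter : ConverterBase
{
    public override object StringToField(string from)
    {
        return from == null ? null : from.Replace("\\~", "~");
    }

    public override string FieldToString(object fieldValue)
    {
        return fieldValue == null ? "" : fieldValue.ToString().Replace("~", "\\~");
    }
}

// Attached to a field in the record class:
// [FieldConverter(typeof(TildeConverter))]
// public string H5_Load;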
For (3), there are two ways of handling master/detail records with FileHelpers, but neither of them is likely to cover your requirements without some serious hacking:
One way would be to use the MasterDetailEngine, but this only supports one detail type and only one level of nesting, so you would have to find workarounds for both of these.
Another way would be to use the MultiRecordEngine. However, it would treat each row as an unrelated record, and the hierarchy (that is, which S record belongs to which H record) would be hard to determine.
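To give an idea of what that looks like, here is a minimal MultiRecordEngine sketch. It assumes the feed has been pre-split so that each segment starts on its own line, and the two record classes are illustrative stubs rather than the full 858 layout:
using System;
using FileHelpers;

[DelimitedRecord("~")]
public class HeaderRecord
{
    // Stub: in practice every field of the segment must be declared
    // (or trailing ones marked [FieldOptional]).
    public string RecordType;   // EDI858FF
    public string TransDate;    // yyyyMMdd
    public string TransType;    // 00, 01, 03, 04
}

[DelimitedRecord("~")]
public class StopRecord
{
    public string Tag;          // STOP
    public string StopNumber;
    public string StopReason;   // CL or CU
}

public class Program
{
    public static void Main()
    {
        var engine = new MultiRecordEngine(typeof(HeaderRecord), typeof(StopRecord));
        engine.RecordSelector = new RecordTypeSelector(CustomSelector);

        // The engine maps each line to a type but hands back a flat object[]:
        // re-linking each StopRecord to its HeaderRecord is still up to you.
        object[] records = engine.ReadFile("edi858.txt");
        foreach (object rec in records)
            Console.WriteLine(rec.GetType().Name);
    }

    private static Type CustomSelector(MultiRecordEngine engine, string recordLine)
    {
        // Stub selector: anything that is not the header is treated as a stop.
        return recordLine.StartsWith("EDI858FF") ? typeof(HeaderRecord) : typeof(StopRecord);
    }
}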

Related

How to filter traded symbols in Metatrader 4 / MQL4

I am searching for a way to consider only real forex pairs in my loop. I don't want CFDs, commodities, silver, gold, etc. to be included, because they follow completely different logic when it comes to calculating pips (I want to use the data for an FX dashboard).
How can I implement such a filter without building if-statements for every existing FX pair?
If possible, the solution should be independent of the broker used (the offered FX pairs might differ from broker to broker, so a broker-agnostic approach would be the all-in solution).
This is my current code, which doesn't separate FX from non-FX:
/*
2.) Create a string format for each pending and running trade
*/
int live_orders = OrdersTotal();
string live_trades = "";
for(int i = live_orders - 1; i >= 0; i--)
{
if(OrderSelect(i,SELECT_BY_POS)==false) continue;
live_trades = live_trades +
"live_trades|" +
version + "|" +
DID + "|" +
AccountNumber() + "|" +
IntegerToString(OrderTicket()) + "|" +
TimeToString(OrderOpenTime(), TIME_DATE|TIME_SECONDS) + "|" +
TimeToString(OrderCloseTime(), TIME_DATE|TIME_SECONDS) + "|" +
IntegerToString(OrderType()) + "|" +
DoubleToString(OrderLots(),2) + "|" +
OrderSymbol() + "|" +
DoubleToString(OrderOpenPrice(),5) + "|" +
DoubleToString(OrderClosePrice(),5) + "|" +
DoubleToString(OrderStopLoss(),5) + "|" +
DoubleToString(OrderTakeProfit(),5) + "|" +
DoubleToString(OrderCommission(),2) + "|" +
DoubleToString(OrderSwap(),2) + "|" +
DoubleToString(OrderProfit(),2) + "|" +
"<" + OrderComment() + ">|";
}
This is probably the easiest way. Prefixed symbols might be a problem (e.g. mEURUSD), but that is easily solved by shifting the StringSubstr offsets by the prefix length (a prefix-aware variant is sketched after the helper functions below). A suffix is not a problem, since we only take the first 6 characters of the symbol string.
const string FX_CURRENCIES[]={"EUR","GBP","USD","NZD","AUD","CHF","CAD","JPY"};//add other currencies if needed
bool isFxPair(const string symbol){
return StringLen(symbol)>=6 && getCurrencyIdx(StringSubstr(symbol,0,3))>=0 &&
       getCurrencyIdx(StringSubstr(symbol,3,3))>=0;
}
int getCurrencyIdx(const string smb){
for(int i=ArraySize(FX_CURRENCIES)-1;i>=0;i--){
if(FX_CURRENCIES[i]==smb)
return i;
}
return -1;
}
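A prefix-aware variant of the same check might look like this (sketch only; the prefix length would have to be configured per broker):
bool isFxPairWithPrefix(const string symbol, const int prefixLen){
   // same logic as isFxPair(), but skipping a broker prefix such as "m" in mEURUSD
   return StringLen(symbol)>=prefixLen+6 &&
          getCurrencyIdx(StringSubstr(symbol,prefixLen,3))>=0 &&
          getCurrencyIdx(StringSubstr(symbol,prefixLen+3,3))>=0;
}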
Using CStringArray and caching the FX symbols might be another idea that could potentially work faster, but it uses much the same logic as above (and you will have to sort the cache every time you add something in order for the CStringArray collection to search quickly).
There is no direct method to ask whether a symbol is FX, a CFD, a stock, crypto, or anything else.

Is there a faster way to generate the required output than using a one-to-many join in Proc SQL?

I require an output that shows the total number of hours worked in a rolling 24-hour window. The data is currently stored such that each row is one hourly slot per person (for example, 7-8am on Jan 2nd), with how much they worked in that hour stored as "Hour". What I need to create is another field that is the sum of the most recent 24 hourly slots (inclusive) for each row. So for the 7-8am example above, I would want the sum of "Hour" across the 24 rows Jan 1st 8-9am, Jan 1st 9-10am, ... Jan 2nd 6-7am, Jan 2nd 7-8am.
Rinse and repeat for each hourly slot.
There are 6000 people, and we have 6 months of data, which means the table has 6000 * 183 days * 24 hours = 26.3m rows.
I am currently doing this using the code below, which works on a sample of 50 people very easily but grinds to a halt when I try it on the full table, somewhat understandably.
Does anyone have any other ideas? All date/time variables are in datetime format.
proc sql;
create table want as
select x.*
, case when Hours_Wrkd_In_Window > 16 then 1 else 0 end as Correct
from (
select a.ID
, a.Start_DTTM
, a.End_DTTM
, sum(b.hours) as Hours_Wrkd_In_Window
from have a
left join have b
on a.ID = b.ID
and b.start_dttm > a.start_dttm - (24 * 60 * 60)
and b.start_dttm <= a.start_dttm
where datepart(a.Start_dttm) >= &report_start_date.
and datepart(a.Start_dttm) < &report_end_date.
group by ID
, a.Start_DTTM
, a.End_DTTM
) x
order by x.ID
, x.Start_DTTM
;quit;
The most performant DATA step solution most likely involves a ring array to track the 1-hour time slots and the hours worked within them. The ring allows a rolling aggregate (sum and count) to be computed based on what goes into and out of the ring.
If you have a wide SAS license, look into the procedures in SAS/ETS (Econometrics and Time Series); Proc EXPAND may have some rolling aggregate capability.
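For what it's worth, an EXPAND attempt might look roughly like this (an untested sketch; MOVSUM aggregates the prior 24 observations, so it only matches a true 24-hour window when every hourly slot is present):
/* Sketch only: requires SAS/ETS; data must be sorted by ID and Start_DTTM */
proc expand data=have out=want_expand method=none;
  by ID;
  id Start_DTTM;
  convert hours = Hours_Wrkd_In_Window / transformout=(movsum 24);
run;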
This sample DATA step code took <10s (WORK folder on SSD) to run on simulated data for 6k people with 6 months of complete coverage of 1-hour time slots.
data have(keep=id start_dt end_dt hours);
do id = 1 to 6000;
do start_dt
= intnx('dtmonth', datetime(), -12)
to intnx('dtmonth', datetime(), -6)
by dhms(0,1,0,0)
;
end_dt = start_dt + dhms(0,1,0,0);
hours = 0.25 * floor (5 * ranuni(123)); * 0, 1/4, 1/2, 3/4 or 1 hour;
output;
end;
end;
format hours 5.2;
run;
/* %let log= ; options obs=50 linesize=200; * submit this (instead of next) if you want to log the logic; */
%let log=*; options obs=max;
data want2(keep=id start_dt end_dt hours hours_rolling_sum hours_rolling_cnt hours_out_:);
array dt_ring(24) _temporary_;
array hr_ring(24) _temporary_;
call missing (of dt_ring(*));
call missing (of hr_ring(*));
if 0 then set have; * prep pdv column order;
hours_rolling_sum = 0;
hours_rolling_cnt = 0;
label hours_rolling_sum = 'Hours worked in prior 24 hours';
index = 0;
do until (last.id);
set have;
by id start_dt;
index + 1;
if index > 24 then index = 1;
hours_out_sum = 0;
hours_out_cnt = 0;
do clear = 1 by 1 until (clear=0);
if sum (dt_ring(index), 0) = 0 then do;
* index is first go through ring array, or hit a zeroed slot;
&log putlog 'NOTE: ' index= 'clear for empty ring item. ';
clear = 0;
end;
else
if start_dt - dt_ring(index) >= %sysfunc(dhms(0,24,0,0)) then do;
&log putlog / 'NOTE: ' index= 'deducting and zeroing.' /;
hours_out_sum + hr_ring(index);
hours_out_cnt + 1;
hours_rolling_sum = hours_rolling_sum - hr_ring(index);
hours_rolling_cnt = hours_rolling_cnt - 1;
dt_ring(index) = 0;
hr_ring(index) = 0;
* advance item to next item, that might also be more than 24 hours ago;
index = index + 1;
if index > 24 then index = 1;
end;
else do;
&log putlog / 'NOTE: ' index= 'back off !' /;
* index was advanced to an item within 24 hours, back off one;
index = index - 1;
if index < 1 then index = 24;
clear = 0;
end;
end; /* do clear */
dt_ring(index) = start_dt;
hr_ring(index) = hours;
hours_rolling_sum + hours;
hours_rolling_cnt + 1;
&log putlog 'NOTE: ' index= 'overlaying and aggregating.' / 'NOTE: ' start_dt= hours= hours_rolling_sum= hours_rolling_cnt=;
output;
end; /* do until */
format hours_rolling_sum 5.2 hours_rolling_cnt 2.;
format hours_out_sum 5.2 hours_out_cnt 2.;
run;
options obs=max;
When reviewing the results, you should notice that the delta for hours_rolling_sum is +(hours in slot) - (hours_out_sum, which is the hours removed from the ring).
If you must use SQL, I would suggest following @jspascal's advice and indexing the table, but rearranging the query to left join the original data to an inner-joined subselect (so that SQL will do an index-assisted hash join on the IDs). For the same small number of people it should be faster than the original query, but it will still be too slow for doing all 6K.
proc sql;
create index id on have (id);
create index id_slot on have (id, start_dt);
quit;
proc sql _method;
reset inobs=50; * limit data so you can see the _method;
create table want as
select
have.*
, case
when ROLLING.HOURS_WORKED_24_HOUR_PRIOR > 16
then 1
else 0
end as REVIEW_TIME_CLOCKING_FLAG
from
have
left join
(
select
EACH_SLOT.id
, EACH_SLOT.start_dt
, count(*) as SLOT_COUNT_24_HOUR_PRIOR
, sum(PRIOR_SLOT.hours) as HOURS_WORKED_24_HOUR_PRIOR
from
have as EACH_SLOT
join
have as PRIOR_SLOT
on
EACH_SLOT.ID = PRIOR_SLOT.ID
and EACH_SLOT.start_dt - PRIOR_SLOT.start_dt between 0 and %sysfunc(dhms(0,24,0,0))-0.1
group by
EACH_SLOT.id, EACH_SLOT.start_dt
) as ROLLING
on
have.ID = ROLLING.ID
and have.start_dt = ROLLING.start_dt
order by
id, start_dt
;
%put NOTE: SQLOOPS = &SQLOOPS;
quit;
The inner join is pyramid-like and still involves a lot of internal looping.
A compound index on the columns being accessed in the joined table - id + start_dttm + hours - would be useful if there isn't one already.
Using msglevel=i will print some diagnostics about how the query is executed. It may give some additional hints.
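Something along these lines (a sketch; for a composite index the index name is arbitrary):
options msglevel=i;  /* log index usage and other planner notes */
proc sql;
  create index id_slot_hours on have (id, start_dttm, hours);
quit;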

Making a print function repeat on a new line every time it prints

So I want the final print call to print its output on a new line every time it runs. I've tried various "\n" placements to make it work, but to no avail. Any tips?
from datetime import date
currentYear = date.today().year
print('Hi. What is your name?')
name = input()
while True:
    try:
        print('How old are you, ' + name + '?')
        age = int(input())
        if age >= 0:
            break
        else:
            print('That is not a valid number.')
    except ValueError:
        print('That is not a valid number')
ageinHundred = 100 - int(age)
y = currentYear + int(ageinHundred)
t = 'You will be 100 years old in the year ' + str(int((y)))
print(t)
print('Give me another number')
num = input()
f = (int(num) * t)
print(f)
I want the final print call (print(f)) to print the message multiple times, each time on a new line, not all run together as the code above does now.
Thanks!
Change the last couple of lines to:
# Put t inside a list so it does list multiplication instead
# of string multiplication
f = int(num) * [t]
# Then join the individual f-lists with newlines and print
print("\n".join(f))
For the f = line, inspect f to get a better idea of what's going on there.
For the join part: join takes a list of strings, inserts the given separator between them (in this case "\n", a newline), and "joins" them all into one string. Get used to using join; it is a very helpful method.
Try this:
from datetime import date
currentYear = date.today().year
print('Hi. What is your name?')
name = input()
while True:
    try:
        print('How old are you, ' + name + '?')
        age = int(input())
        if age >= 0:
            break
        else:
            print('That is not a valid number.')
    except ValueError:
        print('That is not a valid number')
ageinHundred = 100 - int(age)
y = currentYear + int(ageinHundred)
t = 'You will be 100 years old in the year ' + str(int((y)))
print(t)
print('Give me another number')
num = input()
for i in range(0, int(num)):
    print(t)

Can't read Com port with QB64

I am using an Arduino as a slave just to read the voltage of a battery.
The program on the Arduino is in C; the computer program is written in QB64.
It is connected to USB COM port #3. The slave program and QB64 work fine for making pin 13 blink, but when I ask for a voltage all I get is 10 empty spaces and no data.
The following QB64 code is my attempt to read it; the comments show some of the results:
CLS
ctr = 1 ' Counts number times you press continue
again:
INPUT "Press any key to continue"; d$ ' Run program to receive voltage
OPEN "Com3:9600,n,8,1,ds0,cs0,rs" FOR RANDOM AS #1
_DELAY .5 ' delay to let com settle
FIELD #1, 35 AS m$
' **** Form Message to send to External Device ****
Command = 2: pin = 4: argument = 0 ' Code to Get analog voltage reading
msg$ = STR$(Command) + "," + STR$(pin) + "," + STR$(argument) + "*"
PRINT "msg$ Sent= "; msg$, CHR$(13) ' Show Message
LSET m$ = msg$
PUT #1, , m$ ' Send message to the external device
_DELAY 1
'***** C code in external device *****
'External device responds to message & prints to Com3 buffer.
' if ( cmd == 2)//getanalog
' {
' int sensorValue = analogRead(pin);//
' delay(10);
' float voltage= sensorValue*(5.0/1023.0);
' Serial.print("Voltage =");
' Serial.print(voltage);
' Serial.print(",");
' Serial.print("*");
' }
CLOSE #1
m$ = "": msg$ = "" ' Wipe out old messages
OPEN "Com3:9600,n,8,1,ds0,cs0,rs" FOR RANDOM AS #1
_DELAY .5 ' Delay to let Com Port settle
FIELD #1, 50 AS m$
t1 = TIMER ' Start time for getdata
getdata:
bytes% = LOC(1) ' Check receive buffer for data
IF bytes% = 0 THEN GOTO getdata 'After thousands of cycles it continues
PRINT "Bytes="; bytes ' Bytes remain at 0
DO UNTIL EOF(1)
GET #1, , m$ ' First GET #1 returns EOF=-1
PRINT "LEN(m$)= "; LEN(m$) ' Length of m$ = 10
IF INSTR(m$, "*") > 0 THEN GOTO duh ' Never finds End of Message (*)
LOOP
IF bytes% = 0 THEN GOTO getdata
t2 = TIMER ' End time, Have data
DeltaT = t2 - t1 ' How long did it take?
GOTO stepover ' If arrived here w/o End of Message signal
duh:
PRINT "DUH instr= "; INSTR(m$, "*") ' Never hits this line
stepover:
tmp$ = "DeltaT= #.### Seconds Until LOC(1) > 0" ' Various times
PRINT USING tmp$; DeltaT
PRINT "m$ Received= "; m$ + "&" ' Prints 10 spaces No Data
PRINT "endofdata= "; endofdata ' endofdata=0, Never got "*"
PRINT "ctr= "; ctr, CHR$(13) ' Show number of times you pressed continue
CLOSE #1
ctr = ctr + 1
GOTO again
Also included in the comments is the slave program that writes to the COM port.
Some lines of QB64 code are not necessary but were included as debugging attempts.

Passing in variables to SQL Server stored procedure, where clause

I am writing an application responsible for archiving data, and we have the configuration in a database table:
Id | TableName | ColumnName | RetentionAmountInDays
1 | DeviceData | MetricTime | 3
So when faced with this configuration, I should archive all data in the DeviceData table where the MetricTime value is before 3 days ago.
The reason I am doing this dynamically is that the table names and column names differ (there would be multiple configuration rows).
For each configuration row, this stored procedure is called:
CREATE PROCEDURE GetDynamicDataForArchive
    @TableName nvarchar(100),
    @ColumnName nvarchar(100),
    @OlderThanDate datetime2(7)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(1000);
    SET @sql = 'SELECT * FROM ' + @TableName + ' WHERE ' + @ColumnName + ' < @OlderThanDate';
    exec sp_executesql @sql;
END
And an example exec line
exec dbo.GetDynamicDataForArchive 'DeviceData', 'MetricTime', '2017-04-16 20:29:29.647'
This results in:
Conversion failed when converting date and/or time from character string.
So something is up with how I am passing in the datetime2 or how I am forming the where clause.
Replace this statement:
SET @sql = 'SELECT * FROM ' + @TableName + ' WHERE ' + @ColumnName + ' < @OlderThanDate'
with this:
SET @sql = 'SELECT * FROM ' + @TableName + ' WHERE [' + @ColumnName + '] < ''' + cast(@OlderThanDate as varchar(23)) + '''';
I don't particularly like having to convert the datetime to a varchar value, though; perhaps there is a better way to do this(?).
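One alternative (a sketch only, not tested against the tables above) is to keep the date as a genuine parameter and only splice in the identifiers, passing the value through sp_executesql's parameter list; QUOTENAME also protects the dynamic identifiers:
CREATE PROCEDURE GetDynamicDataForArchive
    @TableName nvarchar(100),
    @ColumnName nvarchar(100),
    @OlderThanDate datetime2(7)
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @sql nvarchar(1000);
    -- Only the identifiers are concatenated; the date stays a typed parameter.
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@TableName)
             + N' WHERE ' + QUOTENAME(@ColumnName) + N' < @OlderThanDate';
    EXEC sp_executesql @sql,
                       N'@OlderThanDate datetime2(7)',
                       @OlderThanDate = @OlderThanDate;
END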
