IMAP STATUS on a non-existent folder

I am getting this IMAP response. Notice the folders INBOX.INBOX.ooo and INBOX.INBOX.dfgdfg. The INBOX.INBOX folder doesn't exist; it is just a placeholder (NonExistent) folder.
C: A00000096 STATUS INBOX (UIDVALIDITY HIGHESTMODSEQ)
S: * STATUS INBOX (UIDVALIDITY 1491227400 HIGHESTMODSEQ 17436)
S: A00000096 OK Status completed (0.001 + 0.000 secs).
C: A00000097 LIST "" "INBOX.*" RETURN (SUBSCRIBED CHILDREN STATUS (UIDVALIDITY))
S: * LIST (\Subscribed \HasNoChildren \UnMarked) "." INBOX.INBOX.ooo
S: * STATUS INBOX.INBOX.ooo (UIDVALIDITY 1491227902)
S: * LIST (\HasNoChildren \UnMarked) "." INBOX.INBOX.dfgdfg
S: * STATUS INBOX.INBOX.dfgdfg (UIDVALIDITY 1491227900)
S: A00000097 OK List completed (0.001 + 0.000 secs).
However, I can select this folder:
C: A00000092 SELECT INBOX.INBOX (CONDSTORE)
S: * OK [CLOSED] Previous mailbox closed.
S: * FLAGS (\Answered \Flagged \Deleted \Seen \Draft)
S: * OK [PERMANENTFLAGS (\Answered \Flagged \Deleted \Seen \Draft \*)] Flags permitted.
S: * 554 EXISTS
S: * 0 RECENT
S: * OK [UIDVALIDITY 1491227400] UIDs valid
S: * OK [UIDNEXT 18263] Predicted next UID
S: * OK [HIGHESTMODSEQ 17436] Highest
S: A00000092 OK [READ-WRITE] Select completed (0.001 + 0.000 secs).
So it appears that INBOX.INBOX is actually the INBOX folder, because the UIDVALIDITY values match and I can see the emails there. This is very confusing and it is messing up my internal cache. Can someone explain what is going on?
UPDATE:
This might give someone some clues:
C: A00000001 NAMESPACE
S: * NAMESPACE (("INBOX." ".")) NIL NIL
S: A00000001 OK Namespace completed (0.001 + 0.000 secs).
C: A00000002 LIST "" "INBOX"
S: * LIST (\HasChildren) "." INBOX
S: A00000002 OK List completed (0.002 + 0.000 + 0.001 secs).

Related

The IMAP server replied to the 'SEARCH' command with a 'BAD' response

I'm trying to fetch a specific number of emails from Gmail using the MailKit .NET library, but there is an error while executing the search command:
The IMAP server replied to the 'SEARCH' command with a 'BAD' response
Here is my code:
var startRange = (page - 1) * pageSize;
var endRange = page * pageSize;
var range = new UniqueIdRange(new UniqueId((uint) startRange), new UniqueId((uint) endRange));
var uIds = Client.Inbox.Search(range, SearchQuery.All).Reverse();
And here is the imap.log file content:
Connected to imaps://imap.gmail.com:993/
S: * OK Gimap ready for requests from 151.240.143.192 a1mb311709115lfh
C: A00000000 CAPABILITY
S: * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 XYZZY SASL-IR AUTH=XOAUTH2 AUTH=PLAIN AUTH=PLAIN-CLIENTTOKEN AUTH=OAUTHBEARER AUTH=XOAUTH
S: A00000000 OK Thats all she wrote! a1mb311709115lfh
C: A00000001 AUTHENTICATE PLAIN AHNhZS5oZW1tYXRpQGdtYWlsLmNvbSAAMDg2ODQxMTkxOTA4NjMx
S: * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH UTF8=ACCEPT LIST-EXTENDED LIST-STATUS LITERAL- SPECIAL-USE APPENDLIMIT=35651584
S: A00000001 OK sae.hemmati#gmail.com authenticated (Success)
C: A00000002 NAMESPACE
S: * NAMESPACE (("" "/")) NIL NIL
S: A00000002 OK Success
C: A00000003 LIST "" "INBOX"
S: * LIST (\HasNoChildren) "/" "INBOX"
S: A00000003 OK Success
C: A00000004 LIST (SPECIAL-USE) "" "*"
S: * LIST (\All \HasNoChildren) "/" "[Gmail]/All Mail"
S: * LIST (\Drafts \HasNoChildren) "/" "[Gmail]/Drafts"
S: * LIST (\HasNoChildren \Sent) "/" "[Gmail]/Sent Mail"
S: * LIST (\HasNoChildren \Junk) "/" "[Gmail]/Spam"
S: * LIST (\Flagged \HasNoChildren) "/" "[Gmail]/Starred"
S: * LIST (\HasNoChildren \Trash) "/" "[Gmail]/Trash"
S: A00000004 OK Success
C: A00000005 LIST "" "[Gmail]"
S: * LIST (\HasChildren \Noselect) "/" "[Gmail]"
S: A00000005 OK Success
C: A00000006 EXAMINE INBOX (CONDSTORE)
S: * FLAGS (\Answered \Flagged \Draft \Deleted \Seen $NotPhishing $Phishing)
S: * OK [PERMANENTFLAGS ()] Flags permitted.
S: * OK [UIDVALIDITY 611076938] UIDs valid.
S: * 16612 EXISTS
S: * 0 RECENT
S: * OK [UIDNEXT 16834] Predicted next UID.
S: * OK [HIGHESTMODSEQ 1442114]
S: A00000006 OK [READ-ONLY] INBOX selected. (Success)
C: A00000007 UID SEARCH RETURN () UID 0:10
S: A00000007 BAD Could not parse command
Any clues?
Thanks in advance!
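One detail worth noting in the log above: the generated command ends in UID 0:10, and 0 is not a valid IMAP UID (UID ranges start at 1), which is a plausible reason for Gmail's BAD "Could not parse command" reply. Below is a minimal sketch of the same paging logic with the range shifted to start at 1; it reuses the page, pageSize and Client names from the snippet above and the MailKit UniqueId/UniqueIdRange types, so treat it as an illustration rather than a confirmed fix:
// Shift the range so the lowest requested UID is 1, never 0.
var startUid = Math.Max(1, (page - 1) * pageSize + 1);
var endUid = Math.Max(startUid, page * pageSize);
var range = new UniqueIdRange(new UniqueId((uint) startUid), new UniqueId((uint) endUid));
var uIds = Client.Inbox.Search(range, SearchQuery.All).Reverse();
Note that UIDs are not guaranteed to be contiguous, so paging by a UID range is only an approximation of "the first N messages"; paging by message index and fetching summaries for that index range is another option.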

How to group a date column based on a date range in Oracle

I have a table which contains feedback about a product. It has a feedback type (positive, negative), which is a text column, and the date on which the comment was made. I need to get the total count of positive and negative feedback for a particular time period. For example, if the date range is 30 days, I need the total count of positive and negative feedback for each of the 4 weeks; if the date range is 6 months, I need the total count of positive and negative feedback for each month. How do I group the count based on the date?
+------+------+----------+----------+---------------+
| Slno | User | Comments | type     | commenteddate |
+------+------+----------+----------+---------------+
| 1    | a    | aaaa     | positive | 22-jun-2016   |
| 2    | b    | bbb      | positive | 1-jun-2016    |
| 3    | c    | qqq      | negative | 2-jun-2016    |
| 4    | d    | ccc      | neutral  | 3-may-2016    |
| 5    | e    | www      | positive | 2-apr-2016    |
| 6    | f    | s        | negative | 11-nov-2015   |
+------+------+----------+----------+---------------+
The query I tried is:
SELECT type, TO_CHAR(commenteddate, 'DD-MM-YYYY'), COUNT(type)
FROM comments
GROUP BY type, TO_CHAR(commenteddate, 'DD-MM-YYYY');
Here's a kick at the can...
Assumptions:
you want to be able to switch the groupings to weekly or monthly only
the start of the first period will be the first date in the feedback data; intervals will be calculated from this initial date
output will show feedback value, time period, count
time periods will not overlap so periods will be x -> x + interval - 1 day
time of day is not important (time for commented dates is always 00:00:00)
First, create some sample data (100 rows):
drop table product_feedback purge;
create table product_feedback
as
select rownum as slno
, chr(65 + MOD(rownum, 26)) as userid
, lpad(chr(65 + MOD(rownum, 26)), 5, chr(65 + MOD(rownum, 26))) as comments
, trunc(sysdate) + rownum + trunc(dbms_random.value * 10) as commented_date
, case mod(rownum * TRUNC(dbms_random.value * 10), 3)
when 0 then 'positive'
when 1 then 'negative'
when 2 then 'neutral' end as feedback
from dual
connect by level <= 100
;
Here's what my sample data looks like:
select *
from product_feedback
;
SLNO USERID COMMENTS COMMENTED_DATE FEEDBACK
1 B BBBBB 2016-08-06 neutral
2 C CCCCC 2016-08-06 negative
3 D DDDDD 2016-08-14 positive
4 E EEEEE 2016-08-16 negative
5 F FFFFF 2016-08-09 negative
6 G GGGGG 2016-08-14 positive
7 H HHHHH 2016-08-17 positive
8 I IIIII 2016-08-18 positive
9 J JJJJJ 2016-08-12 positive
10 K KKKKK 2016-08-15 neutral
11 L LLLLL 2016-08-23 neutral
12 M MMMMM 2016-08-19 positive
13 N NNNNN 2016-08-16 neutral
...
Now for the fun part. Here's the gist:
find out what the earliest and latest commented dates are in the data
include a query where you can set the time period (to "WEEKS" or "MONTHS")
generate all of the (weekly or monthly) time periods between the min/max dates
join the product feedback to the time periods (commented date between start and end) with an outer join in case you want to see all time periods whether or not there was any feedback
group the joined result by feedback, period start, and period end, and set up a column to count one of the 3 possible feedback values
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select 'MONTHS' as tp_interval -- set the interval/time period here
from dual
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, time_period) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE WHEN tpi.tp_interval = 'WEEKS'
THEN mmd.min_date + 7
WHEN tpi.tp_interval = 'MONTHS'
THEN ADD_MONTHS(mmd.min_date, 1)
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval as time_period
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM )
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM)
ELSE NULL
END as start_of_period
, CASE WHEN time_period = 'WEEKS'
THEN start_of_period + 7 * (ROWNUM + 1)
WHEN time_period = 'MONTHS'
THEN ADD_MONTHS(start_of_period, ROWNUM + 1)
ELSE NULL
END - 1 as end_of_period
, max_date
, time_period
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
left outer join
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by pf.feedback
, tp.start_of_period
;
Output:
negative 2016-08-06 2016-09-05 6
negative 2016-09-06 2016-10-05 7
negative 2016-10-06 2016-11-05 8
negative 2016-11-06 2016-12-05 1
neutral 2016-08-06 2016-09-05 6
neutral 2016-09-06 2016-10-05 5
neutral 2016-10-06 2016-11-05 11
neutral 2016-11-06 2016-12-05 2
positive 2016-08-06 2016-09-05 17
positive 2016-09-06 2016-10-05 16
positive 2016-10-06 2016-11-05 15
positive 2016-11-06 2016-12-05 6
-- EDIT --
New and improved: all in one easy-to-use procedure. (I will assume you can configure the procedure to make use of the query in whatever way you need.) I made some changes to simplify the CASE statements in a few places, and note that, for whatever reason, using a LEFT OUTER JOIN in the main SELECT results in an ORA-600 error for me, so I switched it to an INNER JOIN.
CREATE OR REPLACE PROCEDURE feedback_counts(p_days_chosen IN NUMBER, p_cursor OUT SYS_REFCURSOR)
AS
BEGIN
OPEN p_cursor FOR
with
min_max_dates -- get earliest and latest feedback dates
as
(select min(commented_date) min_date, max(commented_date) max_date
from product_feedback
)
, time_period_interval
as
(select CASE
WHEN p_days_chosen BETWEEN 1 AND 10 THEN 'DAYS'
WHEN p_days_chosen > 10 AND p_days_chosen <=31 THEN 'WEEKS'
WHEN p_days_chosen > 31 AND p_days_chosen <= 365 THEN 'MONTHS'
ELSE '3-MONTHS'
END as tp_interval -- set the interval/time period here
from dual --(SELECT p_days_chosen as days_chosen from dual)
)
, -- generate all time periods between the start date and end date
time_periods (start_of_period, end_of_period, max_date, tp_interval) -- recursive with clause - fun stuff!
as
(select mmd.min_date as start_of_period
, CASE tpi.tp_interval
WHEN 'DAYS'
THEN mmd.min_date + 1
WHEN 'WEEKS'
THEN mmd.min_date + 7
WHEN 'MONTHS'
THEN mmd.min_date + 30
WHEN '3-MONTHS'
THEN mmd.min_date + 90
ELSE NULL
END - 1 as end_of_period
, mmd.max_date
, tpi.tp_interval
from time_period_interval tpi
cross join
min_max_dates mmd
UNION ALL
select CASE tp_interval
WHEN 'DAYS'
THEN start_of_period + 1 * ROWNUM
WHEN 'WEEKS'
THEN start_of_period + 7 * ROWNUM
WHEN 'MONTHS'
THEN start_of_period + 30 * ROWNUM
WHEN '3-MONTHS'
THEN start_of_period + 90 * ROWNUM
ELSE NULL
END as start_of_period
, start_of_period
+ CASE tp_interval
WHEN 'DAYS'
THEN 1
WHEN 'WEEKS'
THEN 7
WHEN 'MONTHS'
THEN 30
WHEN '3-MONTHS'
THEN 90
ELSE NULL
END * (ROWNUM + 1)
- 1 as end_of_period
, max_date
, tp_interval
from time_periods
where end_of_period <= max_date
)
-- now put it all together
select pf.feedback
, tp.start_of_period
, tp.end_of_period
, count(*) as feedback_count
from time_periods tp
inner join -- currently a bug that prevents the procedure from compiling with a LEFT OUTER JOIN
product_feedback pf
on pf.commented_date between tp.start_of_period and tp.end_of_period
group by tp.start_of_period
, tp.end_of_period
, pf.feedback
order by tp.start_of_period
, pf.feedback
;
END;
Test the procedure (in something like SQL*Plus or SQL Developer):
var x refcursor
exec feedback_counts(10, :x)
print :x

String variable to log file loses return characters

I am using PowerShell to perform a tracert command and put the result into a variable.
$TraceResult = tracert $server
When using Write-Host, the output looks fine on screen, with each hop on a new line.
Tracing route to fhotmail.com [65.55.39.10]
over a maximum of 30 hops:
1 <1 ms <1 ms <1 ms BTHomeHub.home [192.168.50.1]
2 * * * Request timed out.
3 * * * Request timed out.
4 * * * Request timed out.
5 * * * Request timed out.
When I output the variable to a file using Add-Content, the return characters are lost and the formatting looks terrible, as it's all on one line:
17/06/2014 14:48:26 Tracing route to fhotmail.com [64.4.6.100] over a maximum of 30 hops: 1 <1 ms <1 ms <1 ms BTHomeHub.home [192.168.50.1] 2 * * * Request timed out. 3 * * * Request timed out. 4 * * * Request timed out. 5 * * * Request timed out. 6 * * * Request timed out. 7 * * * Request timed out. 8 * * * Request timed out. 9 * * * Request timed out. 10 * * * Request timed out. 11 * * * Request timed out. 12 * * * Request timed out. 13 * * * Request timed out. 14 * * * Request timed out. 15 * * * Request timed out. 16 * * * Request timed out. 17 * * * Request timed out. 18 * * * Request timed out. 19 * * * Request timed out. 20 * * * Request timed out. 21 * * * Request timed out. 22 * * * Request timed out. 23 * * * Request timed out. 24 * * * Request timed out. 25 * * * Request timed out. 26 * * * Request timed out. 27 * * * Request timed out. 28 * * * Request timed out. 29 * * * Request timed out. 30 * * * Request timed out. Trace complete.
Any idea where I'm going wrong? Here is the full script:
$servers = "localhost","google.com","fhotmail.com"
$Logfile = "c:\temp\$(Get-Content env:computername).log" # Sets the logfile as c:\temp\[COMPUTERNAME].log
# Countdown function with progress bar
Function Start-Countdown
{
Param(
[Int32]$Seconds = 10,
[string]$Message = "Pausing for 10 seconds..."
)
ForEach ($Count in (1..$Seconds))
{ Write-Progress -Id 1 -Activity $Message -Status "Waiting for $Seconds seconds, $($Seconds - $Count) left" -PercentComplete (($Count / $Seconds) * 100)
Start-Sleep -Seconds 1
}
Write-Progress -Id 1 -Activity $Message -Status "Completed" -PercentComplete 100 -Completed
}
# Log Write Function to log a message with date and time (tab delimited)
Function LogWrite
{
Param ([string]$logstring)
Add-content $Logfile -value "$(get-date -UFormat '%d/%m/%Y') `t$(get-date -UFormat '%T') `t$logstring"
}
# Looping script to run ping tests, then countdown for a period, before repeating
while ($true) {
# Ping servers and perform tracert if not responding to ping
foreach ( $server in $servers ) {
write-Host "Pinging: $server" -ForegroundColor Green
if ((test-Connection -ComputerName $server -Count 2 -Quiet) -eq $true ) {
# Ping response received.
write-Host "Response Received Sucessfully [$server].`n" -ForegroundColor Green
}
else {
# Ping response failed, next perform trace route.
write-Host "Response FAILED [$server]." -ForegroundColor RED
LogWrite "Ping Response FAILED [$server]."
Write-Host "Traceroute: $server - $(get-date -UFormat '%d/%m/%Y - %T')" -ForegroundColor Yellow
$TraceResult = tracert $server
LogWrite $TraceResult
Write-Host "Traceroute Completed [$server]`n" -ForegroundColor Yellow
}
}
# Pause before repeating
Start-Countdown -Seconds 60 -Message "Pausing script before restarting tests. Use CTRL-C to Terminate."
}
The output of tracert isn't being handled by PowerShell as a single string, but rather as an array of strings (one per line), and your logging call is forcing them all into a single string (concatenating them, basically).
Assuming you're OK with output like this:
17/06/2014 16:54:07
Tracing route to SERVERNAME [IP]
over a maximum of 30 hops:
1 1 ms <1 ms <1 ms HOP1
2 1 ms <1 ms <1 ms IP
Trace complete.
You'll need to do this:
$TraceResult = tracert $server
LogWrite ($TraceResult -join "`r`n")
This will concatenate all the elements of the $TraceResult array into one string, with CRLF as the EOL marker.

Filehelpers - Complex record layout assistance

We are attempting to use FileHelpers for some file interface projects, specifically dealing with EDI. The sample definition below makes use of the EDI 858 specification. Like most interface specs, each vendor has their own flavor. You can find a sample 858 EDI spec here.
Our vendor has their own flavor; this is the sample record definition we are currently using from before the conversion:
H1_Record_type As String ' * 8 EDI858FF
H2_TRANS_DATE As String ' * 8 yyyyMMdd
H3_Trans_type As String ' * 2 00 = New, 01 = Cancel Carrier, 03 = Delete, 04 = Change
H4_Pay_type As String ' * 2 PP = Prepaid, CC = Collect, TP = Third Party (our default is PP)
H5_Load As String ' * 30 Authorization
H6_Load_Ready_date As String ' * 12 yyyyMMddHHmm
H7_Commodity As String ' * 10
H8_Customer_HQ_ID As String ' * 20 hard coded '0000000001'
H9_Customer As String
H10_Mill_ID As String ' * 20 Shipping Perdue Location or Destination Perdue Location
H11_Tender_Method As String ' * 10 blank for now
H12_Carrier_ID As String ' * 10 blank for now
H13_Weight As String ' * 7 estimated total load weight
H14_Weight_Qualifier As String ' * 2 blank for now
H15_Total_Miles As String ' * 7 zero for now
H16_Total_quantity As String ' * 7 blank for now
H17_Weight_Unit As String ' * 1 L = pounds, K = Kilograms, L=default
H18_REF As String ' * 3 REF
HR1_Ref_qualifier As String ' * 3 CO = Deliv Dest ref., P8 = Deliv Orig Ref.
HR2_Ref_value As String ' * 80
H19_END_REF As String ' * 7 END REF
H20_SPEC_INSTRUCTION As String ' * 16 SPEC INSTRUCTION
HS1_Qualifier As String ' * 3 1 = Credit Hold, 7T = Export Order, HM = Hazmat, L = Load ready for pickup
' PTS = Tarp, RO = Hot Load, TAR = Full Tarp, WTV = Weight Verification, DDN = Driver needs TWIC
' PR = Prohibited
H21_END_SPEC As String ' * 20 END SPEC INSTRUCTION
H22_NOTE As String ' * 4 NOTE
HN1_Note_Qualifier As String ' * 3 SPH = Special Handling, PRI = Load Priority, DEL = Transit Days
HN2_Note As String ' * 80
H23_END_NOTE As String ' * 8 END NOTE
H24_EQUIPMENT As String ' * 9 EQUIPMENT
H25_END As String ' * 13 END EQUIPMENT
H26_LOAD_CONTACT As String ' * 12 LOAD CONTACT
LC1_Contact_Name As String ' * 60
LC2_Contact_type As String ' * 2 EM= E-mail, FX = Fax, TE = Telephone
LC3_Contact_Data As String ' * 80
H27_END_LOAD_CONTACT As String ' * 16 END LOAD CONTACT
H28_STOP As String ' * 4 STOP There will always be 2 - CL and CU
S1_Stop_Number As String ' * 2
S2_Stop_Reason As String ' * 2 CL = Complete Load, CU = Complete Unload (one of each required for every load)
S3_LOCATION As String ' * 8 LOCATION
SL1_Location_ID As String ' * 20
SL2_Location_Name As String ' * 60
SL3_Location_Add1 As String ' * 55
SL4_Location_Add2 As String ' * 55
SL5_Location_City As String ' * 30
SL6_Location_State As String ' * 2
SL7_Location_Zip As String ' * 10 (use only 5 digits)
SL8_Location_Country As String ' * 3 USA, CAN, MEX
S4_END_LOCATION As String ' * 12 END LOCATION
S5_STOP_DATE As String ' * 9 STOP DATE
SD1_Date_Qualifier As String ' * 3 37 = No earlier than, 38 = No later than, 10 = Expected arrival time, 35 = Actual arrival
' 11 = Actual departure
SD2_Date_Time As String ' * 12 yyyyMMddHHmm
S6_END_STOP_DATE As String ' * 13 END STOP DATE
S7_STOP_REF As String ' * 8 STOP REF
SR1_Reference_Qualifier As String ' 3 72 = Transit Time, DK = Dock Number, OQ = Order Number
SR2_Reference_Value As String ' * 80
S8_END_STOP_REF As String ' * 12 END STOP REF
H29_END_STOP As String ' * 8 END STOP
H30_ORDER As String ' * 5 ORDER
O1_Order_Number As String ' * 80
H31_END_ORDER As String ' * 9 END ORDER
This is a sample message; it would normally be one long line:
EDI858FF~20140611~04~PP~1266010982~201406060700~CANOLA
ML~0000000001~Business Name~RICHLAND~~~60000~~0~~L~REF~SA~Customer
Name~END REF~STOP~01~CL~LOCATION~RICHLAND~~~~~~~~END LOCATION~STOP
DATE~37~201406060000~END STOP DATE~STOP REF~OQ~5568~END STOP REF~END
STOP~STOP~02~CU~LOCATION~261450~~~~~~~~END LOCATION~STOP
DATE~37~201406060000~END STOP DATE~STOP REF~OQ~5568~END STOP REF~END
STOP~ORDER~5568~END ORDER
I really think this may be too complex a task for FileHelpers, but I wanted to put it out there to the community to see if you all could help.
As you can see, the file is mostly tilde-delimited; however, certain fields in the definition also act as delimiters. For instance, REF and STOP both contain additional information that could be many records deep. You could have multiple STOP definitions with 1 to 999 location records. I am really thinking that this is just too much for FileHelpers...
If you were to attempt this configuration with FileHelpers, where would you start, and how would you handle all the child fields?
FileHelpers is not the ideal tool for such a complex format. These are the reasons:
(1) It does not have enough flexibility with regard to identifying and reporting errors.
(2) There are difficulties with escaped characters (e.g., when a tilde appears within a field and needs to be escaped). I'm not familiar enough with the EDI format to know whether this is likely to be an issue.
(3) The master/detail tools provided by FileHelpers are insufficient.
For (1), you don't mention whether you are importing or exporting EDI, or both. If you are only exporting EDI, error reporting would not be necessary.
For (2), you could work around the escaping problems by providing your own custom converters for the problem fields, but you might find that every single field needs a custom converter.
For (3), there are two ways of handling master/detail records with FileHelpers, but neither of them is likely to cover your requirements without some serious hacking:
One way would be to use the MasterDetailEngine, but this only supports one detail type and only one level of nesting, so you would have to find workarounds for both of these.
Another way would be to use the MultiRecordEngine. However, it treats each row as an unrelated record, and the hierarchy (that is, which S record belongs to which H record) would be hard to determine. A rough sketch of this approach follows.
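For what it's worth, here is a rough, hypothetical sketch of the MultiRecordEngine approach. It assumes you have already pre-split the single tilde-delimited message into one segment per line (that pre-splitting is exactly the "serious hacking" part), and the record classes below are illustrative stubs, not the vendor's full 858 layout:
using System;
using FileHelpers;

// Illustrative stubs only -- a real implementation would declare every field of each segment.
[DelimitedRecord("~")]
public class HeaderRecord
{
    public string RecordType;          // "EDI858FF"
    public string TransDate;           // yyyyMMdd
    [FieldOptional]
    public string TransType;           // 00 / 01 / 03 / 04
}

[DelimitedRecord("~")]
public class StopRecord
{
    public string Marker;              // "STOP"
    public string StopNumber;          // 01, 02, ...
    [FieldOptional]
    public string StopReason;          // CL / CU
}

public class Edi858Sketch
{
    // Decide which record class to build for each (pre-split) segment line.
    static Type Selector(MultiRecordEngine engine, string recordLine)
    {
        if (recordLine.StartsWith("EDI858FF")) return typeof(HeaderRecord);
        if (recordLine.StartsWith("STOP~"))    return typeof(StopRecord);
        return null; // segments we don't model are skipped
    }

    public static void Main()
    {
        var engine = new MultiRecordEngine(typeof(HeaderRecord), typeof(StopRecord))
        {
            RecordSelector = new RecordTypeSelector(Selector)
        };

        // "segments.txt" is a hypothetical file holding one segment per line.
        object[] records = engine.ReadFile("segments.txt");

        foreach (var record in records)
            Console.WriteLine(record.GetType().Name);
    }
}
Even then, you would still have to stitch the flat list of records back into a hierarchy yourself (for example, attach each StopRecord to the most recent HeaderRecord), which is the limitation described above.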

How to split a string using an integer array?

I am trying to split a string using an integer array as a mask.
The task is simple, but I am not accustomed to Ada (which is a constraint).
Here is my code. It works except that I have a one-character offset when testing against a file. Can someone help me remove this offset? It is driving me nuts.
generic_functions.adb :
package body Generic_Functions is
-- | Sums up the elements of an array of Integers
function Sum_Arr_Int(Arr_To_Sum: Int_Array) return Integer is
Sum: Integer;
begin
Sum := 0;
for I in Arr_To_Sum'Range loop
Sum := Sum + Arr_To_Sum(I);
end loop;
return Sum;
end Sum_Arr_Int;
-- | Splits a String into an array of Unbounded_String following the pattern from an Int_Array
function Take_Enregistrements(Decoup_Tab: Int_Array; Str_To_Read: String) return Str_Array is
Previous, Next : Integer;
Arr_To_Return : Str_Array(Decoup_Tab'Range);
begin
if Sum_Arr_Int(Decoup_Tab) > Str_To_Read'Length then
raise Constraint_Error;
else
Previous := Decoup_Tab'First;
Next := Decoup_Tab(Decoup_Tab'First);
for I in Decoup_Tab'Range loop
if I /= Decoup_Tab'First then
Previous := Next + 1;
Next := (Previous - 1) + Decoup_Tab(I);
end if;
Arr_To_Return(I) := To_Unbounded_String(Str_To_Read(Previous..Next));
end loop;
return Arr_To_Return;
end if;
end Take_Enregistrements;
end Generic_Functions;
generic_functions.ads :
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
package Generic_Functions is
-- | Types
type Int_Array is array(Positive range <>) of Integer;
type Str_Array is array(Positive range <>) of Unbounded_String;
-- | end of Types
-- | Functions
function Sum_Arr_Int(Arr_To_Sum: Int_Array) return Integer;
function Take_Enregistrements(Decoup_Tab: Int_Array; Str_To_Read: String) return Str_Array;
-- | end of Functions
end Generic_Functions;
generic_functions_tests.adb :
with Ada.Text_IO; use Ada.Text_IO;
with Ada.Strings.Unbounded; use Ada.Strings.Unbounded;
with Generic_Functions; use Generic_Functions;
procedure Generic_Functions_Tests is
-- | Variables
Decoup_Test : constant Int_Array(1..8) := (11, 19, 60, 24, 255, 10, 50, 255);
Test_Str_Arr : Str_Array(Decoup_Test'Range);
Test_Str_Arr2 : Str_Array(Decoup_Test'Range);
Test_Str_Arr3 : Str_Array(Decoup_Test'Range);
--Test_Int : Integer;
Test_Handle : File_Type;
-- | end of Variables
begin
Open(Test_Handle, In_File, "EXPORTFINAL.DAT");
Test_Str_Arr := Take_Enregistrements(Decoup_Test, Get_Line(Test_Handle));
Test_Str_Arr2 := Take_Enregistrements(Decoup_Test, Get_Line(Test_Handle));
Test_Str_Arr3 := Take_Enregistrements(Decoup_Test, Get_Line(Test_Handle));
for I in Test_Str_Arr'Range loop
Put_Line(To_String(Test_Str_Arr(I)));
end loop;
for I in Test_Str_Arr2'Range loop
Put_Line(To_String(Test_Str_Arr2(I)));
end loop;
for I in Test_Str_Arr3'Range loop
Put_Line(To_String(Test_Str_Arr3(I)));
end loop;
-- for I in Test_Str_Arr'Range loop
-- Test_Int := To_String(Test_Str_Arr(I))'Length;
-- Put_Line(Integer'Image(Test_Int));
-- end loop;
-- for I in Test_Str_Arr2'Range loop
-- Test_Int := To_String(Test_Str_Arr2(I))'Length;
-- Put_Line(Integer'Image(Test_Int));
-- end loop;
-- for I in Test_Str_Arr3'Range loop
-- Test_Int := To_String(Test_Str_Arr3(I))'Length;
-- Put_Line(Integer'Image(Test_Int));
-- end loop;
Close(Test_Handle);
end Generic_Functions_Tests;
And finally, the file:
000000000012012-01-01 10:00:00 IBM IBM COMPAGNIE IBM FRANCE 17 AVENUE DE l'EUROPE 92275 BOIS-COLOMBES CEDEX CONFIGURATION COMPLETE SERVEUR000000000000000000000019 .6000000000001000000000000000000001000.00000000000000000000000000000196.00000000000000000000000000001196.00000000
000000000022012-01-01 11:00:00 MICROSOFT MSC 39 QUAI DU PRESIDENT ROOSEVELT 92130 ISSY-LES-MOULINEAUX AMENAGEMENT SALLE INFORMATIQUE000000000000000000000019.6000000000001000000000000000000001000.00000000000000000000000000000196.00000000000000000000000000001196.00000000
000000000032012-01-01 12:00:00 MICROSOFT MSC 39 QUAI DU PRESIDENT ROOSEVELT 92130 ISSY-LES-MOULINEAUX TESTS SUR SITE000000000000000000000019.6000000000001000000000000000000003226.52000000000000000000000000000632.39792000000000000000000000003858.91792000 DELEGATION TECHNICIEN HARD000000000000000000000019.60000000000000000000000000000001.00000000000000000000000000001000.00000000000000000000000000000196.00000000000000000000000000001196.00000000
These lines:
if I = Decoup_Tab'Last then
Arr_To_Return(I) := To_Unbounded_String(Str_To_Read(Previous..Next));
end if;
will overwrite the last element in your array.
Also, are you sure that the line number (00000000001, 00000000002, etc.) is one of the strings you want to split based on the integer mask? As your code stands right now, you use '11' twice: once for the line number and once for the date field. If you skip the line number, the other numbers seem to make more sense.

Resources