PowerShell script for data parsing

I am looking for a PowerShell script that could help me with this task:
I've got data like this:
"No.","time","1-1","1-2","1-3","1-4","1-5","1-6","1-7","1-8","1-9","1-10","1-11","1-12","1-13","1-14","1-15","5-1","5-2","5-3","5-4","5-5","5-6","5-7","5-8","5-9","5-11","5-13","5-15","9-1","9-3","9-5","9-7","9-8","9-9","13-1","13-2","13-3","13-4","13-5","13-6","13-7","13-8","13-9","13-10","17-1","17-2","17-3","17-4","17-5","17-6","17-7","17-8","17-9","E1-1","00:FE:FFX(X2049-1)","00:00:8DX(X2050-1)","00:00:8CX(X2051-1)","00:00:8BX(X2052-1)","00:00:8EX(X2053-1)","00:00:8FX(X2054-1)","00:00:97X(X2055-1)","00:00:96X(X2056-1)","00:00:92X(X2057-1)","00:00:99X(X2058-1)","00:00:98X(X2059-1)","00:00:94X(X2060-1)","00:00:93X(X2061-1)","00:00:90X(X2062-1)","00:00:95X(X2063-1)","00:00:91X(X2064-1)","00:00:9FX(X2065-1)","00:00:9CX(X2066-1)","00:00:A0X(X2067-1)","00:00:A1X(X2068-1)","00:00:9AX(X2069-1)","00:00:9EX(X2070-1)","00:00:A5X(X2071-1)","00:00:A3X(X2072-1)","00:00:A4X(X2073-1)","00:00:9BX(X2074-1)","00:00:A2X(X2075-1)","00:02:D2X(X2076-1)","00:00:A6X(X2077-1)","00:00:A7X(X2078-1)","00:01:0CX(X2079-1)","00:60:48X(X2080-1)","00:00:B2X(X2081-1)","00:02:B4X(X2082-1)","00:02:43X(X2083-1)","00:00:AEX(X2084-1)","00:00:ADX(X2085-1)","00:02:E4X(X2086-1)","00:02:BDX(X2087-1)","00:00:B1X(X2088-1)","00:00:DFX(X2089-1)","00:00:B3X(X2090-1)","00:60:40X(X2091-1)","00:60:41X(X2092-1)","00:00:B5X(X2093-1)","00:00:B7X(X2094-1)","00:00:C3X(X2095-1)","00:60:42X(X2096-1)","00:00:C9X(X2097-1)","00:00:C2X(X2098-1)","00:00:C1X(X2099-1)","00:00:C4X(X2100-1)","00:00:B4X(X2101-1)","00:00:2FX(X2102-1)","00:00:BAX(X2103-1)","00:00:B6X(X2104-1)","00:00:BFX(X2105-1)","00:00:C8X(X2106-1)","00:00:D3X(X2107-1)","00:00:B8X(X2108-1)","00:00:C5X(X2109-1)","00:00:CFX(X2110-1)","00:00:CAX(X2111-1)","00:00:CCX(X2112-1)","00:60:43X(X2113-1)","00:00:D9X(X2114-1)","00:00:BCX(X2115-1)","00:00:A8X(X2116-1)","00:00:C7X(X2117-1)","00:00:D0X(X2118-1)","00:00:BBX(X2119-1)","00:01:3BX(X2120-1)","00:01:3EX(X2121-1)","00:00:BEX(X2122-1)","00:00:BDX(X2123-1)"
"1","2013/11/04 15:45",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1550,0,1010,58,81,73,197,91,275,286,378,44,58,101,140,41,66,144,107,62,17,36,8,46,76,98,-5,130,217,-5,-5,0,-5,-5,0,0,-5,-5,144,0,5,-5,-5,15,281,2859,-5,1,442,724,13,12,880,97,171,130,30,0,49,15,0,82,12,-5,0,443,0,55,64,1269,-5,-5,41,172
"2","2013/11/04 15:46",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1710,0,903,57,91,42,172,95,609,281,274,34,126,384,254,39,49,315,90,46,20,197,8,71,61,89,-5,247,220,-5,-5,0,-5,-5,0,0,-5,-5,126,0,12,-5,-5,16,258,3298,-5,4,647,716,1,9,868,101,208,26,30,0,53,17,0,89,9,-5,0,448,0,36,68,1394,-5,-5,39,67
"3","2013/11/04 15:47",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1548,0,853,55,91,71,193,145,103,269,272,38,77,142,184,39,180,796,85,44,18,517,7,101,64,88,-5,549,138,-5,-5,0,-5,-5,0,0,-5,-5,156,0,3,-5,-5,22,260,2496,-5,18,448,620,15,6,789,194,239,66,96,0,31,13,0,164,8,-5,0,344,0,33,55,1121,-5,-5,72,121
"4","2013/11/04 15:48",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1558,0,874,34,76,38,201,550,113,288,158,18,64,116,458,42,51,127,90,44,16,50,6,69,66,102,-5,116,294,-5,-5,0,-5,-5,0,0,-5,-5,116,0,1,-5,-5,7,210,3038,-5,5,81,553,5,6,834,53,248,26,88,0,36,17,0,17,9,-5,0,78,0,206,55,1450,-5,-5,45,92
"5","2013/11/04 15:49",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1620,0,900,39,88,37,229,171,211,311,264,23,104,128,506,42,201,50,98,46,19,62,6,61,59,102,-5,102,306,-5,-5,0,-5,-5,0,0,-5,-5,126,0,3,-5,-5,16,241,3235,-5,11,353,740,8,8,818,68,244,24,111,0,21,14,0,19,10,-5,0,91,0,93,63,1567,-5,-5,50,103
"6","2013/11/04 15:50",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1745,0,907,44,83,37,189,213,293,265,130,47,68,514,222,42,106,142,92,62,18,338,6,49,79,88,-5,140,231,-5,-5,0,-5,-5,0,0,-5,-5,135,0,5,-5,-5,43,376,3095,-5,1,300,656,1,9,790,91,263,54,103,0,29,14,0,15,11,-5,0,91,0,81,58,1579,-5,-5,57,104
"7","2013/11/04 15:51",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1786,0,972,45,84,55,195,110,798,324,150,31,191,1406,332,1126,225,60,87,57,70,203,7,45,62,81,-5,112,235,-5,-5,0,-5,-5,0,0,-5,-5,121,0,60,-5,-5,4,354,3378,-5,2,421,629,2,136,737,81,196,128,92,0,21,16,0,18,13,-5,0,71,0,90,55,1184,-5,-5,41,170
"8","2013/11/04 15:52",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1704,0,928,31,87,38,199,111,286,341,195,24,299,1065,292,329,60,54,87,45,18,54,6,67,72,89,-5,102,204,-5,-5,0,-5,-5,0,0,-5,-5,172,0,22,-5,-5,5,494,3337,-5,9,169,792,6,15,764,159,227,45,92,0,36,16,0,16,11,-5,0,78,0,93,65,1706,-5,-5,61,81
"9","2013/11/04 15:53",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1494,0,857,28,112,47,188,649,111,318,153,21,87,445,288,34,45,52,87,44,29,94,10,61,74,98,-5,152,129,-5,-5,0,-5,-5,0,0,-5,-5,172,0,1,-5,-5,10,324,3371,-5,1,46,625,3,7,824,54,216,25,85,0,34,17,0,34,12,-5,0,85,0,104,66,1578,-5,-5,32,40
"10","2013/11/04 15:54",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1565,0,850,80,116,38,217,98,127,370,329,174,96,251,184,37,107,66,380,43,18,92,8,41,65,96,-5,104,231,-5,-5,0,-5,-5,0,0,-5,-5,162,0,2,-5,-5,6,272,3743,-5,11,314,545,7,5,962,66,5,20,28,0,13,15,0,17,11,-5,0,40,0,149,65,1419,-5,-5,31,63
"11","2013/11/04 15:55",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1650,0,841,55,77,37,168,80,133,291,286,17,64,138,152,43,57,936,97,57,16,112,8,52,72,103,-5,134,407,-5,-5,0,-5,-5,0,0,-5,-5,129,0,5,-5,-5,2,274,3401,-5,3,297,522,2,8,805,96,5,23,23,0,16,14,0,15,12,-5,0,37,0,186,74,1623,-5,-5,14,45
"12","2013/11/04 15:56",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1471,0,826,42,81,38,162,92,477,284,191,32,68,130,144,45,66,244,100,63,16,146,14,139,102,96,-5,104,302,-5,-5,0,-5,-5,0,0,-5,-5,127,0,10,-5,-5,8,298,3363,-5,2,440,582,3,18,1010,79,8,68,19,0,14,15,0,15,11,-5,0,45,0,129,68,1539,-5,-5,4,93
"13","2013/11/04 15:57",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,1035,0,1002,39,308,36,226,101,104,269,185,24,91,122,137,46,140,59,87,49,18,273,7,156,75,87,-5,113,145,-5,-5,0,-5,-5,0,0,-5,-5,202,0,3,-5,-5,6,214,3794,-5,9,192,500,4,18,1095,161,90,142,84,0,15,15,0,25,17,-5,0,59,0,207,59,1563,-5,-5,29,164
"14","2013/11/04 15:58",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,707,0,968,33,230,37,179,139,138,303,255,21,92,104,161,234,67,55,100,43,18,168,6,145,87,93,-5,126,294,-5,-5,0,-5,-5,0,0,-5,-5,140,0,2,-5,-5,13,305,3448,-5,1,262,648,4,30,928,58,281,51,163,0,19,18,0,40,17,-5,0,155,0,90,50,1631,-5,-5,15,60
"15","2013/11/04 15:59",0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-5,-5,-5,-5,-5,-5,-5,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,-1,-1,-1,-5,-5,-5,761,0,954,58,103,41,176,79,107,310,109,20,86,142,146,846,51,68,91,50,18,184,6,45,71,96,-5,109,142,-5,-5,0,-5,-5,0,0,-5,-5,254,0,3,-5,-5,6,276,3513,-5,3,171,545,4,4,958,51,91,34,60,0,27,16,0,22,12,-5,0,62,0,91,52,1651,-5,-5,12,51
"No.","time","00:01:3CX(X2124-1)","00:00:C0X(X2125-1)","00:00:C6X(X2126-1)","00:01:04X(X2127-1)","00:01:08X(X2128-1)","00:00:DBX(X2129-1)","00:01:B9X(X2130-1)","00:00:DDX(X2131-1)","00:00:DCX(X2132-1)","00:01:64X(X2133-1)","00:00:E0X(X2134-1)","00:00:E1X(X2135-1)","00:00:E2X(X2136-1)","00:00:E6X(X2137-1)","00:00:E8X(X2138-1)","00:00:E5X(X2139-1)","00:00:E4X(X2140-1)","00:00:E3X(X2141-1)","00:00:E7X(X2142-1)","00:00:E9X(X2143-1)","00:00:CEX(X2144-1)","00:00:D8X(X2145-1)","00:00:AAX(X2146-1)","00:00:EDX(X2147-1)","00:60:3FX(X2148-1)","00:00:F7X(X2149-1)","00:00:31X(X2150-1)","00:00:D6X(X2151-1)","00:00:D7X(X2152-1)","00:00:EEX(X2153-1)","00:00:EFX(X2154-1)","00:60:46X(X2155-1)","00:00:F0X(X2156-1)","00:00:F1X(X2157-1)","00:00:ECX(X2158-1)","00:00:F3X(X2159-1)","00:00:EBX(X2160-1)","00:00:F4X(X2161-1)","00:00:32X(X2162-1)","00:01:86X(X2163-1)","00:00:2BX(X2164-1)","00:02:10X(X2165-1)","00:02:11X(X2166-1)","00:00:2CX(X2167-1)","00:01:0AX(X2168-1)","00:01:0BX(X2169-1)","00:00:A9X(X2170-1)","00:60:02X(X2171-1)","00:60:01X(X2172-1)","00:60:03X(X2173-1)","00:60:04X(X2174-1)","00:60:05X(X2175-1)","00:60:06X(X2176-1)","00:60:07X(X2177-1)","00:60:08X(X2178-1)","00:60:09X(X2179-1)","00:60:0AX(X2180-1)","00:60:00X(X2181-1)","00:60:3EX(X2182-1)","00:01:06X(X2183-1)","00:01:0DX(X2184-1)","00:01:07X(X2185-1)","00:01:05X(X2186-1)","00:02:7BX(X2187-1)","00:02:7CX(X2188-1)","00:02:B5X(X2189-1)","00:02:E5X(X2190-1)","00:02:0FX(X2191-1)","00:01:0EX(X2192-1)","00:01:11X(X2193-1)","00:01:14X(X2194-1)","00:01:10X(X2195-1)","00:01:12X(X2196-1)","00:01:13X(X2197-1)","00:01:09X(X2198-1)","00:00:FBX(X2199-1)","00:00:33X(X2200-1)","00:01:0FX(X2201-1)","00:01:27X(X2202-1)","00:01:15X(X2203-1)","00:01:1DX(X2204-1)","00:01:1BX(X2205-1)","00:01:1AX(X2206-1)","00:01:1CX(X2207-1)","00:02:4CX(X2208-1)","00:01:39X(X2209-1)","00:01:16X(X2210-1)","00:01:38X(X2211-1)","00:02:E7X(X2212-1)","00:01:18X(X2213-1)","00:00:FEX(X2214-1)","00:01:19X(X2215-1)","00:00:FDX(X2216-1)","00:00:FFX(X2217-1)","00:01:29X(X2218-1)","00:01:28X(X2219-1)","00:01:17X(X2220-1)","00:01:2DX(X2221-1)","00:01:2EX(X2222-1)","00:01:2FX(X2223-1)","00:01:2BX(X2224-1)","00:01:2CX(X2225-1)","00:60:0BX(X2226-1)","00:02:07X(X2227-1)","00:60:0FX(X2228-1)","00:60:0CX(X2229-1)","00:60:0DX(X2230-1)","00:01:00X(X2231-1)","00:01:4CX(X2232-1)","00:01:56X(X2233-1)","00:01:61X(X2234-1)","00:01:4EX(X2235-1)","00:01:55X(X2236-1)","00:01:58X(X2237-1)","00:01:59X(X2238-1)","00:01:52X(X2239-1)","00:01:5DX(X2240-1)","00:01:60X(X2241-1)","00:01:4DX(X2242-1)","00:01:5AX(X2243-1)","00:01:54X(X2244-1)","00:01:46X(X2245-1)","00:01:5EX(X2246-1)","00:01:5CX(X2247-1)","00:01:49X(X2248-1)","00:01:4AX(X2249-1)","00:01:50X(X2250-1)","00:01:4BX(X2251-1)"
"1","2013/11/04 15:45",-5,9,62,-5,-5,0,-5,0,0,-5,7,0,0,40,21,55,21,79,24,203,3,0,88,51,-5,0,2,272,15,1967,51,-5,61,58,31,243,24,0,3,-5,0,-5,-5,13,-5,-5,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,0,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,0,-5,1,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5
"2","2013/11/04 15:46",-5,10,47,-5,-5,0,-5,0,0,-5,7,0,0,45,24,68,25,94,24,185,3,0,93,40,-5,0,3,285,116,2195,75,-5,117,70,41,216,27,0,3,-5,0,-5,-5,13,-5,-5,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,9,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,35,-5,24,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5
"3","2013/11/04 15:47",-5,19,111,-5,-5,0,-5,0,0,-5,2,0,0,44,30,62,24,91,32,190,1,0,93,121,-5,0,3,346,283,1534,10,-5,93,29,32,218,14,0,3,-5,0,-5,-5,12,-5,-5,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,34,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,125,-5,74,0,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5,-5
...
... etc etc ...
It's a CSV file which seems to be split when the number of columns exceeds 130: the extra columns are added to the file as new lines.
I don't know the number of columns, which is dynamic, but I always have 1 header line + 15 result lines, followed by 1 header line + 15 result lines, and so on a certain number of times.
What I'm looking for is to reverse this split and get one correct CSV file that I can add into Splunk later. That means EACH LINE MUST HAVE ONE DISTINCT TIME (and No.), so my new file must have only 16 lines. (1 header line + 15 results, 1 per minute)
So I need to append:
all the lines 16n+1 (for n from 1 to the end of the file) to the 1st line, without the first 2 columns (No. and time are the same)
all the lines 16n+2 to the 2nd line, without the first 2 columns (No. and time are the same)
all the lines 16n+3 to the 3rd line, without the first 2 columns (No. and time are the same)
etc. etc... (so with two blocks, i.e. 32 lines, line 17 is appended to line 1, line 18 to line 2, and so on)
If someone can help me with this script, that would be awesome!
EDIT: Here's where I am, but no success:
Import-Csv .\data.txt | Group-Object -Property No.,time | % {
    $text = $_.Name + ","
    $text += ($_.Group | % {$i = 0; $j = $_.Count} {$i++; ($_ | % {$_.ToString() + ","}) * ($j - $i -gt 0)})
    $text += "`n"
    Write-Output $text
}
EDIT 2: My problem is that I get a hashtable-like object, but I don't know any of the names needed to get all the elements. I tried with GetEnumerator() without success:
Method invocation failed because [System.Management.Automation.PSCustomObject] doesn't contain a method named 'getEnumerator'.
Import-Csv .\data.txt | Group-Object -Property No.,time | % {
    $text = $_.Name + ","
    $text += ($_.Group | % {$i = 0; $j = $_.Count} {$i++; ($_.GetEnumerator() | % {$_ + ","}) * ($j - $i -gt 0)})
    $text += "`n"
    Write-Output $text
}
If I put a column name like "1-1" instead of GetEnumerator(), it's working, but I can't do that for all columns since I don't know the names.
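Side note: the columns of a PSCustomObject can in fact be enumerated without knowing their names, through its PSObject.Properties collection. A minimal sketch of that approach (skipping the first two columns, as above):
Import-Csv .\data.txt | ForEach-Object {
    # enumerate every column of the row, skip "No." and "time", emit the rest comma-joined
    ($_.PSObject.Properties | Select-Object -Skip 2 | ForEach-Object { $_.Value }) -join ','
}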

I used Get-Content and plain array handling; it's a lot easier than trying to get these ** cmdlets working...
$ofs = ','  # NB: automatic variable used when an [array] is cast to [string]; defines the separator
# Read the input file
$txt = gc .\data.txt
if ($txt -is [system.array]) {
    # Variable declarations
    $res = @()
    $count = 0
    # Processing
    foreach ($line in $txt) {            # walk every line of the file
        $count++                         # bump the line counter
        If ($count -le 16) {             # the first 16 lines [1 header + 15 data rows]
            $res += $line + ","          # are copied as-is
        } else {                         # all following lines [17..end]
            $newline = $line.Split(',')[2..500]   # drop their first two columns
            # Append to one of the first 16 lines via Mod[16]; when $count is a multiple
            # of 16 the index is -1, which wraps around to the last line.
            # The trailing "," keeps chunks from a third or later block separated.
            $res[($count % 16) - 1] += [string]$newline + ","
        }
    }
    # Write the output (dropping each line's trailing separator)
    $res | ForEach-Object { $_.TrimEnd(',') }
}
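The same modulo idea can also be written a bit more compactly. A minimal sketch, assuming the file really does repeat in blocks of 16 lines (1 header + 15 rows) and that no field value contains a comma; the output file name is made up:
$lines = Get-Content .\data.txt
$blockSize = 16                                  # 1 header line + 15 data rows per block
$merged = New-Object 'string[]' $blockSize
for ($i = 0; $i -lt $lines.Count; $i++) {
    $slot = $i % $blockSize                      # which of the 16 output lines this row extends
    if ($i -lt $blockSize) {
        $merged[$slot] = $lines[$i]              # the first block is kept whole
    } else {
        # later blocks: drop the repeated "No." and "time" columns, append the rest
        $rest = (($lines[$i] -split ',') | Select-Object -Skip 2) -join ','
        $merged[$slot] += ',' + $rest
    }
}
$merged | Set-Content .\data_merged.csv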

Related

How to find a string from a specific word to another word using Rails

I have the following string and I'm trying to display the information from a specific start word to a specific end word:
Protocolo de medición
\r\n\r\nEnsayador disruptivo
\r\nDPA 75C Versión: 1.07
\r\nNúmero de serie: 1101908010
\r\n11/02/2022 02:15\r\n_____________________________
\r\n\r\nInformación sobre el ensayo
\r\n\r\nNombre de protocolo: .......................
\r\nNúmero de muestra: 0569.1
\r\nMedición según norma: ASTM D1816:2004 2mm
\r\nForma de electrodos: Forma de seta
\r\nDistancia entre electrodos: 2 mm
\r\nFrec. del ensayo: 60 Hz\r\n\r\n_____________________________
\r\n\r\nConfig. según norma
\r\n\r\nDiámetro de los electrodos: 36 mm\r\n\r\n_____________________________
\r\n\r\nValores de medición
\r\n\r\nTemperatura: 20 °C
\r\n\r\nMedición 1: 60.6 kV
\r\nMedición 2: 72.7 kV\r\nMedición 3: >75.0 kV
\r\nMedición 4: 54.7 kV\r\nMedición 5: 66.4 kV
\r\n\r\nValor medio: 65.9 kV
\r\nDesviación estándar: 8.4 kV
\r\nDesviación estándar/val. medio: 12.8 %
\r\n\r\n\r\nEnsayo correctamente realiz.
\r\n\r\n\r\nEnsayo ejecutado por: .......................
The code should find the string line
\r\nNúmero de muestra: 0569.1 \r\
Final result should be
0569.1
I tried this code, but it only displays the word searched:
@article.description.match(/Número de muestra:\b/)
I tried this code and it works, but I need to count the character indexes from and to:
<%= @article.description.slice(249..260) %>
What I want is to get the FROM WORD - TO WORD string without typing the index numbers.
If the string you are looking to capture always has a line end character after it at some point, you can do:
data = @article.description.match(/Número de muestra: *(.*)$/)
which returns a MatchData object like:
#<MatchData "Número de muestra: 0569.1" 1:"0569.1">
you can then access the match with
data[1]
# => "0569.1"
The MatchData object stores the whole matching string in data[0] and the first capture in data[1]. In the regexp, the space followed by * matches the spaces after the string Número de muestra:. The (.*) matches any characters after the spaces. The $ matches the end of the line. Anything that matches what is between the parens () gets stored as a capture in the MatchData object.

Google Sheet merged cells create space

I've got a sheet with merged cells, and a script writes its results into those merged cells.
When I copy this result from the merged cells, it gives me multiple spaces at the end.
Like: Result #1________ (« _ » represents invisible space)
When I put the same result in a normal cell (not merged), it doesn't put any space at the end.
Result #1
I tried multiple cell formats (center aligned, left aligned, etc.) but nothing changed.
Do you have any idea why?
Thanks!
EDIT: added the script:
function Devise() {
  const sheetName = "Missions";
  const sheet = SpreadsheetApp.getActiveSpreadsheet().getSheetByName(sheetName);
  var Devise = "";
  var NombreMission = "";
  var NomOperateurs = "";
  if (sheet.getRange("H2").getValue() == "") {  // If mission 7 is empty
    NombreMission = 6;                          // only count 6 missions
  } else {
    NombreMission = 7;                          // otherwise count 7 missions
  }
  for (i = 1; i < NombreMission + 1; i++) {     // FOR loop over NombreMission missions
    if (sheet.getRange(2, i + 1).getValue() == "") { continue; }  // skip the mission if it is empty
    Devise = Devise + i + "/";
    l = 0;                                      // used to flag "nothing" if nobody is assigned to the mission
    NomOperateurs = "";                         // reset the names for the next mission
    for (j = 1; j < 27 + 1; j++) {              // FOR loop over all operators
      if (sheet.getRange(j + 2, i + 1).getFontWeight() == 'bold') {  // check whether the cell is bold
        /*if (i != NombreMission) {             // As long as it is not the last mission ...
          Devise = Devise + sheet.getRange(j + 2, 1).getValue() + " "; // ... display the operators
        }*/
        NomOperateurs = NomOperateurs + sheet.getRange(j + 2, 1).getValue() + " ";
        l = l + 1;                              // count the operators
      }
    }                                           // end of the operators FOR loop
    if (l == 24) {                              // if every operator is on one mission...
      Devise = Devise + "ALL OPs! "             // ... display "ALL OPs!"
    } else if (i == NombreMission && l != 0) {  // else, if this is the last mission and operators are left to place...
      Devise = Devise + "Autres + Epic ";       // ... indicate that this is the remainder plus the epics
    } else if (l == 0) {                        // else, if there is no operator to place...
      Devise = Devise + "RIEN "                 // ... display "RIEN"
    } else {                                    // otherwise ...
      Devise = Devise + NomOperateurs;          // ... display the operators
    }
  }                                             // END of the NombreMission FOR loop
  if (NombreMission == 6 && Devise != "") { Devise = Devise + "7/!NOTHING!"; }
  sheet.getRange("K13").setValue(Devise);
}
Your problem is related to the data you copied and the way you copied it, since pasting text into merged cells doesn't create any new lines.
Also, an important thing to keep in mind is that CTRL+ENTER creates the mentioned space, also known as a line break.
So, for example, take a cell that contains some text followed by a line break. If the text from that cell is copied and pasted into a merged cell, it ends with trailing whitespace - the same outcome as the one you have mentioned. But if you paste the same text into a simple (unmerged) cell, no whitespace is added.
This is essentially because a line break signifies the start of a new cell: if a cell contains text with line breaks, then when that text is copied and pasted into a different cell, each break starts a new cell.
In order to solve your issue, I suggest you copy only the text needed and, if possible, avoid using any line breaks.
Reference
Edit and Format a Spreadsheet.
Encountering the same issue.
I have a merged cell with text. If I select the cell, copy it, and paste it into Notepad, it includes quite a lot of white space.
I've checked, and if the merged cell spans two rows, the white space includes a line break.
If the merged cell spans one row but two columns, the white space does not include a line break.
If I have a single cell take its value from the merged cell ("=A1"), the text does not include the white space.
So the addition of the whitespace is definitely the result of having a merged cell.

Splitting a string into similar elements along with the values

I have a string like
$query = "date=20.10.2007&amount=400+date=11.02.2008&amount=1400+date=12.03.2008&amount=1500";
There are two variables named date and amount containing a value, e.g. date=20.10.2007 and amount=400; these two variables repeat with different values, and each set (date & amount) is separated by a '+' sign. Now I want to display this string like this:
Date Amount
20.10.2007 400
11.02.2008 1400
12.03.2008 1500
Need help
We can make judicious use of explode and preg_split here to get the output you want:
$query = "date=20.10.2007&amount=400+date=11.02.2008&amount=1400+date=12.03.2008&amount=1500";
$array = explode("+", $query);
$counter = 0;
echo "Date Amount\n";
foreach($array as $item) {
if ($counter > 0) echo "\n";
$parts = explode("&", $item);
echo preg_split("/=/", $parts[0])[1] . " ";
echo preg_split("/=/", $parts[1])[1];
$counter = $counter + 1;
}
This prints:
Date Amount
20.10.2007 400
11.02.2008 1400
12.03.2008 1500
The logic here is that we first split the query string on + to obtain components looking like:
date=20.10.2007&amount=400
Then, inside the loop over all such components, we split again by & to obtain the date and amount terms. Finally, each of these are split again on = to get the actual values.
Thanks a lot Tim Biegeleisen for your kind guidance. With your help I did it with the code below:
$str = "date=20.10.2007&amount=400+date=11.02.2008&amount=1400+date=12.03.2008&amount=1500"; $array = explode("+",$str);
$i = 0;
echo nl2br("Date Amount \n");
foreach($array as $item[$i])
{
parse_str($item[$i]);
echo $date;
echo $amount."<br>";
$i++;
}

Split EDI X12 files using Powershell

I am likely recreating the wheel here, but this is my stab at solving an issue partly, and I'm asking for community assistance to resolve the remaining part.
My task is to split EDI X12 documents into their own files (ISA to IEA)
and CRLF each line separately (similar to ex. EDI2.EDI below).
Below are my PowerShell script and example EDI documents 1, 2 and 3.
My script will successfully split a contiguous X12 EDI document from ISA to IEA and CRLF it into a file, so that one contiguous string becomes something more readable. This works well and will even handle any segment delimiter as well as any line delimiter.
My issue is dealing with non-contiguous documents (ex. EDI2) or combined ones (ex. EDI3). The source folder could have any of the formatted files shown below. If the file already contains the CRLFs, then I just need to split it from ISA to IEA. My script fails when I pull in CRLF'd files.
Could someone help me solve this?
$sourceDir = "Z:\temp\EDI\temp\"
$targetDir = "Z:\temp\EDI\temp\archive"
<##### F U N C T I O N S #####>
<#############################>
Function FindNewFile
{
Param (
[Parameter(mandatory=$true)]
[string]$filename,
[int]$counter)
$filename = Resolve-Path $filename
$validFileName = "{0}\{1} {2}{3}" -f $targetDir, #([system.io.fileinfo]$filename).DirectoryName,
([system.io.fileinfo]$filename).basename,
$counter, #"1", #([guid]::newguid()).tostring("N"),
([system.io.fileinfo]$filename).extension
Return $validFileName
}
<###### M A I N L I N E ######>
<#############################>
If(test-path $sourceDir)
{
$files = #(Get-ChildItem $sourceDir | Where {!$_.PsIsContainer -and $_.extension -eq ".edi" -and $_.length -gt 0})
"{0} files to process. . ." -f $files.count
If($files)
{
If(!(test-path $targetDir))
{
New-Item $targetDir -ItemType Directory | Out-Null
}
foreach ($file in $files)
{
$me = $file.fullname
# Get the new file name
$isaCount = 1
$newFile = FindNewFile $me $isaCount
$data = get-content $me
# Reset variables for each new file
$dataLen = [int] $data.length
$linDelim = $null
$textLine = $null
$firstRun = $True
$errorFlag = $False
for($x=0; $x -lt $data.length; $x++)
{
$textLine = $data.substring($x, $dataLen)
$findISA = "ISA{0}" -f $textLine.substring(3,1)
If($textLine.substring(0,4) -eq $findISA)
{
$linDelim = $textLine.substring(105, 1)
If(!($FirstRun))
{
$isaCount++
$newFile = FindNewFile $me $isaCount
}
$FirstRun = $False
}
If($linDelim)
{
$delimI = $textLine.IndexOf($linDelim) + 1
$textLine = $textLine.substring(0,$delimI)
$fLine = $textLine
add-content $newFile $fLine
$x += $fLine.length - 1
$dataLen = $data.length - ($x + 1)
}
Else
{
$errorFlag = $True
"`t=====> {0} is not a valid EDI X12 file!" -f $me
$x += $data.length
}
}
If(!($errorFlag))
{
"{0} contained {1} ISA's" -f $me, $isaCount
}
}
}
Else
{
"No files in {0}." -f $sourceDir
}
}
Else
{
"{0} does not exist!" -f $sourceDir
}
Filename: EDI1.EDI
ISA*00* *00* *08*925xxxxxx0 *01*78xxxx100 *170331*1630*U*00401*000000114*0*P*>~GS*FA*8473293489*782702100*20170331*1630*42*T*004010UCS~ST*997*116303723~SE*6*116303723~GE*1*42~IEA*1*000000114~ISA*00* *00* *08*WARxxxxxx *01*78xxxxxx0 *170331*1545*U*00401*000002408*0*T*>~GS*FA*5035816100*782702100*20170331*1545*1331*T*004010UCS~ST*997*000001331~~SE*24*000001331~GE*1*1331~IEA*1*000002408~
Filename: EDI2.EDI
ISA*00* *00* *ZZ*REINxxxxxxxDSER*01*78xxxx100 *170404*0819*|*00501*100000097*0*P*}~
GS*PO*REINHxxxxxxDSER*782702100*20170404*0819*1097*X*005010~
ST*850*1097~
SE*14*1097~
GE*1*1097~
IEA*1*100000097~
Filename: EDI3.EDI
ISA*00* *00* *08*925xxxxxx0 *01*78xxxx100 *170331*1630*U*00401*000000114*0*P*>~GS*FA*8473293489*782702100*20170331*1630*42*T*004010UCS~ST*997*116303723~SE*6*116303723~GE*1*42~IEA*1*000000114~ISA*00* *00* *08*WARxxxxxx *01*78xxxxxx0 *170331*1545*U*00401*000002408*0*T*>~GS*FA*5035816100*782702100*20170331*1545*1331*T*004010UCS~ST*997*000001331~~SE*24*000001331~GE*1*1331~IEA*1*000002408~
ISA*00* *00* *ZZ*REINxxxxxxxDSER*01*78xxxx100 *170404*0819*|*00501*100000097*0*P*}~
GS*PO*REINHxxxxxxDSER*78xxxxxx0*20170404*0819*1097*X*005010~
ST*850*1097~
SE*14*1097~
GE*1*1097~
IEA*1*100000097~
FWIW, I've compiled this code from all over the net including stackoverflow.com. If you see your code and desire recognition, let me know and I'll add it. I'm not claiming any of this is original! My motto is "ARRRGH!"
EDI3 is an invalid X12 document: each file should only contain one ISA segment, with repeated envelopes inside it if required.
The segment terminator should also be consistent. In EDI3 it is both ~ alone and ~ followed by a CRLF, which is again invalid.
The segment terminator should be the tilde "~".
It can be suffixed by nothing, "\n", or "\r\n"; the suffix is optional and only there for human readability. Some implementations might be more relaxed in terms of the X12 standard.
https://www.ibm.com/support/knowledgecenter/en/SS6V3G_5.3.1/com.ibm.help.gswformstutscreen.doc/GSW_EDI_Delimiters.html
https://docs.oracle.com/cd/E19398-01/820-1275/agdbj/index.html
https://support.microsoft.com/en-sg/help/2723596/biztalk-2010-configuring-segment-terminator-for-an-x12-encoded-interch
BTW, check my splitter/viewer: https://gist.github.com/ppazos/94a63ab18910ab0c0d23c9ff4ff7e5c2
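To illustrate one way around the CRLF problem: read the file raw, strip the line breaks first, and only then cut the stream just before each ISA header. A minimal sketch, assuming * as the element separator and ~ as the segment terminator as in the samples above; the paths are made up:
# Normalize an X12 file, then split it into one file per ISA envelope.
$raw = Get-Content 'Z:\temp\EDI\temp\EDI3.EDI' -Raw
$flat = $raw -replace "`r?`n", ''                            # drop any existing CR/LFs
$envelopes = $flat -split '(?=ISA\*)' | Where-Object { $_ }  # cut just before each ISA
$i = 0
foreach ($envelope in $envelopes) {
    $i++
    # re-insert a CRLF after every segment terminator for readability
    $pretty = ($envelope -replace '~', "~`r`n").TrimEnd()
    Set-Content -Path ('Z:\temp\EDI\temp\archive\EDI3 {0}.EDI' -f $i) -Value $pretty
}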

Python splitting text returns a str and a list of str

I wonder whether someone can help me with the syntax to split my text file into key, value pairs.
Abbasso: termine con cui si indicano gli ambienti situati sotto il ponte di coperta.
Abbattuta: manovra che consiste nel puggiare sino a fare prendere il vento alle vele sulle mure opposte.
Abbisciare: (fr.: prendre la biture; ingl.: to coil) stendere un cavo o una catena come fosse una biscia in modo da evitare che si imbrogli successivamente, quando sarà posto in opera.
Abbordo: (fr.: abordage; ingl.: collision) collisione in mare. Sinonimo, poco usato, di accosto e di abbordaggio.
Abbrivo: (fr.: erre; ingl.: way-on) inerzia dell'imbarcazione a continuare nel suo movimento anche quando è cessata la spinta propulsiva, sia essa a vela che a motore.
Abbuono: (fr.: bonification, rating; ingl.: rating) compenso: (o vantaggio) dato ad una imbarcazione per permetterle di gareggiare più equamente: (ad esempio abbuono per anzianità di costruzione dello scafo).
My function at the minute gives me a key that is a str, but a value that is a list. Instead I want the value also to be a str. I get that my problem is that what should be the value is being split on every colon instead of only on the leftmost colon.
import pickle

def create_dict():
    eng_fr_it_dict = {}
    f_name = "dizionario_della_vela.txt"
    handle = open(f_name, encoding='utf8')
    for line in handle:
        #print(line)
        if line.startswith(" "):
            continue
        line = line.lstrip()
        terms = line.split(": ")
        #print(terms[1:])
        term = terms[0].lstrip()
        expan = terms[1:]
        print(type(term), type(expan))
        eng_fr_it_dict[term] = eng_fr_it_dict.get(term, expan)
    with open("eng_fr_it_dict.txt", "wb") as infile:
        pickle.dump(eng_fr_it_dict, infile)
    print(eng_fr_it_dict)
Can you suggest a cleverer way to do this, or will I have to work out how to convert the list of str to a single str? I thought that there was an in-built split function for this, but obviously not.
file = open("dizionario_della_vela.txt", "r")
data = file.read()
file.close()

data = data.split("\n")  # getting every line as a separate list
myDict = {}
for line in data:
    line = line.split(":")
    key = line[0]               # the first element is the key
    # Join the remaining elements (starting with the second) with ":".
    # We need this because the previous line was split on ":" to get the key;
    # this is where the "string" value is produced.
    value = ":".join(line[1:])
    myDict[key] = value

for key in myDict.keys():
    print(myDict[key])
