Parse Wikipedia {{Location map}} templates

I would like to parse the Wikipedia power plant lists, which contain the {{Location map}} template. My example uses the German translation, but that shouldn't change the basic process.
How can I extract the label=, lat=, long= and region= parameters from such code?
This probably isn't a job for an HTML parser like BeautifulSoup; would something like awk be better suited?
{{ Positionskarte+
| Tadschikistan
| maptype = relief
| width = 600
| float = right
| caption =
| places =
{{ Positionskarte~
| Tadschikistan
| label = <small>[[Talsperre Baipasa|Baipasa]]</small>
| marktarget =
| mark = Blue pog.svg
| position = right
| lat = 38.267584
| long = 69.123906
| region = TJ
| background = #FEFEE9
}}
{{ Positionskarte~
| Tadschikistan
| label = <small>[[Kraftwerk Duschanbe|Duschanbe]]</small>
| marktarget =
| mark = Red pog.svg
| position = left
| lat = 38.5565
| long = 68.776
| region = TJ
| background = #FEFEE9
}}
...
}}
Thanks in advance!

Just extract the information with regular expressions.
For example, like this (PHP):
$k = "{{ Positionskarte+
| Tadschikistan
| maptype = relief
| width = 600
| float = right
| caption =
| places =
{{ Positionskarte~
| Tadschikistan
| label = <small>[[Talsperre Baipasa|Baipasa]]</small>
| marktarget =
| mark = Blue pog.svg
| position = right
| lat = 38.267584
| long = 69.123906
| region = TJ
| background = #FEFEE9
}}
{{ Positionskarte~
| Tadschikistan
| label = <small>[[Kraftwerk Duschanbe|Duschanbe]]</small>
| marktarget =
| mark = Red pog.svg
| position = left
| lat = 38.5565
| long = 68.776
| region = TJ
| background = #FEFEE9
}}
}}";
$items = explode("Positionskarte~", $k);
$result = [];
foreach ($items as $item) {
    $info = [];
    $pattern1 = '/label\s+=\s+(.+)/';
    preg_match($pattern1, $item, $matches);
    if (!empty($matches)) {
        $info['label'] = $matches[1];
    }
    $pattern2 = '/lat\s+=\s+(.+)/';
    preg_match($pattern2, $item, $matches);
    if (!empty($matches)) {
        $info['lat'] = $matches[1];
    }
    $pattern3 = '/long\s+=\s+(.+)/';
    preg_match($pattern3, $item, $matches);
    if (!empty($matches)) {
        $info['long'] = $matches[1];
    }
    $pattern4 = '/region\s+=\s+(.+)/';
    preg_match($pattern4, $item, $matches);
    if (!empty($matches)) {
        $info['region'] = $matches[1];
    }
    if (!empty($info)) {
        $result[] = $info;
    }
}
var_dump($result);
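If you would rather do this in Python, the same regex approach carries over. The following is a minimal sketch (the `parse_location_marks` helper is my own name for it); for anything more robust, a dedicated wikitext parser such as mwparserfromhell is worth a look:

```python
import re

def parse_location_marks(wikitext):
    """Extract label/lat/long/region from each Positionskarte~ block."""
    results = []
    # Everything after each "Positionskarte~" marker belongs to one mark
    for block in wikitext.split("Positionskarte~")[1:]:
        info = {}
        for key in ("label", "lat", "long", "region"):
            # "| key = value" up to the end of the line
            m = re.search(rf"\|\s*{key}\s*=\s*(.+)", block)
            if m:
                info[key] = m.group(1).strip()
        if info:
            results.append(info)
    return results

sample = """{{ Positionskarte~
| Tadschikistan
| label = <small>[[Talsperre Baipasa|Baipasa]]</small>
| lat = 38.267584
| long = 69.123906
| region = TJ
}}"""
print(parse_location_marks(sample))
```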


Google Sheets Query - transform rows before a query

I have a Google Sheets table similar to this:
| date | buyer | country | item 1 | item 2 | item 3 | ...
| 2022\1\1 | A.B. | LAT | milk | coffee | sugar | ...
| 2022\1\2 | C.D. | GER | milk | cocoa | cookies | ...
Is it possible to transform it somehow to a table, that has only one item per row, example:
| date | buyer | country | item |
| 2022\1\1 | A.B. | LAT | milk |
| 2022\1\1 | A.B. | LAT | coffee |
| 2022\1\1 | A.B. | LAT | sugar |
| 2022\1\2 | C.D. | GER | milk |
| 2022\1\2 | C.D. | GER | cocoa |
| 2022\1\2 | C.D. | GER | cookies |
So that I can afterwards query what was sold, when, and where.
In a regular DB I would have two (or three) tables with 1-N relations and do a simple join (or joins), but I can't figure out how to do it in Google Sheets. Any ideas?
Paste this simple QUERY function formula in H2.
=QUERY({A2:C,D2:D;A2:C,E2:E;A2:C,F2:F}, " Select * Where Col1 is not null ")
And this in H1.
=ArrayFormula({A1:C1,"Items"})
Like this, take a look at the spreadsheet example.
Answer
The following formula should produce the result you desire:
=FILTER({{A2:C;A2:C;A2:C},{D2:D;E2:E;F2:F}},NOT(ISBLANK({A2:A;A2:A;A2:A})))
Explanation
This creates an array where each entry in columns A through C is duplicated three times, once for each entry in columns D through F. The =FILTER then returns only rows where column A is not empty.
Note that this is not an easily scalable function if you have a large number of different item columns to parse.
Functions used:
=FILTER
=NOT
=ISBLANK
Formula:
=ArrayFormula(SPLIT(FLATTEN(FILTER(A2:A&"|"&B2:B&"|"&C2:C&"|"&D2:F,A2:A<>"")),"|"))
Note: you only need to change the range when there are more items:
3 items => D2:F
4 items => D2:G
5 items => D2:H
...
Function References
FILTER
FLATTEN
SPLIT
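For comparison outside of Sheets, what these formulas perform is a plain unpivot (wide to long). A quick Python sketch of the same row-duplication logic (the `unpivot` helper is hypothetical, just to illustrate the transform):

```python
def unpivot(rows, fixed=3):
    """Repeat the first `fixed` columns once per item column (wide -> long)."""
    out = []
    for row in rows:
        keys, items = row[:fixed], row[fixed:]
        for item in items:
            if item:                      # skip empty item cells
                out.append(keys + [item])
    return out

table = [
    ["2022/1/1", "A.B.", "LAT", "milk", "coffee", "sugar"],
    ["2022/1/2", "C.D.", "GER", "milk", "cocoa", "cookies"],
]
for row in unpivot(table):
    print(row)
```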
As another alternative, you can also use Google Apps Script. I put together a project you can review: open the sheet, go to "Extensions", and open the Apps Script project.
Sheet
You just need to run the function. It was based on a macro from one of my Excel projects. Sample below.
/** @OnlyCurrentDoc */
function concatenarItems() {
  var sheet = SpreadsheetApp.getActiveSheet();
  var lastRow = sheet.getLastRow();
  var lastColumn = 6;   // columns A:F -> date, buyer, country, item 1..item 3
  var outRow = 2;       // the new table starts at H2
  var outCol = 8;       // column H
  // Walk each data row of the original table (row 1 is the header)
  for (var i = 2; i <= lastRow; i++) {
    var row = sheet.getRange(i, 1, 1, lastColumn).getValues()[0];
    var dte = row[0], buyer = row[1], country = row[2];
    // Emit one output row per item column (D, E, F)
    for (var j = 3; j < lastColumn; j++) {
      if (row[j] === '') continue;   // skip empty item cells
      sheet.getRange(outRow, outCol, 1, 4)
           .setValues([[dte, buyer, country, row[j]]]);
      outRow++;
    }
  }
}

How To Distinct OrderSymbol() In MQL4

How do I get the distinct OrderSymbol() values in MQL4?
I have data:
Symbol | Type | Size
GBPUSD | Buy | 1.5
GBPUSD | Buy | 0.5
EURUSD | Sell | 1
USDJPY | Buy | 2
I want the result:
GBPUSD
EURUSD
USDJPY
Thanks
There is no direct way. Collect the data into an array, and perhaps sort after every insert so you can use binary search (if the list is large). Here is an example that parses the current orders.
#include <Arrays\ArrayString.mqh>

CArrayString *list = listOfUniqueSymbols();

CArrayString* listOfUniqueSymbols()
{
    CArrayString *result = new CArrayString();
    for(int i = OrdersTotal() - 1; i >= 0; i--)
    {
        if(!OrderSelect(i, SELECT_BY_POS))
            continue;
        const string symbol = OrderSymbol();
        if(result.Search(symbol) == -1)   // Search() requires a sorted array
        {
            result.Add(symbol);
            result.Sort();
        }
    }
    return result;
}

Detect words and graphs in image and slice image into 1 image per word or graph

I'm building a web app to help students with learning Maths.
The app needs to display Maths content that comes from LaTex files.
These Latex files render (beautifully) to pdf that I can convert cleanly to svg thanks to pdf2svg.
The (svg or png or whatever image format) image looks something like this:
_______________________________________
| |
| 1. Word1 word2 word3 word4 |
| a. Word5 word6 word7 |
| |
| ///////////Graph1/////////// |
| |
| b. Word8 word9 word10 |
| |
| 2. Word11 word12 word13 word14 |
| |
|_______________________________________|
Real example:
The web app intent is to manipulate and add content to this, leading to something like this:
_______________________________________
| |
| 1. Word1 word2 | <-- New line break
|_______________________________________|
| |
| -> NewContent1 |
|_______________________________________|
| |
| word3 word4 |
|_______________________________________|
| |
| -> NewContent2 |
|_______________________________________|
| |
| a. Word5 word6 word7 |
|_______________________________________|
| |
| ///////////Graph1/////////// |
|_______________________________________|
| |
| -> NewContent3 |
|_______________________________________|
| |
| b. Word8 word9 word10 |
|_______________________________________|
| |
| 2. Word11 word12 word13 word14 |
|_______________________________________|
Example:
A large single image cannot give me the flexibility to do this kind of manipulations.
But if the image file was broken down into smaller files which hold single words and single Graphs I could do these manipulations.
What I think I need to do is detect whitespace in the image, and slice the image into multiple sub-images, looking something like this:
_______________________________________
| | | | |
| 1. Word1 | word2 | word3 | word4 |
|__________|_______|_______|____________|
| | | |
| a. Word5 | word6 | word7 |
|_____________|_______|_________________|
| |
| ///////////Graph1/////////// |
|_______________________________________|
| | | |
| b. Word8 | word9 | word10 |
|_____________|_______|_________________|
| | | | |
| 2. Word11 | word12 | word13 | word14 |
|___________|________|________|_________|
I'm looking for a way to do this.
What do you think is the way to go?
Thank you for your help!
I would use horizontal and vertical projection to first segment the image into lines, and then each line into smaller slices (e.g. words).
Start by converting the image to grayscale, and then invert it, so that gaps contain zeros and any text/graphics are non-zero.
img = cv2.imread('article.png', cv2.IMREAD_COLOR)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_gray_inverted = 255 - img_gray
Calculate horizontal projection -- mean intensity per row, using cv2.reduce, and flatten it to a linear array.
row_means = cv2.reduce(img_gray_inverted, 1, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()
Now find the row ranges for all the contiguous gaps. You can use the function provided in this answer.
row_gaps = zero_runs(row_means)
Finally calculate the midpoints of the gaps, that we will use to cut the image up.
row_cutpoints = (row_gaps[:,0] + row_gaps[:,1] - 1) // 2  # integer division, so the cutpoints can be used as slice indices
You end up with something like this situation (gaps are pink, cutpoints red):
Next step would be to process each identified line.
bounding_boxes = []
for n, (start, end) in enumerate(zip(row_cutpoints, row_cutpoints[1:])):
    line = img[start:end]
    line_gray_inverted = img_gray_inverted[start:end]
Calculate the vertical projection (average intensity per column), find the gaps and cutpoints. Additionally, calculate gap sizes, to allow filtering out the small gaps between individual letters.
column_means = cv2.reduce(line_gray_inverted, 0, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()
column_gaps = zero_runs(column_means)
column_gap_sizes = column_gaps[:,1] - column_gaps[:,0]
column_cutpoints = (column_gaps[:,0] + column_gaps[:,1] - 1) // 2
Filter the cutpoints.
filtered_cutpoints = column_cutpoints[column_gap_sizes > 5]
And create a list of bounding boxes for each segment.
for xstart, xend in zip(filtered_cutpoints, filtered_cutpoints[1:]):
    bounding_boxes.append(((xstart, start), (xend, end)))
Now you end up with something like this (again gaps are pink, cutpoints red):
Now you can cut up the image. I'll just visualize the bounding boxes found:
The full script:
import cv2
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec


def plot_horizontal_projection(file_name, img, projection):
    fig = plt.figure(1, figsize=(12, 16))
    gs = gridspec.GridSpec(1, 2, width_ratios=[3, 1])

    ax = plt.subplot(gs[0])
    im = ax.imshow(img, interpolation='nearest', aspect='auto')
    ax.grid(which='major', alpha=0.5)

    ax = plt.subplot(gs[1])
    ax.plot(projection, np.arange(img.shape[0]), 'm')
    ax.grid(which='major', alpha=0.5)
    plt.xlim([0.0, 255.0])
    plt.ylim([-0.5, img.shape[0] - 0.5])
    ax.invert_yaxis()

    fig.suptitle("FOO", fontsize=16)
    gs.tight_layout(fig, rect=[0, 0.03, 1, 0.97])

    fig.set_dpi(200)
    fig.savefig(file_name, bbox_inches='tight', dpi=fig.dpi)
    plt.clf()


def plot_vertical_projection(file_name, img, projection):
    fig = plt.figure(2, figsize=(12, 4))
    gs = gridspec.GridSpec(2, 1, height_ratios=[1, 5])

    ax = plt.subplot(gs[0])
    im = ax.imshow(img, interpolation='nearest', aspect='auto')
    ax.grid(which='major', alpha=0.5)

    ax = plt.subplot(gs[1])
    ax.plot(np.arange(img.shape[1]), projection, 'm')
    ax.grid(which='major', alpha=0.5)
    plt.xlim([-0.5, img.shape[1] - 0.5])
    plt.ylim([0.0, 255.0])

    fig.suptitle("FOO", fontsize=16)
    gs.tight_layout(fig, rect=[0, 0.03, 1, 0.97])

    fig.set_dpi(200)
    fig.savefig(file_name, bbox_inches='tight', dpi=fig.dpi)
    plt.clf()


def visualize_hp(file_name, img, row_means, row_cutpoints):
    row_highlight = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    row_highlight[row_means == 0, :, :] = [255, 191, 191]
    row_highlight[row_cutpoints, :, :] = [255, 0, 0]
    plot_horizontal_projection(file_name, row_highlight, row_means)


def visualize_vp(file_name, img, column_means, column_cutpoints):
    col_highlight = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    col_highlight[:, column_means == 0, :] = [255, 191, 191]
    col_highlight[:, column_cutpoints, :] = [255, 0, 0]
    plot_vertical_projection(file_name, col_highlight, column_means)


# From https://stackoverflow.com/a/24892274/3962537
def zero_runs(a):
    # Create an array that is 1 where a is 0, and pad each end with an extra 0.
    iszero = np.concatenate(([0], np.equal(a, 0).view(np.int8), [0]))
    absdiff = np.abs(np.diff(iszero))
    # Runs start and end where absdiff is 1.
    ranges = np.where(absdiff == 1)[0].reshape(-1, 2)
    return ranges


img = cv2.imread('article.png', cv2.IMREAD_COLOR)
img_gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
img_gray_inverted = 255 - img_gray

row_means = cv2.reduce(img_gray_inverted, 1, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()
row_gaps = zero_runs(row_means)
row_cutpoints = (row_gaps[:, 0] + row_gaps[:, 1] - 1) // 2  # integer division for slice indices

visualize_hp("article_hp.png", img, row_means, row_cutpoints)

bounding_boxes = []
for n, (start, end) in enumerate(zip(row_cutpoints, row_cutpoints[1:])):
    line = img[start:end]
    line_gray_inverted = img_gray_inverted[start:end]

    column_means = cv2.reduce(line_gray_inverted, 0, cv2.REDUCE_AVG, dtype=cv2.CV_32F).flatten()
    column_gaps = zero_runs(column_means)
    column_gap_sizes = column_gaps[:, 1] - column_gaps[:, 0]
    column_cutpoints = (column_gaps[:, 0] + column_gaps[:, 1] - 1) // 2

    filtered_cutpoints = column_cutpoints[column_gap_sizes > 5]

    for xstart, xend in zip(filtered_cutpoints, filtered_cutpoints[1:]):
        bounding_boxes.append(((xstart, start), (xend, end)))

    visualize_vp("article_vp_%02d.png" % n, line, column_means, filtered_cutpoints)

result = img.copy()
for bounding_box in bounding_boxes:
    cv2.rectangle(result, bounding_box[0], bounding_box[1], (255, 0, 0), 2)
cv2.imwrite("article_boxes.png", result)
The image is top quality: perfectly clean, not skewed, with well-separated characters. A dream!
First perform binarization and blob detection (standard in OpenCV).
Then cluster the characters by grouping those with an overlap in the ordinates (i.e. facing each other in a row). This will naturally isolate the individual lines.
Now in every row, sort the blobs left-to-right and cluster by proximity to isolate the words. This will be a delicate step, because the spacing of characters within a word is close to the spacing between distinct words. Don't expect perfect results. This should work better than a projection.
The situation is worse with italics, where the horizontal spacing is even narrower. You may also have to look at the "slanted distance", i.e. find the lines tangent to the characters in the direction of the italics. This can be achieved by applying a reverse shear transform.
Thanks to the grid, the graphs will appear as big blobs.
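The row-clustering step described above can be sketched in a few lines of Python. The (x, y, w, h) boxes would come from your blob detector (e.g. cv2.connectedComponentsWithStats after binarization); the `group_into_lines` helper and the 0.5 overlap ratio are assumptions to tune:

```python
def group_into_lines(boxes, overlap_ratio=0.5):
    """Group (x, y, w, h) blob boxes whose vertical extents overlap into lines."""
    lines = []
    for box in sorted(boxes, key=lambda b: b[1]):      # process by top edge
        x, y, w, h = box
        for line in lines:
            top = min(b[1] for b in line)
            bottom = max(b[1] + b[3] for b in line)
            overlap = min(y + h, bottom) - max(y, top)
            # Enough vertical overlap with this line's extent -> same row
            if overlap > overlap_ratio * min(h, bottom - top):
                line.append(box)
                break
        else:
            lines.append([box])                        # start a new line
    return [sorted(line) for line in lines]            # left-to-right per line

# Two blobs sharing a row, one blob on the next row
boxes = [(10, 10, 20, 20), (50, 12, 20, 16), (10, 60, 20, 20)]
print(group_into_lines(boxes))
```

Within each returned line you would then cluster by horizontal proximity to isolate words, as described above.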

Parsing robocopy log file to PSCustomObject

I'm trying to create a PSCustomObject from a robocopy log file. The first piece is pretty easy, but I'm struggling with the $Footer part: I can't seem to find a good way to split up the values.
It would be nice if every entry had its own property, so it's possible to use, for example, $Total.Dirs or $Skipped.Dirs. I was thinking about Import-Csv, because it's great at handling column headers, but that doesn't seem to fit here. There's another solution I found here, but it seems a bit of overkill.
Code:
Function ConvertFrom-RobocopyLog {
    Param (
        [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true,Position=0)]
        [String]$LogFile
    )
    Process {
        $Header = Get-Content $LogFile | select -First 10
        $Footer = Get-Content $LogFile | select -Last 7

        $Header | ForEach-Object {
            if ($_ -like "*Source*") {$Source = (($_.Split(':'))[1]).trim()}
            if ($_ -like "*Dest*") {$Destination = (($_.Split(':'))[1]).trim()}
        }
        $Footer | ForEach-Object {
            if ($_ -like "*Dirs*") {$Dirs = (($_.Split(':'))[1]).trim()}
            if ($_ -like "*Files*") {$Files = (($_.Split(':'))[1]).trim()}
            if ($_ -like "*Times*") {$Times = (($_.Split(':'))[1]).trim()}
        }
        $Obj = [PSCustomObject]@{
            'Source'      = $Source
            'Destination' = $Destination
            'Dirs'        = $Dirs
            'Files'       = $Files
            'Times'       = $Times
        }
        Write-Output $Obj
    }
}
Log file:
-------------------------------------------------------------------------------
ROBOCOPY :: Robust File Copy for Windows
-------------------------------------------------------------------------------
Started : Wed Apr 01 14:28:11 2015
Source : \\SHARE\Source\
Dest : \\SHARE\Target\
Files : *.*
Options : *.* /S /E /COPY:DAT /PURGE /MIR /Z /NP /R:3 /W:3
------------------------------------------------------------------------------
0 Files...
0 More Folders and files...
------------------------------------------------------------------------------
Total Copied Skipped Mismatch FAILED Extras
Dirs : 2 0 2 0 0 0
Files : 203 0 203 0 0 0
Bytes : 0 0 0 0 0 0
Times : 0:00:00 0:00:00 0:00:00 0:00:00
Ended : Wed Apr 01 14:28:12 2015
Thank you for your help.
You can clean this up more but this is the basic approach I would take.
Function ConvertFrom-RobocopyLog {
    Param (
        [Parameter(Mandatory=$true,ValueFromPipelineByPropertyName=$true,Position=0)]
        [String]$LogFile
    )
    Process {
        $Header = Get-Content $LogFile | select -First 10
        $Footer = Get-Content $LogFile | select -Last 7

        $Header | ForEach-Object {
            if ($_ -like "*Source*") {$Source = (($_.Split(':'))[1]).trim()}
            if ($_ -like "*Dest*") {$Destination = (($_.Split(':'))[1]).trim()}
        }
        $Footer | ForEach-Object {
            if ($_ -like "*Dirs*") {
                $lineAsArray = (($_.Split(':')[1]).trim()) -split '\s+'
                $Dirs = [pscustomobject][ordered]@{
                    Total    = $lineAsArray[0]
                    Copied   = $lineAsArray[1]
                    Skipped  = $lineAsArray[2]
                    Mismatch = $lineAsArray[3]
                    FAILED   = $lineAsArray[4]
                    Extras   = $lineAsArray[5]
                }
            }
            if ($_ -like "*Files*") {
                $lineAsArray = ($_.Split(':')[1]).trim() -split '\s+'
                $Files = [pscustomobject][ordered]@{
                    Total    = $lineAsArray[0]
                    Copied   = $lineAsArray[1]
                    Skipped  = $lineAsArray[2]
                    Mismatch = $lineAsArray[3]
                    FAILED   = $lineAsArray[4]
                    Extras   = $lineAsArray[5]
                }
            }
            if ($_ -like "*Times*") {
                $lineAsArray = ($_.Split(':',2)[1]).trim() -split '\s+'
                $Times = [pscustomobject][ordered]@{
                    Total  = $lineAsArray[0]
                    Copied = $lineAsArray[1]
                    FAILED = $lineAsArray[2]
                    Extras = $lineAsArray[3]
                }
            }
        }
        $Obj = [PSCustomObject]@{
            'Source'      = $Source
            'Destination' = $Destination
            'Dirs'        = $Dirs
            'Files'       = $Files
            'Times'       = $Times
        }
        Write-Output $Obj
    }
}
I wanted to make a function to parse the footer lines, but $Times is a special case since it does not have all the same columns of data.
With $Times the important difference is how we do the split. Since the string contains more than one colon, we need to account for that. Using the second parameter of .Split(), we specify the number of elements to return.
$_.Split(':',2)[1]
Since these logs always have output and no blank row elements, we can assume that the parsed $lineAsArray will always have 6 elements.
Sample Output
Source : \\SHARE\Source\
Destination : \\SHARE\Target\
Dirs : @{Total=2; Copied=0; Skipped=2; Mismatch=0; FAILED=0; Extras=0}
Files : @{Total=203; Copied=0; Skipped=203; Mismatch=0; FAILED=0; Extras=0}
Times : @{Total=0:00:00; Copied=0:00:00; FAILED=0:00:00; Extras=0:00:00}
So if you wanted the total files copied you can now use dot notation.
(ConvertFrom-RobocopyLog C:\temp\log.log).Files.Total
203
It's not entirely clear what you want to do, but this should go some way toward showing how to get the stats into an array of objects:
$statsOut = @()
$stats = Get-Content $LogFile | select -Last 6 | select -First 4
$stats | % {
    $s = $_ -split "\s+"
    $o = New-Object -Type PSCustomObject -Property @{"Name"=$s[0];"Total"=$s[2];"Copied"=$s[3];"Skipped"=$s[4];"mismatch"=$s[5]}
    $statsOut += ,$o
}
Gives:
[PS] > $statsOut | ft -Auto
mismatch Name Skipped Total Copied
-------- ---- ------- ----- ------
0 Dirs 2 2 0
0 Files 203 203 0
0 Bytes 0 0 0

Doctrine 2 ManyToOne with multiple joinColumns

I'm trying to select the matching row in the product_item_sortorder table based on a productId and toolboxItemId from the product_item table.
In normal SQL that would be for a given productId:
SELECT pi.*, pis.* FROM product_item pi
LEFT JOIN product_item_sortorder pis
ON pi.productId = pis.productId
AND pi.toolboxItemId = pis.toolboxItemId
WHERE pi.productId = 6
I wrote the DQL for it as followed:
$this->_em->createQuery(
'SELECT pi
FROM Entities\ProductItem pi
LEFT JOIN pi.sequence s
WHERE pi.product = ?1'
);
Then I get the following SQL if I output $query->getSQL():
SELECT p0_.id AS id0, p0_.productId AS productId1, p0_.priceGroupId AS priceGroupId2, p0_.toolboxItemId AS toolboxItemId3, p0_.levelId AS levelId4, p0_.parentId AS parentId5, p0_.productId AS productId6, p0_.toolboxItemId AS toolboxItemId7 FROM product_item p0_ LEFT JOIN product_item_sortorder p1_ ON p0_.productId = p1_. AND p0_.toolboxItemId = p1_. WHERE p0_.productId = ? ORDER BY p0_.id ASC
As you can see the referencedColumnNames are not found:
LEFT JOIN product_item_sortorder p1_ ON p0_.productId = p1_. AND p0_.toolboxItemId = p1_.
Details of the product_item table:
+-----+-----------+---------------+
| id | productId | toolboxItemId |
+-----+-----------+---------------+
| 467 | 1 | 3 |
| 468 | 1 | 10 |
| 469 | 1 | 20 |
| 470 | 1 | 4 |
| 471 | 1 | 10 |
+-----+-----------+---------------+
Details of the product_item_sortorder table:
+-----+-----------+---------------+----------+
| id | productId | toolboxItemId | sequence |
+-----+-----------+---------------+----------+
| 452 | 1 | 3 | 1 |
| 457 | 1 | 4 | 6 |
| 474 | 1 | 20 | 4 |
+-----+-----------+---------------+----------+
ProductItem Entity
<?php
/**
 * @Entity(repositoryClass="Repositories\ProductItem")
 * @Table(name="product_item")
 */
class ProductItem
{
    ...
    /**
     * @ManyToOne(targetEntity="ProductItemSortorder")
     * @JoinColumns({
     *   @JoinColumn(name="productId", referencedColumnName="productId"),
     *   @JoinColumn(name="toolboxItemId", referencedColumnName="toolboxItemId")
     * })
     */
    protected $sequence;
    ...
}
?>
ProductItemSortOrder Entity
<?php
/**
 * @Entity(repositoryClass="Repositories\ProductItemSortorder")
 * @Table(name="product_item_sortorder")
 */
class ProductItemSortorder
{
    ...
    /**
     * @ManyToOne(targetEntity="Product")
     * @JoinColumn(name="productId", referencedColumnName="id")
     */
    protected $product;

    /**
     * @ManyToOne(targetEntity="ToolboxItem")
     * @JoinColumn(name="toolboxItemId", referencedColumnName="id")
     */
    protected $toolboxItem;
    ...
}
?>
Your mappings are seriously wrong: you are using ManyToOne on both ends, which isn't possible. Both associations are defined as the "owning" side, with no mapped-by or inversed-by (see the Association Mappings chapter). And you are using the join columns of one association to map to multiple fields in another entity. I suppose you want to do something else; can you describe your exact use case?
How you would map your example in YAML (since @Hernan Rajchert's example uses only annotations):
ProductItem:
  type: entity
  manyToOne:
    sequence:
      targetEntity: ProductItemSortorder
      joinColumns:
        productId:
          referencedColumnName: productId
        toolboxItemId:
          referencedColumnName: toolboxItemId
