I want to parse a file in Scala (probably using JavaTokenParsers?). Possibly without using too many vars :-)
The file is the input for a ray tracer.
It is a line-based file format.
Three types of lines exist: empty lines, comment lines and command lines.
A comment line starts with # (possibly with some whitespace before the #).
A command line starts with an identifier, optionally followed by a number of parameters (floats or filenames).
How would I go about this? I would want the parser to be called like this:
val scene = parseAll(sceneFile, file);
Sample file:
#Cornell Box
size 640 480
camera 0 0 1 0 0 -1 0 1 0 45
output scene6.png
maxdepth 5
maxverts 12
#planar face
vertex -1 +1 0
vertex -1 -1 0
vertex +1 -1 0
vertex +1 +1 0
#cube
vertex -1 +1 +1
vertex +1 +1 +1
vertex -1 -1 +1
vertex +1 -1 +1
vertex -1 +1 -1
vertex +1 +1 -1
vertex -1 -1 -1
vertex +1 -1 -1
ambient 0 0 0
specular 0 0 0
shininess 1
emission 0 0 0
diffuse 0 0 0
attenuation 1 0.1 0.05
point 0 0.44 -1.5 0.8 0.8 0.8
directional 0 1 -1 0.2 0.2 0.2
diffuse 0 0 1
#sphere 0 0.8 -1.5 0.1
pushTransform
#red
pushTransform
translate 0 0 -3
rotate 0 1 0 60
scale 10 10 1
diffuse 1 0 0
tri 0 1 2
tri 0 2 3
popTransform
#green
pushTransform
translate 0 0 -3
rotate 0 1 0 -60
scale 10 10 1
diffuse 0 1 0
tri 0 1 2
tri 0 2 3
popTransform
#back
pushTransform
scale 10 10 1
translate 0 0 -2
diffuse 1 1 1
tri 0 1 2
tri 0 2 3
popTransform
#sphere
diffuse 0.7 0.5 0.2
specular 0.2 0.2 0.2
pushTransform
translate 0 -0.7 -1.5
scale 0.1 0.1 0.1
sphere 0 0 0 1
popTransform
#cube
diffuse 0.5 0.7 0.2
specular 0.2 0.2 0.2
pushTransform
translate -0.25 -0.4 -1.8
rotate 0 1 0 15
scale 0.25 0.4 0.2
diffuse 1 1 1
tri 4 6 5
tri 6 7 5
tri 4 5 8
tri 5 9 8
tri 7 9 5
tri 7 11 9
tri 4 8 10
tri 4 10 6
tri 6 10 11
tri 6 11 7
tri 10 8 9
tri 10 9 11
popTransform
popTransform
popTransform
Maybe I've pushed too hard for a one-liner, but that's my take (although idiomatic, it might not be optimal):
First, CommandParams represents a command along with its arguments as a list. If there are no arguments, params is None:
case class CommandParams(command: String, params: Option[List[String]])
Then here's the file parsing and construction one-liner, along with a line-by-line explanation:
import scala.io.Source

val fileToDataStructure = Source.fromFile("file.txt").getLines() // open the file and get a lines iterator
  .map(_.trim)                       // drop surrounding whitespace (comments may be indented)
  .filter(_.nonEmpty)                // exclude empty lines
  .filter(!_.startsWith("#"))        // exclude comments
  .foldLeft(List[CommandParams]()) { // iterate and accumulate a list of CommandParams
    (listCmds: List[CommandParams], line: String) => // the list built so far and the current line
      val arr = line.split("\\s+")   // split the line on runs of whitespace
      val command = arr.head         // the first element of the array is the command
      val args = if (arr.tail.isEmpty) None else Option(arr.tail.toList) // the rest are its parameters
      CommandParams(command, args) :: listCmds // construct the object and cons it onto the list
  }
  .reverse // cons builds the list back to front, so reverse to preserve order
A demo output iterating through it:
fileToDataStructure.foreach(println)
yields:
CommandParams(size,Some(List(640, 480)))
CommandParams(camera,Some(List(0, 0, 1, 0, 0, -1, 0, 1, 0, 45)))
CommandParams(output,Some(List(scene6.png)))
CommandParams(maxdepth,Some(List(5)))
CommandParams(maxverts,Some(List(12)))
CommandParams(vertex,Some(List(-1, +1, 0)))
...
CommandParams(pushTransform,None)
CommandParams(pushTransform,None)
CommandParams(translate,Some(List(0, 0, -3)))
...
A demo of how to iterate through it to do actual work once loaded:
fileToDataStructure.foreach {
  case CommandParams(cmd, None)       => println(s"I'm a ${cmd} with no args")
  case CommandParams(cmd, Some(args)) => println(s"I'm a ${cmd} with args: ${args.mkString(",")}")
}
yields output:
I'm a size with args: 640,480
I'm a camera with args: 0,0,1,0,0,-1,0,1,0,45
I'm a output with args: scene6.png
I'm a maxdepth with args: 5
I'm a maxverts with args: 12
I'm a vertex with args: -1,+1,0
...
I'm a popTransform with no args
I'm a popTransform with no args
For example, I want to stitch two images. First image:
1 1 1
1 1 1
1 1 1
Second image:
2 2 2 2
2 2 2 2
2 2 2 2
and what I want:
0 0 0 2 2 2 2
1 1 1 2 2 2 2
1 1 1 2 2 2 2
1 1 1 0 0 0 0
or
1 1 1 0 0 0 0
1 1 1 2 2 2 2
1 1 1 2 2 2 2
0 0 0 2 2 2 2
In Python, that is easy to do, like:
temp_panorama = np.zeros((1's height+abs(2's upper part length), 1's width+2's width))
temp_panorama[(2's upper part length) : 1's height, 0 : 1's width] = img1[:]
temp_panorama[0 : 2's height, 1's width +1 :] = img2[:, :]
but how can I implement the same thing with OpenCV in C++?
Use subimages:
// ROI where the first image will be placed
cv::Rect firstROI = cv::Rect(x1, y1, first.cols, first.rows);
// ROI where the second image will be placed
cv::Rect secondROI = cv::Rect(x2, y2, second.cols, second.rows);
// create an image big enough to hold the result
cv::Mat canvas = cv::Mat::zeros(cv::Size(std::max(x1 + first.cols, x2 + second.cols), std::max(y1 + first.rows, y2 + second.rows)), first.type());
// copy each image into its subimage (ROI) of the canvas
first.copyTo(canvas(firstROI));
second.copyTo(canvas(secondROI));
in your example:
x1 = 0,
y1 = 1,
x2 = 3,
y2 = 0
first.cols == 3
first.rows == 3
second.cols == 4
second.rows == 3
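For reference, the same placement can be written directly in Python/numpy as well; a minimal sketch using the example's 3x3 and 3x4 images and the offsets above:

```python
import numpy as np

# the example's single-channel images: 3x3 of ones, 3x4 of twos
img1 = np.full((3, 3), 1)
img2 = np.full((3, 4), 2)

x1, y1 = 0, 1   # where the first image goes on the canvas
x2, y2 = 3, 0   # where the second image goes

# canvas big enough to hold both images
canvas = np.zeros((max(y1 + img1.shape[0], y2 + img2.shape[0]),
                   max(x1 + img1.shape[1], x2 + img2.shape[1])), dtype=img1.dtype)

# place each image into its region of interest
canvas[y1:y1 + img1.shape[0], x1:x1 + img1.shape[1]] = img1
canvas[y2:y2 + img2.shape[0], x2:x2 + img2.shape[1]] = img2

print(canvas)
```

This prints the first desired layout from the question (a 4x7 canvas with the 1-block at rows 1-3, columns 0-2 and the 2-block at rows 0-2, columns 3-6).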
I'm developing a PDF parser and I want to print the text content of the PDF on a coordinate plane. Below are the text objects and matrices used to render text. How can I isolate the scaling, rotation and translation and use them to print the text content at exact coordinates on a canvas?
//Decoded text stream containing text objects
S
Q
q
0.000 0.750 0.750 -0.000 15.000 301.890 cm
0.000 g
/F10 16.000 Tf
0 Tr
0.000 Tc
BT
1 0 0 -1 20.000 13.600 Tm
[<007a>]TJ
ET
Q
q
0.000 0.750 0.750 -0.000 15.000 301.890 cm
1.000 0.416 0.000 rg
/F10 6.667 Tf
0 Tr
0.000 Tc
BT
1 0 0 -1 136.667 13.600 Tm
[<0024>12<0046><0046><0058><0055>6<0048><0003><0032><0058><0057><0053><0058><0057><0003><0036>-4<0052><004f><0058><0057><004c><0052><0051><0003><0026>3<004f><0052><0058><0047><0003><0048><0051>18<0059><004c><0055>6<0052><0051><0050><0048><0051>3<0057>7<000f><0003><0027><0028><0030><0032><0003><0044><0046><0046><0058><0055>6<0048>]TJ
ET
Q
q
0.000 0.750 0.750 -0.000 15.000 301.890 cm
0.000 g
/F10 16.000 Tf
0 Tr
0.000 Tc
BT
1 0 0 -1 603.333 13.600 Tm
[<007a>]TJ
ET
Q
q
The initial S Q is a leftover from a previous instruction block ending in some path stroking and graphics-state restoring. As we don't know anything to the contrary, let's assume that the Q restores the initial graphics state, in particular an unmodified current transformation matrix (CTM).
As we are interested in coordinates in the default user space coordinate system, we can accordingly assume that the CTM is the identity matrix.
Let's take a look at the block
q
0.000 0.750 0.750 -0.000 15.000 301.890 cm
0.000 g
/F10 16.000 Tf
0 Tr
0.000 Tc
BT
1 0 0 -1 20.000 13.600 Tm
[<007a>]TJ
ET
Q
As you implied yourself in a comment, the only instructions relevant to the total transformation matrix at the time the text rendering instruction [<007a>]TJ is executed are
0.000 0.750 0.750 -0.000 15.000 301.890 cm
and
1 0 0 -1 20.000 13.600 Tm
setting the current transformation matrix to
 0     0.75   0       1 0 0      0     0.75   0
 0.75  0      0   *   0 1 0  =   0.75  0      0
15.00 301.89  1       0 0 1     15.00 301.89  1
and the text and text line matrices both to
1 0 0
0 -1 0
20.0 13.6 1
Thus, the effects of text matrix and current transformation matrix combine to:
 1    0     0       0     0.75   0       0     0.75   0
 0   -1     0   *   0.75  0      0   =  -0.75  0      0
20.0 13.6   1      15.00 301.89  1      25.2  316.89  1
You can split up that combined matrix in a scaling, rotation, and translation like this:
 0     0.75   0      0.75  0     0       0  1  0      1     0      0
-0.75  0      0  =   0     0.75  0   *  -1  0  0  *   0     1      0
25.2  316.89  1      0     0     1       0  0  1     25.2  316.89  1
We have a scaling by .75, a rotation by 90° counterclockwise, and a translation by (25.2, 316.89).
(Of course this can still be subject to a page rotation...)
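As a numerical sanity check, the multiplication and the decomposition above can be verified with a small numpy sketch (matrices written in the same row-vector convention as above):

```python
import numpy as np

cm = np.array([[0.0, 0.75, 0], [0.75, 0, 0], [15.0, 301.89, 1]])  # from the cm instruction
tm = np.array([[1.0, 0, 0], [0, -1, 0], [20.0, 13.6, 1]])         # from the Tm instruction

# text matrix times CTM (row-vector convention)
combined = tm @ cm

scale = np.diag([0.75, 0.75, 1.0])                                 # scaling by .75
rotate = np.array([[0.0, 1, 0], [-1, 0, 0], [0, 0, 1]])            # 90 deg counterclockwise
translate = np.array([[1.0, 0, 0], [0, 1, 0], [25.2, 316.89, 1]])  # translation by (25.2, 316.89)

print(np.allclose(scale @ rotate @ translate, combined))  # True
```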
I'm implementing convolutions using Radix-2 Cooley-Tukey FFT/inverse FFT, and my output is correct but shifted upon completion.
My solution is to zero-pad both the input and the kernel to 2^m for the smallest possible m, transform both using the FFT, multiply the two element-wise, and transform the result back using the inverse FFT.
As an example on the resulting problem:
0 1 2 3 0 0 0 0
4 5 6 7 0 0 0 0
8 9 10 11 0 0 0 0
12 13 14 15 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
with identity kernel
0 0 0 0
0 1 0 0
0 0 0 0
0 0 0 0
becomes
0 0 0 0 0 0 0 0
0 0 1 2 3 0 0 0
0 4 5 6 7 0 0 0
0 8 9 10 11 0 0 0
0 12 13 14 15 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
It seems any size of input and kernel produces the same shift (1 row and 1 column), but I could be wrong. I've performed the same computations using an online calculator and get the same results, so it's probably me missing some fundamental knowledge. My available literature has not helped. So my question: why does this happen?
So I ended up finding out myself why this happens. The answer follows from the definition of the convolution and the indexing it uses. By definition, the convolution of s and k is given by
(s*k)(x) = sum(s(j) k(x-j), j = -inf..inf)
The center of the kernel is not "known" by this formula; it is an abstraction we make. Define c as the center of the kernel. When x-j = c in the sum, s(j) is s(x-c), so the interesting product s(x-c)k(c) ends up at index x. In other words, the output is shifted to the right by c.
FFT fast convolution does a circular convolution. If you zero pad so that both the data and kernel are circularly centered around (0,0) in the same size NxN arrays, the result will also stay centered. Otherwise any offsets will add.
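Both effects can be demonstrated with numpy's FFT; a sketch using the 4x4 example above, where a kernel whose single tap sits at index (1,1) shifts the circular result by one row and one column, and rolling the kernel so the tap is circularly centered at (0,0) removes the shift:

```python
import numpy as np

data = np.zeros((8, 8))
data[:4, :4] = np.arange(16).reshape(4, 4)   # the zero-padded 4x4 input

kernel = np.zeros((8, 8))
kernel[1, 1] = 1                             # identity tap at (1,1), as in the question

# circular convolution via FFT: result comes out shifted by (1,1)
shifted = np.real(np.fft.ifft2(np.fft.fft2(data) * np.fft.fft2(kernel)))

# roll the kernel so the tap is circularly centered at (0,0): the shift disappears
centered = np.roll(kernel, (-1, -1), axis=(0, 1))
fixed = np.real(np.fft.ifft2(np.fft.fft2(data) * np.fft.fft2(centered)))

print(np.allclose(shifted, np.roll(data, (1, 1), axis=(0, 1))))  # True
print(np.allclose(fixed, data))                                   # True
```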
When I try to encode a video, the encoder crashes after finishing the first GOP.
This is the configuration I'm using:
MaxCUWidth : 16 # Maximum coding unit width in pixel
MaxCUHeight : 16 # Maximum coding unit height in pixel
MaxPartitionDepth : 2 # Maximum coding unit depth
QuadtreeTULog2MaxSize : 3 # Log2 of maximum transform size for
# quadtree-based TU coding (2...5) = MaxPartitionDepth + 2 - 1
QuadtreeTULog2MinSize : 2 # Log2 of minimum transform size for
# quadtree-based TU coding (2...5)
QuadtreeTUMaxDepthInter : 1
QuadtreeTUMaxDepthIntra : 1
#======== Coding Structure =============
IntraPeriod : 8 # Period of I-Frame ( -1 = only first)
DecodingRefreshType : 1 # Random Accesss 0:none, 1:CDR, 2:IDR
GOPSize : 4 # GOP Size (number of B slice = GOPSize-1)
# Type POC QPoffset QPfactor tcOffsetDiv2 betaOffsetDiv2 temporal_id #ref_pics_active #ref_pics reference pictures predict deltaRPS #ref_idcs reference idcs
Frame1: P 4 1 0.5 0 0 0 1 1 -4 0
Frame2: B 2 2 0.5 1 0 1 1 2 -2 2 1 2 2 1 1
Frame3: B 1 3 0.5 2 0 2 1 3 -1 1 3 1 1 3 1 1 1
Frame4: B 3 3 0.5 2 0 2 1 2 -1 1 1 -2 4 0 1 1 0
This also happens with CU=16x16 with depth=1
Note: I encoded CU=64x64 with depth=4 with the same GOP configuration and everything went fine.
This is most probably because you have compiled the binary for a 32-bit system.
Rebuild it for a 64-bit system and the problem will go away.
I am trying to parse some data using C.
The data is of the form:
REMARK 280 100 MM MES PH 6.5, 5 % GLYCEROL
REMARK 290
REMARK 290 CRYSTALLOGRAPHIC SYMMETRY
REMARK 290 SYMMETRY OPERATORS FOR SPACE GROUP: P 1 21 1
REMARK 290
REMARK 290 SYMOP SYMMETRY
REMARK 290 NNNMMM OPERATOR
REMARK 290 1555 X,Y,Z
REMARK 290 2555 -X,Y+1/2,-Z
I want to extract the "Symmetry Operator" data: X,Y,Z and -X,Y+1/2,-Z and turn the data into two matrices for each set of symmetry operators of the form:
[ 1 0 0 ]   [ 0 ]        [ -1  0  0 ]   [  0  ]
[ 0 1 0 ]   [ 0 ]   and  [  0  1  0 ]   [ 1/2 ]
[ 0 0 1 ]   [ 0 ]        [  0  0 -1 ]   [  0  ]
for X,Y,Z, and -X,Y+1/2,-Z respectively.
I have not done much data parsing and would appreciate any help anyone could offer.
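The question asks for C, but the mapping itself is small, so here is a sketch of the parsing logic in Python (the function name and regular expressions are my own illustration, not from any existing library): each comma-separated term of the operator contributes one row, signed X/Y/Z letters fill the rotation matrix, and a bare fraction fills the translation vector.

```python
import re
from fractions import Fraction

def parse_symop(op):
    """Parse a symmetry operator like '-X,Y+1/2,-Z' into a
    3x3 rotation matrix and a length-3 translation vector."""
    rot = [[0] * 3 for _ in range(3)]
    trans = [Fraction(0)] * 3
    axes = {'X': 0, 'Y': 1, 'Z': 2}
    for row, term in enumerate(op.split(',')):
        term = term.strip().upper()
        # signed axis letters, e.g. '-X' or '+Z'
        for sign, axis in re.findall(r'([+-]?)([XYZ])', term):
            rot[row][axes[axis]] = -1 if sign == '-' else 1
        # a fractional translation term, e.g. '+1/2'
        m = re.search(r'([+-]?\d+(?:/\d+)?)(?![XYZ\d/])', term)
        if m:
            trans[row] = Fraction(m.group(1))
    return rot, trans

print(parse_symop('-X,Y+1/2,-Z'))
# ([[-1, 0, 0], [0, 1, 0], [0, 0, -1]], [Fraction(0, 1), Fraction(1, 2), Fraction(0, 1)])
```

The same row-by-row term scan translates directly to C with strtok and sscanf.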