Apply hash pattern to polygon in OpenLayers 3

I have been trying to figure out how to apply an advanced fill style to polygons in OpenLayers 3. I would like to reproduce the following OL2 style with OL3:
http://dev.openlayers.org/sandbox/ossipoff/openlayers/examples/graphicfill.html
The OL2 solution uses the SLD format, which does not seem to be implemented in OL3.
I have found a great article from Boundless Geo discussing the geometry option of ol.style.Style, which allows advanced styling. This option is great, but applying a hash pattern to a polygon with this technique would heavily impact performance.
http://boundlessgeo.com/2015/04/geometry-based-styling-openlayers-3/
Any suggestions?
Thanks!

This is not yet supported, but see https://github.com/openlayers/ol3/issues/2208 for a proposal.

Related

Is the OpenCV API cv::cuda::cvtColor() feasible to extend myself to support UYVY to RGB conversion?

I've found that cv::cuda::cvtColor() doesn't support all the color spaces; it only supports roughly half of those supported by cv::cvtColor().
cv::cvtColor(raw_mat, bgr_mat, cv::COLOR_YUV2BGR_UYVY);
This first call works well, but
cv::cuda::cvtColor(gpu_raw_mat, gpu_bgr_mat, cv::COLOR_YUV2BGR_UYVY);
this one does not, because cv::cuda::cvtColor() does not include a conversion function for cv::COLOR_YUV2BGR_UYVY.
So I looked at the functions for other color spaces, such as cv::COLOR_YUV2BGR, which is implemented by YUV_to_BGR(). From reading that function, I think I could implement one for cv::COLOR_YUV2BGR_UYVY myself; I would guess it would be similar to YUV_to_BGR().
Can I easily implement it?
Do you have any information I could study to do this? If it is possible, I would like to implement one and contribute the new API.
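For what it's worth, the per-pixel arithmetic such a conversion needs is straightforward. Below is a pure-Python sketch of UYVY-to-BGR unpacking using the common BT.601 integer approximation (the function name and exact coefficients are my own choices; OpenCV's implementation may round slightly differently). A CUDA kernel would apply the same arithmetic once per pixel pair:

```python
def uyvy_to_bgr(data, width, height):
    """Convert a packed UYVY byte buffer to a list of (B, G, R) tuples.

    UYVY stores two pixels in four bytes, U0 Y0 V0 Y1, with one chroma
    pair (U, V) shared by both luma samples.  Coefficients are the
    common BT.601 integer approximation (an assumption, not OpenCV's
    exact constants).
    """
    def clip(v):
        return max(0, min(255, v))

    bgr = []
    for i in range(0, width * height * 2, 4):
        u, y0, v, y1 = data[i], data[i + 1], data[i + 2], data[i + 3]
        d, e = u - 128, v - 128
        for y in (y0, y1):  # both pixels of the pair share (d, e)
            c = y - 16
            b = clip((298 * c + 516 * d + 128) >> 8)
            g = clip((298 * c - 100 * d - 208 * e + 128) >> 8)
            r = clip((298 * c + 409 * e + 128) >> 8)
            bgr.append((b, g, r))
    return bgr
```

Each four-byte group yields two BGR pixels sharing the same chroma pair, which is why the loop advances four bytes at a time; that pairing is the main structural difference from the planar YUV path in YUV_to_BGR().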

Is there a Halide::BoundaryConditions to mimic OpenCV default border type?

The Halide documentation says mirror_image is similar to GL_MIRRORED_REPEAT. I tried to research this, but it doesn't seem as precisely specified as the OpenCV border types.
BORDER_REFLECT_101: gfedcb|abcdefgh|gfedcba (this is the default)
BORDER_REFLECT: fedcba|abcdefgh|hgfedcb
I guess the corners are not strictly defined by this, but I can clearly see what the edges are. The documentation for GL_MIRRORED_REPEAT seems to focus on corner behaviour. Overall, it does not matter for our application, since physical limitations on the targets of interest keep them within the bounds of the field of view. However, when I am writing regression tests, these specifics do matter.
How can I replicate BORDER_REFLECT_101 in Halide? Is it possible with Halide::BoundaryConditions or do I need to implement my own clamping? I can relax the conditions after proving we have replicated behaviour and use Halide::BoundaryConditions::mirror_image.
Bonus: Is Halide::BoundaryConditions more performant than using clamp, or is it just syntactic sugar? Or is it the opposite, and it is better to use clamp?
The boundary conditions are just a convenience; they're implemented here. They should be no more or less performant than writing the same thing yourself, since they're just metaprogramming Exprs (i.e. they aren't compiler intrinsics).
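To make the two reflection rules concrete, here is a pure-Python sketch of the index mappings (illustrative only, not OpenCV's or Halide's actual code). If I read the Halide docs correctly, Halide::BoundaryConditions::mirror_interior should match BORDER_REFLECT_101 and mirror_image should match BORDER_REFLECT, but that is exactly the kind of thing a regression test should verify:

```python
def reflect_101(i, n):
    """BORDER_REFLECT_101 index rule (edge pixel NOT repeated):
    ... gfedcb | abcdefgh | gfedcba ...   Assumes n > 1."""
    while i < 0 or i >= n:
        i = -i if i < 0 else 2 * (n - 1) - i
    return i

def reflect(i, n):
    """BORDER_REFLECT index rule (edge pixel repeated):
    ... fedcba | abcdefgh | hgfedcb ..."""
    while i < 0 or i >= n:
        i = -i - 1 if i < 0 else 2 * n - 1 - i
    return i

row = "abcdefgh"
# Reading six samples to the left of the row reproduces OpenCV's
# documented patterns for each border type.
left_101 = "".join(row[reflect_101(i, len(row))] for i in range(-6, 0))  # "gfedcb"
left_ref = "".join(row[reflect(i, len(row))] for i in range(-6, 0))      # "fedcba"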

Detect table with OpenCV

I often work with scanned papers. The papers contain tables (similar to Excel tables) which I need to type into the computer manually. To make the task worse, the tables can have different numbers of columns. Manually entering them into Excel is mundane, to say the least.
I thought I could save myself a week of work if I wrote a program to OCR them. Would it be possible to detect the header text areas with OpenCV and then OCR the text at the detected image coordinates?
Can I achieve this with the help of OpenCV or do I need entirely different approach?
Edit: the example table is really just a standard table, similar to what you can see in Excel and other spreadsheet applications; see below.
This question is a little old, but I was also working on a similar problem and arrived at my own solution, which I explain here.
When reading text with any OCR engine, there are many challenges to getting good accuracy, the main ones being:
Presence of noise due to poor image quality or unwanted elements/blobs in the background region. This requires some pre-processing, such as noise removal, which can easily be done with a Gaussian or median filter; both are available in OpenCV.
Wrong orientation of the image: with a wrongly oriented image, the OCR engine fails to segment the lines and words correctly, which gives the worst accuracy.
Presence of lines: during word or line segmentation, the OCR engine sometimes tries to merge words and lines together, processing the wrong content and hence giving wrong results.
There are other issues as well, but these are the basic ones.
In this case I think the scanned image is quite good and simple, and the following steps can be used to solve the problem.
Simple image binarization will remove the background content, leaving only the necessary content, as shown here.
Now we have to remove the lines, which in this case form the tabular grid. They can be identified using connected components, by removing the large connected components. The final image to be fed to the OCR engine will then look like this.
For OCR we can use Tesseract Open Source OCR Engine. I got following results from OCR:
Caption title
header! header2 header3
row1cell1 row1cell2 row1cell3
row2cell1 row2cell2 row2cell3
As we can see, the result is quite accurate, but there are some issues, such as
header!, which should be header1; this is because the OCR engine mistook the 1 for a !. This problem can be solved by further processing the result with regex-based operations.
After post-processing, the OCR result can be parsed to read the row and column values.
Also, in this case, font information can be used to classify the sheet title, headings and normal cell values.
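As a concrete sketch of the regex-based post-processing step mentioned above (the confusion table and the token rules here are my own assumptions for illustration, not part of the original pipeline):

```python
import re

# Common Tesseract confusions: '!' or 'l' read in place of '1', 'O'
# in place of '0'.  These are assumed examples; tune per document.
CONFUSIONS = {"!": "1", "l": "1", "O": "0"}

def fix_token(token):
    """Repair a single OCR token using two heuristic rules."""
    # Rule 1: a run of letters ending in a confused character is
    # assumed to be a word ending in a digit, e.g. 'header!' -> 'header1'.
    # Caveat: a real word like 'col' would wrongly become 'co1'.
    if re.fullmatch(r"[A-Za-z]+[!lO]", token):
        return token[:-1] + CONFUSIONS[token[-1]]
    # Rule 2: a token made only of digits and confused characters is
    # assumed to be fully numeric, e.g. '!O1' -> '101'.
    if re.fullmatch(r"[\dlO!]+", token):
        return "".join(CONFUSIONS.get(c, c) for c in token)
    return token

def postprocess(line):
    """Apply the token rules to one line of OCR output."""
    return " ".join(fix_token(t) for t in line.split())
```

Running the header line from the OCR output above through postprocess() would turn header! into header1 while leaving the already-correct tokens untouched.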

Is there a Xamarin.iOS equivalent of Java's GlyphVector or .NET's GraphicsPath?

Using Xamarin.iOS (or just the iOS API), I need to get the outline path of text as rendered in some typeface. The exact outline is needed because I'm going to tessellate the outlines and apply 2D and 3D transformations to them.
In Java, this is straightforward by turning rendered text into a Shape (via GlyphVectors).
In GDI (.NET) this can be done with System.Drawing.GraphicsPath, adding text and getting the path. This is not available in Xamarin.iOS.
Is there a straightforward way to create paths for rendered text in iOS or Xamarin.iOS?
The MonoTouch.CoreText.CTFont.GetPathForGlyph overloads, which return instances of CGPath, are likely what you're looking for. They map to the native CTFontCreatePathForGlyph API (see its documentation for further samples).
You'll need to iterate over your string (for each glyph) and create subpaths, but you should end up with your string as a vector (and be able to further transform them as you need).

What are the ways to draw data structures in LaTeX?

I tried TikZ/PGF a bit but have not had much luck creating a nice diagram to visualize bit fields or byte fields of packed data structures (i.e. in memory). Essentially I want a set of rectangles representing ranges of bits, with labels inside and offsets along the top. There should be a row for each word of the data structure. This is similar to the diagrams in most processor manuals labelling opcode encodings, etc.
Has anyone else tried to do this in LaTeX, or is there a package for it?
I have successfully used the bytefield package for something like this. If it doesn't do exactly what you want, please extend your question with an example.
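For reference, a minimal bytefield sketch of the kind of diagram described in the question (the field names and widths are invented for illustration):

```latex
\documentclass{article}
\usepackage{bytefield}
\begin{document}
% A 16-bit instruction word with bit offsets along the top; each
% \bitbox is a labelled range of bits, and each row is one word.
\begin{bytefield}[bitwidth=1.6em]{16}
  \bitheader{0-15} \\
  \bitbox{4}{opcode} & \bitbox{4}{reg} & \bitbox{8}{immediate} \\
  \bitbox{16}{second word} \\
\end{bytefield}
\end{document}
```

The \bitheader row gives the offsets along the top, and additional rows stack naturally for multi-word structures, matching the processor-manual style the question describes.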
You will find several examples, with both the TikZ source code and a visual rendering of it, at http://www.texample.net/tikz/examples/
