Is there a way to get the points of a path created with beginPath-closePath, as XY coordinates?
Something like Context.getPath -> Array of (x, y).
Since the path is implemented in native code, it would be much faster than using a bezier function written in JavaScript.
There is no trivial way to do what you are suggesting; you'll likely need to come up with a solution of your own. If you can describe your problem in more detail, we'll probably be able to offer something more concrete.
In the worst case you'll need to implement the bezier algorithm yourself and accumulate the data you need. Alternatively, you could render using the native method and then redo the math without rendering to get the data you are after (that might even be faster, though I'm not sure).
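For instance, here is a minimal sketch of that accumulate-it-yourself approach in plain JavaScript, sampling the same cubic curve that bezierCurveTo would draw (the function name and the step count are just illustrative):

function cubicBezierPoints(p0, p1, p2, p3, steps) {
  // evaluate the standard cubic Bezier polynomial at evenly spaced t values
  var pts = [];
  for (var i = 0; i <= steps; i++) {
    var t = i / steps, u = 1 - t;
    pts.push({
      x: u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x,
      y: u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y
    });
  }
  return pts;
}

// e.g. the points of ctx.bezierCurveTo(20, 0, 80, 100, 100, 0) starting from (0, 0):
var points = cubicBezierPoints({x: 0, y: 0}, {x: 20, y: 0}, {x: 80, y: 100}, {x: 100, y: 0}, 50);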
Does anybody know how to create/apply grunge or vintage-worn filters? I'm creating an iOS app to apply filters to photos, just for fun and to learn more about CIImage. Right now I'm using Core Image to apply CIGaussianBlur, CIGloom, and the like through calls such as ciFilter.setValue(value, forKey:key) and similar commands.
So far, Core Image filters such as blur, color adjustment, sharpen, and stylize work OK. But I'd like to learn how to apply one of those grunge, vintage-worn effects available in other photo editing apps, something like this:
Does anybody know how to create/apply those kinds of filters?
Thanks!!!
You have two options.
(1) Use "canned" filters in a chain. If the output of one filter is the input of the next, code things that way. It won't waste any resources until you actually call for output.
(2) Write your own kernel code. It can be a color kernel that mutates a single pixel independently, a warp kernel that checks the values of a pixel and its surrounding ones to generate the output pixel, or a general kernel that isn't optimized like the other two. Either way, you can write the code in GLSL (essentially the C language for the GPU).
Okay, there's a third option: a combination of the two options above. Also, in iOS 11 and above, you can write kernels using Metal 2.
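To make the chaining idea in option (1) concrete, here is a toy sketch in JavaScript (Core Image's real API is the Swift/Objective-C CIFilter with setValue(_:forKey:); the "filters" below are made-up stand-ins, only showing how a lazy chain defers all work until output is requested):

// toy stand-ins for real per-pixel work
var desaturate = function (img) { img.saturation = 0; return img; };
var vignette = function (img) { img.vignette = true; return img; };

function chain(filters) {
  // returns a thunk: none of the filters run until you call it
  return function (image) {
    return function () {
      return filters.reduce(function (out, f) { return f(out); }, image);
    };
  };
}

var grunge = chain([desaturate, vignette]);
var lazyResult = grunge({pixels: [], saturation: 1}); // still lazy, no work done
var output = lazyResult(); // the filters actually execute here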
Is there a way to reorder points in GeoJSON so that my line "sticks" to the road? Right now I tried sorting based on longitude, but "S"-shaped curves put some points out of GPS sequence while still in sort order (hence the zig-zag).
How would I go about reordering my points correctly? Currently I'm using turf for other things, but another library would also be fine.
Where did these points come from? If they were ordered either chronologically or reverse-chronologically, then perhaps that order was fine to begin with. Perhaps there is additional metadata that can help order your points with ease.
If not, the only thing I can think of is to employ some sort of nearest neighbor sorting: https://en.wikipedia.org/wiki/Nearest-neighbor_chain_algorithm
This page, https://github.com/pastelsky/nnc, seems to be the source of the animation seen on Wikipedia and relies on JavaScript code, so perhaps you can make use of the underlying library?
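As a rough sketch of the idea (a naive greedy variant, not the full chain algorithm): start at one point and repeatedly hop to the nearest unvisited point. It is O(n^2) and sensitive to which point you start from, so treat it only as a starting place:

// points: array of GeoJSON [lng, lat] pairs
function orderByNearestNeighbor(points) {
  var remaining = points.slice();
  var ordered = [remaining.shift()];
  while (remaining.length > 0) {
    var last = ordered[ordered.length - 1];
    var best = 0, bestDist = Infinity;
    for (var i = 0; i < remaining.length; i++) {
      var dx = remaining[i][0] - last[0];
      var dy = remaining[i][1] - last[1];
      var d = dx * dx + dy * dy; // squared distance is enough for comparison
      if (d < bestDist) { bestDist = d; best = i; }
    }
    ordered.push(remaining.splice(best, 1)[0]);
  }
  return ordered;
}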
I want to get the properly rendered projection result from a Stage3D framework that presents something of a 'gray box' interface via its API. It is gray rather than black because I can see this critical snippet of source code:
matrix3D.copyFrom (renderable.getRenderSceneTransform (camera));
matrix3D.append (viewProjection);
The projection rendering technique that perfectly suits my needs comes from a helpful tutorial that works directly with AGAL rather than any particular framework. Its comparable rendering logic snippet looks like this:
cube.mat.copyToMatrix3D (drawMatrix);
drawMatrix.prepend (worldToClip);
So, I believe the correct, general summary of what is going on here is that both pieces of code are setting up the proper combined matrix to be sent to the Vertex Shader where that matrix will be a parameter to the m44 AGAL operation. The general description is that the combined matrix will take us from Object Local Space through Camera View Space to Screen or Clipping Space.
My problem can be summarized as arising from my ignorance of proper matrix operations. I believe my failed attempt to merge the two environments arises precisely because the semantics of prepending one matrix to another are not, and were never intended to be, equivalent to appending that matrix to the other. My request, then, can be summarized this way: because I have no control over the calling sequence the framework will issue (i.e., I must live with an append operation), I can only try to fix things on the side where I prepare the matrix that is to be appended. That code is not black-boxed, but it is too complex for me to know how to change it so that it would meet the interface requirements posed by the framework.
Is there some sequence of inversions, transformations, or other maneuvers that would let me modify a viewProjection matrix designed to be prepended, so that it turns out right when it is instead appended to the object's world-space coordinates?
I am providing an answer more out of desperation than sure understanding, and I still hope to receive a better answer from those more knowledgeable. From Dunn and Parberry's "3D Math Primer" I learned that "transposing the product of two matrices is the same as taking the product of their transposes in reverse order."
Without a way to enter superscripts here, I am not sure I can give my approach a proper mathematical formulation, so I will invent a syntax using functional notation. The equivalence noted by Dunn and Parberry would be something like:
transpose(AB) = transpose(B) x transpose(A)
That comes close to solving my problem, which, to restate, really just arises from the fact that I cannot control the behavior of the internal matrix operations in the framework package. I can, however, perform appropriate matrix operations on either side of the workflow from local object coordinates to those required by the GPU Vertex Shader.
I have not completed the test of my solution, which requires the final step to be taken in the AGAL shader, but I have been able to confirm in AS3 that the last 'un-transform' does yield exactly the same combined raw data as the example from the author of the camera with the desired lens properties, whose implementation prepends rather than appends.
BA = transpose (transpose (A) x transpose (B))
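In conventional superscript notation, the two lines above read: (AB)^T = B^T x A^T, and therefore BA = (A^T x B^T)^T. In other words, a matrix destined to be appended can be handed over pre-transposed, with the outer transpose undone afterwards; that is why the final 'un-transform' step lands in the AGAL shader.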
I have also not yet tested to see if these extra calculations are so processing intensive as to reduce my application frame rate beyond what is acceptable, but am pleased at least to be able to confirm that the computations yield the same result.
Using Rails and Google Maps V3, I'm looking for some advice (before I head off down the wrong path) on the best approach to building the functionality to:
1) draw a polygon that describes a geographical area
2) capture and save the polygon data to the db (postgres)
3) make a query that will tell me if a point is inside the polygon or not
As far as I can see from the examples out there, the polygon-drawing bit is fairly doable, but I'm not clear on how to capture that data or in which format I should save it (I see Postgres has a polygon data type...). Also, for the query, I'm not sure how to go about making that happen - does Postgres have any magic that can make this work? (We're using Heroku.)
Any advice or pointers would be greatly appreciated!
Thanks!
In general your options depend in part on scale. If this is a flat, planar map, the best approach would be a polygon-and-point approach; this works best for things like city maps.
For a global map you probably want to use PostGIS's geometry and geography types, since they are more flexible.
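For example, a point-in-polygon query might look like the sketch below, shown from a Node script for brevity (the SQL is what matters; from Rails you would run the same SQL). It assumes the PostGIS extension is enabled on your database, and the regions table, its area geometry column, and the SRID 4326 are all hypothetical names/choices for illustration:

var pg = require('pg');
var pool = new pg.Pool(); // connection settings come from the PG* environment variables

// assumes something like:
//   CREATE TABLE regions (id serial PRIMARY KEY, area geometry(Polygon, 4326));
function regionsContainingPoint(lng, lat, callback) {
  pool.query(
    'SELECT id FROM regions ' +
    'WHERE ST_Contains(area, ST_SetSRID(ST_MakePoint($1, $2), 4326))',
    [lng, lat],
    function (err, result) {
      if (err) { return callback(err); }
      callback(null, result.rows); // ids of the polygons containing the point
    }
  );
}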
I am kind of a newbie in MATLAB. I am trying to write code that divides an image into non-overlapping blocks of size 3x3, and I am supposed to perform an operation on each block, like getting the value of the block's center pixel and doing some operations with it. But I don't know where to start. Using commands like blockproc won't help. Can anyone suggest where to start?
You could easily use blockproc for this:
http://www.mathworks.com/help/toolbox/images/ref/blockproc.html
But if that isn't working for you, what errors do you get?
If you want to do it manually (like extracting the value of the center pixel of each block), you could simply use two loops for this... but be aware, this is a rather inelegant and not really fast way to do it...
img = imread('image.png');  % 'img' avoids shadowing MATLAB's built-in image() function
s = size(img);
for i = 2:3:s(1)-1
    for j = 2:3:s(2)-1
        % (i, j) is the midpoint of each 3x3 block...
        centerValue = img(i, j);
        % ...and you can easily crop the image around it if you
        % really need separated blocks:
        block = img(i-1:i+1, j-1:j+1);
    end
end
This isn't a really fast way though... but it works...
Hope that helps...