Draw outlines of a volume in WebGL (XTK)

I want to let the user scroll through the slices of a volume, but to give a bit more orientation I'd like to draw the outline of a cube that represents the dimensions of the volume.
What I think I need to do:
1) Get the dimensions of the volume.
2) Start drawing lines, e.g. from [0,0,0] to [0,1,0], from [0,1,0] to [1,1,0], from [1,1,0] to [1,0,0], and back to [0,0,0], and so on.
Is there an easy way to draw a line in XTK, e.g. something similar to the sphere constructor?
Example (black outlines): [image of a cube]
Thanks in advance.

In X.slice, we create the borders of the current slice like this:
var borders = new X.object();
borders._points.add(point0.x, point0.y, point0.z); // 0
borders._points.add(point1.x, point1.y, point1.z); // 1
borders._points.add(point1.x, point1.y, point1.z); // 1
borders._points.add(point4.x, point4.y, point4.z); // 4
borders._points.add(point4.x, point4.y, point4.z); // 4
borders._points.add(point2.x, point2.y, point2.z); // 2
borders._points.add(point2.x, point2.y, point2.z); // 2
borders._points.add(point0.x, point0.y, point0.z); // 0
// one (zero) normal per point
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._normals.add(0, 0, 0);
borders._color = [1, 0, 0]; // red
// set the drawing type to lines
borders._type = X.displayable.types.LINES;
borders._linewidth = 2;
This is how it is used internally right now, but it should be possible to do the same with the public API.
Ah, I just see that the type getter/setter does not exist yet. We need to create it to enable setting the type externally, so I just created an issue for that: https://github.com/xtk/X/issues/62
Feel free to contribute it :) Should be easy :)

Related

Why do sprites render over objects?

I want to render a cube on top of a picture, like in this tutorial. The problem is that only the picture renders and the cube doesn't. Can you help me? Thank you.
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
//
// Clear the back buffer
//
g_pImmediateContext->ClearRenderTargetView( g_pRenderTargetView, Colors::MidnightBlue );
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
g_pImmediateContext->OMSetRenderTargets(1, &g_pRenderTargetView, g_pDepthStencilView);
//
// Update variables
//
ConstantBuffer cb;
cb.mWorld = XMMatrixTranspose( g_World );
cb.mView = XMMatrixTranspose( g_View );
cb.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, nullptr, &cb, 0, 0 );
//
// Renders a triangle
//
g_pImmediateContext->VSSetShader( g_pVertexShader, nullptr, 0 );
g_pImmediateContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );
g_pImmediateContext->PSSetShader( g_pPixelShader, nullptr, 0 );
g_pImmediateContext->DrawIndexed( 36, 0, 0 ); // 36 indices needed for 12 triangles in a triangle list
//
// Present our back buffer to our front buffer
//
m_spriteBatch->End();
g_pSwapChain->Present( 0, 0 );
SpriteBatch batches up draws for performance, so the background sprite is likely being drawn after the cube. If you want to make sure the sprite background draws first, you need to call End before you submit your cube. You also need to call Begin after you set up the render target:
// Clear
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH, 1.0f, 0);
g_pImmediateContext->OMSetRenderTargets(1, &g_pRenderTargetView, g_pDepthStencilView);
// Draw background image
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
m_spriteBatch->End();
// Draw objects
context->OMSetBlendState(…);
context->OMSetDepthStencilState(…);
context->IASetInputLayout(…);
context->IASetVertexBuffers(…);
context->IASetIndexBuffer(…);
context->IASetPrimitiveTopology(…);
You can omit the ClearRenderTargetView if the m_background texture covers the whole screen.
For more on how SpriteBatch draw order and batching works, see the wiki.
Based on this answer by @ChuckWalbourn I fixed the problem.
g_pImmediateContext->ClearRenderTargetView( g_pRenderTargetView, Colors::MidnightBlue );
g_pImmediateContext->ClearDepthStencilView(g_pDepthStencilView, D3D11_CLEAR_DEPTH |
D3D11_CLEAR_STENCIL, 1.0f, 0);
m_spriteBatch->Begin();
m_spriteBatch->Draw(m_background.Get(), m_fullscreenRect);
m_spriteBatch->End();
states = std::make_unique<CommonStates>(g_pd3dDevice);
g_pImmediateContext->OMSetBlendState(states->Opaque(), Colors::Black, 0xFFFFFFFF);
g_pImmediateContext->OMSetDepthStencilState(states->DepthDefault(), 0);
// Set the input layout
g_pImmediateContext->IASetInputLayout(g_pVertexLayout);
UINT stride = sizeof(SimpleVertex);
UINT offset = 0;
g_pImmediateContext->IASetVertexBuffers(0, 1, &g_pVertexBuffer, &stride, &offset);
// Set index buffer
g_pImmediateContext->IASetIndexBuffer(g_pIndexBuffer, DXGI_FORMAT_R16_UINT, 0);
// Set primitive topology
g_pImmediateContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
// Draw objects
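The cube draw and the Present call themselves stay the same as in the original code above; as a sketch, reusing the question's existing variables:
ConstantBuffer cb;
cb.mWorld = XMMatrixTranspose( g_World );
cb.mView = XMMatrixTranspose( g_View );
cb.mProjection = XMMatrixTranspose( g_Projection );
g_pImmediateContext->UpdateSubresource( g_pConstantBuffer, 0, nullptr, &cb, 0, 0 );
g_pImmediateContext->VSSetShader( g_pVertexShader, nullptr, 0 );
g_pImmediateContext->VSSetConstantBuffers( 0, 1, &g_pConstantBuffer );
g_pImmediateContext->PSSetShader( g_pPixelShader, nullptr, 0 );
g_pImmediateContext->DrawIndexed( 36, 0, 0 ); // 36 indices for 12 triangles
// Present our back buffer to our front buffer
g_pSwapChain->Present( 0, 0 );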

Instagram Filters RGBA Values

I need to add image filters to an iOS app that is being developed with Adobe AIR.
To be more specific, I need to apply the following filters (or similar):
Nashville
Hemingway
Jarques
Cross Process
Hazy Days
You can use this page as a reference http://techslides.com/demos/canvas/instagram.html
I know how to apply filters to a BitmapData in AS3, but I need the RGBA matrix values of the filters above. Does anyone know them, or know how to obtain them?
I need something like the following values:
private static const NASHVILLE_FILTER_MATRIX: Array = [
1, 0, 0, 0, 0, //R
0, 0, 0, 0, 0, //G
0, 0, 0, 0, 0, //B
0, 0, 0, 1, 0 //A
];
Thank you for any help you can give me.
I recommend you check out the GPUImage framework.
There is an example here
You can create filters like:
// creating GPU Image
_gpuImg = new GPUImage();
_gpuImg.init(context3D, antiAlias, false, stageW, stageH, streamW, streamH);
// saving all filters
_imageProcessors = new Vector.<IGPUImageProcessor>();
// setup filters
var _gpuSepia:GPUImageSepia = new GPUImageSepia();
var _gpuGauss:GPUImageGaussianBlur = new GPUImageGaussianBlur(1.0, 4);
var _gpuGray:GPUImageGrayscale = new GPUImageGrayscale();
// Bloom Filter
var bloomEffect:GPUImageBloomEffect = new GPUImageBloomEffect(GPUImageBloomEffect.PRESET_DESATURATED, 4);
bloomEffect.initPreset(GPUImageBloomEffect.PRESET_SATURATED);
// TiltShift
var tiltShift:GPUImageTiltShiftEffect = new GPUImageTiltShiftEffect(2, 0.4, 0.6, 0.2);
// adding filter to image processor
_imageProcessors.push(bloomEffect);
// adding processor to the respective GPU Image
_gpuImg.addProcessor(_imageProcessors[0]);

Get the original image from an ROI image in OpenCV

Is it possible to get back the original image from an image ROI? For example, say we have:
cv::Mat image = cv::imread("image.jpg", 0);
cv::Mat imageROI = image(cv::Rect(0, 0, 100, 100));
myFunction(imageROI);
and in myFunction I want to work with the original image. Is there any way to recover the original image from imageROI when we don't have access to the original?
I'm not sure I understood the question exactly as you meant it, but suppose we have the header
void myFunc(cv::Mat &m);
// .... later on
cv::Mat image = cv::imread("image.jpg", 0);
cv::Mat imageROI = image(cv::Rect(0, 0, 100, 100));
myFunc(imageROI);
// .... later on, the definition of myFunc
void myFunc(cv::Mat &m) {
// some code
// here you would like to have an original image, right?
}
So the answer to that is no, and the reasoning is simple: why would you design the OpenCV API in a way that stores unnecessary data? If you do
cv::Mat imageROI = image(cv::Rect(0, 0, 100, 100));
you are deliberately saying that you want to forget about the whole image and are only interested in a particular ROI. The Mat container is designed to copy only the matrix 'header', not the matrix content. So if you do cv::Mat imageROI = image(cv::Rect(0, 0, 100, 100)), the matrix content (i.e. the image data) may still be somewhere in memory (the ROI is part of it, so for optimization purposes it may not be freed even if the original image variable goes out of scope), but the matrix header has changed: it now points to (0, 0, 100, 100) instead of (0, 0, imageWidth, imageHeight), and there is no way to bring the full view back just from the variable m.
Why not pass an additional parameter as a reference?
Just in case anybody looks at this question: you can actually do this.
cv::Mat mat = ...
cv::Size size;
cv::Point offset;
// find original image size, and get offset of roi
mat.locateROI(size, offset);
// put the Mat back to the original image size
mat.adjustROI(offset.y, size.height - mat.rows, offset.x, size.width - mat.cols);
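Putting that together with the question's myFunction, a minimal sketch could look like this (the ROI is only a header pointing into the same pixel data, so the parent image can be recovered inside the function; variable names are just illustrative):
#include <opencv2/opencv.hpp>

void myFunction(cv::Mat &m)
{
    cv::Size wholeSize;
    cv::Point offset;
    // size of the parent image and the ROI's offset inside it
    m.locateROI(wholeSize, offset);
    // make a second header on the same data so the caller's ROI stays untouched
    cv::Mat whole = m;
    whole.adjustROI(offset.y, wholeSize.height - whole.rows,
                    offset.x, wholeSize.width - whole.cols);
    // 'whole' now spans the full original image, while 'm' is still the 100x100 ROI
}

int main()
{
    cv::Mat image = cv::imread("image.jpg", 0);
    cv::Mat imageROI = image(cv::Rect(0, 0, 100, 100));
    myFunction(imageROI);
    return 0;
}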

How to display dashed lines in Objective-C?

I have this code to display a grid with dashed lines. At runtime on an iPhone 5 and below it shows fine, but if I run the app on an iPhone 5s there's no grid. I tested in the iPhone Simulator and on real devices, and the same thing happens.
Here's the code:
if (self.dashLongitude) {
CGFloat lengths[] = {3.0, 3.0};
CGContextSetLineDash(context, 0.0, lengths, 2);
}
//other stuff here
CGContextSetLineDash(context, 0, nil, 0);
Can anyone help?
EDIT: I solved the issue using the same code I posted here, but in a different method. I now have two methods: one just for drawing the grid and another to draw the line with the data, and finally got everything working.
The code looks fine.
Looking at the code, I see that you are passing nil to the context. nil is used to remove the dash pattern.
Try using
CGFloat dash[] = {2.0, 2.0};
CGContextSetLineDash(context, 0.0, dash, 2);
at the point where you create or stroke your line.
CGContextSetLineDash(context, 0, NULL, 0);
is used to remove that dash pattern.
Instead of resetting the dash with CGContextSetLineDash, just wrap your code in CGContextSaveGState/CGContextRestoreGState to preserve the context state before applying the line dash:
CGContextSaveGState(context);
CGFloat dash[] = {2.0, 2.0};
CGContextSetLineDash(context, 0.0, dash, 2);
// Draw some lines here
CGContextRestoreGState(context);

How to straighten line endings in Cocos2d?

I'm new to Cocos2d and am trying out some of the basic drawing functions. When I draw a straight line with a large width (50 in this case), the ends of the line are not what I'd expect. What I'd like is for the line to look the same as it would with Core Graphics ([green line image]);
however, what I see in Cocos2d is this ([red line image]):
The code I'm using to draw the line is in the layer's draw method:
-(void)draw
{
glColor4f(1, 0, 0, 1);
glLineWidth(50);
ccDrawLine(ccp(50, 50), ccp(250, 250));
}
Can anyone tell me how I can get cocos2d to draw a line with the same shape as the green image, rather than the red image?
Try drawing it antialiased.
glColor4f(1, 0, 0, 1);
glLineWidth(50);
glEnable(GL_LINE_SMOOTH);
ccDrawLine(ccp(50, 50), ccp(250, 250));
