I'm having an issue using Pixi.js, where the lineTo method seems to draw lines that aren't specified. The bad lines aren't uniform width (seem to taper off towards the ends) and are much longer than they should be. Example jsfiddle showing the problem here:
http://jsfiddle.net/b1e48upd/1/
var stage, renderer, graphics;
function init() {
stage = new PIXI.Stage(0x001531, true);
renderer = new PIXI.WebGLRenderer(800, 600);
// renderer = new PIXI.CanvasRenderer(400, 300);
document.body.appendChild(renderer.view);
requestAnimFrame( animate );
graphics = new PIXI.Graphics();
stage.addChild(graphics);
graphics.beginFill(0xFF0000);
graphics.lineStyle(3, 0xFF0000);
graphics.moveTo(200, 200);
graphics.lineTo(192, 192);
graphics.lineTo(198, 183);
graphics.lineTo(189, 197);
}
function animate() {
requestAnimFrame( animate );
renderer.render(stage);
}
init();
Using the canvas renderer gives correct results.
While searching for this problem, I've gathered that the WebGL renderer may have an issue with non-integer values (shown in this question: Pixi.js lines are rendered outside dedicated area), and I've also seen that sending consecutive lineTo commands to the same coordinates causes issues, but my example has neither of those.
I'm writing my first Qt 5 application... This uses a third-party map library (QGeoView).
I need to draw an object (something like a stylized airplane) over this map. Following the library coding guidelines, I derived my QGVAirplane from the base class QGVDrawItem.
The airplane class contains heading and position values: these values must be used to draw the airplane on the map (in the correct position and with the correct heading, of course). The library requires QGVDrawItem derivatives to override three base class methods:
QPainterPath projShape() const;
void projPaint(QPainter* painter);
void onProjection(QGVMap* geoMap)
The first method is used to obtain the area of the map that needs to be updated. The second is the method responsible for drawing the object on the map. The third method reprojects the point from geographic coordinates into the map's coordinate space (it's not relevant to the solution of my problem).
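For context, the derived class declaration might look roughly like the sketch below. This is not from the original post: the member names mirror those used later in the question, and the geographic position type (QGV::GeoPos) is assumed from the library.
class QGVAirplane : public QGVDrawItem
{
public:
    QPainterPath projShape() const override;
    void projPaint(QPainter* painter) override;
    void onProjection(QGVMap* geoMap) override;
private:
    QGV::GeoPos mPoint;   // geographic position
    QPointF mProjPoint;   // projected (map) position
    double mHeading = 0;  // heading in degrees
    int mId = 0;          // identifier shown as label
    QColor mColor;        // per-airplane color
    QFont mFont;          // label font
};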
My code looks like this:
void onProjection(QGVMap* geoMap)
{
QGVDrawItem::onProjection(geoMap);
mProjPoint = geoMap->getProjection()->geoToProj(mPoint);
}
QPainterPath projShape() const
{
QRectF _bounding = createGlyph().boundingRect();
double _size = fmax(_bounding.height(), _bounding.width());
QPainterPath _bounding_path;
_bounding_path.addRect(0,0,_size,_size);
_bounding_path.translate(mProjPoint.x(), mProjPoint.y());
return _bounding_path;
}
// This function creates the path containing the airplane glyph
// along with its label
QPainterPath createGlyph() const
{
QPainterPath _path;
QPolygon _glyph = QPolygon();
_glyph << QPoint(0,6) << QPoint(0,8) << QPoint(14,6) << QPoint(28,8) << QPoint(28,6) << QPoint(14,0);
_path.addPolygon(_glyph);
_path.setFillRule(Qt::FillRule::OddEvenFill);
_path.addText(OFF_X_TEXT, OFF_Y_TEXT, mFont , QString::number(mId));
QTransform _transform;
_transform.rotate(mHeading);
return _transform.map(_path);
}
// This function is the actual painting method
void drawGlyph(QPainter* painter)
{
painter->setRenderHints(QPainter::Antialiasing, true);
painter->setBrush(QBrush(mColor));
painter->setPen(QPen(QBrush(Qt::black), 1));
QPainterPath _path = createGlyph();
painter->translate(mProjPoint.x(), mProjPoint.y());
painter->drawPath(_path);
}
Of course:
mProjPoint is the position of the airplane,
mHeading is the heading (the direction where the airplane is pointing),
mId is a number identifying the airplane (will be displayed as a label under airplane glyph),
mColor is the color assigned to the airplane.
The problem here is the mix of rotation and translation: since the object is rotated, the projShape() method returns a bounding rectangle that doesn't fully overlap the object drawn on the map.
I also suspect that the center of the object is not correctly placed on mProjPoint. I tried many times to translate the bounding rectangle to center the object, without luck.
Another minor issue is the fill of the text: the label under the airplane glyph is not solid, but is filled with the same color as the airplane.
How can I fix this?
Generally speaking, the pattern for rotation is to rotate and scale about the origin first and then finish with your final translation.
The following is pseudocode, but it illustrates the need to shift your object's origin to (0, 0) before doing any rotation or scaling. Once the rotate and scale are done, the object can be moved from (0, 0) back to where it came from. From there, any post-translation step may be applied.
translate( -origin.x, -origin.y );
rotate( angle );
scale( scale.x, scale.y );
translate( origin.x, origin.y );
translate( translation.x, translation.y )
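In Qt terms this ordering could look roughly like the following sketch. It is only an illustration using names from the question (createGlyph(), mProjPoint, mHeading); it assumes createGlyph() is changed to return the unrotated glyph, and it takes the pivot to be the glyph's center. Note that with QTransform the call written last is applied to the mapped points first, which is why the final translation appears first in code:
QPainterPath shape = createGlyph();             // unrotated glyph + label
QPointF pivot = shape.boundingRect().center();  // pivot = glyph center
QTransform t;
t.translate(mProjPoint.x(), mProjPoint.y());    // applied last: final translation to the map position
t.rotate(mHeading);                             // applied second: rotation about the origin (adjust sign/offset as needed)
t.translate(-pivot.x(), -pivot.y());            // applied first: move the pivot to (0, 0)
QPainterPath placed = t.map(shape);             // path ready to be drawn by projPaint()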
I finally managed to achieve the result I wanted:
QPainterPath projShape() const
{
QPainterPath _path;
// Build the airplane glyph first (same polygon as in createGlyph()),
// otherwise _glyph_bounds below would be computed from an empty path
QPolygon _glyph = QPolygon();
_glyph << QPoint(0,6) << QPoint(0,8) << QPoint(14,6) << QPoint(28,8) << QPoint(28,6) << QPoint(14,0);
_path.addPolygon(_glyph);
_path.setFillRule(Qt::FillRule::OddEvenFill);
QRectF _glyph_bounds = _path.boundingRect();
QPainterPath _textpath;
_textpath.addText(0, 0, mFont, QString::number(mId));
QRectF _text_bounds = _textpath.boundingRect();
_textpath.translate(_glyph_bounds.width()/2-_text_bounds.width()/2, _glyph_bounds.height()+_text_bounds.height());
_path.addPath(_textpath);
QTransform _transform;
_transform.translate(mProjPoint.x(),mProjPoint.y());
_transform.rotate(360-mHeading);
_transform.translate(-_path.boundingRect().width()/2, -_path.boundingRect().height()/2);
return _transform.map(_path);
}
void projPaint(QPainter* painter)
{
painter->setRenderHint(QPainter::Antialiasing, true);
painter->setRenderHint(QPainter::TextAntialiasing, true);
painter->setRenderHint(QPainter::SmoothPixmapTransform, true);
painter->setRenderHint(QPainter::HighQualityAntialiasing, true);
painter->setBrush(QBrush(mColor));
painter->setPen(QPen(QBrush(Qt::black), 1));
painter->setFont(mFont);
QPainterPath _path = projShape();
painter->drawPath(_path);
}
Unfortunately, I still suffer from the minor issue with the text fill mode:
I would like to have a solid black fill for the text instead of the mColor fill I use for the glyph/polygon.
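One possible way to get a solid black label is to keep the glyph and the text in two separate paths and fill them with different brushes. This is only a sketch, not part of the code above: mGlyphPath and mTextPath are hypothetical members holding the two already-transformed paths.
void projPaint(QPainter* painter)
{
    painter->setRenderHint(QPainter::Antialiasing, true);
    painter->setRenderHint(QPainter::TextAntialiasing, true);
    painter->setPen(QPen(QBrush(Qt::black), 1));
    // Airplane glyph: outlined in black, filled with the per-airplane color
    painter->setBrush(QBrush(mColor));
    painter->drawPath(mGlyphPath);
    // Label: always filled with solid black, independent of mColor
    painter->fillPath(mTextPath, QBrush(Qt::black));
}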
I'm using Direct2D with SharpDX. I'm filling a rectangle with a LinearGradientBrush:
var brush = new LinearGradientBrush(
renderTarget,
new LinearGradientBrushProperties
{
StartPoint = bounds.ToRawVector2TopLeft(),
EndPoint = bounds.ToRawVector2BottomLeft()
},
new GradientStopCollection(
renderTarget,
new[]
{
new GradientStop
{
Color = new RawColor4(0.2f, 0.2f, 0.2f, 1f),
Position = 0
},
new GradientStop
{
Color = new RawColor4(0.1f, 0.1f, 0.1f, 1f),
Position = 1
}
}));
...
renderTarget.FillRectangle(bounds.ToRawRectangleF(), brush);
Unfortunately, the gradient quality is terrible (500x250 so StackOverflow doesn't shrink it):
The problem is much more visible at larger render target sizes.
Same machine, here's Photoshop CC 2019 with the same stop colors (500x250 so StackOverflow doesn't shrink it):
I zoomed way in on each screenshot and Direct2D's dithering algorithm seems to be terrible compared to Photoshop's. I realize that there simply aren't many colors available in this fine of a gradient, so I'm not asking for more colors but rather better dithering. Are there any settings I can use to improve the rendering quality of fine gradients like this in Direct2D?
I also read about GradientStopCollection1, which has a wrapper in SharpDX but doesn't appear to be usable anywhere. Would this solve my issue?
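For reference, GradientStopCollection1 maps to ID2D1GradientStopCollection1 in native Direct2D; it is created from an ID2D1DeviceContext (Direct2D 1.1) rather than from the plain render target and lets you request higher interpolation precision. Below is a rough native C++ sketch of that creation, using the same stop colors as above; whether the extra precision actually improves the dithering here is exactly the open question.
#include <d2d1_1.h>
#include <wrl/client.h>

// 'dc' is assumed to be an ID2D1DeviceContext* queried from the render target;
// error handling is omitted.
D2D1_GRADIENT_STOP stops[2] = {
    { 0.0f, D2D1::ColorF(0.2f, 0.2f, 0.2f, 1.0f) },
    { 1.0f, D2D1::ColorF(0.1f, 0.1f, 0.1f, 1.0f) },
};
Microsoft::WRL::ComPtr<ID2D1GradientStopCollection1> stopCollection;
dc->CreateGradientStopCollection(
    stops, 2,
    D2D1_COLOR_SPACE_SRGB,              // pre-interpolation color space
    D2D1_COLOR_SPACE_SRGB,              // post-interpolation color space
    D2D1_BUFFER_PRECISION_16BPC_FLOAT,  // interpolate at higher precision
    D2D1_EXTEND_MODE_CLAMP,
    D2D1_COLOR_INTERPOLATION_MODE_STRAIGHT,
    &stopCollection);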
When mapping a texture to a sphere in ThreeJS, I am losing the sphere. Instead I am getting console errors that read:
Uncaught TypeError: Cannot call method 'add' of undefined index.html:28
and
Cross-origin image load denied by Cross-Origin Resource Sharing policy.
The image is the correct size and resolution, since it works in another instance where I was attempting texture mapping; however, it is not working here. It must be a problem with how I am applying the map. I am new to both JavaScript and ThreeJS, so bear with me. Thank you.
<body>
<div id="container"></div>
<script src="javascript/mrdoob-three.js-ad419d4/build/three.js"></script>
<script defer="defer">
// renderer
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// camera
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.z = 500;
// material
var material = new THREE.MeshLambertMaterial({
map: THREE.ImageUtils.loadTexture('images/physicalworldmapcolor.jpg')
});
// add subtle ambient lighting
var ambientLight = new THREE.AmbientLight(0x000044);
scene.add(ambientLight);
// directional lighting
var directionalLight = new THREE.DirectionalLight(0xffffff);
directionalLight.position.set(1, 1, 1).normalize();
scene.add(directionalLight);
// scene
var scene = new THREE.Scene();
// sphere
// the first argument of THREE.SphereGeometry is the radius,
// the second argument is the segmentsWidth
// the third argument is the segmentsHeight.
var sphere = new THREE.Mesh(new THREE.SphereGeometry(150, 70, 50),
new THREE.MeshNormalMaterial(material));
sphere.overdraw = true;
scene.add(sphere);
renderer.render(scene, camera);
</script>
</body>
There are MANY errors with the code you provided.
Just check a basic example at:
https://github.com/mrdoob/three.js/
Your script is missing a render loop, your camera is not added to the scene, and the THREE.Scene() constructor is called after objects have already been added to "scene". You also create a MeshNormalMaterial() with another material passed into it; this won't work, just do new THREE.Mesh(new THREE.SphereGeometry(...), material). "overdraw" is a material property, so you would have to set sphere.material.overdraw, but I think overdraw only affects the CanvasRenderer and I am not sure it has any meaning if you use WebGLRenderer.
Concerning the error with cross-origin, read up here:
https://github.com/mrdoob/three.js/wiki/How-to-run-things-locally
I created a skin called CustomSliderTrack in the graphical editor of Adobe Flash CS5.5. This skin is now in the "library" of the FLA file.
I can apply this skin with the following code:
var cls:Class = getDefinitionByName("CustomSliderTrack") as Class;
var tmpTrack:Sprite = new cls();
slider.setStyle("sliderTrackSkin",tmpTrack);
However, due to the binary nature of the FLA file and the lack of compatibility between different versions of Adobe Flash, I need to implement it all in ActionScript.
I understand that cls is a MovieClip object, but I can't get the same results with new MovieClip(). I think this might be related to the dashed lines in the graphical editor (I modified the default SliderTrack_skin). I haven't found out yet what they mean or how to replace them with ActionScript code.
setStyle automatically sets track.height and track.width. In the case of track.height, the slider.height attribute does not seem to have any effect. To work around this problem, simply set track.height to the value you need.
To access the track, extend the Slider class and override the configUI function:
public class CustomSlider extends Slider
{
override protected function configUI():void
{
// Call configUI of Slider
super.configUI();
// The sprite that will contain the track
var t:Sprite = new Sprite();
// Draw the content into the sprite
t.graphics.beginFill(0x000000, 0.1);
t.graphics.drawRect(0, -15, width, 30);
t.graphics.endFill();
// Set the Sprite to be the one used by the Slider
this.setStyle("sliderTrackSkin",t);
// Reset the height to the value that it should be
track.height = 30;
}
}
Depending on the complexity of your track asset, you could accomplish this with the drawing API: http://help.adobe.com/en_US/FlashPlatform/reference/actionscript/3/flash/display/Graphics.html
A very simple example would be:
var track:Sprite = new Sprite();
track.graphics.lineStyle(2, 0xcccccc);
track.graphics.beginFill(0x000000, 1);
track.graphics.drawRect(0, 0, 400, 20);
track.graphics.endFill();
track.scale9Grid = new Rectangle(2, 2, 396, 16);
slider.setStyle("sliderTrackSkin",track);
This creates a track that is just a black rectangle, 400x20 pixels in size. You can set the scale9Grid in code to control how the skin scales: in the example above the rectangle's border won't scale, but the black rectangle inside will. Experimenting with the methods in the drawing API may be all you need.
If you need a more complex asset, I'd recommend loading an image and then passing that in to slider.setStyle.
I want to be able to draw images to the viewport in my 3ds Max plugin.
The GraphicsWindow class has functions for drawing 3D objects in the viewport, but these drawing calls are limited by the current viewport and graphics render limits.
This is undesirable, as the image I want to draw should always be drawn no matter what graphics mode 3ds Max is in and what hardware is used. Further, I am only drawing 2D images, so there is no need to draw them in a 3D context.
I have managed to get the HWND of the viewport, and the Max SDK has the function
DrawIconButton();
I have tried using this function, but it does not work properly: the image flickers randomly with user interaction and disappears when there is no interactivity.
I have implemented this call in the RedrawViewsCallback function; however, the DrawIconButton() function is not documented and I am not sure whether this is the correct way to use it.
Here is the code I am using to draw the image:
void Sketch_RedrawViewsCallback::proc (Interface * ip)
{
Interface10* ip10 = GetCOREInterface10();
ViewExp* viewExp = ip10->GetActiveViewport();
ViewExp10* currentViewport;
if (viewExp != NULL)
{
currentViewport = reinterpret_cast<ViewExp10*>(viewExp->Execute(ViewExp::kEXECUTE_GET_VIEWEXP_10));
} else {
return;
}
GraphicsWindow* gw = currentViewport->getGW();
HWND ViewportWindow = gw->getHWnd();
HDC hdc = GetDC(ViewportWindow);
HBITMAP bitmapImage = LoadBitmap(hInstance, MAKEINTRESOURCE(IDB_BITMAP1));
Rect rbox(IPoint2(0,0), IPoint2(48,48));
DrawIconButton(hdc, bitmapImage, rbox, rbox, true);
DeleteObject(bitmapImage); // free the GDI bitmap, otherwise it leaks on every redraw
ReleaseDC(ViewportWindow, hdc);
ip->ReleaseViewport(currentViewport);
};
I could not find a way to draw directly to the viewport window; however, I have solved the problem by using a transparent modeless dialog box.
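A rough Win32 sketch of that workaround is below. IDD_VIEWPORT_OVERLAY and OverlayDlgProc are hypothetical names for a dialog resource and its dialog procedure that are assumed to exist; the magenta color key leaves everything except what the dialog paints fully transparent.
// Create a modeless dialog over the viewport and make it transparent via a
// color key, so only the image painted in its WM_PAINT handler is visible.
HWND overlay = CreateDialogParam(hInstance,
                                 MAKEINTRESOURCE(IDD_VIEWPORT_OVERLAY),
                                 ViewportWindow,       // viewport HWND as owner
                                 OverlayDlgProc,
                                 0);
SetWindowLongPtr(overlay, GWL_EXSTYLE,
                 GetWindowLongPtr(overlay, GWL_EXSTYLE) | WS_EX_LAYERED);
SetLayeredWindowAttributes(overlay, RGB(255, 0, 255), 0, LWA_COLORKEY);
ShowWindow(overlay, SW_SHOWNOACTIVATE);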
Maybe a complete redraw will solve the issue: ForceCompleteRedraw.
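For example, a one-line sketch, assuming the core Interface pointer is available as in the callback above:
GetCOREInterface()->ForceCompleteRedraw(); // ask 3ds Max to redraw all viewports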