I'm using AS3 in Flex 3 to create a new image, but I can't seem to get the exact size of the original image. Setting percentHeight and percentWidth to 100 does the job, but a limitation in ObjectHandles requires me to set the image size in pixels.
Any solution?
Note: this also applies if you just want to display an image at its original dimensions without an ObjectHandles control; simply remove the lines that don't apply.
After hours of struggling, I found the answer in an ActionScript forum. In fact, it appears to be the only solution; I was surprised there was no such topic elsewhere.
private function init():void {
    var image:Image = new Image();
    image.source = "http://www.colorjack.com/software/media/circle.png";
    image.addEventListener(Event.COMPLETE, imageLoaded);
    /* Wait for completion: the Image control loads asynchronously,
     * which means ObjectHandles would otherwise be applied immediately,
     * before the correct dimensions for scaling are available.
     * The event listener fixes that.
     */
    this.addChild(image);
    // Whenever you scale the ObjectHandles control, the image always fills it at 100%.
    image.percentHeight = 100;
    image.percentWidth = 100;
}
private function imageLoaded(e:Event):void {
    var img:Image = e.target as Image;
    trace("Height ", img.contentHeight);
    trace("Width ", img.contentWidth);
    var oh:ObjectHandles = new ObjectHandles();
    oh.x = 200;
    oh.y = 200;
    oh.height = img.contentHeight;
    oh.width = img.contentWidth;
    oh.allowRotate = true;
    oh.autoBringForward = true;
    oh.addChild(img);
    genericExamples.addChild(oh);
}
I have points generated one by one, and when a new point is generated, I want to draw a line segment connecting it with the previous point, like this:
var x by remember { mutableStateOf(0.0f) }
var y by remember { mutableStateOf(0.5f) }
var pStart by remember { mutableStateOf(Offset(0f, 0.5f)) }

Canvas(modifier = Modifier.fillMaxSize()) {
    val canvasWidth = size.width
    val canvasHeight = size.height
    val pEnd = Offset(x * canvasWidth, (1 - y) * canvasHeight)
    val col = if (pEnd.y < pStart.y) Color.Green else Color.Red
    drawLine(
        start = pStart,
        end = pEnd,
        strokeWidth = 4f,
        color = col
    )
    pStart = pEnd
}
But this only draws the segment in a flash and no segments stay on the screen.
I know I could save the points to a list and redraw all the segments whenever a new point is added, but I was hoping to economize. Is that possible?
There's really no other practical way. You could, in fact, keep track of just two points and add a whole new Canvas (transparent, filling the maximum size, stacked on top of the previous ones) for each extra point that is added. That does seem a bit impractical, but maybe try it out and do some benchmarking to see which approach checks out. It is the only other way I can think of in which you do not have to store all the points and recompose every time a point is added, since all the other lines would effectively be frozen in place.
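For comparison, here's a minimal sketch of the straightforward list-based approach the question hopes to avoid (the points list, its initial value, and the coordinate scaling are my assumptions, not from the question):

val points = remember { mutableStateListOf(Offset(0f, 0.5f)) }

Canvas(modifier = Modifier.fillMaxSize()) {
    // Redraw every stored segment; the whole Canvas is redrawn
    // each time a point is appended to the list.
    points.zipWithNext { a, b ->
        drawLine(
            start = Offset(a.x * size.width, (1 - a.y) * size.height),
            end = Offset(b.x * size.width, (1 - b.y) * size.height),
            strokeWidth = 4f,
            color = if (b.y > a.y) Color.Green else Color.Red // rising segments green
        )
    }
}

Appending with points.add(newPoint) then triggers exactly one redraw of this Canvas.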
In response to the somewhat (unreasonably) aggressive comment below, here's some sample code for the stacked-Canvas idea. I assume you have a stream of new points coming in, so a LiveData object is taken as the source, which I convert to Compose State for this use case.
val latestPoint by liveData.observeAsState(initialPoint) // observeAsState bridges LiveData into State; initialPoint is your starting Offset
var recordedPoint by remember { mutableStateOf(latestPoint) }
var triggerDraw by remember { mutableStateOf(false) }
val canvasList = remember { mutableStateListOf<@Composable () -> Unit>() } // each entry wraps one Canvas

if (triggerDraw) {
    val start = recordedPoint
    val end = latestPoint
    canvasList.add {
        Canvas(Modifier.fillMaxSize()) {
            /* you have start and end here; simply draw a line between them */
        }
    }
    recordedPoint = end
    triggerDraw = false
}

LaunchedEffect(latestPoint) {
    triggerDraw = true
}

canvasList.forEach {
    it() // invoke each stacked Canvas composable
}
I have searched and found code that first inserts an image into a table in a Google Doc. The image, however, is larger than the desired space, so the picture has to be resized: first get the dimensions of the inserted picture, then reinsert it into the table. The code seems to work well, except that I am left with two pictures: the original-sized picture and the scaled picture in the specified table location.
My challenge is that when I attempt to delete the first picture after getting the sizing information needed to scale it, nothing ends up being saved. Here is the code:
if (celltext === "%PIC%") {
    table.replaceText("%PIC%", "");
    table.getCell(row, cell).insertImage(0, image);
    sizePic(table, image, row, cell);
}

function sizePic(table, image, row, cell) {
    var cellImage = table.getCell(row, cell).insertImage(0, image);
    // Get the dimensions of the image AFTER inserting it, so that
    // its dimensions can be fixed afterwards.
    var originW = cellImage.getWidth();
    var originH = cellImage.getHeight();
    var newW = originW;
    var newH = originH;
    var ratio = originW / originH;
    var styleImage = {};
    var maxWidth = 200;
    if (originW > maxWidth) {
        newW = maxWidth;
        newH = parseInt(newW / ratio);
        table.getCell(row, cell).clear();
    }
    cellImage.setWidth(newW).setHeight(newH).setAttributes(styleImage);
}
The problematic line is table.getCell(row, cell).clear(); even though it runs after the first image is inserted and before the resized image is set, it doesn't appear to work that way. Please note that my code is the result of looking at an existing post, How to resize image on Google app Script.
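For what it's worth, one way to sidestep the duplicate entirely is to insert the image only once and resize that same InlineImage in place, so there is never a first copy to delete. A minimal sketch, assuming the same table, image, row and cell as above (insertScaledPic is a hypothetical name):

function insertScaledPic(table, image, row, cell) {
    var maxWidth = 200; // same cap as in the question
    var cellImage = table.getCell(row, cell).insertImage(0, image);
    var originW = cellImage.getWidth();
    var originH = cellImage.getHeight();
    if (originW > maxWidth) {
        // Cap the width while preserving the aspect ratio.
        cellImage.setWidth(maxWidth).setHeight(Math.round(maxWidth * originH / originW));
    }
}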
I have a strange issue when using two RenderTargets in SharpDX, using DX11.
I am rendering a set of objects that can be layered, and am using blend modes to achieve partial transparency. Rendering is done to two render targets in a single pass, with the second render target being used as a colour picker - I simply render the object ID (integer) to this second target and retrieve the object ID from the texture under the mouse after rendering.
The issue I am getting is frustrating, as it does not happen on all computers. In fact, it doesn't happen on any of our development machines but has been reported in the wild - typically on machines with integrated Intel (HD) graphics. On these computers, no output is generated in the second render target. We have been able to reproduce the problem on a laptop here, and if we don't set the blend state, then the issue is resolved. Obviously this isn't a fix, since we need the blending.
The texture descriptions for the main render target (0) and the colour picking target look like this:
var desc = new Texture2DDescription
{
    BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource,
    Format = Format.B8G8R8A8_UNorm,
    Width = width,
    Height = height,
    MipLevels = 1,
    SampleDescription = sampleDesc,
    Usage = ResourceUsage.Default,
    OptionFlags = RenderTargetOptionFlags,
    CpuAccessFlags = CpuAccessFlags.None,
    ArraySize = 1
};

var colourPickerDesc = new Texture2DDescription
{
    BindFlags = BindFlags.RenderTarget,
    Format = Format.R32_SInt,
    Width = width,
    Height = height,
    MipLevels = 1,
    SampleDescription = new SampleDescription(1, 0),
    Usage = ResourceUsage.Default,
    OptionFlags = ResourceOptionFlags.None,
    CpuAccessFlags = CpuAccessFlags.None,
    ArraySize = 1,
};
The blend state is set like this:
var blendStateDescription = new BlendStateDescription { AlphaToCoverageEnable = false };
blendStateDescription.RenderTarget[0].IsBlendEnabled = true;
blendStateDescription.RenderTarget[0].SourceBlend = BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationBlend = BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].BlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].SourceAlphaBlend = BlendOption.SourceAlpha;
blendStateDescription.RenderTarget[0].DestinationAlphaBlend = BlendOption.InverseSourceAlpha;
blendStateDescription.RenderTarget[0].AlphaBlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[0].RenderTargetWriteMask = ColorWriteMaskFlags.All;
_blendState = new BlendState(_device, blendStateDescription);
and is applied at the start of rendering. I have tried explicitly setting IsBlendEnabled to false for RenderTarget[1] but it makes no difference.
Any help on this would be most welcome - ultimately, we may have to resort to making two render passes but that would be annoying.
I have now resolved this issue, although exactly how or why the "fix" works is not entirely clear. Hat-tip to VTT for pointing me to the IndependentBlendEnable flag in the BlendStateDescription. Setting that flag on its own (to true), along with RenderTarget[1].IsBlendEnabled = false, was not enough. What worked in the end was filling in a complete set of values for RenderTarget[1], along with the aforementioned flags. Presumably all the other values in the second render target should be ignored, as blending is disabled, but for some reason they need to be populated. As mentioned before, this problem only appears on certain graphics cards, so I have no idea whether this is the correct behaviour or just a quirk of those cards.
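Roughly, the working blend state looked like the sketch below. The specific BlendOption values for the disabled second target are my guesses at neutral settings, since they should be ignored when blending is off; the point is simply that every field is populated:

blendStateDescription.IndependentBlendEnable = true;
blendStateDescription.RenderTarget[1].IsBlendEnabled = false;
// A full set of values for the second target; presumably ignored, but required in practice.
blendStateDescription.RenderTarget[1].SourceBlend = BlendOption.One;
blendStateDescription.RenderTarget[1].DestinationBlend = BlendOption.Zero;
blendStateDescription.RenderTarget[1].BlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[1].SourceAlphaBlend = BlendOption.One;
blendStateDescription.RenderTarget[1].DestinationAlphaBlend = BlendOption.Zero;
blendStateDescription.RenderTarget[1].AlphaBlendOperation = BlendOperation.Add;
blendStateDescription.RenderTarget[1].RenderTargetWriteMask = ColorWriteMaskFlags.All;
_blendState = new BlendState(_device, blendStateDescription);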
I have two collada files (two different scenes: "01.dae" and "02.dae").
I want to display 01.dae first, and right after its animation finishes I want to load and display 02.dae on the same canvas.
(I'm trying to modify "webgl_loader_collada_keyframe.html" to do this, but with no results so far.)
How can I handle more than one animated collada scene? Source code or any tips or tricks would be appreciated!
Thank you for your answer. I modified my code based on your idea, but unfortunately it's not working.
Could you take a look at my code, please?
Here is my code:
loader.load( 'pump.dae', function ( collada ) {
    model = collada.scene;
    animations = collada.animations;
    kfAnimationsLength = animations.length;
    model.scale.x = model.scale.y = model.scale.z = 0.125; // 1/8 scale, modeled in cm
    init();
    start();
    animate( lastTimestamp );
    setTimeout( loadSecond, 3000 );
} );

function loadSecond() {
    loader2.load( 'pump2.dae', function ( collada2 ) {
        scene.remove( model );
        model2 = collada2.scene;
        animations2 = collada2.animations;
        kfAnimationsLength2 = animations2.length;
        model2.scale.x = model2.scale.y = model2.scale.z = 0.125; // 1/8 scale, modeled in cm
        init2();
        start2();
        animate2( lastTimestamp2 );
        //alert("second loaded");
    } );
}
...
As you can see, I extended your code with
scene.remove( model );
to remove the previous scene.
The first scene displays and then disappears properly; however, the new scene does not load. Do you have any idea why?
(Note: I don't know how long my first scene really is.)
If you know how long the animation is, you could use a setTimeout to load the second model.
var loader = new THREE.ColladaLoader();
loader.options.convertUpAxis = true;
loader.load( 'PATH TO MODEL', function colladaReady( collada ) {
    dae = collada.scene;
    skin = collada.skins[ 0 ];
    dae.scale.x = dae.scale.y = dae.scale.z = 1;
    dae.updateMatrix();
    scene.add( dae );
    render();
    setTimeout( loadSecond, 3000 );
} );

function loadSecond() {
    var loader = new THREE.ColladaLoader();
    loader.options.convertUpAxis = true;
    loader.load( 'PATH TO MODEL', function colladaReady( collada ) {
        // repeat the model loading logic
    } );
}
Where the time interval in the setTimeout is equal to the length of your animation.
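One more thing worth checking in the follow-up code above: nothing shown ever adds model2 to the scene (unless init2() does), so the first model is removed but the second is only assigned to a variable. A minimal sketch of loadSecond along those lines (names follow the snippets above; this is an assumption, not tested against the full code):

function loadSecond() {
    var loader2 = new THREE.ColladaLoader();
    loader2.options.convertUpAxis = true;
    loader2.load( 'pump2.dae', function ( collada2 ) {
        scene.remove( model );    // drop the first model
        model2 = collada2.scene;
        model2.scale.x = model2.scale.y = model2.scale.z = 0.125;
        model2.updateMatrix();
        scene.add( model2 );      // without this the second model never appears
        render();
    } );
}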
I have implemented Uploadify in my ASP.NET MVC 3 application to upload images, but now I want to resize the images that I upload. I am not sure what to do next to start resizing. There are probably various ways to perform this resize, but I have not been able to find any examples of it yet. Can anyone suggest a way of doing this? Thanks.
Here's a function you can use on the server side. I use it to process my images after uploadify is done.
// Requires: using System.Drawing; using System.Drawing.Drawing2D;
private static Image ResizeImage(Image imgToResize, Size size)
{
    int sourceWidth = imgToResize.Width;
    int sourceHeight = imgToResize.Height;

    // Scale by whichever ratio keeps the whole image inside the target size.
    float nPercentW = (float)size.Width / sourceWidth;
    float nPercentH = (float)size.Height / sourceHeight;
    float nPercent = nPercentH < nPercentW ? nPercentH : nPercentW;

    int destWidth = (int)(sourceWidth * nPercent);
    int destHeight = (int)(sourceHeight * nPercent);

    Bitmap b = new Bitmap(destWidth, destHeight);
    using (Graphics g = Graphics.FromImage(b))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
    }
    return b;
}
Here's how I use it:
// Rewind the stream first; the Bitmap constructor reads from the current position.
stream.Position = 0;
var image = new Bitmap(stream);
var resizedImage = ResizeImage(image, new Size(300, 300));
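If you need to persist the result afterwards, something like the following should work; the folder and file name here are placeholders, not part of the original code (requires using System.Drawing.Imaging for ImageFormat and using System.IO for Path):

resizedImage.Save(Path.Combine(uploadFolder, fileName), ImageFormat.Jpeg);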
Holler if you need help getting it running.
You have three options:
Use the GDI+ library (example code: C# GDI+ Image Resize Function)
Use a third-party component (I use ImageMagick; my solution: Generating image thumbnails in ASP.NET?)
Resize the images on the client side (some uploaders can do this)