How to resize images after uploading with Uploadify? - asp.net-mvc

I have implemented Uploadify in my ASP.NET MVC 3 application to upload images, but I now want to resize the images that I upload. I'm not sure what to do next to start resizing. I think there are probably various ways to perform the resize, but I haven't been able to find any examples yet. Can anyone suggest a way of doing this? Thanks

Here's a function you can use on the server side. I use it to process my images after Uploadify has finished uploading.
// Requires: using System.Drawing; using System.Drawing.Drawing2D;
private static Image ResizeImage(Image imgToResize, Size size)
{
    int sourceWidth = imgToResize.Width;
    int sourceHeight = imgToResize.Height;

    // Work out the scale factor that fits the image inside the target size
    // while preserving the aspect ratio.
    float nPercentW = (float)size.Width / (float)sourceWidth;
    float nPercentH = (float)size.Height / (float)sourceHeight;
    float nPercent = nPercentH < nPercentW ? nPercentH : nPercentW;

    int destWidth = (int)(sourceWidth * nPercent);
    int destHeight = (int)(sourceHeight * nPercent);

    var b = new Bitmap(destWidth, destHeight);
    using (Graphics g = Graphics.FromImage(b))
    {
        g.InterpolationMode = InterpolationMode.HighQualityBicubic;
        g.DrawImage(imgToResize, 0, 0, destWidth, destHeight);
    }
    return b;
}
Here's how I use it:
// stream is the uploaded image's stream; rewind it in case it has already been read.
stream.Seek(0, SeekOrigin.Begin);
var image = new Bitmap(stream);
var resizedImage = ResizeImage(image, new Size(300, 300));
Holler if you need help getting it running.
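To tie this back to the original question, here is a rough sketch (not from the answer above) of an MVC controller action that Uploadify might post to, assuming the ResizeImage helper lives in the same controller; the action name, save path, output format, and parameter name are all assumptions that depend on how Uploadify is configured on the client:
// Hedged sketch only. Requires: System.Drawing, System.Drawing.Imaging,
// System.IO, System.Web, System.Web.Mvc. The parameter name must match the
// file field name your Uploadify configuration posts.
[HttpPost]
public ActionResult UploadImage(HttpPostedFileBase fileData)
{
    if (fileData == null || fileData.ContentLength == 0)
        return new HttpStatusCodeResult(400);

    using (var original = Image.FromStream(fileData.InputStream))
    using (var resized = ResizeImage(original, new Size(300, 300)))
    {
        // Save path and output format are illustrative only.
        var savePath = Server.MapPath("~/Content/uploads/" + Path.GetFileName(fileData.FileName));
        resized.Save(savePath, ImageFormat.Jpeg);
    }

    return Json(new { success = true });
}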

You have three options:
Use the GDI+ library (example code: C# GDI+ Image Resize Function)
Use a third-party component (I use ImageMagick; my solution is in "Generating image thumbnails in ASP.NET?") - a rough sketch follows below
Resize the images on the client side (some uploaders can do this)
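For the second option, a minimal sketch with the Magick.NET wrapper for ImageMagick (the package choice and the file paths are my assumptions, not part of the linked answer):
// Hedged sketch. Requires the Magick.NET NuGet package and: using ImageMagick;
// uploadedFilePath and thumbnailPath are placeholder paths.
using (var image = new MagickImage(uploadedFilePath))
{
    // Resize to fit within 300x300, preserving the aspect ratio.
    image.Resize(300, 300);
    image.Write(thumbnailPath);
}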

Related

Drawing SharpDX effects on bitmap is too slow, what am I doing wrong?

So basically, I need performance. Currently at my job we use GDI+ graphics to draw bitmaps. GDI+ Graphics has a method called DrawImage(Bitmap, Point[]). That array contains 3 points, and the rendered image comes out with a skew effect.
Here is an illustration of a skew effect:
[skew effect image]
At work, we need to render between 5000 and 6000 different images every single frame, which takes ~80 ms.
Now I thought of using SharpDX, since it provides GPU acceleration. I use Direct2D, since all I need is in 2 dimensions. However, the only way I found to reproduce the skew effect is to use SharpDX.effects.Skew and calculate a matrix to draw the initial bitmap with a skew effect (I will provide the code below). The rendered image is exactly the same as with GDI+, which is what I want. The only problem is that it takes 600-700 ms to render the 5000-6000 images.
Here is the code of my SharpDX :
To initiate device :
private void InitializeSharpDX()
{
    swapchaindesc = new SwapChainDescription()
    {
        BufferCount = 2,
        ModeDescription = new ModeDescription(this.Width, this.Height, new Rational(60, 1), Format.B8G8R8A8_UNorm),
        IsWindowed = true,
        OutputHandle = this.Handle,
        SampleDescription = new SampleDescription(1, 0),
        SwapEffect = SwapEffect.Discard,
        Usage = Usage.RenderTargetOutput,
        Flags = SwapChainFlags.None
    };

    SharpDX.Direct3D11.Device.CreateWithSwapChain(DriverType.Hardware, DeviceCreationFlags.BgraSupport | DeviceCreationFlags.Debug, swapchaindesc, out device, out swapchain);
    SharpDX.DXGI.Device dxgiDevice = device.QueryInterface<SharpDX.DXGI.Device>();
    surface = swapchain.GetBackBuffer<Surface>(0);

    factory = new SharpDX.Direct2D1.Factory1(FactoryType.SingleThreaded, DebugLevel.Information);
    d2device = new SharpDX.Direct2D1.Device(factory, dxgiDevice);
    d2deviceContext = new SharpDX.Direct2D1.DeviceContext(d2device, SharpDX.Direct2D1.DeviceContextOptions.EnableMultithreadedOptimizations);

    bmpproperties = new BitmapProperties(new SharpDX.Direct2D1.PixelFormat(SharpDX.DXGI.Format.B8G8R8A8_UNorm, SharpDX.Direct2D1.AlphaMode.Premultiplied), 96, 96);

    d2deviceContext.AntialiasMode = AntialiasMode.Aliased;
    bmp = new SharpDX.Direct2D1.Bitmap(d2deviceContext, surface, bmpproperties);
    d2deviceContext.Target = bmp;
}
And here is the code I use to recalculate every image's position each frame (each time I zoom in or out with the mouse, I ask for a redraw). You can see in the code two loops over 5945 images where I ask it to draw the image. With no effects it takes 60 ms, and with effects it takes up to 700 ms, as I mentioned before:
private void DrawSkew()
{
    d2deviceContext.BeginDraw();
    d2deviceContext.Clear(SharpDX.Color.Blue);

    // Draw skew effect for 5945 images using SharpDX (370ms)
    for (int i = 0; i < 5945; i++)
    {
        AffineTransform2D effect = new AffineTransform2D(d2deviceContext);

        PointF[] points = new PointF[3];
        points[0] = new PointF(50, 50);
        points[1] = new PointF(400, 40);
        points[2] = new PointF(40, 400);

        effect.SetInput(0, actualBmp, true);

        float xAngle = (float)Math.Atan((points[1].Y - points[0].Y) / (points[1].X - points[0].X));
        float yAngle = (float)Math.Atan((points[2].X - points[0].X) / (points[2].Y - points[0].Y));

        Matrix3x2 Matrix = Matrix3x2.Identity;
        Matrix3x2.Skew(xAngle, yAngle, out Matrix);
        Matrix.M11 = Matrix.M11 * (((points[1].X - points[0].X) + (points[2].X - points[0].X)) / actualBmp.Size.Width);
        Matrix.M22 = Matrix.M22 * (((points[1].Y - points[0].Y) + (points[2].Y - points[0].Y)) / actualBmp.Size.Height);

        effect.TransformMatrix = Matrix;
        d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
        effect.Dispose();
    }

    // Draw the plain bitmap (no effect) 5945 times using SharpDX (60ms)
    for (int i = 0; i < 5945; i++)
    {
        d2deviceContext.DrawBitmap(actualBmp, 1.0f, BitmapInterpolationMode.NearestNeighbor);
    }

    d2deviceContext.EndDraw();
    swapchain.Present(1, PresentFlags.None);
}
After a lot of benchmarking, I realized the line that makes it really slow is:
d2deviceContext.DrawImage(effect, new SharpDX.Vector2(points[0].X, points[0].Y));
My guess is that my code or my setup does not use SharpDX's GPU acceleration as it should, and that this is why it is so slow. I would expect at least better performance from SharpDX than from GDI+ for this kind of work.
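For what it's worth, here is a sketch of one variation, under the assumption (unverified here) that allocating and disposing a new AffineTransform2D for every image on every frame accounts for a significant part of that cost:
// Hedged sketch: reuse one effect instance across the loop instead of
// creating/disposing 5945 of them per frame. The skew matrix math is the
// same as in DrawSkew above; ComputeSkewMatrix is a hypothetical helper
// wrapping it, and 'points' is assumed to be computed per image as before.
var skewEffect = new AffineTransform2D(d2deviceContext);
skewEffect.SetInput(0, actualBmp, true);
for (int i = 0; i < 5945; i++)
{
    skewEffect.TransformMatrix = ComputeSkewMatrix(points, actualBmp);
    d2deviceContext.DrawImage(skewEffect, new SharpDX.Vector2(points[0].X, points[0].Y));
}
skewEffect.Dispose();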

How to scale an image to half size through an array of bytes?

I found many examples of how to scale an image in Windows Forms, but in this case I'm using an array of bytes in a Windows Store application. This is the code snippet I'm using.
// Now that you have the raw bytes, create an image decoder
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStream);
// Get the first frame from the decoder because we are picking an image
BitmapFrame frame = await decoder.GetFrameAsync(0);
// Convert the frame into pixels
PixelDataProvider pixelProvider = await frame.GetPixelDataAsync();
// Convert the pixels into a byte array
srcPixels = pixelProvider.DetachPixelData();
wid = (int)frame.PixelWidth;
hgt = (int)frame.PixelHeight;
// Create an in-memory WriteableBitmap of the same size
bitmap = new WriteableBitmap(wid, hgt);
Stream pixelStream = bitmap.PixelBuffer.AsStream();
pixelStream.Seek(0, SeekOrigin.Begin);
// Push the pixels from the original file into the in-memory bitmap
pixelStream.Write(srcPixels, 0, (int)srcPixels.Length);
bitmap.Invalidate();
In this case, it just creates a copy of the stream. I don't know how to manipulate the byte array to reduce it to half the width and height.
If you look at the MSDN documentation for GetPixelDataAsync, you can see that it has an overload that lets you specify a BitmapTransform to be applied during the operation.
So in your example code you can do something like this:
// decode a frame (as you do now)
BitmapDecoder decoder = await BitmapDecoder.CreateAsync(fileStream);
BitmapFrame frame = await decoder.GetFrameAsync(0);
// calculate required scaled size
uint newWidth = frame.PixelWidth / 2;
uint newHeight = frame.PixelHeight / 2;
// convert (and resize) the frame into pixels
PixelDataProvider pixelProvider =
await frame.GetPixelDataAsync(
BitmapPixelFormat.Rgba8,
BitmapAlphaMode.Straight,
new BitmapTransform() { ScaledWidth = newWidth, ScaledHeight = newHeight},
ExifOrientationMode.RespectExifOrientation,
ColorManagementMode.DoNotColorManage);
Now, you can call DetachPixelData as in your original code, but this will give you the resized image instead of the full sized image.
srcPixels = pixelProvider.DetachPixelData();
// create an in memory WriteableBitmap of the scaled size
bitmap = new WriteableBitmap((int)newWidth, (int)newHeight);
Stream pixelStream = bitmap.PixelBuffer.AsStream();
pixelStream.Seek(0, SeekOrigin.Begin);
// push the pixels from the original file into the in-memory bitmap
pixelStream.Write(srcPixels, 0, (int)srcPixels.Length);
bitmap.Invalidate();
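If you also want to persist the scaled pixels rather than just display them, a rough sketch using BitmapEncoder might look like this (outputStream is an assumed IRandomAccessStream opened for the destination file, and the JPEG encoder and 96 DPI values are assumptions):
// Hedged sketch: encode the already-scaled pixel data back out to a stream.
BitmapEncoder encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.JpegEncoderId, outputStream);
encoder.SetPixelData(
    BitmapPixelFormat.Rgba8,
    BitmapAlphaMode.Straight,
    newWidth,
    newHeight,
    96, 96,          // DPI (assumed)
    srcPixels);
await encoder.FlushAsync();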

What are the risks of loading textures from images at runtime in XNA?

What is there to know about loading textures from images at runtime, apart from it only being possible on PC?
I've been exclusively loading my textures from streams as well, and the only "danger" that I can come up with is Premultiplied Alphas. You'll either need to process each texture as you load it, or render with premultiplied alphas disabled. Shawn Hargreaves wrote a great blog article on this subject. In the comments, he mentions:
If you don't go through our Content Pipeline, you have to handle the format conversion yourself, or just not use premultiplied alpha
So initializing your sprite batch(es) with BlendState.NonPremultiplied should work.
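For example (a minimal sketch, assuming an XNA 4.0 SpriteBatch; texture and position are placeholders):
// Render without premultiplied alpha instead of converting the texture data.
spriteBatch.Begin(SpriteSortMode.Deferred, BlendState.NonPremultiplied);
spriteBatch.Draw(texture, position, Color.White);
spriteBatch.End();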
In my game, I have processed each texture when I load it.
EDIT: Here's my method:
// Requires: using Microsoft.Xna.Framework.Graphics.PackedVector;
private static void PreMultiplyAlphas(Texture2D ret)
{
    var data = new Byte4[ret.Width * ret.Height];
    ret.GetData(data);

    for (var i = 0; i < data.Length; i++)
    {
        // Unpack the color, scale each channel by the alpha, and repack it.
        var vec = data[i].ToVector4();
        var alpha = vec.W / 255.0f;
        var a = (Int32)(vec.W);
        var r = (Int32)(alpha * vec.X);
        var g = (Int32)(alpha * vec.Y);
        var b = (Int32)(alpha * vec.Z);
        data[i].PackedValue = (UInt32)((a << 24) + (b << 16) + (g << 8) + r);
    }

    ret.SetData(data);
}
As you can see, all it does is multiply the color channels by the alpha and stuff the result back into the texture. Without this, your sprites will likely appear brighter or darker than they should. (Disclaimer: I didn't write the method above; a friend of mine did.)
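For reference, a minimal sketch of how it might be called when loading at runtime (the stream variable and the GraphicsDevice reference are assumptions about your loading code):
// Load the texture from a stream at runtime, then convert it to premultiplied alpha.
Texture2D texture = Texture2D.FromStream(GraphicsDevice, stream);
PreMultiplyAlphas(texture);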

What is BlackBerry's equivalent to Java ME's Image.createImage() from an existing (loaded) image

I have the following Java ME code that I'd like to port to BlackBerry:
Image imgAll = Image.createImage("/fontDigits_200x20.png");
imageDigits = new Image[10];
for(int i = 0; i < imageDigits.length; i++)
imageDigits[i] = Image.createImage(imgAll, i * 20, 0, 20, 20, Sprite.TRANS_NONE);
Basically, it's one image of ten digits that I want to split into 10 individual images and store them into an array. I looked through the docs, but can't find anything similar on EncodedImage or Graphics.
Thank you for any pointers!
UPDATE:
Good news! Apparently there's no way to crop an EncodedImage so as to get a new EncodedImage that is a cropped subset of the original, but you can do exactly that with a Bitmap, which is essentially the same thing.
You can use
Bitmap.getARGB(int[] argbData, int offset, int scanLength, int x, int y, int width, int height)
after loading your image:
Bitmap imgAll = Bitmap.getBitmapResource("fontDigits_200x20.png");
and of course you can create a new Bitmap from that ARGB data.
You can do it directly with the Bitmap.scaleInto function:
Bitmap src;
Bitmap dst = new Bitmap(64,32);
int filterType = Bitmap.FILTER_BILINEAR;
src.scaleInto(srcLeft, srcTop, srcWidth, srcHeight, dst, dstLeft, dstTop, dstWidth, dstHeight, filterType);

Get image original width & height in actionscript

I use AS3 in Flex 3 to create a new image, and I can't seem to get the exact size of the original image. Setting percentHeight & percentWidth to 100 does the job, but a limitation in ObjectHandles requires me to set the image scale in pixels.
Any solution?
Note: this is also applicable for displaying an image's original dimensions without the ObjectHandles control; just remove the lines that are not applicable.
After struggling for hours to find a solution, I found my own answer in an ActionScript forum. In fact, it is the only solution I found; I am surprised there was no such topic elsewhere.
private function init():void {
    var image:Image = new Image();
    image.source = "http://www.colorjack.com/software/media/circle.png";
    image.addEventListener(Event.COMPLETE, imageLoaded);
    /* Wait for completion, as the Image control is asynchronous,
     * which means ObjectHandles will attempt to load ASAP
     * and you will not get the correct dimensions for scaling.
     * The event listener fixes that.
     */
    this.addChild(image);

    // Whenever you scale the ObjectHandles control, the image always fits it at 100%.
    image.percentHeight = 100;
    image.percentWidth = 100;
}

private function imageLoaded(e:Event):void {
    var img:Image = e.target as Image;
    trace("Height ", img.contentHeight);
    trace("Width ", img.contentWidth);

    var oh:ObjectHandles = new ObjectHandles();
    oh.x = 200;
    oh.y = 200;
    oh.height = img.contentHeight;
    oh.width = img.contentWidth;
    oh.allowRotate = true;
    oh.autoBringForward = true;
    oh.addChild(img);
    genericExamples.addChild(oh);
}
