Image brightness/contrast - Xamarin iOS

Xamarin provides some sample code for doing simple adjustments to an image in iOS:
https://github.com/xamarin/recipes/blob/master/ios/media/coreimage/adjust_contrast_and_brightness_of_an_image/color_controls_pro/ImageViewController.cs
This code updates the image only when the user lets go of the slider knob - not the continuous updating we normally expect.
However when I make the following change I reliably get SIGSEGV faults on hardware.
//sliderC.TouchUpInside += HandleValueChanged;
//sliderS.TouchUpInside += HandleValueChanged;
//sliderB.TouchUpInside += HandleValueChanged;
sliderC.ValueChanged += HandleValueChanged;
sliderS.ValueChanged += HandleValueChanged;
sliderB.ValueChanged += HandleValueChanged;
I expect that this is "overloading" the code in some way. How would you implement image adjustments that avoid this problem? Is there a lower-level approach, or do other apps simply use a much lower-resolution version of the image for adjustments?

Here is a quick version I did that is 'more' realtime. (The recording is from the simulator; on a device (6s) it is smooth, depending upon the initial image size.)
Create a single-view-controller iOS app from the template and add a UIImageView and three sliders to the storyboard so it looks like the animated GIF.
I created a simple class to store the ColorCtrl values (brightness, contrast, saturation):
public class ColorCtrl
{
public float s;
public float b;
public float c;
}
Then, in the ViewDidLoad method, do some setup (the fields below are the ones used throughout the handlers):
CIImage originalImage;
CIColorControls colorCtrls;
CIContext context;
ColorCtrl colorCtrlV = new ColorCtrl { s = 1f, b = 0f, c = 1f }; // neutral CIColorControls values
Func<UIImage> transformImage;
bool busy;

public override void ViewDidLoad ()
{
    base.ViewDidLoad ();
    string filePath = Path.Combine (NSBundle.MainBundle.BundlePath, "hero.jpg");
    originalImage = new CIImage (new NSUrl (filePath, false));
    colorCtrls = new CIColorControls ();
    colorCtrls.Image = originalImage;
    // Create the context only once, and re-use it
    var contextOptions = new CIContextOptions ();
    contextOptions.UseSoftwareRenderer = false; // GPU vs. CPU
    // On save of the image, create a new context with high-quality attributes and re-apply the filter...
    contextOptions.HighQualityDownsample = false;
    contextOptions.PriorityRequestLow = false; // do not deprioritize our requests
    contextOptions.CIImageFormat = (int)CIFormat.ARGB8; // use 32bpp, vs. 64|128bpp
    context = CIContext.FromOptions (contextOptions);
}
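The comment in the setup above notes that, when saving, you would create a new context with high-quality attributes and re-apply the filter. A minimal sketch of what that save path might look like (SaveFullQualityImage is a hypothetical name, not part of the recipe):
// Hypothetical save path: build a one-off high-quality context and re-render the
// full-size image with the current slider values.
UIImage SaveFullQualityImage ()
{
    var hqOptions = new CIContextOptions ();
    hqOptions.UseSoftwareRenderer = false;
    hqOptions.HighQualityDownsample = true;   // favor quality over speed for the one-off render
    hqOptions.CIImageFormat = (int)CIFormat.ARGB8;
    using (var hqContext = CIContext.FromOptions (hqOptions)) {
        colorCtrls.Brightness = colorCtrlV.b;
        colorCtrls.Saturation = colorCtrlV.s;
        colorCtrls.Contrast = colorCtrlV.c;
        var cgImage = hqContext.CreateCGImage (colorCtrls.OutputImage, originalImage.Extent);
        return new UIImage (cgImage);
    }
}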
Then wire up the three sliders' change handlers.
Within those I 'hack' a busy flag in order to skip the image transformation if the last transform is not done yet. If we are not busy, we await our async transform method.
Note: I said 'hack' and I mean it. As a best practice, this should push transform requests onto a queue, and the queue handler would coalesce all the pending items, flush them, and do the transform (a rough sketch of that appears after FilterImageAsync below).
Note: I added "async" to the generated event handlers so that I can await the image transform.
Note: The three sliders' handlers are all the same except for the value that they assign: colorCtrlV.b | colorCtrlV.s | colorCtrlV.c
Note: You can down-sample a large image at the moment the user touches down, perform the transforms on that, and on touch up transform the original full-size image (a rough sketch of that follows the handler below)...
async partial void brightnessChange (UISlider sender)
{
if (!busy) {
busy = true;
colorCtrlV.b = sender.Value;
this.imageView.Image = await FilterImageAsync (colorCtrlV);
busy = false;
}
}
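The last note above mentions working on a down-sampled copy while the user is dragging. A rough sketch of that idea (AttachDownsampleHandlers and the 0.25 scale factor are illustrative, not part of the original sample):
// While a finger is down, feed the filter a quarter-size copy; on release, go back
// to the full-size original and render it once more.
void AttachDownsampleHandlers (UISlider slider)
{
    slider.TouchDown += (s, e) => {
        var scale = new CILanczosScaleTransform {
            Image = originalImage,
            Scale = 0.25f,      // work on a much smaller image while dragging
            AspectRatio = 1.0f
        };
        colorCtrls.Image = scale.OutputImage;
        // Note: when rendering the scaled output, use its own Extent rather than
        // originalImage.Extent in the CreateCGImage call.
    };
    slider.TouchUpInside += async (s, e) => {
        colorCtrls.Image = originalImage;   // back to full resolution
        this.imageView.Image = await FilterImageAsync (colorCtrlV);
    };
}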
OK, now for the actual transform: a fairly simple Task.Run so we can get this work off the main thread and NOT block the UI. This way the sliders will not stutter as the user slides a finger, BUT due to the 'busy' flag/hack in the slider handlers we might skip some of those events (we should add a handler for the touch-drag-exit event so we always process the last value the user requested...).
// async Task.Run() - not best practice - just a demo
async Task<UIImage> FilterImageAsync (ColorCtrl value)
{
    // The transform delegate is created once and re-used. It reads the shared
    // colorCtrlV field (the same instance that is passed in as `value`).
    if (transformImage == null)
        transformImage = new Func<UIImage> (() => {
            colorCtrls.Brightness = colorCtrlV.b;
            colorCtrls.Saturation = colorCtrlV.s;
            colorCtrls.Contrast = colorCtrlV.c;
            var output = colorCtrls.OutputImage;
            // Render the filter output into a CGImage and wrap it in a UIImage
            var cgImage = context.CreateCGImage (output, originalImage.Extent);
            var filteredImage = new UIImage (cgImage);
            return filteredImage;
        });
    UIImage image = await Task.Run<UIImage> (transformImage);
    return image;
}
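As the notes say, a cleaner alternative to the busy flag is to coalesce requests so the newest slider values always get rendered and nothing is silently dropped. A minimal sketch of that idea (RequestTransform, transformPending, and transformRunning are hypothetical names; each slider handler would set its colorCtrlV field and then call RequestTransform instead of awaiting FilterImageAsync directly):
bool transformPending;
bool transformRunning;

// Coalescing "queue": remember that the latest values need a render and loop until
// nothing new has been requested. Everything runs on the main thread (async void
// event handler plus await continuation), so no locking is needed.
async void RequestTransform ()
{
    transformPending = true;
    if (transformRunning)
        return;                 // the running loop below will pick the new values up
    transformRunning = true;
    while (transformPending) {
        transformPending = false;
        this.imageView.Image = await FilterImageAsync (colorCtrlV);  // always renders the newest values
    }
    transformRunning = false;
}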
Personally, for this type of real-time image transformation I prefer to do it via OpenGL ES using GPUImage, as the screen interaction at the 60 Hz refresh rate is as smooth as butter, but it is a lot more work than using CoreImage filters.

Related

Xamarin.Forms Xam.Plugin.Media Take Picture on iOS

I'm using the Xam.Plugin.Media in my Forms app to take pictures.
I take the Image stream (GetStream) and convert to a byte[] and store in my DB.
I then use that as the image source.
On Android and UWP, it's working fine.
On iOS, if the picture is taken in portrait mode, the image, once selected, is always rotated 90 degrees.
I will later upload this to a server, and that image could be used on a different device.
For this, I also tried GetStreamWithImageRotatedForExternalStorage, but in this case I can't see the image at all.
There are bytes in the stream (I DisplayAlert the length) but the image does not display.
Any idea what I might be doing wrong?
My code:-
private async Task TakePicture(WineDetails details)
{
await CrossMedia.Current.Initialize();
if (CrossMedia.Current.IsCameraAvailable && CrossMedia.Current.IsTakePhotoSupported)
{
var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
{
AllowCropping = true,
PhotoSize = Plugin.Media.Abstractions.PhotoSize.Medium,
SaveToAlbum = false,
RotateImage = true
});
if (file == null)
return;
using (var ms = new MemoryStream())
{
var stream = file.GetStreamWithImageRotatedForExternalStorage();
stream.CopyTo(ms);
details.LabelImage = ms.ToArray();
details.NotifyChange("ImageSource");
}
}
}
The image is updated in the page via the NotifyChange and looks like this:-
ImageSource.FromStream(() => new MemoryStream(this.LabelImage))
This works fine in all cases on Android and UWP, works on iOS using GetStream (except the image is incorrectly rotated) but does not work using GetStreamWithImageRotatedForExternalStorage on iOS.
Anyone else using this plugin?
Any idea why GetStream returns a rotated image?
Any idea why GetStreamWithImageRotatedForExternalStorage is not working?
Thanks
Update:-
Changed SaveToAlbum = true, and when I open the gallery, the image is rotated 90 degrees.
I have RotateImage = true, which could be causing the issue? I'll try setting it to false.
I still can't set the image source to the byte array of the image using GetStreamWithImageRotatedForExternalStorage.
using (var ms = new MemoryStream())
{
file.GetStreamWithImageRotatedForExternalStorage().CopyTo(ms);
details.LabelImage = ms.ToArray();
}
using the byte array for an image
return ImageSource.FromStream(() => new MemoryStream(this.LabelImage));
This does not work for me, GetStream works ok.
Update:-
OK so, RotateImage = false + GetStreamWithImageRotatedForExternalStorage allows me to display the image, but it's still incorrectly rotated in my app and in the gallery.
I'm using this plugin, which is similar (if not the same thing - I know James Montemagno has recently packaged/bundled his work with Xamarin).
If you check the issues board there, you'll see there are quite a few people that have similar troubles (image rotation on iOS). Almost every 'solution' mentions using GetStreamWithImageRotatedForExternalStorage.
My issue was similar - I was unable to take a photo on iOS in portrait mode without it showing up rotated on other (non-iOS) devices. I tried for weeks to solve this issue, but support on the plugin seems to be quite limited.
Ultimately I had to solve this with a huge workaround - using a custom control extending FFImageLoading to display our images, together with MetadataExtractor. We were then able to extract the EXIF data from the stream and apply a rotation transformation to the FFImageLoading image control.
The rotation information was stored in a sort of weird way, as a string. This is the method I'm using to extract the rotation information and return the amount the image needs to be rotated by as an int. Note that for me, iOS was still able to display the image correctly, so it only returns a rotation correction for Android devices.
public static int GetImageRotationCorrection(byte[] image)
{
    try
    {
        var directories = ImageMetadataReader.ReadMetadata(new MemoryStream(image));
        // Only Android needs the manual correction in this app
        // (Device.RuntimePlatform comes from Xamarin.Forms).
        if (Device.RuntimePlatform == Device.Android)
        {
            foreach (var directory in directories)
            {
                foreach (var tag in directory.Tags)
                {
                    if (tag.Name == "Orientation")
                    {
                        if (tag.Description == "Top, left side(Horizontal / normal)")
                            return 0;
                        else if (tag.Description == "Left side, bottom (Rotate 270 CW)")
                            return 270;
                        else if (tag.Description == "Right side, top (Rotate 90 CW)")
                            return 90;
                    }
                }
            }
        }
        return 0;
    }
    catch (Exception)
    {
        // If the metadata cannot be read, fall back to no rotation
        return 0;
    }
}
Note that there is also a custom image control for FFImageLoading (it extends CachedImage):
// Namespaces below come from the FFImageLoading and Xamarin.Forms packages;
// adjust if your package versions place these types elsewhere.
using FFImageLoading.Forms;
using FFImageLoading.Transformations;
using FFImageLoading.Work;
using Xamarin.Forms;

public class RotatedImage : CachedImage
{
    public static BindableProperty MyRotationProperty = BindableProperty.Create(nameof(MyRotation), typeof(int), typeof(RotatedImage), 0, propertyChanged: UpdateRotation);

    public int MyRotation
    {
        get { return (int)GetValue(MyRotationProperty); }
        set { SetValue(MyRotationProperty, value); }
    }

    private static void UpdateRotation(BindableObject bindable, object oldRotation, object newRotation)
    {
        var _oldRotation = (int)oldRotation;
        var _newRotation = (int)newRotation;
        if (!_oldRotation.Equals(_newRotation))
        {
            var view = (RotatedImage)bindable;
            var transformations = new System.Collections.Generic.List<ITransformation>() {
                new RotateTransformation(_newRotation)
            };
            view.Transformations = transformations;
        }
    }
}
So, in my XAML - I had declared a RotatedImage instead of the standard Image. With the custom renderer, I'm able to do this and have the image display rotated the correct amount.
image.MyRotation = GetImageRotationCorrection(imageAsBytes);
It's a totally unnecessary workaround - but these are the lengths that I had to go to to get around this issue.
I'll definitely be following along with this question, there might be someone in the community who could help us both!
The SaveMetaData flag is causing the rotation issue.
Setting it to false (default is true) now displays the photo correctly.
One side effect of that, the image no longer appears in the gallery if SaveToAlbum=true.
Still can't use an image taken when using GetStreamWithImageRotatedForExternalStorage, even using FFImageLoading.
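For reference, a sketch of the options combination described above, with SaveMetaData = false being the key change (adjust the other settings to taste):
var file = await CrossMedia.Current.TakePhotoAsync(new Plugin.Media.Abstractions.StoreCameraMediaOptions
{
    AllowCropping = true,
    PhotoSize = Plugin.Media.Abstractions.PhotoSize.Medium,
    SaveToAlbum = false,   // note the gallery side effect mentioned above when SaveMetaData is false
    RotateImage = true,
    SaveMetaData = false   // this is what stopped the 90-degree rotation on iOS here
});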
I found that while using Xam.Plugin.Media v5.0.1 (https://github.com/jamesmontemagno/MediaPlugin), the combination of three different inputs produced different results on Android vs. iOS:
StoreCameraMediaOptions.SaveMetaData
StoreCameraMediaOptions.RotateImage
Using MediaFile.GetStream() vs. MediaFile.GetStreamWithImageRotatedForExternalStorage()
On Android, SaveMetaData = false, RotateImage = true, and using MediaFile.GetStreamWithImageRotatedForExternalStorage() worked for me whether I was saving the result stream externally or processing the result stream locally for display.
On iOS, the combination of RotateImage = true and StreamRotated = true would result in a NullReferenceException coming out of the plugin library. Using MediaFile.GetStreamWithImageRotatedForExternalStorage() appeared to have no impact on behavior.
--
Before going further, it's important to understand that image orientation in the JPEG format (which Xam.Plugin.Media seems to use) isn't as straightforward as you might think. Rather than rotating the raw image bytes 90 or 180 or 270 degrees, JPEG orientation can be set through embedded EXIF metadata. Orientation issues will happen with JPEGs either if EXIF data is stripped or if downstream consumers don't handle the EXIF data properly.
The approach I landed on was to normalize JPEG image orientation at the point the image is captured without relying on EXIF metadata. This way, downstream consumers shouldn't need to be relied on to properly inspect and handle EXIF orientation metadata.
The basic solution is:
Scan a JPEG for EXIF orientation metadata
Transform the JPEG to rotate/flip as needed
Set the result JPEG's orientation metadata to default
--
Code example compatible with Xamarin, using ExifLib.Standard (1.7.0) and SixLabors.ImageSharp (1.0.4) NuGet packages. Based on (Problem reading JPEG Metadata (Orientation))
using System;
using System.IO;
using ExifLib;
using SixLabors.ImageSharp;
using SixLabors.ImageSharp.Formats.Jpeg;
using SixLabors.ImageSharp.Metadata.Profiles.Exif;
using SixLabors.ImageSharp.Processing;
namespace Your.Namespace
{
public static class ImageOrientationUtility
{
public static Stream NormalizeOrientation(Func<Stream> inputStreamFunc)
{
using Stream exifStream = inputStreamFunc();
using var exifReader = new ExifReader(exifStream);
bool orientationTagExists = exifReader.GetTagValue(ExifTags.Orientation, out ushort orientationTagValue);
if (!orientationTagExists)
// You may wish to do something less aggressive than throw an exception in this case.
throw new InvalidOperationException("Input stream does not contain an orientation EXIF tag.");
using Stream processStream = inputStreamFunc();
using Image image = Image.Load(processStream);
switch (orientationTagValue)
{
case 1:
// No rotation required.
break;
case 2:
image.Mutate(x => x.RotateFlip(RotateMode.None, FlipMode.Horizontal));
break;
case 3:
image.Mutate(x => x.RotateFlip(RotateMode.Rotate180, FlipMode.None));
break;
case 4:
image.Mutate(x => x.RotateFlip(RotateMode.Rotate180, FlipMode.Horizontal));
break;
case 5:
image.Mutate(x => x.RotateFlip(RotateMode.Rotate90, FlipMode.Horizontal));
break;
case 6:
image.Mutate(x => x.RotateFlip(RotateMode.Rotate90, FlipMode.None));
break;
case 7:
image.Mutate(x => x.RotateFlip(RotateMode.Rotate270, FlipMode.Horizontal));
break;
case 8:
image.Mutate(x => x.RotateFlip(RotateMode.Rotate270, FlipMode.None));
break;
}
image.Metadata.ExifProfile.SetValue(ExifTag.Orientation, (ushort)1);
var outStream = new MemoryStream();
image.Save(outStream, new JpegEncoder{Quality = 100});
outStream.Position = 0;
return outStream;
}
}
}
And to use in conjunction with Xam.Plugin.Media:
MediaFile photo = await CrossMedia.Current.TakePhotoAsync(options);
await using Stream stream = ImageOrientationUtility.NormalizeOrientation(photo.GetStream);
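A hypothetical follow-up tying this back to the original question: copy the normalized stream into the byte[] that is stored and later used as the ImageSource (details.LabelImage is the property from the question above):
using (var ms = new MemoryStream())
{
    await stream.CopyToAsync(ms);
    details.LabelImage = ms.ToArray();   // byte[] persisted to the DB
}
// later, for display:
// ImageSource.FromStream(() => new MemoryStream(details.LabelImage));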

ARCore Unity: How do I Start and Stop Plane Detection on Command?

I am creating an app with ARCore, but I don't want ARCore to look for planes as soon as the app starts. Instead, I want the plane detection to begin when I hit a button in my app. It would also be great if I could stop the plane detection on command as well.
Does anyone know how I could start and stop the ARCore plane detection on command?
I am building the app in Unity.
Thanks so much in advance!
In ARPlaneVisualizer.cs, there is this code:
void OnEnable()
{
m_PlaneLayer = LayerMask.NameToLayer ("ARGameObject");
ARInterface.planeAdded += PlaneAddedHandler;
ARInterface.planeUpdated += PlaneUpdatedHandler;
ARInterface.planeRemoved += PlaneRemovedHandler;
HidePlane(true);
}
void OnDisable()
{
ARInterface.planeAdded -= PlaneAddedHandler;
ARInterface.planeUpdated -= PlaneUpdatedHandler;
ARInterface.planeRemoved -= PlaneRemovedHandler;
HidePlane(false);
}
You can use the OnEnable() code to start tracking and the OnDisable() code to stop tracking.
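One simple way to drive that from a button is to enable/disable the visualizer component itself, which is what triggers OnEnable/OnDisable above. A minimal sketch (PlaneDetectionToggle and the planeVisualizer field are hypothetical names):
using UnityEngine;

public class PlaneDetectionToggle : MonoBehaviour
{
    // Drag the ARPlaneVisualizer instance here in the Inspector.
    public MonoBehaviour planeVisualizer;

    // Hook these up to the Start/Stop buttons' OnClick events.
    public void StartPlaneDetection()
    {
        planeVisualizer.enabled = true;    // fires OnEnable -> subscribes to plane events
    }

    public void StopPlaneDetection()
    {
        planeVisualizer.enabled = false;   // fires OnDisable -> unsubscribes
    }
}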
Initially, create a bool to gate the surface-detection code and set it to true:
bool isSurfaceDetected = true;

// This check wraps the plane-detection code (typically in the controller's Update()):
if (isSurfaceDetected) {
    Session.GetTrackables<TrackedPlane> (_newPlanes, TrackableQueryFilter.New);
    // Iterate over planes found in this frame and instantiate corresponding GameObjects to visualize them.
    foreach (var curPlane in _newPlanes) {
        // Instantiate a plane visualization prefab and set it to track the new plane. The transform is set to
        // the origin with an identity rotation since the mesh for our prefab is updated in Unity World
        // coordinates.
        var planeObject = Instantiate (plane, Vector3.zero, Quaternion.identity, transform);
        planeObject.GetComponent<DetectedPlaneVisualizer> ().Initialize (curPlane);
        // Optionally apply a random color and grid rotation here.
    }
}
Create a stop button in the canvas and attach the method below:
public void StopTrack()
{
    // Set isSurfaceDetected to false to disable the plane-detection code above
    isSurfaceDetected = false;
    // Tag the DetectedPlaneVisualizer prefab as "Plane" (or anything else)
    GameObject[] planes = GameObject.FindGameObjectsWithTag ("Plane");
    // The visualizer prefab is instantiated once per detected plane, so loop over them
    // and disable the DetectedPlaneVisualizer script attached to each one.
    for (int i = 0; i < planes.Length; i++)
    {
        planes[i].GetComponent<DetectedPlaneVisualizer> ().enabled = false;
    }
}
Make sure the stop button's method is in the ARController.
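For completeness, a matching start method might look like this (a sketch under the same assumptions as StopTrack above, i.e. the visualizer prefabs are tagged "Plane"):
public void StartTrack()
{
    // Re-enable the plane-detection code path
    isSurfaceDetected = true;
    // Re-enable any visualizers that were switched off by StopTrack()
    GameObject[] planes = GameObject.FindGameObjectsWithTag ("Plane");
    for (int i = 0; i < planes.Length; i++)
    {
        planes[i].GetComponent<DetectedPlaneVisualizer> ().enabled = true;
    }
}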

Use of 'drawPolygonGeometry()' on postCompose event with vectorContext

I'm trying to draw a circle around every kind of geometry (it could be any ol.geom type: point, polygon, etc.) in an event called on 'postcompose'. The purpose of this is to create an animation when a certain feature is selected.
listenerKeys.push(map.on('postcompose',
goog.bind(this.draw_, this, data)));
this.draw_ = function(data, postComposeRender){
var extent = feature.getGeometry().getExtent();
var flashGeom = new ol.geom.Polygon.fromExtent(extent);
var vectorContext = postComposeRender.vectorContext;
...//ANIMATION CODE TO GET THE RADIUS WITH THE ELAPSED TIME
var imageStyle = this.getStyleSquare_(radius, opacity);
vectorContext.setImageStyle(imageStyle);
vectorContext.drawPolygonGeometry(flashGeom, null);
}
The method
drawPolygonGeometry( {ol.geom.Polygon} , {ol.Feature} )
is not working. However, it works when I use the method
drawPointGeometry( {ol.geom.Point} , {ol.Feature} )
even though the type of flashGeom is ol.geom.Polygon, which I just built from an extent. I don't want to use that method, because polygon extents could be received and it would animate for every point of the polygon...
Finally, after analyzing how drawPolygonGeometry works in the OL3 source code, I realized that I need to apply the style with this method first:
vectorContext.setFillStrokeStyle(imageStyle.getFill(),
imageStyle.getStroke());
drawPointGeometry and drawPolygonGeometry do not use the same style instance.

Do screen transitions when the user clicks on a bitmap

I am working on an eBook app where I need to transition between screens from left to right and right to left. I have tried many samples that I've found, but I have not been successful. How do I change the screen when the user clicks on it, from left to right and right to left? What is the basic idea behind a page transition? I went through the Developer Support Forum thread "page-flip effect" looking for a solution, but I couldn't find one there.
The following code is not logical. Where do I have to implement the flip effect for flipping pages on the screen, and how do I implement it?
public class TransitionScreen extends FullScreen implements Runnable{
private int angle = 0;
Bitmap fromBmp,toBmp;
public TransitionScreen(){
}
public TransitionScreen(AnimatableScreen from,AnimatableScreen to) {
fromBmp = new Bitmap(Display.getWidth(), Display.getHeight());
toBmp = new Bitmap(Display.getWidth(), Display.getHeight());
Graphics fromGraphics = Graphics.create(fromBmp);
Graphics toGraphics = Graphics.create(toBmp);
Object eventLock = getApplication().getEventLock();
synchronized(eventLock) {
from.drawAnimationBitmap(fromGraphics);
to.drawAnimationBitmap(toGraphics);
// Interpolate myOffset to target
// Set animating = false if myOffset = target
invalidate();
}
try {
synchronized (Application.getEventLock()) {
Ui.getUiEngine().suspendPainting(true);
}
} catch (final Exception ex) {
}
}
protected void paint(Graphics g){
//control x,y positions of the bitmaps in the timer task and the paint will just paint where they go
g.drawBitmap(0,0, 360,
480, toBmp, 0, 0);
g.drawBitmap(0, 0, 360,
480, fromBmp, 0, 0);
// invalidate();
}
protected boolean touchEvent(TouchEvent event) {
if (!this.isFocus())
return true;
if (event.getEvent() == TouchEvent.CLICK) {
// invalidate();
}
return super.touchEvent(event);
}
}
Assuming you're working with version 5.0 or later of the OS, this page has a simple example:
http://docs.blackberry.com/en/developers/deliverables/11958/Screen_transitions_detailed_overview_806391_11.jsp
From where did you get the code sample posted in your question? That code does not appear to be close to working.
Update: you can actually animate transitions like this yourself fairly simply. Assuming you know how to use the Timer class, you basically have a class-level variable that stores the current x-position of your first Bitmap (the variable would have a value of 0 initially). In each timer tick, you subtract some amount from the x-position (however many pixels you want it to move each tick) and then call invalidate();.
In each call to the paint method, then, you just draw the first bitmap using the x-position variable for the call's x parameter, and draw the second bitmap using the x-position variable plus the width of the first bitmap. The resulting effect is to see the first bitmap slide off to the left while the second slides in from the right.
A caveat: because this is Java (which means the timer events are not real-time - they're not guaranteed to occur exactly when you want them to), this animation will be somewhat erratic and unsmooth. The best way to get smooth animation like this is to pre-render your animation cells (where each is a progressive combination of the two bitmaps you're transitioning between), so that in the paint method you're just drawing a single pre-rendered bitmap.

how to dynamically change image source in a silverlight for windows phone 7 project?

I'm working on a Windows Phone 7 project with Silverlight, and I'm trying to show 4 images in sequence to give the user the feeling of a short movie.
I have 4 URLs pointing to 4 different JPEG images, and I'm using an Image control to show these JPEGs in sequence.
The way I'm trying to achieve this is by doing:
private void RetrieveImages()
{
image1.ImageOpened += new EventHandler<RoutedEventArgs>(image1_ImageOpened);
frameNumber = 0;
gotoNextImage();
}
void image1_ImageOpened(object sender, RoutedEventArgs e)
{
System.Threading.Thread.Sleep(400);
gotoNextImage();
}
private void gotoNextImage()
{
if (frameNumber < 4)
{
webBrowser1.Dispatcher.BeginInvoke(()=> {
image1.Source = new System.Windows.Media.Imaging.BitmapImage(new Uri(cam.framesUrl[frameNumber]));
frameNumber++;
});
}
else
{
image1.ImageOpened -= image1_ImageOpened;
}
}
But this just doesn't work as expected. I'm sure I'm missing something about how to interact with the UI.
Can anyone point me in the right direction?
Which is the best way to achieve this?
Edited:
I'll explain better what's wrong with my code, since it's probably unclear what happens.
I don't get any error with my code, but I don't see the "movie effect" either. It just shows a single image, without iterating through the image collection.
I think it's a threading problem... as if I'm not doing the right thing in the right thread to see the UI update as expected...
This seems to work best.
xaml
<UserControl
xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
mc:Ignorable="d"
x:Class="YourNamspace.YourClass"
d:DesignWidth="24"
d:DesignHeight="24">
<Grid x:Name="LayoutRoot">
<Image Name="YourImageName" Stretch="Fill" Source="YourPath" ImageOpened="onImageOpened"/>
</Grid>
</UserControl>
C#
using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Documents;
using System.Windows.Ink;
using System.Windows.Input;
using System.Windows.Media;
using System.Windows.Media.Imaging;
using System.Windows.Media.Animation;
using System.Windows.Shapes;
using System.Threading;
namespace YourNameSpace
{
public partial class YourClass : UserControl
{
public int FirstImageIndex = 1;
public int LastImageIndex = 1000;
public int CurrentImageIndex = 1;
public YourClass()
{
InitializeComponent();
}
private void onImageOpened(object sender, System.Windows.RoutedEventArgs e)
{
// Note: Thread.Sleep blocks the UI thread here; it is only used to slow the frame rate down.
Thread.Sleep(1000);
// Advance to the next image, wrapping around after the last one
// (CurrentImageIndex + 1 rather than CurrentImageIndex++, which would be overwritten by the assignment).
CurrentImageIndex = ( CurrentImageIndex == LastImageIndex ) ? FirstImageIndex : CurrentImageIndex + 1;
// UriKind belongs to the Uri constructor, not to BitmapImage.
YourImageName.Source = new BitmapImage(new Uri("Your/path/to/image" + CurrentImageIndex + ".jpg", UriKind.RelativeOrAbsolute));
}
}
}
Hope that helps. I am pretty new to Silverlight.
So far the best way is to define the image as a resource and set the image source via a converter.
But your code should work fine as well. Try setting UriKind.RelativeOrAbsolute; that may help, because this is the most common thing to cause issues when setting an image source.
HTH
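A rough sketch of the converter idea mentioned above (the class name is illustrative): bind Image.Source to the URL string and let the converter build the BitmapImage, using UriKind.RelativeOrAbsolute as suggested.
using System;
using System.Globalization;
using System.Windows.Data;
using System.Windows.Media.Imaging;

public class UrlToImageSourceConverter : IValueConverter
{
    public object Convert(object value, Type targetType, object parameter, CultureInfo culture)
    {
        // Turn a URL string into a BitmapImage the Image control can display
        var url = value as string;
        if (string.IsNullOrEmpty(url))
            return null;
        return new BitmapImage(new Uri(url, UriKind.RelativeOrAbsolute));
    }

    public object ConvertBack(object value, Type targetType, object parameter, CultureInfo culture)
    {
        throw new NotImplementedException();
    }
}
Declare it as a resource and reference it in the binding, e.g. Source="{Binding ImageUrl, Converter={StaticResource UrlToImageSourceConverter}}".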
OK, I figured it out, and I think I also found a nice solution :) IMHO.
I basically bound the Image.Source to a property of my class (which now implements INotifyPropertyChanged). But this gave me some issues with the transition between images: since they were downloaded from the internet, a black image appeared between one image and the next... but only the first time (I'm looping over a set of 4 images, so it looks like the video repeats), because after that the images are cached.
So what I've done is cache the images the first time without displaying the real Image control, displaying instead another Image control (or whatever else) which tells the user "I'm loading".
To handle this scenario I've created a custom event:
public delegate void FramesPrefetchedEventHandler();
public event FramesPrefetchedEventHandler FramesPrefetched;
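The CurrentFrame property that the Image control's Source is bound to (it is set in gotoNextImage below) is not shown here; a minimal sketch of it, assuming the page implements INotifyPropertyChanged:
System.Windows.Media.Imaging.BitmapImage currentFrame;
public System.Windows.Media.Imaging.BitmapImage CurrentFrame
{
    get { return currentFrame; }
    set
    {
        currentFrame = value;
        // Tell the binding that the frame changed so the Image control refreshes
        if (PropertyChanged != null)
            PropertyChanged(this, new System.ComponentModel.PropertyChangedEventArgs("CurrentFrame"));
    }
}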
Now let's take a look at the RetrieveImages method:
private void RetrieveImages()
{
frameNumber = 0;
currentCycle = 0;
// set a very short interval, used for prefetching the frame images
timer.Interval = new TimeSpan(0, 0, 0, 0, 10);
timer.Tick += (sender, e) => gotoNextImage();
// defines what is going to happen when the prefetching is done
this.FramesPrefetched += () =>
{
// hide the "wait" image and show the "movie" one
imageLoading.Opacity = 0;
image1.Opacity = 1;
// set the timer with a proper interval to render like a short movie
timer.Interval = new TimeSpan(0, 0, 0, 0, 400);
};
// when a frame is loaded in the main Image control, the timer restart
image1.ImageOpened += (s, e) =>
{
if (currentCycle <= cycles) timer.Start();
};
// start the loading (and showing) frames images process
gotoNextImage();
}
Ok, now what we need to handle is the step by step loading of the image and communicate when we finished the prefetching phase:
private void gotoNextImage()
{
timer.Stop();
if (frameNumber < 4)
{
CurrentFrame = new System.Windows.Media.Imaging.BitmapImage(new Uri(cam.framesUrl[frameNumber]));
frameNumber++;
}
else
{
// repeat the frame's sequence for maxCycles times
if (currentCycle < maxCycles)
{
frameNumber = 0;
currentCycle++;
// after the first cycle through the frames, raise the FramesPrefetched event
if (currentCycle == 1)
{
FramesPrefetchedEventHandler handler = FramesPrefetched;
if (handler != null) handler();
}
// step over to next frame
gotoNextImage();
}
}
}
This works pretty well for me... but since I'm new to Silverlight and Windows Phone 7 development, any suggestion for improvement is welcome.
