Geotagging with PhotoCaptureDevice in WP8 - geolocation

I took a look at the new "PhotoCaptureDevice" and related classes in Windows Phone 8 but could not find a way to add location data (GPS coordinates) to captured images.
Are there means to add such information in Windows Phone 8, or is this only possible with the built-in camera?

The solution is to write the EXIF location data manually before sending the image stream (memory stream) to the media gallery.

Extensions.SaveJpeg() saves Exif geolocation data automatically if ID_CAP_LOCATION is enabled.
I have not tried the new WP8 classes yet, but it is quite likely they behave in the same way.

Related

HEIC/HEIF changed to jpeg without metadata at upload

I have a small webapp that runs on a server intended for use in a setting without reliable internet access (i.e., the app can't depend on outside resources during production). The purpose of the app is simply to upload image files, read the metadata, and categorize them in the right location on the disk. Up until recently there was no problem with this process; then I noticed that some of the files did not have all of the metadata attached (specifically the creation date). Upon further inspection, it appears that these are files that were shot on my iPhone as HEIC/HEIF photos and uploaded directly to the webpage from the phone.
Due to the design of the webapp, the filename of the uploaded file is shown on the page. Every time an HEIC photo is uploaded it displays the filename as ending in .jpeg.
I've had a hard time finding good documentation on this, but it sounds like the default for the iPhone at this point is to convert HEIC files to jpeg if it looks like they are being transferred to a destination that may not be able to read them. I guess a website form falls into this category. It also appears that as part of this conversion some of the EXIF data disappears.
So, does anyone know a way to retain the EXIF data? My primary limitation here is that the upload needs to happen through the webapp and that multiple users will be using this. As a result, I can't simply have everyone change their iPhone settings to only shoot jpegs.
In case it matters, the webapp is running on node.js and expressjs.

Is it possible to send a buffer of byte[] to a screen?

I want to know if there is any way to send a byte array (that represents a simple image) to some application so that this application will show the image on a screen connected to the current machine.
I have two screens connected to my machine.
On the first screen I want to show the operation application that I wrote.
On the other screen I want to show the output of the video that I hold, meaning the second screen will show running images.
Is there a way to do it?
If there is, how?
Most operating systems today do not allow direct access to the hardware from user-mode programs. However, they do provide interfaces that can accomplish what you need.
Typical examples are APIs like OpenGL, DirectX, or SDL.
You should choose and use one, depending on your OS and exact requirements.
Most operating systems support multi-monitor display. Your app must create two windows (using whatever native windowing API is available) and you can arrange them (either manually or programmatically, according to the layout you described). For video output you need to select a video format and use a library (e.g. ffmpeg) to decode and display it.
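For instance, if the machine happened to be running macOS, a minimal Cocoa sketch could look like the following (this is just one of many possible stacks, and the RGBA8888 pixel layout is an assumption for illustration): it wraps a raw byte buffer in an image and shows it in a borderless window covering the second screen.

#import <Cocoa/Cocoa.h>
#include <string.h>

// Shows a raw RGBA8888 buffer (width * height * 4 bytes) on the second monitor.
static void ShowBufferOnSecondScreen(const unsigned char *bytes,
                                     NSInteger width, NSInteger height) {
    NSArray *screens = [NSScreen screens];
    if (screens.count < 2) return;                     // no second monitor attached
    NSScreen *second = screens[1];

    // Copy the bytes into a bitmap representation.
    NSBitmapImageRep *rep =
        [[NSBitmapImageRep alloc] initWithBitmapDataPlanes:NULL
                                                 pixelsWide:width
                                                 pixelsHigh:height
                                              bitsPerSample:8
                                            samplesPerPixel:4
                                                   hasAlpha:YES
                                                   isPlanar:NO
                                             colorSpaceName:NSCalibratedRGBColorSpace
                                                bytesPerRow:width * 4
                                               bitsPerPixel:32];
    memcpy(rep.bitmapData, bytes, width * height * 4);

    NSImage *image = [[NSImage alloc] initWithSize:NSMakeSize(width, height)];
    [image addRepresentation:rep];

    // Borderless window sized to the second screen, with an image view inside.
    NSRect content = NSMakeRect(0, 0, second.frame.size.width, second.frame.size.height);
    NSWindow *window = [[NSWindow alloc] initWithContentRect:content
                                                    styleMask:NSWindowStyleMaskBorderless
                                                      backing:NSBackingStoreBuffered
                                                        defer:NO
                                                       screen:second];
    NSImageView *view = [[NSImageView alloc] initWithFrame:content];
    view.image = image;
    window.contentView = view;
    [window makeKeyAndOrderFront:nil];
}

To show "running images" (video), you would call something like this repeatedly with fresh frames, or more realistically draw into a layer or an OpenGL view, which is where a decoding library such as ffmpeg comes in.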

Can iOS8 CloudKit support streaming behind the scenes?

Is there any way, using currently available SDK frameworks on Cocoa (touch) to create a streaming solution where I would host my mp4 content on some server and stream it to my iOS client app?
I know how to write such a client, but it's a bit confusing on server side.
AFAIK CloudKit is not suitable for that task because behind the scenes it keeps a synced local copy of the datastore, which is NOT what I want. I want to store media content remotely and stream it to the client so that it does not take up precious space on a poor 16 GB iPad mini.
Can I accomplish that server solution using Objective-C / Cocoa Touch at all?
Should I instead resort to Azure and C#?
It's not 100% clear why you would do anything like that.
If you have control over the server side, why don't you just set up a basic HTTP server, and on the client side use AVPlayer to fetch the mp4 and play it back to the user? It is very simple; a basic Apache setup would do the job.
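As a minimal sketch of that client side (the server URL is a placeholder, and plain HTTP may need an App Transport Security exception on newer iOS versions), it can be as small as:

#import <UIKit/UIKit.h>
#import <AVFoundation/AVFoundation.h>

@interface PlayerViewController : UIViewController
@property (nonatomic, strong) AVPlayer *player;   // keep a strong reference to the player
@end

@implementation PlayerViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    // Placeholder URL: any web server that can serve the mp4 over HTTP(S) will do.
    NSURL *url = [NSURL URLWithString:@"http://myserver.example.com/video.mp4"];
    self.player = [AVPlayer playerWithURL:url];

    // Put the video on screen and start playback; AVPlayer issues the HTTP
    // requests and handles buffering for a plain mp4 file by itself.
    AVPlayerLayer *layer = [AVPlayerLayer playerLayerWithPlayer:self.player];
    layer.frame = self.view.bounds;
    [self.view.layer addSublayer:layer];
    [self.player play];
}

@end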
If it is live media content you want to stream, then it is worth reading this guide as well:
https://developer.apple.com/Library/ios/documentation/NetworkingInternet/Conceptual/StreamingMediaGuide/StreamingMediaGuide.pdf
Edited after your comment:
If you would like to use AVPlayer as the player, then I think those two things don't fit together that well. AVPlayer needs to buffer different ranges ahead (for some container formats the second or third request reads the end of the stream). As far as I can see, CKFetchRecordsOperation (which you would use to fetch the content from the server) is not capable of seeking in the stream.
If you have your private player which doesn't require seeking, then you might be able to use CKFetchRecordsOperation's perRecordProgressBlock to feed your player with data.
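For illustration, a fetch with progress reporting looks roughly like this (the record name and the "file" asset field are assumptions made up for the sketch, not anything defined by CloudKit or the question):

#import <CloudKit/CloudKit.h>

// Assumed: a record in the public database whose "file" field is a CKAsset.
CKRecordID *recordID = [[CKRecordID alloc] initWithRecordName:@"my-video-record"];
CKFetchRecordsOperation *op =
    [[CKFetchRecordsOperation alloc] initWithRecordIDs:@[recordID]];

// Called repeatedly with a progress fraction while the record downloads.
op.perRecordProgressBlock = ^(CKRecordID *rid, double progress) {
    NSLog(@"Downloaded %.0f%%", progress * 100);
};

// The asset's file URL becomes available once the record has been fetched.
op.perRecordCompletionBlock = ^(CKRecord *record, CKRecordID *rid, NSError *error) {
    if (error != nil) { NSLog(@"Fetch failed: %@", error); return; }
    CKAsset *asset = (CKAsset *)record[@"file"];
    NSData *data = [NSData dataWithContentsOfURL:asset.fileURL];
    // Hand the data to your own player here.
};

[[CKContainer defaultContainer].publicCloudDatabase addOperation:op];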
Yes, you could do that with CloudKit. First, it is not true that CloudKit keeps a local copy of the data. It is up to you what you do with the downloaded data. There isn't even any caching in CloudKit.
To do what you want to do, assuming the content is shared between users, you could upload it to CloudKit in the public database of your app. I think you could do this with the CloudKit web interface, but otherwise you could create a simple Mac app to manage the uploads.
The client app could then download the files. It couldn't stream them though, as far as I know. It would have to download all the files.
If you want a streaming solution, you would probably have to figure out how to split the files into small chunks, and recombine them on the client app.
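A very rough sketch of that chunking idea on the upload side (the "MediaChunk" record type, its field names, and the 1 MB chunk size are all made up for illustration; a real app would also stream the file instead of reading it into memory at once):

#import <CloudKit/CloudKit.h>

// Splits a local file into ~1 MB pieces and saves each piece as its own record.
static void UploadFileInChunks(NSURL *fileURL) {
    NSData *whole = [NSData dataWithContentsOfURL:fileURL];   // fine for a sketch
    NSUInteger chunkSize = 1024 * 1024;
    CKDatabase *db = [CKContainer defaultContainer].publicCloudDatabase;

    for (NSUInteger offset = 0, index = 0; offset < whole.length; offset += chunkSize, index++) {
        NSUInteger length = MIN(chunkSize, whole.length - offset);
        NSData *piece = [whole subdataWithRange:NSMakeRange(offset, length)];

        // CKAsset needs a file on disk, so write the piece to a temporary file first.
        NSString *tmpPath = [NSTemporaryDirectory() stringByAppendingPathComponent:
                                [NSString stringWithFormat:@"chunk-%lu", (unsigned long)index]];
        [piece writeToFile:tmpPath atomically:YES];

        CKRecord *record = [[CKRecord alloc] initWithRecordType:@"MediaChunk"];
        record[@"index"] = @(index);
        record[@"data"]  = [[CKAsset alloc] initWithFileURL:[NSURL fileURLWithPath:tmpPath]];

        [db saveRecord:record completionHandler:^(CKRecord *saved, NSError *error) {
            if (error != nil) NSLog(@"Chunk %lu failed: %@", (unsigned long)index, error);
        }];
    }
}

On the client side, a CKQuery on "MediaChunk" sorted by "index" would fetch the pieces in order so they can be appended back together.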
I'm not sure whether this document is up to date, but there is a paragraph, "Requirements for Apps", which demands using HTTP Live Streaming if you deliver any video exceeding 10 minutes or 5 MB.

Is it possible to cast the input from the phone microphone to the receiver?

I would like to know if it is possible to cast the audio taken directly from the microphone of the iOS device to the receiver, in a live way.
I've downloaded all the git example projects, and all of them use a "loadMedia" method to start the casting. Here is an example from one of them:
- (NSInteger)loadMedia:(GCKMediaInformation *)mediaInfo
autoplay:(BOOL)autoplay
playPosition:(NSTimeInterval)playPosition;
Can I follow this approach to do what I want? If so, what's the expected delay?
Thanks a lot
Echo is likely if the device (iOS, Android, or Chrome) is in range of the speakers. That said:
Pick a fast codec that is supported, such as CELT/Opus or Vorbis
I haven't tried either of these, but they should be possible.
1. Implement your own protocol using CastChannel that passes the binary data. You'll want to do some simple conversion of the stream from binary to something a bit more friendly. Take a look at Intro to Web Audio for using AudioContext.
2. Set up a trivial server on your device to stream from, then tell the Receiver to just access that local server (see the sketch after this list).
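To make option 2 concrete, the sender side might look roughly like this (it assumes the SDK v2-era GCKMediaInformation initializer, an already connected GCKMediaControlChannel, and a hypothetical HTTP server on the phone serving the microphone capture at the URL below; all of these are assumptions for illustration):

#import <GoogleCast/GoogleCast.h>

// Assumptions: `mediaControlChannel` is a GCKMediaControlChannel already added
// to a connected GCKDeviceManager, and a small HTTP server on the phone is
// exposing the live microphone capture at this address.
NSString *localURL = @"http://192.168.1.50:8080/mic.aac";

GCKMediaInformation *mediaInfo =
    [[GCKMediaInformation alloc] initWithContentID:localURL
                                        streamType:GCKMediaStreamTypeLive
                                       contentType:@"audio/aac"
                                          metadata:nil
                                    streamDuration:0
                                        customData:nil];

// Same loadMedia: call that the example projects (and the question) use.
[mediaControlChannel loadMedia:mediaInfo autoplay:YES playPosition:0];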

Is it good for an iPad application to have everything stored locally?

I am developing an iPad application for a company. They want to use the application to show media, like PDF docs, pictures, and video. They want one application for everything.
So I use a tab bar application, where each tab displays one kind of media, like a picture gallery or a video gallery. The application is pretty big, and now it is running slowly: the display of PDFs is not smooth, and switching tabs takes time. I use local data because I can't rely on the internet for the application; it needs to work everywhere without Wi-Fi.
So my question: is it a good idea to put everything in the same application? I added all my media to the Xcode project.
Is the iPad good at displaying video, PDFs, and pictures in the same application? I want something smooth, but too much data in memory kills my application. What approach should I take? Do you have ideas?
It sounds like you are not purging media from memory when it is no longer used.
Make sure to release media-related objects that are not immediately in use. Images in particular eat memory very quickly because all the data associated with an image has to be in memory, unlike a PDF or audio file, which can be read in as needed.
Users will expect and tolerate some slight delay when switching from media type to media type, because they experience that with all other apps and other forms of software. What they won't tolerate is slow performance while actively using a piece of media, e.g. slow scrolling in a PDF file.
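As a rough illustration of that advice (the class and property names are made up), loading images on demand and dropping them under memory pressure could look like:

#import <UIKit/UIKit.h>

@interface GalleryViewController : UIViewController
@property (nonatomic, strong) UIImage *currentImage;   // only the image currently on screen
@end

@implementation GalleryViewController

// Load each picture on demand from the app bundle instead of keeping
// every image alive for the whole session.
- (void)showImageNamed:(NSString *)name {
    NSString *path = [[NSBundle mainBundle] pathForResource:name ofType:@"jpg"];
    // imageWithContentsOfFile: bypasses the system image cache that imageNamed:
    // uses, so the old image's memory can be reclaimed when the property is replaced.
    self.currentImage = [UIImage imageWithContentsOfFile:path];
}

// Drop anything that can be rebuilt when the system warns about memory.
- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    self.currentImage = nil;
}

@end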
