Lego Mindstorms file type conversion

I have some Lego Mindstorms files on my PC (.ev3) and I want to convert them to work on my iPad (.ev3M). Does anyone know how to do this? I have looked all over and turned here as my final attempt.

I have contacted Lego support and found out there’s no way to do it.

Related

How can I implement a text-to-speech function in an iOS app via FireMonkey?

I have an iOS app that needs to spell out words or phrases so the listener can type in the words they hear. How can I implement text-to-speech with Delphi FireMonkey?
I tried searching around the net, but found nothing useful.
With help from EMBT, I found a solution at https://blog.grijjy.com/2017/01/09/cross-platform-text-to-speech/ and it works well for my case. I just wonder why EMBT doesn't provide this themselves: encapsulating such a simple but important function directly from the iOS/OSX foundation frameworks would be easy and handy for developers using Delphi, yet FireMonkey has no such functions. I don't know what to say about this; EMBT seems to have difficulty finding the key point, as always.
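The Grijjy approach essentially wraps each platform's native speech API behind one small cross-platform interface. Here is a rough sketch of that facade pattern, written in Python for brevity; all class names below are illustrative, not the blog's actual Delphi API.

```python
import sys
from abc import ABC, abstractmethod

class SpeechBackend(ABC):
    """One implementation per platform; the app only sees TextToSpeech."""
    @abstractmethod
    def speak(self, text: str) -> None: ...

class MacSpeechBackend(SpeechBackend):
    def speak(self, text: str) -> None:
        # On macOS one could shell out to the built-in `say` command.
        import subprocess
        subprocess.run(["say", text], check=True)

class SilentBackend(SpeechBackend):
    """Fallback/test backend that records utterances instead of speaking."""
    def __init__(self):
        self.spoken = []
    def speak(self, text: str) -> None:
        self.spoken.append(text)

class TextToSpeech:
    """Platform-independent entry point, in the spirit of the Grijjy post."""
    def __init__(self, backend=None):
        if backend is None:
            backend = MacSpeechBackend() if sys.platform == "darwin" else SilentBackend()
        self.backend = backend
    def say(self, text: str) -> None:
        self.backend.speak(text)

tts = TextToSpeech(backend=SilentBackend())
tts.say("type the words you hear")
```

The consumer code never touches platform APIs directly, which is exactly why a framework-level version of this would be handy.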

IP-Cam / CCTV-Cam Live Streaming on iPhone/iPad

I want to get the stream of an IP cam on my iPhone/iPad and display it on screen. From my research I found that ffmpeg is the only way to achieve it, but I found nothing on how to use it. Is there any other way to achieve this, or a confirmed way to get ffmpeg compiled on a Mac? If so, please mention it. Material on how to use ffmpeg, or source code examples, would be highly appreciated.
Is there no built-in framework to achieve this? If not, please mention any free framework/SDK that provides this functionality.
Thanks
There are actually a few. Here are some links:
http://www.streammore.tv/
http://www.live555.com/
I am sure you can find more if you Google around.
I can only speak for the first one, because that is ours, but I didn't want this to sound purely like self-promotion.
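As an aside, many IP cams also expose an MJPEG-over-HTTP stream, which can be parsed without ffmpeg by splitting the byte stream on the JPEG start/end markers. A minimal stdlib sketch (the multipart boundary and "frame" payloads here are invented for demonstration; real cameras differ):

```python
SOI = b"\xff\xd8"  # JPEG start-of-image marker
EOI = b"\xff\xd9"  # JPEG end-of-image marker

def extract_jpeg_frames(buf: bytes):
    """Yield complete JPEG frames found in a raw MJPEG byte buffer."""
    pos = 0
    while True:
        start = buf.find(SOI, pos)
        if start == -1:
            return
        end = buf.find(EOI, start + 2)
        if end == -1:
            return  # incomplete trailing frame; wait for more bytes
        yield buf[start:end + 2]
        pos = end + 2

# Two fake "frames" with multipart boundary junk between them:
stream = (b"--boundary\r\n" + SOI + b"frame-one" + EOI +
          b"\r\n--boundary\r\n" + SOI + b"frame-two" + EOI)
frames = list(extract_jpeg_frames(stream))
print(len(frames))  # 2
```

The same scan works on bytes read incrementally from a camera's HTTP endpoint; decoding each extracted JPEG into pixels for display is then a separate step.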

Best way to create panoramas on iPhone?

I'm trying to create panoramas in an iOS app by stitching together several images (similar to an app like PhotoSynth). I've looked all over and haven't yet found a winning implementation strategy. Here are the things I've looked into:
1) Linking OpenCV for iOS and implementing stitching and the panorama creation process myself.
2) Getting panotools to work on iOS and using the PT* functions to produce the panorama
Am I on the right track? Are there any simpler ways of implementing this?
Obviously a good quality out-of-the-box solution is preferred, but if there isn't one, which of the two above (or another) strategies would be best for a CV novice?
Perhaps you should first try to understand how to build panoramas from a technical standpoint before implementing the software on the iPhone.
EDIT (March 2013): I'm pretty sure the link I gave worked in January 2012, but it is indeed now broken. Alternative links:
https://www.cs.washington.edu/education/courses/cse455/06wi/readings/szeliskiShum97.pdf
http://www.multires.caltech.edu/teaching/courses/3DP/papers/SchumSzeliski.pdf
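For context, the papers above model overlapping images as related by 3x3 homographies; stitching mostly consists of estimating those matrices and warping pixels through them. A tiny sketch of the core mapping in plain Python (the matrix H is a made-up example, a pure 100 px horizontal translation):

```python
def apply_homography(H, x, y):
    """Map point (x, y) through the 3x3 matrix H in homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w  # divide out the projective scale

H = [[1.0, 0.0, 100.0],
     [0.0, 1.0,   0.0],
     [0.0, 0.0,   1.0]]

print(apply_homography(H, 10.0, 20.0))  # (110.0, 20.0)
```

In a real stitcher, H comes from matched feature points between image pairs, and every pixel of one image is warped through it into the panorama's coordinate frame (OpenCV's high-level `Stitcher` hides all of this).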

Face Tracking and Virtual Reality

I'm searching for a face-tracking system to use in an augmented reality project. I'm trying to find an open-source, multi-platform application for it. The goal is to return the direction the face is looking in order to interact with the virtual environment (something like this video).
I've downloaded the sources of Johnny Lee's application mentioned above and tried FreeTrack too, making my own headset (some kind of monster, hehe). But it's not good to be limited to infrared points on your head.
These days I've downloaded FaceTrackNoIR, but when I launch the program I get "No DLL was found in the Waterfall procedure.", which I'm currently trying to solve.
Anyone knows a good application, library, code, lecture, anything that could help me to find a good path for this?
Thank you all!
I'll try to post results someday :-)
I would take a look at OpenCV. It is a general-purpose machine-learning and computer-vision C++ library. One of the examples in the download is a real-time face tracker that connects to a video camera attached to your computer and draws squares around any faces in the camera view.
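Once a detector like OpenCV's gives you a face rectangle each frame, a very crude "which way is the face pointing" estimate can be derived from the face center's offset from the image center. A sketch with invented field-of-view values (a real solution would use facial landmarks and proper head-pose estimation):

```python
def face_direction(face_box, frame_w, frame_h, fov_x_deg=60.0, fov_y_deg=45.0):
    """Rough yaw/pitch in degrees from a (x, y, w, h) face rectangle.
    The FOV defaults are made-up assumptions, not any camera's spec."""
    x, y, w, h = face_box
    cx = x + w / 2.0
    cy = y + h / 2.0
    nx = (cx - frame_w / 2.0) / (frame_w / 2.0)   # right of center is positive
    ny = (frame_h / 2.0 - cy) / (frame_h / 2.0)   # above center is positive
    yaw = nx * fov_x_deg / 2.0
    pitch = ny * fov_y_deg / 2.0
    return yaw, pitch

# A face centered in a 640x480 frame points straight ahead:
print(face_direction((280, 200, 80, 80), 640, 480))  # (0.0, 0.0)
```

Feeding this the rectangle from OpenCV's face tracker each frame gives a coarse gaze signal suitable for driving a virtual camera, though it cannot distinguish head rotation from head translation.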

Is there a virtual/dummy IMAQ camera for LabVIEW?

I'm writing LabVIEW software that grabs images from an IMAQ compatible GigE camera.
The problem: this is a collaborative project, so I only have intermittent access to the actual camera. I'd like to be able to keep developing this software even when the camera isn't present.
Is there a simple/fast way to create a virtual or dummy IMAQ camera in software? Ideally I'd like the dummy camera to grab frames from an AVI or a stack of JPEGs. Something like this must exist; I just can't find it on Google.
I'm looking for something that won't take very long (e.g. < 2 hours of effort) and that is abstracted away behind the standard LabVIEW IMAQ interface, so that my software won't know or care whether it's dealing with a dummy camera or an actual camera.
You can try this method using LabVIEW classes:
Hardware Emulation Using LabVIEW Classes
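The idea in that article, transliterated to Python for illustration (the LabVIEW version uses LVOOP classes and dynamic dispatch; every name below is hypothetical, not an actual IMAQ API):

```python
from abc import ABC, abstractmethod

class Camera(ABC):
    """The acquisition code depends only on this interface."""
    @abstractmethod
    def grab(self) -> bytes: ...

class GigECamera(Camera):
    def grab(self) -> bytes:
        raise RuntimeError("real hardware not present in this sketch")

class DummyCamera(Camera):
    """Replays a canned sequence of frames, e.g. loaded from JPEG files."""
    def __init__(self, frames):
        self.frames = list(frames)
        self.i = 0
    def grab(self) -> bytes:
        frame = self.frames[self.i % len(self.frames)]  # loop the sequence
        self.i += 1
        return frame

def measure_brightness(cam: Camera) -> int:
    """Example consumer: works identically with real or dummy camera."""
    frame = cam.grab()
    return sum(frame) // len(frame)

cam = DummyCamera([bytes([10, 20, 30]), bytes([40, 50, 60])])
print(measure_brightness(cam))  # 20
print(measure_brightness(cam))  # 50
```

Because the consumer only ever sees the `Camera` interface, swapping the dummy for the real GigE camera later requires no changes downstream, which is exactly the property asked for in the question.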
If you have the IMAQdx driver, you might consider just buying a cheap USB webcam for $10.
Use the IMAQdx driver (assuming you have it), then insert the Vision Acquisition Express VI, and you can choose AVIs or even pictures as a source.
GigESim is camera emulation software that does something like this. Unfortunately it is proprietary and too expensive (>$500) for my needs, but perhaps others will find the link useful.
Does anyone know of a viable open-source alternative?
There's an IP Camera emulator project that emulates an IP camera in Python. I haven't used it myself, so I don't know if it can be used with IMAQ.
Let us know if it works for you.
I know this question is really old, but hopefully this answer helps someone.
IMAQdx also works with Windows DirectShow devices. While these are normally actual physical capture devices (think USB webcams), there is no requirement that they have to be.
There are a few different pre-made options available on the web. I found using Open Broadcaster Software (OBS) Studio and this VirtualCam plugin easy enough. Basically:
Download and install both.
Load your media sources in the sources list.
Enable the VirtualCam stream (Tools > VirtualCam). Press Start.
