I'm searching for a MonoTouch wrapper (or a complete port) of this library:
https://github.com/gdawg/uiimage-dsp
UIImage Image Processing extensions using the vDSP/Accelerate framework.
Fast Image Blur/Sharpen/Emboss/Matrix operations for UIImage.
Requires iOS 4.0 and the Accelerate framework.
Thanks in advance.
You won't find bindings for it - at least not in its current form. Quote from the URL you supplied:
Copy the UIImage+DSP.h and UIImage+DSP.m classes into your project
So it's not yet a library. Someone could perhaps build one from the code but, until then, you won't find bindings for it.
As for a complete port, the next quote
these aren't quite as fast
makes it sound like translating the code to C# (lots of array accesses and bounds checks, even if some could be eliminated with unsafe code) is not the best option. I suggest you keep looking for another solution (OpenGL-ES maybe?) - like your previous question here: How to do a motion blur effect on an UIImageView in Monotouch?
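For context on why the native version is fast: the heavy lifting in the category boils down to a single Accelerate call. A minimal sketch (the function and variable names are mine, not from the repo; it assumes the image has already been converted to a planar float buffer, which is what the category does around this call):

#include <Accelerate/Accelerate.h>

// Apply a classic 3x3 emboss kernel to a rows-by-cols planar float image.
// src and dst each point to rows * cols floats.
void EmbossPlane(const float *src, float *dst, vDSP_Length rows, vDSP_Length cols)
{
    const float kernel[9] = {
        -2.0f, 0.0f, 0.0f,
         0.0f, 1.0f, 0.0f,
         0.0f, 0.0f, 2.0f
    };
    // 2-D convolution; the filter dimensions (3, 3) must be odd.
    vDSP_imgfir(src, rows, cols, kernel, dst, 3, 3);
}

Since the per-pixel work happens inside that one Accelerate call, a straight C# translation would have to do all of it in managed code instead, which supports the point above.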
Maybe even add a bounty once you get enough reputation points :-)
I wrote a native port of the blur and tint UIImage categories from WWDC for MonoTouch.
https://github.com/lipka/MonoTouch.UIImageEffects
It uses the Accelerate.framework - you can possibly extend it if it doesn't exactly suit your needs.
I have a Unity application for iOS, and I need to fetch an image from the iPhone's photo library and then use it within Unity as a texture/2D sprite. I cannot find any information on how this could be done.
Any help is much appreciated. :)
For something like this, I think you are going to need a native plugin. You can either write it yourself (maybe the harder way) or try to find one on the Asset Store. I don't remember if this is still the case, but at one point you needed a paid Unity license to use or create plugins.
If you want to try the native plugin route, which is actually a lot of fun in my experience, here is the start of the documentation for it.
https://docs.unity3d.com/Manual/NativePlugins.html
I suspect you will need to expose APIs to the Unity side of the code that let you call into Objective-C, which in turn calls the relevant bits of code at the platform level.
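To make that concrete, here is a minimal sketch of the native side as an Objective-C++ (.mm) file in Assets/Plugins/iOS. Everything here is illustrative: the class name, the temp-file handoff, and the "Bridge"/"OnImagePicked" GameObject and method names are my assumptions, not Unity requirements; UnityGetGLViewController and UnitySendMessage come from Unity's iOS trampoline code.

// ImagePickerPlugin.mm - a hypothetical Unity iOS plugin.
// The .mm extension compiles it as Objective-C++, so extern "C" works.
#import <UIKit/UIKit.h>

// Provided by Unity's iOS trampoline.
extern UIViewController *UnityGetGLViewController();
extern void UnitySendMessage(const char *obj, const char *method, const char *msg);

@interface ImagePickerBridge : NSObject <UIImagePickerControllerDelegate, UINavigationControllerDelegate>
@end

@implementation ImagePickerBridge
- (void)imagePickerController:(UIImagePickerController *)picker
        didFinishPickingMediaWithInfo:(NSDictionary *)info {
    // Save the picked image to a temp file and hand the path back to C#.
    UIImage *image = info[UIImagePickerControllerOriginalImage];
    NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"picked.png"];
    [UIImagePNGRepresentation(image) writeToFile:path atomically:YES];
    // "Bridge" / "OnImagePicked" are placeholder GameObject and method names.
    UnitySendMessage("Bridge", "OnImagePicked", path.UTF8String);
    [picker dismissViewControllerAnimated:YES completion:nil];
}
@end

static ImagePickerBridge *gBridge;

// Unity calls this via [DllImport("__Internal")].
extern "C" void _ShowImagePicker() {
    if (!gBridge) gBridge = [ImagePickerBridge new];
    UIImagePickerController *picker = [UIImagePickerController new];
    picker.sourceType = UIImagePickerControllerSourceTypePhotoLibrary;
    picker.delegate = gBridge;
    [UnityGetGLViewController() presentViewController:picker animated:YES completion:nil];
}

On the C# side you would declare [DllImport("__Internal")] private static extern void _ShowImagePicker();, implement OnImagePicked(string path) on a GameObject named Bridge, and load the file bytes into a Texture2D with LoadImage.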
Check this out: http://answers.unity3d.com/questions/1143996/open-ios-media-browser-and-import-selected-image-a.html
Scroll all the way down; I believe it answers your question about accessing an image from the photo library. Good luck!
I am working on a video editing application.
I need a function to do chroma keying and replace the green background in a video file (not a live feed) with an image or another video.
I have looked at the GPUImage framework, but it is not suitable because I cannot use third-party frameworks for this project, so I am wondering if there is another way to accomplish this.
Here are my questions:
Does it need to be done with shaders in OpenGL ES?
Is there another way to access the frames and replace the background with chroma keying using the AVFoundation framework?
I am not well versed in graphics processing, so I really appreciate any help.
Apple's image and video processing framework is Core Image. Apple provides sample code for Core Image that includes a chroma key example.
No, it's not in Swift, which you asked for. But as MoDJ says: video software is really complex; reuse what already exists.
I would use Apple's Objective-C sample code to get something that works. Then, if you feel you MUST have it in Swift, port it little by little, and if you run into specific problems, ask about them here.
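If it helps, the core of that approach is a CIColorCube filter whose lookup table zeroes out alpha for green hues. A minimal sketch along those lines (the hue range and helper names are my own choices, not Apple's exact sample values):

#import <CoreImage/CoreImage.h>

// Map RGB to a hue in [0, 1); used to decide which cube entries go transparent.
static float RGBToHue(float r, float g, float b) {
    float mx = fmaxf(r, fmaxf(g, b)), mn = fminf(r, fminf(g, b));
    float d = mx - mn;
    if (d < 1e-6f) return 0.0f;
    float h;
    if (mx == r)      h = fmodf((g - b) / d + 6.0f, 6.0f);
    else if (mx == g) h = (b - r) / d + 2.0f;
    else              h = (r - g) / d + 4.0f;
    return h / 6.0f;
}

// Build a CIColorCube filter that keys out green hues (roughly 90-150 degrees).
static CIFilter *GreenScreenFilter(void) {
    const unsigned size = 64; // 64 is the maximum cube dimension for CIColorCube
    NSMutableData *cube = [NSMutableData dataWithLength:size * size * size * 4 * sizeof(float)];
    float *c = cube.mutableBytes;
    for (unsigned z = 0; z < size; z++)            // blue axis
        for (unsigned y = 0; y < size; y++)        // green axis
            for (unsigned x = 0; x < size; x++) {  // red axis
                float r = x / (float)(size - 1);
                float g = y / (float)(size - 1);
                float b = z / (float)(size - 1);
                float hue = RGBToHue(r, g, b);
                float alpha = (hue > 0.25f && hue < 0.42f) ? 0.0f : 1.0f;
                // Entries are premultiplied RGBA.
                *c++ = r * alpha; *c++ = g * alpha; *c++ = b * alpha; *c++ = alpha;
            }
    CIFilter *filter = [CIFilter filterWithName:@"CIColorCube"];
    [filter setValue:@(size) forKey:@"inputCubeDimension"];
    [filter setValue:cube forKey:@"inputCubeData"];
    return filter;
}

For a video file you can apply the filter per frame with +[AVVideoComposition videoCompositionWithAsset:applyingCIFiltersWithHandler:] (iOS 9+), compositing the keyed frame over your replacement image or video with CISourceOverCompositing.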
I'm making a map editor app for iOS. I can't seem to find any information on how to save a modified tilemap, though. I'm using cocos2d-v3 for my framework.
Does someone have any ideas on how this is done?
Thanks
You could port the TMX writer of Kobold Kit: https://github.com/KoboldKit/KoboldKit/tree/master/KoboldKit/KoboldKitFree/Framework/TilemapModel/TMX
It actually originates from the KoboldTouch project for cocos2d-iphone; the entire tilemap model has minimal dependencies on the engine. However, there's no renderer for this model available for cocos2d (except in KoboldTouch), and cocos2d's own renderer does not retain all of the TMX information in memory, and certainly not in a way that makes it easy to write back.
You should be able to use the TMXGenerator class for this. It was written specifically to do that kind of thing. It's a little dated (pre-v3; I haven't tried it with the new source), but it will get you most, if not all, of what you need.
https://github.com/slycrel/cocos2d-iphone-extensions/tree/develop/Extensions/TMXGenerator
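If you'd rather avoid a dependency entirely, the TMX format itself is plain XML, and a single-layer writer is small. A rough sketch, assuming an orthogonal map, CSV layer encoding, and a tileset image named tiles.png; the function name, parameters, and gids array (mapW * mapH tile IDs) are all illustrative:

#import <Foundation/Foundation.h>

static BOOL WriteTMX(NSString *path, const uint32_t *gids,
                     int mapW, int mapH, int tileW, int tileH) {
    NSMutableString *tmx = [NSMutableString string];
    [tmx appendString:@"<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n"];
    [tmx appendFormat:@"<map version=\"1.0\" orientation=\"orthogonal\" "
                      @"width=\"%d\" height=\"%d\" tilewidth=\"%d\" tileheight=\"%d\">\n",
                      mapW, mapH, tileW, tileH];
    [tmx appendFormat:@" <tileset firstgid=\"1\" name=\"tiles\" "
                      @"tilewidth=\"%d\" tileheight=\"%d\">\n"
                      @"  <image source=\"tiles.png\"/>\n </tileset>\n", tileW, tileH];
    [tmx appendFormat:@" <layer name=\"ground\" width=\"%d\" height=\"%d\">\n"
                      @"  <data encoding=\"csv\">\n", mapW, mapH];
    for (int y = 0; y < mapH; y++) {
        NSMutableArray *row = [NSMutableArray arrayWithCapacity:mapW];
        for (int x = 0; x < mapW; x++)
            [row addObject:[NSString stringWithFormat:@"%u", gids[y * mapW + x]]];
        // CSV rows are comma-joined; every row but the last ends with a comma.
        [tmx appendFormat:@"%@%@\n", [row componentsJoinedByString:@","],
                          (y == mapH - 1) ? @"" : @","];
    }
    [tmx appendString:@"  </data>\n </layer>\n</map>\n"];
    return [tmx writeToFile:path atomically:YES
                   encoding:NSUTF8StringEncoding error:NULL];
}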
Bottom line: read the two questions at the bottom.
I am in the process of transitioning languages, so forgive me for the newbie question. I am building an app that has, surprise, an audio/video capture requirement.
Using the developer sample code for UIImagePickerController as a reference, I was able to build a working prototype. However, I quickly realized that UIImagePickerController is too simple: you don't get landscape mode, and some of the options seem pretty basic.
I see that Apple recommends using AV Foundation. In addition, I read on Stack Overflow that there are a number of projects on GitHub that extend or replace UIImagePickerController.
This brings me to two questions:
1) Is it a common scenario that Foundation/UIKit provide some very basic functionality, but if you really want a fully featured implementation, you need to go full tilt into one of the more complex frameworks? Personally, starting out, AV Foundation is pretty intimidating; the leap between UIImagePickerController and AV Foundation capture seems quite large. The fact that UIImagePickerController "stops dead" so early in the feature set surprises me, given how common AV capture is. Perhaps I'm missing something.
2) Is it common for people to use a lot of third-party dependencies in Objective-C development - in this case, an alternative open source image picker? I guess what I'm asking is: is Objective-C development as prone to dependency hell as other ecosystems?
I know it's possible to use overlays to customize the appearance, but I still think this would leave you with some of the same problems. You can't write a custom camera without AVFoundation, but I think it's worth it to get a few features that you really want.
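For what it's worth, the leap is mostly boilerplate. A minimal sketch of the AVFoundation capture setup that UIImagePickerController hides (it assumes this code runs inside a view controller; a real app would keep the session in a strong property and check camera authorization first):

#import <AVFoundation/AVFoundation.h>

AVCaptureSession *session = [[AVCaptureSession alloc] init];
session.sessionPreset = AVCaptureSessionPresetHigh;

// Wire the default camera into the session.
AVCaptureDevice *camera = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
NSError *error = nil;
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:camera error:&error];
if (input && [session canAddInput:input]) [session addInput:input];

// Still image output (the pre-iOS 10 API, matching the era of this question).
AVCaptureStillImageOutput *output = [[AVCaptureStillImageOutput alloc] init];
if ([session canAddOutput:output]) [session addOutput:output];

// The live preview layer replaces the picker's built-in camera view.
AVCaptureVideoPreviewLayer *preview = [AVCaptureVideoPreviewLayer layerWithSession:session];
preview.frame = self.view.bounds;
[self.view.layer addSublayer:preview];

[session startRunning];

From there, orientation handling, focus, flash, and capture settings are all explicit, which is exactly the control the picker doesn't give you.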
Yes, third-party dependencies are quite common. But there are often at least a couple of options when looking for something.
You can check out my newly open-sourced version of a camera here: https://github.com/LoganWright/SimpleCam
I am currently running Kobold 1.0.4 and cannot work out how to use Box2D with Objective-C. Any help will be appreciated.
I have looked at the Box2D example project that ships with Kobold, but it uses only C++. I need to do it in Objective-C, as I am not really confident working with both at once.
Change the extension of your source files from *.m to *.mm to be able to use C++ classes in them. This will allow you to create and manage Box2D objects in your Objective-C code.
Box2D is written in C++, so one way or another there's no getting around it. Since Box2D's code does the heavy lifting, the amount of C++ code needed to use it is quite small: you just need to set up the boilerplate for collision handlers, and the code you fill it with can be as Objective-C as you want.
If you want something that helps get geometry into your app, PhysicsEditor is a good tool; it has a plist exporter and provides an Objective-C class for loading the data. It takes care of a lot of boilerplate, and if you want to do collision geometry for anything interesting, it's very helpful.
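To make the .mm approach concrete, here is a minimal Objective-C++ sketch (the class name is mine; note that older Box2D versions, such as the one bundled with Kobold 1.x, take a second doSleep argument in the b2World constructor):

// PhysicsWorld.mm - because the file ends in .mm, C++ and Objective-C mix freely.
#import <Foundation/Foundation.h>
#import "Box2D.h"

@interface PhysicsWorld : NSObject
- (void)step:(float)dt;
@end

@implementation PhysicsWorld {
    b2World *_world; // a C++ member inside an Objective-C class
}

- (instancetype)init {
    if ((self = [super init])) {
        // Older Box2D: new b2World(b2Vec2(0.0f, -10.0f), true);
        _world = new b2World(b2Vec2(0.0f, -10.0f)); // gravity
    }
    return self;
}

- (void)step:(float)dt {
    _world->Step(dt, 8, 3); // velocity and position iterations
}

- (void)dealloc {
    delete _world;
    // Under manual reference counting, also call [super dealloc] here.
}
@end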