Is using HTML5 canvas a good idea for Electron.js?

I want to make an Electron app, similar to Scratch, that uses drag-and-drop features, and I want to build those features with canvas. However, I don't know if this is a good idea: when I searched, I never saw Electron and HTML5 Canvas mentioned in the same sentence.

I'm using canvas to render charts in my Electron project and it works correctly. I've researched a bit and haven't found any better alternative to replace it.
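Canvas works in Electron exactly as it does in Chrome, since the renderer process is Chromium. As a rough illustration, here is a minimal sketch of hit-testing and dragging blocks on a canvas, the core interaction of a Scratch-like editor (the element id and the Block shape are made up for the example, not from any existing project):

```typescript
// Minimal sketch: drag rectangular "blocks" around a canvas, the core of a
// Scratch-like editor. Assumes a <canvas id="editor"> element in the page
// loaded by Electron's renderer process; the Block type is hypothetical.
interface Block { x: number; y: number; w: number; h: number; color: string; }

const canvas = document.getElementById("editor") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

const blocks: Block[] = [
  { x: 20, y: 20, w: 120, h: 40, color: "#4a90d9" },
  { x: 20, y: 80, w: 120, h: 40, color: "#e8a33d" },
];

let dragged: Block | null = null;
let offsetX = 0;
let offsetY = 0;

function draw(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  for (const b of blocks) {
    ctx.fillStyle = b.color;
    ctx.fillRect(b.x, b.y, b.w, b.h);
  }
}

canvas.addEventListener("mousedown", (e) => {
  // Hit-test from the topmost block down.
  for (let i = blocks.length - 1; i >= 0; i--) {
    const b = blocks[i];
    if (e.offsetX >= b.x && e.offsetX <= b.x + b.w &&
        e.offsetY >= b.y && e.offsetY <= b.y + b.h) {
      dragged = b;
      offsetX = e.offsetX - b.x;
      offsetY = e.offsetY - b.y;
      break;
    }
  }
});

canvas.addEventListener("mousemove", (e) => {
  if (!dragged) return;
  dragged.x = e.offsetX - offsetX;
  dragged.y = e.offsetY - offsetY;
  draw();
});

canvas.addEventListener("mouseup", () => { dragged = null; });

draw();
```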

Related

Implementing PSD processing on the web (like on society6.com)

Is there a Windows server running Photoshop that processes all these templates? It just happens too quickly for that. How did they achieve it?
I've been looking for the answer for quite a long time and haven't found anything useful.
One way something like this could be done is to have an overlay template that you place your image under, with all of the shading and so on going on top of it. Then it's just a matter of rotating and skewing the picture underneath the overlay to get the right perspective. This can be done programmatically in a language of your choice, like PHP, Python, C#, etc.; see the canvas sketch below for the general idea.
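As a hedged, browser-side sketch of that overlay-and-skew idea (the file names and the transform values below are placeholders, not anything actually used by society6):

```typescript
// Sketch: warp the user's artwork to match the product's perspective, then
// composite a pre-made shading overlay on top. File names and the transform
// matrix are hypothetical placeholders.
function loadImage(src: string): Promise<HTMLImageElement> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => resolve(img);
    img.onerror = reject;
    img.src = src;
  });
}

async function renderMockup(): Promise<void> {
  const canvas = document.createElement("canvas");
  canvas.width = 800;
  canvas.height = 600;
  const ctx = canvas.getContext("2d")!;

  const artwork = await loadImage("artwork.png");   // the user's design
  const overlay = await loadImage("overlay.png");   // shading / product template

  // Rotate and skew the artwork so it sits at the product's angle.
  // setTransform(a, b, c, d, e, f): a/d scale, b/c skew, e/f translate.
  ctx.setTransform(0.9, 0.15, -0.2, 0.85, 120, 60);
  ctx.drawImage(artwork, 0, 0, 500, 400);

  // Reset the transform and draw the semi-transparent overlay on top.
  ctx.setTransform(1, 0, 0, 1, 0, 0);
  ctx.drawImage(overlay, 0, 0, canvas.width, canvas.height);

  document.body.appendChild(canvas);
}

renderMockup();
```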
I believe what you're describing may be achieved using the Adobe Photoshop API. Click on try demo and take a look at the various options, including the Smart Object demo.

iOS: how to recognize a simple logo on a white background?

I'm trying to write an app that recognizes a logo saved in the app bundle and read as a UIImage. I did a search before asking this question, and the only free solution seems to be OpenCV. I tried it in a demo I downloaded from toptal_logo_detector. The demo works and I can find my logo wherever I place it. However, the camera is very slow, far too slow to use in a real app. Maybe there's a way to optimize it, but my question is a different one.
I have to recognize a vector logo (always the same logo) centered on a white background, something like this WiFi logo:
Is the complex OpenCV my only solution? Is there a free and simpler way to get the result: yes, the logo is there / no, it isn't?
I found this tutorial (with project download) that does what you want using OpenCV.
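If full OpenCV feels like overkill for a yes/no check, one much cruder approach (a rough sketch only; it assumes the logo is always centered, roughly the same size, and on a clean white background) is to downscale both the reference logo and the camera frame and compare grayscale pixels directly. It's shown here against the 2D canvas for brevity; the same idea would have to be ported to the native image APIs on iOS.

```typescript
// Very rough sketch: downscale both images to a small grid, convert to
// grayscale, and report how similar they are. Assumes the logo is centered,
// roughly the same size, and photographed on a clean white background.
function toGray(img: HTMLImageElement, size: number): Float32Array {
  const canvas = document.createElement("canvas");
  canvas.width = size;
  canvas.height = size;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0, size, size);
  const { data } = ctx.getImageData(0, 0, size, size);
  const gray = new Float32Array(size * size);
  for (let i = 0; i < gray.length; i++) {
    const o = i * 4;
    gray[i] = (data[o] + data[o + 1] + data[o + 2]) / (3 * 255);
  }
  return gray;
}

// Mean absolute difference between the two grayscale grids: 0 = identical.
function difference(a: Float32Array, b: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) sum += Math.abs(a[i] - b[i]);
  return sum / a.length;
}

function logoIsPresent(reference: HTMLImageElement, frame: HTMLImageElement): boolean {
  const size = 32;        // a small grid keeps the comparison fast
  const threshold = 0.1;  // placeholder value; tune empirically
  return difference(toGray(reference, size), toGray(frame, size)) < threshold;
}
```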

Three.js model viewer feasible in PhoneGap?

I'm trying to make a very simple 3D model viewer in a phonegap app for use on an iPhone 4. I'm using three.js which is working fine when I make a simple website. The problem is that when I try it out on the phone the 3D object doesn't appear. Simple geometrical shapes like a cube and cylinder will load on the canvas but obj files won't.
I use an OBJLoader to bring in the .obj file and have all relevant files in the same directory in the app, just in case. I think the problem might lie with using WebGL on iOS, but I'm not really sure.
Thanks very much for your help. If anyone has any suggestions for building a model viewer in phonegap for display in iOS I'd be delighted to hear them.
As very few mobile browsers support WebGL at the moment I opted to use the canvas to render the 3D models. I used a simple web 3D object viewer called JSC3D to create a model viewer in PhoneGap on iOS. It can use webGL but I just went with rendering using the 2D canvas.
I tested my app on an iPhone 4 and the result was that the model took between 2 and 5 seconds to load up and when you go to rotate the object it takes some time to redraw it depending on how complex it is. While not the most satisfactory result it did do the job. I'm going to try it out on a more advanced phone in Android and I'll let you know the result.
I suggest you try Intel XDK if you are packaging for iPhone, but for Android use AIDE PhoneGap. Make sure you only use var renderer = new THREE.CanvasRenderer(); and avoid anything that has to do with WebGL, since it's not supported on most devices except the BB PlayBook and BB10.
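A minimal sketch of that advice, assuming an older three.js build where THREE.CanvasRenderer still exists (it was later moved to the examples folder and eventually removed from the library):

```typescript
// Sketch of a CanvasRenderer scene. Assumes three.js is loaded globally via a
// <script> tag (as in PhoneGap-era builds) and that the build is old enough
// to still ship THREE.CanvasRenderer.
declare const THREE: any;

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 100);
camera.position.z = 5;

const cube = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshBasicMaterial({ color: 0x3388ff, wireframe: true })
);
scene.add(cube);

// CanvasRenderer draws with the 2D canvas API instead of WebGL: slower, but it
// runs inside WebViews that have no WebGL support.
const renderer = new THREE.CanvasRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);

function animate(): void {
  requestAnimationFrame(animate);
  cube.rotation.y += 0.01;
  renderer.render(scene, camera);
}
animate();
```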
I think iOS rendering ability is better than Android's: some scenes render well on iOS but not on Android.
Usually the mobile browser has more rendering ability than a WebView inside an app. I used Firefox to render the three.js OBJ demo and it worked well, but when I used the WebView in the app it rendered nothing.
I made an Android app to render STL models. At first, when I used the mobile browser to render the scene, it didn't render the full scene; once I removed the shadow effect, it rendered. Then I tried a WebView with WebGLRenderer or CanvasRenderer, and neither worked. Finally I replaced the WebView with the XWalkView from Crosswalk, a web engine that can be embedded in the app, and with the shadow effect disabled it renders well.
You can refer to this answer for more info.
Here is the render result.
You definitely should not use the WebGL renderer of three.js, as it's not supported on iOS. Try the Canvas or SVG renderer.

How to binarize an image using ImageMagick?

I have got an image like this, and I want to convert it to an image with only black and white colors.
I came across this ImageMagick resource.
Can it be used to generate a black-and-white image from the image above? Is it a good option to use? If so, is there any documentation or tutorial on how to use it?
UPDATE
I got the best solution from #ale0xB's suggestion.
No third-party API is required for this, as Apple's CoreImage.framework does exactly what I want. Its filters work like a charm :)
Thanks for the suggestion :)
I used this image filter, and it's fast and produces great output :)
Why would you want to use ImageMagick instead of the standard Core Image to produce black-and-white images? I haven't used it before, but I doubt it will give much better performance than the native framework when it's just about applying a filter.
Since iOS 6 you have it really easy; have a look at the Core Image filters, especially CIColorMonochrome, which is the one you may be interested in.
If you're playing with images in your app, this is definitely worth checking out: the Core Image Programming Guide.
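For completeness, the same black-and-white thresholding can also be done with nothing but the 2D canvas, as an alternative to both ImageMagick and Core Image; a rough sketch (the file name and the 128 threshold are placeholder values):

```typescript
// Rough sketch: threshold an image to pure black and white using only the 2D
// canvas. The input path and the 128 threshold are placeholders.
function binarize(img: HTMLImageElement, threshold = 128): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);

  const imageData = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = imageData.data;
  for (let i = 0; i < data.length; i += 4) {
    // Luminance-weighted grayscale, then snap each pixel to black or white.
    const lum = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    const value = lum < threshold ? 0 : 255;
    data[i] = data[i + 1] = data[i + 2] = value;
  }
  ctx.putImageData(imageData, 0, 0);
  return canvas;
}

const source = new Image();
source.onload = () => document.body.appendChild(binarize(source));
source.src = "input.png"; // placeholder path
```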

Is there a graphical tool for Mac to assist in positioning CCNode objects on a Layer?

If my designer gives me a 960x640px image of what the screen should look like, as well as all of the individual elements as images or text, is there a way to lay out the images and text on the iPhone/iPad screen without doing it manually through code? The way I'm doing it now is a series of trial and error, trying to guess the position of each element.
By the way, the types of layouts I'm trying to do are simple static layouts for stuff like Menus and High Scores lists, etc.
You should try one of the editing tools: LevelHelper, CocoShop, or CocosBuilder. The problem will be the output format; make sure that not only does the editing part work to your specification, but also that you can actually get just the snippet of code you need to plug into your project.
Do you have image-editing software like Photoshop or GIMP? How about opening the 960x640px image in such a program, hovering your mouse over the center of each element to read its coordinates, and then plugging those values into your code?
In my opinion, this is at least better and way faster than trial and error :)
If you want to measure the position of graphic elements, you can try a commercial tool called xScope. The trial version can be downloaded from their official website. It's the best tool I've seen for measuring distances, colors (for example, it can copy a measured color directly in [UIColor ...] format), and so on. If you want something free, I'd recommend MarkMan, a Chinese tool built on Adobe AIR. All its elements/buttons are graphical, so you don't need to read Chinese to use it.
You can try using an open-source editor and writing your own exporter. For example, I'm using Blender as a level editor for the game I'm working on; it has a nice Python API that can be used to export all the information you need.
