Wii programming - homebrew

My daughter just got a Wii for Christmas from her parents, and her father has nothing better to do than look for ways to dive into Wii programming. I've already read a lot about "homebrew" and the Wii, but I can't seem to find answers to the most important questions:
Do I have to modify the firmware to get homebrew to work?
How likely is it that the Wii gets bricked if, for example, there are Nintendo firmware updates?
thanks very much!

As far as I know, you need to meet some requirements before you can get the official SDK for the Wii. If those requirements are no problem for you, you can make games for the Wii without modifying anything, and firmware upgrades won't break anything.
However, if you develop with anything else, chances are something will break! More information about the Wii SDK can be found here:
To become an Authorized Developer for Wii, WiiWare and/or Nintendo DS/DSi
As a side note, I know you can pair the Wii Remote with a computer over Bluetooth (the sensor bar only provides IR reference points and just needs power) and develop games in XNA that run on your computer, which has much the same effect if you can hook your computer up to the TV for the bigger screen. That might be the easiest thing to do!
To answer your questions, though: you will have to modify your Wii, and there is a chance something can break. Modifying it also voids the warranty, which is another thing to consider before you start. But if you don't care about the warranty, WiiBrew should help you on your way!

Modification of the console is required before unauthorized software can be run on a Wii. Official system updates may detect these alterations and can render your console unusable if applied.

One other option is to develop for the browser on the Wii, which is actually a reasonably capable version of Opera. ;)
I recently wrote a library that wraps all the nuisances of checking Wiimote events; it might be what you're looking for with your daughter: Wii-js

Related

Reverse engineering and patching of an iOS app (IPA file)

For my master's thesis I am working on reverse engineering an IPA file. Since I'm new to this topic, I'm open to all kinds of suggestions. If anyone has a good tutorial, readings, or just some personal knowledge to share with me, I would really appreciate it.
Long story short, I must:
RE the IPA: for this I'm focusing on two tools, IDA Pro (which isn't free) and Radare2 (open source).
Patch the reverse-engineered app: I need to find specific safety checks (e.g. login, jailbreak detection) and change each check so that it returns the value I want (see the r2pipe sketch below).
Install the modified IPA on a jailbroken device to verify that what I did worked :D
As I said, I am not expecting a ready-to-go solution to this question; I would just like some hints that could put me on the right learning track.
Thanks a lot!!
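Since Radare2 is one of the tools mentioned above, here is a rough sketch of how the analyze-then-patch workflow can be scripted from Python with r2pipe. The binary path, the "jailbreak" name filter, and the arm64 patch instructions are illustrative assumptions only; the real symbol names and instructions depend entirely on the app being analyzed.

    # Rough sketch: drive radare2 from Python via r2pipe ("pip install r2pipe").
    # The binary path and symbol filter below are invented for illustration; on a
    # real IPA you would unzip it and open the Mach-O inside Payload/<App>.app/.
    import r2pipe

    # Open the binary in write mode ("-w") so patches can be applied in place.
    r2 = r2pipe.open("Payload/ExampleApp.app/ExampleApp", flags=["-w"])

    r2.cmd("aaa")                 # run full analysis
    functions = r2.cmdj("aflj")   # list of discovered functions as JSON

    # Purely illustrative filter for candidate safety checks.
    suspects = [f["name"] for f in functions if "jailbreak" in f["name"].lower()]
    print("candidate checks:", suspects)

    if suspects:
        target = suspects[0]
        print(r2.cmd(f"pdf @ {target}"))    # disassemble the candidate first
        # Make the check return 0 unconditionally (arm64 shown; the exact
        # instructions depend on the architecture and what the check returns).
        r2.cmd(f"wa mov x0, 0 @ {target}")
        r2.cmd(f"wa ret @ {target}+4")

    r2.quit()

Keep in mind that App Store binaries are FairPlay-encrypted, so static patching is only useful on a decrypted copy (typically dumped from a jailbroken device), and the patched app has to be re-signed before it will install.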

Controlling mpd/ncmpcpp on a laptop from an iPhone

I just set up ncmpcpp on the Ubuntu side of my MacBook Pro, and I'm trying to make an iPhone app to control it. How would I go about doing this?
Should I use Bluetooth or Wi-Fi? (Which one would be easier?)
And then how would I go about implementing it? What packages should I install on Linux? And how would I use them?
I know it's kind of a big topic, and I have several broad questions, but if you can answer any of them or provide any information that would help, I would be incredibly grateful!
Thanks!
That sounds like an ambitious project with a significant scope.
Whenever ideas like this crop up, it's good to take a step back and ask: "What am I trying to accomplish?"
Are you simply wanting to run a music "server" on your Ubuntu portion, and access it via your iPhone? Or are you trying to make a "remote control"?
There are likely existing apps that will do what you're wanting to do... I know, I know, mpd/ncmpcpp is super neat looking, but... practicality!
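One concrete point that may help with the Wi-Fi vs. Bluetooth question: mpd itself is the thing a remote controls. It listens on a TCP socket (port 6600 by default) and speaks a simple text protocol, and ncmpcpp is just one client among many. A quick way to verify the setup before writing any iPhone code is a small Python script using the python-mpd2 package; the laptop address below is an assumption, and mpd has to be configured to listen on the network (bind_to_address in mpd.conf).

    # Minimal remote-control sketch using python-mpd2 ("pip install python-mpd2").
    # The LAN address is made up; replace it with the laptop running mpd.
    from mpd import MPDClient

    client = MPDClient()
    client.connect("192.168.1.50", 6600)   # mpd's default port

    print(client.status())        # play/pause state, volume, etc.
    print(client.currentsong())   # metadata for the current track

    client.pause(1)               # pause playback...
    client.play()                 # ...and resume it

    client.close()
    client.disconnect()

An iPhone app would do exactly the same thing over Wi-Fi, either by speaking the MPD protocol itself over a plain TCP socket or by reusing an existing MPD client library; Bluetooth adds nothing here, since mpd already exposes a network interface.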

Bluetooth communication in NXJ

I'm an NXJ beginner.
I have some questions about Bluetooth communication between the PC and the brick.
First, when Bluetooth communication occurs, where does the processing of this data take place?
In other words, I want to know whether the data is processed on the PC or on the brick.
Second, what are the exact roles of the PC and the brick in Bluetooth communication?
That is, what is processed on the PC and what is processed on the brick?
I have searched almost every website, but I can't find this anywhere.
Please help me. Thanks.
You can see it in the package structure.
lejos.nxt.*
This package contains classes running on the NXT-brick. All code in this package will be compiled for the brick and will run on the brick.
lejos.pc.*
Here the distinction is not as clear. This is Java code you compile for the PC, so most of it runs on your computer. But some classes (e.g. RemoteMotorController) only send messages to the NXT brick, which then drives the motors.
lejos.pc.comm provides APIs that allow you to communicate with and control the NXT robot from the PC.
When you import these libraries into an Android project, you get essentially the same environment used on a PC, but within Android.
I agree it can be tough finding some things out. It would be great if there were a stronger leJOS presence on SO.
This question is months old and has remained unanswered. I actually have a lot of questions about it myself, but I might be able to provide some insight for utter novices.
When using Bluetooth with Android and NXJ robots, you use either lejos.pc.comm or lejos.NXJ.
Both provide APIs to do almost the same thing, but they work a little differently. I don't know nearly enough about the NXJ API, but I do know it is the one that lets you manipulate the robot much more directly, such as outputting data to its LCD screen, which you can't do with the pc.comm API.
As far as I can tell, the pc.comm API uses both the Android Bluetooth APIs and its own protocol to communicate via LEGO LCP commands.
(I want to come back to this, but I'm writing a dissertation on the topic, so I'll try to update it in a couple of days. It seems not many are interested, though, which is a shame.)

iPad and openFrameworks

Can someone point me in the right direction for learning how to use openFrameworks to develop an iPad app? Perhaps some good tutorials? I can't seem to find any good documentation.
The openFrameworks docs are quite outdated, but you can discover OF through the examples. Just download the iPhone package here: http://www.openframeworks.cc/download and follow the instructions in the included readme. A good start is to get the examples running on your device and then begin modifying them. If you have any further questions, the people at http://forum.openframeworks.cc/ will be happy to help you out.
For a more in-depth look at openFrameworks, see the unofficial doxygen docs here: http://ofxfenster.undef.ch/doc/
Getting OF running on the iPad is actually pretty much the same as running it on the iPhone.
Have you got it running before?
If you haven't, the first thing to know is that you need to pay Apple $99 if you want to run it on a real device;
otherwise it's free to try on the simulator.
There are some instructions on the OF site for the first run;
just go through them; these complicated steps only need to be done once:
http://www.openframeworks.cc/setup/iphone/
(The guide is not up to date at all, but it's pretty much the same process with minor UI differences.)
Any iOS OF example should run on the iPad the same way it does on the iPhone,
but to get native iPad resolution you'll have to change it manually:
in the target's General settings, under Deployment Info, change the Devices drop-down to iPad (screenshot attached).
Try it with any of the iOS examples.
If you want to bring over any code from the Mac version,
just make a copy of an iOS example and paste the code into the appropriate method by hand;
they are pretty much the same except for mouse events vs. touch events,
which are a bit different in logic, but just play around with it; it's not too hard to get used to.
Basically, touch events give you touch.x/touch.y instead of mouseX/mouseY
(and the touch arguments are local to each handler, so you might need other variables to pass the values somewhere else).
I don't have a forum link, but there was an openFrameworks forum question on this just last week, and folks posted a number of sites that have good examples/tutorials. Here's one on doing pixel operations for graphic effects:
http://itp.nyu.edu/varwiki/Syllabus/Pixels-S10

Any good (free) text-to-speech engines out there?

I've been scouring the SO board and Google and can't find any really good recommendations for this. I'm building a Twilio application and its text-to-speech (TTS) engine is pretty bad. Plus, it's a pain in the ass to test since I have to deploy every time. Is there a significantly better resource out there that could render to a WAV or MP3 file so I can save and use that instead? Maybe there's a great API for this somewhere. I just want to avoid recording 200 MP3 files myself; I'd rather have them generated programmatically...
Things I've seen and rejected:
http://www.yakitome.com/ (I couldn't force myself to give them my email)
http://www2.research.att.com/~ttsweb/tts/demo.php
http://www.naturalreaders.com/index.htm
http://www.panopreter.com/index.php (on the basis of a crappy website)
Thinking of paying for this, but not sure yet: https://ondemand.neospeech.com/
Obviously I'm new to this; if I'm missing something obvious, please point it out...
I am not sure whether you have access to a Mac or not. The Mac has pretty advanced TTS built into the operating system; Apple spent a lot of money on top engineers to research it. It can easily be controlled and even automated from the command line, and it has quite a few built-in voices to choose from. That is what I used on a recent phone system I put up. But I realize this is not an option if you don't have a Mac.
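As an illustration, the built-in say command can be scripted to batch-generate audio files; the voice name and prompts below are just examples (run say -v '?' to list the installed voices), and the resulting AIFF files would still need converting to WAV/MP3 with something like ffmpeg before a phone system can use them.

    # Batch-generate speech with the macOS "say" command.
    # Voice and prompts are illustrative; "say" writes AIFF when given "-o file.aiff".
    import subprocess

    prompts = {
        "welcome": "Welcome, and thanks for calling.",
        "goodbye": "Goodbye!",
    }

    for name, text in prompts.items():
        out_file = f"{name}.aiff"
        subprocess.run(["say", "-v", "Alex", "-o", out_file, text], check=True)
        print("wrote", out_file)

In a Twilio app, the converted files can then be served with the <Play> verb instead of <Say>, which covers the "generate once, reuse" part of the question.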
Another one you might want to check out is http://cepstral.com/ ; they have very realistic voices. I think they used to be open source, but they no longer are, and now you need to pay licensing fees. They are very commonly used for high-end commercial applications and are not so much geared towards the home user who wants an article read aloud.
I like the YAKiToMe! website the best. It's free and the voices are top quality. In case you're still worried about giving them your email, they've never spammed me in many years of use and I never got onto any spam lists after signing up with them, so I doubt they sold my email. Anyway, the service is great and has lots of features for turning electronic text into audio files in different languages.
As for the API you're looking for, YAKiToMe! has a well-documented API and it's free to use. You have to register with the site to use it, but that's because it lets you customize pronunciation and voice selection, so it needs to differentiate you from other users.
