I'm creating an admin tool for a project where I create an Event, then create multiple Speakers (on one page), then need to create multiple Talks for each Speaker.
Rather than have all the Speakers listed on one page after creation, and then put multiple Talks against each Speaker (which looks crazy due to all the input boxes), I'd like to gradually step through each Speaker, create the Talks for each Speaker, then move on to the next Speaker until all Speakers have been completed.
What's the best way to go about achieving this?
Do I need to create an array of all the created Speakers, then step through it somehow? Or set a flag on each created Speaker, so that once the user has clicked 'save talks' it finds the next speaker (in this event) that hasn't been saved?
I suggest reading this:
http://www.digitalmediaminute.com/article/1816/top-ruby-on-rails-tutorials
and afterwards:
http://www.sapphiresteel.com/How-To-Create-a-Ruby-On-Rails-Blog
After that reading you will be able to solve that problem in a "best-practice" way.
Since your question is a basic one, I'd like to point you to those tutorials.
No offense... but I think this reading will help you more.
Further to my comment to bastianneu, I've spent about 10 minutes with AASM (http://github.com/rubyist/aasm) and have got it doing exactly what I needed.
Sometimes I guess you need to type out your question to properly clear it in your brain :)
A colleague is trying to prepare a survey with LimeSurvey.
The survey is for the civil protection service and, among other things, asks whether people know the meaning of the different wailing tones of a siren.
I quote from Wikipedia:
A civil defense siren is a siren used to provide emergency population warning of approaching danger and sometimes to indicate when the danger has passed....
By use of varying tones or on/off patterns of sound, different alert conditions can be signaled
But he told me that to play sounds with LimeSurvey, the browser must have the Flash player installed. This isn't an option for him.
So he asked me whether such a survey would be feasible with Orbeon.
The idea is that the form has several buttons and several dropdowns.
Clicking a button plays a sound, and in the corresponding dropdown the respondent can select the meaning of that sound.
Is it possible to play sounds with Orbeon Forms?
The good news is that web browsers no longer need Flash Player to play sounds (see Can I use).
Now in order to play sounds with Orbeon Forms, you need to create a custom component able to call those APIs and play the sounds. This is doable by following the documentation but it involves some programming.
I'm totally new to audio frameworks. I would like to have a feature in my app.
When playing a clip/song, I want to record that song at the same time. I think there are two cases here. It would be best if I could record exactly what is playing (an identical copy).
Otherwise, if that's impossible, can I record everything (including noise from the outside world) at the same time?
In the end I went with the second approach, which records everything including noise from the outside world. It works, but I'm not sure it will be approved by Apple.
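For what it's worth, a minimal sketch of that second (microphone) approach using AVAudioRecorder might look like the following; the method name, file name, and recording settings are just illustrative choices (assuming ARC), not anything the app requires:

    #import <AVFoundation/AVFoundation.h>

    // Starts capturing the microphone while other audio keeps playing.
    // Returns the recorder so the caller can stop it later with [recorder stop].
    - (AVAudioRecorder *)startMicrophoneCapture
    {
        NSError *error = nil;

        // PlayAndRecord lets the clip keep playing while we record.
        [[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayAndRecord
                                               error:&error];
        [[AVAudioSession sharedInstance] setActive:YES error:&error];

        // Record into the app's Documents directory (file name is illustrative).
        NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                                                              NSUserDomainMask, YES) objectAtIndex:0];
        NSURL *url = [NSURL fileURLWithPath:[docs stringByAppendingPathComponent:@"capture.m4a"]];

        NSDictionary *settings = @{ AVFormatIDKey         : @(kAudioFormatMPEG4AAC),
                                    AVSampleRateKey       : @44100.0,
                                    AVNumberOfChannelsKey : @2 };

        AVAudioRecorder *recorder = [[AVAudioRecorder alloc] initWithURL:url
                                                                settings:settings
                                                                   error:&error];
        [recorder prepareToRecord];
        [recorder record];
        return recorder;
    }

Note that this captures whatever the microphone hears (the song coming out of the speaker plus room noise), which is exactly the trade-off described above.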
I am very new to Objective C, and I've been searching for an answer to my question, but no luck. I have a game app that plays sounds that I have created, and it plays them randomly with each connection in the gameplay. I would like to create different sound "packs" that the user can select very simply by pressing a button. For example, the user could choose from electric piano, or classical piano, or xylophone, etc. Currently, my sound files are set up as Sound1.wav, Sound2.wav, etc.
What is the best way to approach this? Thank you for being patient with me.
You should probably set up a naming convention such as sound_0_1, where the first 0 is the pack index and the 1 is the index of the sound within pack 0. This convention isn't exclusive to Objective-C, so approaches from other programming languages can be applied here as well.
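To illustrate that convention, a tiny helper along these lines could map a pack index and a sound index to a bundle resource; the function name and the kSoundsPerPack constant are hypothetical:

    #import <Foundation/Foundation.h>

    // Builds the bundle URL for a sound following the sound_<pack>_<index>.wav convention.
    // packIndex is set by the "sound pack" button the user pressed; soundIndex picks the clip.
    static NSURL *SoundURLForPack(NSUInteger packIndex, NSUInteger soundIndex)
    {
        NSString *name = [NSString stringWithFormat:@"sound_%lu_%lu",
                          (unsigned long)packIndex, (unsigned long)soundIndex];
        return [[NSBundle mainBundle] URLForResource:name withExtension:@"wav"];
    }

    // Example: the "xylophone" button sets packIndex to 2, and gameplay picks a random sound:
    // NSURL *url = SoundURLForPack(2, arc4random_uniform(kSoundsPerPack));

That way switching packs is just a matter of changing one integer, and the existing random-selection logic keeps working unchanged.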
I'm trying to create an 'auto dj' application that would let smartphone users select a playlist of songs, and it would create a seamless mix for playback. There are a couple factors involved in this: read a playlist of audio files, calculate their waveforms/spectrums, determine the BPMs, and organize the compatible songs in a new playlist in the order that they will be played (based on compatible tempos & keys).
The app would have to be able to scan the waveform of a song and recognize the beginning of the 'main' part of the song (skipping slow intros/outros). I also imagine having some effects: filtering, so it can filter the bass out of the new track being mixed in, and switch the basses at an appropriate time. Perhaps reverb that the user could control as well.
I am just trying to see how feasible a project this is for 3-4 busy college students in the span of ~4 months. Not sure if it would be an Android or iOS app, or perhaps even a Windows app. Not sure what language we would use (likely Python or Java); whichever has the most useful audio-analysis libraries. Obviously it would work better for certain genres of music (house, trance), but I'd still really like to try to create this.
Thanks for any feedback
As much as I would like to hear a more experienced person's opinion on this, based on your situation I would say it would be a very big undertaking. Since it sounds like you don't have experience with audio-analysis libraries/programs, you might want to start experimenting with those; most of them are likely to be in C/C++, not Java/Python. Here are some I know of, but I would recommend doing your own research:
http://www.underbit.com/products/mad/
http://audacity.sourceforge.net/
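Just to give a feel for what even the BPM-detection step involves, here is a deliberately naive sketch in plain C (no library): it estimates tempo by autocorrelating a frame-energy envelope. Real analysis libraries do far more than this, so treat it purely as an illustration.

    #include <stdlib.h>

    // Very naive tempo estimate: autocorrelate the frame-energy envelope and pick
    // the lag with the strongest correlation in the 60-180 BPM range.
    // samples: mono PCM in [-1, 1]; returns 0 if there is too little audio.
    static float EstimateBPM(const float *samples, long sampleCount, float sampleRate)
    {
        const long frameSize = 1024;
        long frameCount = sampleCount / frameSize;
        if (frameCount < 8) return 0.0f;

        // 1. Energy envelope, one value per frame.
        float *energy = malloc(sizeof(float) * frameCount);
        for (long f = 0; f < frameCount; f++) {
            float e = 0.0f;
            for (long i = 0; i < frameSize; i++) {
                float s = samples[f * frameSize + i];
                e += s * s;
            }
            energy[f] = e;
        }

        // 2. Autocorrelate over lags corresponding to 60-180 BPM.
        float framesPerSecond = sampleRate / frameSize;
        long minLag = (long)(framesPerSecond * 60.0f / 180.0f); // fastest tempo considered
        long maxLag = (long)(framesPerSecond * 60.0f / 60.0f);  // slowest tempo considered
        if (minLag < 1) minLag = 1;
        if (maxLag >= frameCount) maxLag = frameCount - 1;

        long bestLag = 0;
        float bestScore = -1.0f;
        for (long lag = minLag; lag <= maxLag; lag++) {
            float score = 0.0f;
            for (long f = 0; f + lag < frameCount; f++)
                score += energy[f] * energy[f + lag];
            score /= (float)(frameCount - lag);   // normalise by overlap length
            if (score > bestScore) { bestScore = score; bestLag = lag; }
        }
        free(energy);

        if (bestLag == 0) return 0.0f;
        return 60.0f * framesPerSecond / (float)bestLag;
    }

Every other feature you listed (key detection, intro/outro detection, filtering, beat-matched crossfades) adds at least this much complexity again, which is why I'd lean on an existing library.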
It doesn't sound that feasible in your situation, but that really depends on your programming/project experience and your motivation to build it.
Good luck
I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
Hack my own low-level sound engine using AudioUnits & specifically RemoteIO so that I manually mix all the sounds and populate the final output buffer by hand and hence can save said buffer to a file. This will be able to be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me previously played data in the buffer but alas the buffer is always all 00.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after I wrote tons of code and read lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. What's more, they are not that hard to set up and work with. If you are looking for a library to do most of the work for you, look for one written on top of Core Audio, not OpenAL.
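To illustrate what "tapping the output" can look like once you are on RemoteIO, here is a sketch that uses a render-notify callback to grab each buffer after it has been rendered. It assumes you already have the RemoteIO unit (called ioUnit here) and an ExtAudioFileRef (outputFile) created elsewhere with ExtAudioFileCreateWithURL; those names are placeholders, not part of any framework.

    #import <AudioToolbox/AudioToolbox.h>
    #import <AudioUnit/AudioUnit.h>

    // A render-notify callback fires twice per render cycle; we only care about
    // the post-render pass, when ioData holds the samples about to reach the speaker.
    static OSStatus OutputTapCallback(void *inRefCon,
                                      AudioUnitRenderActionFlags *ioActionFlags,
                                      const AudioTimeStamp *inTimeStamp,
                                      UInt32 inBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList *ioData)
    {
        if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
            ExtAudioFileRef outputFile = (ExtAudioFileRef)inRefCon;
            // Async write keeps file I/O off the real-time render thread.
            ExtAudioFileWriteAsync(outputFile, inNumberFrames, ioData);
        }
        return noErr;
    }

    // In your setup code, after the RemoteIO unit is initialised and outputFile is open:
    //   AudioUnitAddRenderNotify(ioUnit, OutputTapCallback, outputFile);

If I remember correctly, Apple also suggests calling ExtAudioFileWriteAsync once with zero frames and a NULL buffer from a non-real-time thread before rendering starts, so it can allocate its internal buffers. The catch for your current setup is that OpenAL (and therefore CocosDenshion) doesn't expose its underlying audio unit, which is why the answer above pushes you toward your option 2 and a Core Audio-based engine.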
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.