A colleague is trying to prepare a survey with LimeSurvey.
The survey is for civil protection and, among other things, asks whether people know the meaning of the different wailing tones of a siren.
I quote from Wikipedia:
A civil defense siren is a siren used to provide emergency population warning of approaching danger and sometimes to indicate when the danger has passed....
By use of varying tones or on/off patterns of sound, different alert conditions can be signaled
But he told me that playing sounds with LimeSurvey requires the browser to have the Flash Player installed, and that isn't an option for him.
So he asked me whether such a survey would be feasible with Orbeon.
The idea is that the form has several buttons and several dropdowns.
Clicking a button plays a sound, and in the corresponding dropdown the respondent can select the meaning of that sound.
Is it possible to play sounds with Orbeon Forms?
The good news is that web browsers no longer need Flash Player to play sounds (see Can I use).
Now in order to play sounds with Orbeon Forms, you need to create a custom component that calls the browser's audio APIs and plays the sounds. This is doable by following the documentation, but it involves some programming.
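As a rough illustration of the browser side of this, here is a minimal sketch of playing a sound when a button is clicked, using only the standard HTML5 Audio API (no Flash). The data attributes and file paths are made-up placeholders; inside Orbeon Forms this JavaScript would live in the custom component mentioned above.

```typescript
// Minimal sketch: play a pre-loaded siren tone when its button is clicked.
// The data attributes and file paths below are hypothetical placeholders.
const tones: Record<string, HTMLAudioElement> = {
  "tone-1": new Audio("/sounds/siren-tone-1.mp3"),
  "tone-2": new Audio("/sounds/siren-tone-2.mp3"),
};

document.querySelectorAll<HTMLButtonElement>("button[data-tone]").forEach((button) => {
  button.addEventListener("click", () => {
    const tone = tones[button.dataset.tone ?? ""];
    if (tone) {
      tone.currentTime = 0; // restart from the beginning on repeated clicks
      void tone.play();     // play() returns a promise in modern browsers
    }
  });
});
```

The dropdown next to each button stays an ordinary form control, so only the playback itself needs custom code.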
Related
I am building a web application that will have multiple videos. Besides that, there are plenty of other things I want to be able to do, like clicking on a video and saving a video tag so it shows up next time for other users who watch the video (like YouTube), or pausing the video, getting the time at which it was paused, adding a comment, and saving the time and the comment in my database.
Is this possible with just Ruby on Rails, or do I need to use an API or other tools? I will also want to do some more complex video manipulations, but for now these are enough.
I am citing an example for the general HTML5 video tag, which supports a few of the popular video formats. The same approach will also be applicable to other popular video players like Flowplayer.
Have a look at link
You can send an AJAX request to your controller on every (play/pause) button press to save the time at which the video was paused, so you can record that in your database. This link will give you an example of most of the properties that you can play with :)
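To make the client side concrete, here is a minimal sketch that reports the playback position whenever the user pauses an HTML5 video. The element ID and the POST endpoint (/videos/:id/pauses) are hypothetical placeholders for whatever route your Rails controller exposes.

```typescript
// Minimal sketch: send the current playback position to the server when
// the user pauses the video. Element ID and endpoint are hypothetical.
const video = document.getElementById("main-video") as HTMLVideoElement;

video.addEventListener("pause", () => {
  const payload = {
    video_id: video.dataset.videoId, // e.g. <video id="main-video" data-video-id="42">
    paused_at: video.currentTime,    // seconds into the video
  };

  // A real Rails app would also send the CSRF token header with this request.
  void fetch(`/videos/${payload.video_id}/pauses`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(payload),
  });
});
```

The matching controller action can then persist those two values, and a comment typed by the user can be saved against the same record.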
I'm trying to create an 'auto DJ' application that would let smartphone users select a playlist of songs and would create a seamless mix for playback. There are a couple of factors involved in this: reading a playlist of audio files, calculating their waveforms/spectrums, determining the BPMs, and organizing the compatible songs into a new playlist in the order they will be played (based on compatible tempos and keys).
The app would have to be able to scan the waveform of a song and recognize the beginning of the 'main' part of the song (skipping slow intros/outros). I also imagine having some effects: filtering, so it can filter the bass out of the new track being mixed in and swap the bass lines at an appropriate time, and perhaps reverb that the user could control as well.
I am just trying to see how feasible a project this is for 3-4 busy college students in the span of ~4 months. I'm not sure whether it would be an Android or iOS app, or perhaps even a Windows app, and not sure what language we would use (likely Python or Java); whichever has the most useful audio-analysis libraries. Obviously it would work better for certain genres of music (house, trance), but I'd still really like to try to create this.
Thanks for any feedback
As much as I would like to hear a more experienced person's opinion on this, based on your situation I would say it would be a very big undertaking. Since it sounds like you don't have experience using audio-analysis libraries/programs, you might want to start experimenting with those; most of them are likely going to be in C/C++, not Java/Python. Here are some I know of, but I would recommend doing your own research:
http://www.underbit.com/products/mad/
http://audacity.sourceforge.net/
It doesn't sound that feasible in your situation but that just depends on your programming/project experience and motivation to create it.
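None of this replaces the C/C++ tooling above, but just to get a feel for the kind of analysis involved, here is a deliberately naive tempo estimate sketched with the browser's standard Web Audio API: low-pass the track so mostly kick-drum energy remains, pick loud peaks, and turn the median inter-peak interval into BPM. The cutoff values (150 Hz, 60% of the loudest sample, 300 ms minimum gap) are arbitrary illustration numbers, and real BPM detection uses far more robust onset-detection methods.

```typescript
// Naive BPM estimate: low-pass filter, threshold peak picking,
// median inter-peak interval converted to beats per minute.
async function estimateBpm(file: File): Promise<number> {
  const ctx = new AudioContext();
  const decoded = await ctx.decodeAudioData(await file.arrayBuffer());

  // Render a low-passed mono copy offline so mostly beat energy remains.
  const offline = new OfflineAudioContext(1, decoded.length, decoded.sampleRate);
  const source = offline.createBufferSource();
  source.buffer = decoded;
  const lowpass = offline.createBiquadFilter();
  lowpass.type = "lowpass";
  lowpass.frequency.value = 150;
  source.connect(lowpass);
  lowpass.connect(offline.destination);
  source.start();
  const rendered = await offline.startRendering();
  const samples = rendered.getChannelData(0);

  // Collect peaks above 60% of the loudest sample, at least 300 ms apart.
  let maxAmp = 0;
  for (let i = 0; i < samples.length; i++) {
    maxAmp = Math.max(maxAmp, Math.abs(samples[i]));
  }
  const threshold = 0.6 * maxAmp;
  const minGap = Math.floor(rendered.sampleRate * 0.3);
  const peaks: number[] = [];
  for (let i = 0; i < samples.length; i++) {
    if (Math.abs(samples[i]) >= threshold) {
      peaks.push(i);
      i += minGap; // skip ahead so one beat yields one peak
    }
  }
  if (peaks.length < 2) return NaN;

  // Median interval between peaks -> BPM.
  const intervals = peaks.slice(1).map((p, idx) => p - peaks[idx]);
  intervals.sort((a, b) => a - b);
  const median = intervals[Math.floor(intervals.length / 2)];
  return (60 * rendered.sampleRate) / median;
}
```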
Good luck
I'm trying to add a caption to a video when I upload it to youtube. The caption would say something like "Brought to you by Company ABC".
The way Google has described it here seems very long-winded and complex. Additionally, there is no link to usage with the Java API.
Does anyone know a simple way of doing this?
Thanks,
Gearoid.
What you are looking for might be more properly called an "annotation" (or in this case "premercial") than a "caption".
(A caption would be the appropriate term if someone were actually speaking "Brought to you by Company ABC" and you wanted people to be able to know that with no sound... either because they have the sound off, because they are scanning the video programmatically, or because they are hearing-impaired.)
There is apparently no programmatic API at this point in time for using YouTube's native annotation features:
Annotating YouTube videos programmatically
If you wanted to prepend a title card of some kind to your video (like it's your space to inject an advertisement) you can read around for approaches to do that operation before the upload:
Stitching together multiple videos without gap
You'd have a lot more options for making that intro sequence look spiffy, then.
I'm developing a virtual instrument app for iOS and am trying to implement a recording function so that the app can record and playback the music the user makes with the instrument. I'm currently using the CocosDenshion sound engine (with a few of my own hacks involving fades etc) which is based on OpenAL. From my research on the net it seems I have two options:
1. Keep a record of the user's inputs (i.e. which notes were played at what volume) so that the app can recreate the sound (but this cannot be shared/emailed).
2. Hack my own low-level sound engine using Audio Units, specifically RemoteIO, so that I manually mix all the sounds and populate the final output buffer by hand, and hence can save that buffer to a file. This can then be shared by email etc.
I have implemented a RemoteIO callback for rendering the output buffer in the hope that it would give me previously played data in the buffer but alas the buffer is always all 00.
So my question is: is there an easier way to sniff/listen to what my app is sending to the speakers than my option 2 above?
Thanks in advance for your help!
I think you should use RemoteIO. I had a similar project several months ago and wanted to avoid RemoteIO and Audio Units as much as possible, but in the end, after writing tons of code and reading lots of documentation for third-party libraries (including CocosDenshion), I ended up using Audio Units anyway. What's more, they're not that hard to set up and work with. If you are, however, looking for a library to do most of the work for you, look for one written on top of Core Audio, not OpenAL.
You might want to take a look at the AudioCopy framework. It does a lot of what you seem to be looking for, and will save you from potentially reinventing some wheels.
I'm creating an admin tool for a project where I create an Event, then create multiple Speakers (on one page), then need to create multiple Talks for each Speaker.
Rather than have all the Speakers listed on one page after creation, and then put multiple Talks against each Speaker (which looks crazy due to all the input boxes), I'd like to gradually step through each Speaker, create the Talks for each Speaker, then move on to the next Speaker until all Speakers have been completed.
What's the best way to go about achieving this?
Do I need to create an array of all the created Speakers, then step through it somehow? Or set a flag on each created Speaker, so that once the user has clicked 'save talks' it finds the next speaker (in this event) that hasn't been saved?
I suggest reading this:
http://www.digitalmediaminute.com/article/1816/top-ruby-on-rails-tutorials
and afterwards:
http://www.sapphiresteel.com/How-To-Create-a-Ruby-On-Rails-Blog
After that reading you will be able to solve that problem in a "best-practice" way.
Since your question is a basic one, I would like to point you to those tutorials.
No offense... but I think this reading will help you more.
Further to my comment to bastianneu, I've spent about 10 minutes with AASM (http://github.com/rubyist/aasm) and have got it doing exactly what I needed.
Sometimes I guess you need to type out your question to properly clear it up in your brain :)