How can I make an audible Beep in Dart?

I have searched on [DartLang and Beep] and have found only various HTML5 solutions that require a sound file. I would like to produce as basic and as universal a "bell" sound as possible without using a sound file. I'm using Ubuntu, and there is a system beep command that reports the following when I call it with -h:
Usage:
beep [-f freq] [-l length] [-r reps] [-d delay] [-D delay] [-s] [-c] [--verbose | --debug] [-e device]
However, again, I just want to do this in as simple and as universal a way as possible.
There is also this clue:
Dec 7, Oct 007, Hex 07: BEL (Ctrl-G) BELL (Beep)
...but nothing I could think of doing with the print() function would cause a beep.
Thanks in advance!

I don't know why you're doing this, but it really depends on whether you want it to work in the browser or on the console.
If you want to do this on the console, try this:
main() {
  print(new String.fromCharCodes([0x07]));
}
It beeps for me at least on Windows. It should work if the terminal supports it (and it's not disabled by the user and so forth).
If you want to do this on the browser, you should play a sound file.
Here's a free beep sound: http://www.freesound.org/people/SpeedY/sounds/3062/
A very simple example on the browser:
new AudioElement("path/to/beep.wav")
  ..autoplay = true
  ..load();

You can use the Audio API to generate tones. For example, the following Dart code will generate a beep lasting 50 ms when you hit a key.
import 'dart:html';
import 'dart:web_audio';
import 'dart:async';

void main() {
  const int LENGTH = 50; // beep duration in milliseconds
  var ac = new AudioContext();
  window.onKeyUp.listen((KeyboardEvent ke) {
    print("key pressed");
    OscillatorNode oscillator = ac.createOscillator();
    oscillator
      ..type = "sine"
      ..frequency.value = 1000
      ..connectNode(ac.destination, 0, 0)
      ..start(0);
    // Disconnect the oscillator after LENGTH milliseconds to end the tone
    new Timer(const Duration(milliseconds: LENGTH), () {
      oscillator.disconnect(0);
    });
  });
}
Possibly there is a better way to terminate the generated tone than setting a timeout, perhaps by using an event listener or by scheduling the stop, as sketched below (I'm still pretty new to the API; hopefully someone who knows more can edit the above code), but the result is an audible beep without any sound files... on a browser that supports the Web Audio API, that is.
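One alternative (a sketch on my part, assuming dart:web_audio mirrors the JavaScript API's OscillatorNode.stop() and AudioContext.currentTime, which it appears to) is to schedule the stop on the audio clock instead of using a Timer:
OscillatorNode oscillator = ac.createOscillator();
oscillator
  ..type = "sine"
  ..frequency.value = 1000
  ..connectNode(ac.destination, 0, 0)
  ..start(0)
  ..stop(ac.currentTime + 0.05); // schedule the stop 50 ms after starting
This keeps the timing on the audio clock rather than the event loop, so the beep length doesn't drift if the page is busy.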

If you are writing a console application, you can print the ASCII bell character (code 7). Note that Dart has no '\a' escape (an unrecognized escape just yields the letter itself), so write it as '\x07'.
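For example, a minimal console program (using dart:io's stdout.write to avoid the extra newline that print() appends):
import 'dart:io';

void main() {
  // 0x07 is the ASCII BEL control character; most terminals beep on it
  stdout.write('\x07');
}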
This might not play a sound on all terminals. For instance, GNU screen may momentarily invert the screen colors and print the message "Wuff, Wuff!!" instead of playing a sound. Various terminal emulators allow you to disable the ASCII bell, and the specific sound played can often be changed in system settings.
Also, this won't work in the browser. For that, you'll have to use a sound file.

Related

Liquidsoap backup playlist

I am using Liquidsoap for a community radio station. When silence is detected, Liquidsoap starts playing a backup playlist.
My issue is that when Liquidsoap detects silence it starts the backup playlist, then goes back to normal once sound comes back; the next time it detects silence, it plays the backup playlist again, but this time it continues from where it left off last time. I just want the playlist to play from the beginning each time. Any ideas, please? My script is below.
#!/home/ubuntu/.opam/system/bin/liquidsoap
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
# myplaylist
myplaylist = playlist("~/backup_playlist/playlist/Emergency-list.m3u",mode="normal")
backup_playlist = audio_to_stereo(myplaylist)
blank = once(single("~/blank_7_s.mp3"))
#Live local talk stream
live_local = input.http("http://test.com:8382/main.mp3")
#Live remote talk stream
live_remote=input.harbor("live_remote",port=8383,password="test",buffer=2.0)
# Talk over stream using microphone mount.
mic=input.harbor("mic",port=8383,password="test",buffer=2.0)
# If something goes wrong, we'll play this
security = single("~/backup_playlist/test.mp3")
radio = fallback(track_sensitive=false, [strip_blank(max_blank=120.,live_remote), strip_blank(max_blank=120.,live_local), backup_playlist , security])
radio = smooth_add(delay=0.65, p=0.15, normal=radio, special=strip_blank(max_blank=2.,mic))
# Stream it out
output.icecast(%mp3(bitrate=64), host="localhost", port=8382, password="test", mount="listen.mp3", genre="Talk", description="test Station Australia", $
If you do not need an actual playlist, an easy way would be to keep an array of songs, and when you fall back, pick a random song from that array; it will then start from the beginning. Note, I do not know exactly how to do this or whether it would work, as I am not familiar with Liquidsoap, and this is more of a workaround than a solution. I will work on finding a better solution, but I hope this helps for now! A sketch of the idea follows.
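In Liquidsoap terms, the closest thing I can see (an untested sketch; it assumes the playlist() operator in your version supports the same mode argument already used in your script) is to switch the backup playlist from mode="normal" to mode="random", so each request picks a random track that always starts from its beginning:
# Pick a random track per request instead of resuming the sequence
myplaylist = playlist("~/backup_playlist/playlist/Emergency-list.m3u",mode="random")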

Playing multi-sampled Instruments using AudioKit, controlling ADSR envelope

I'm trying to play an instrument made of several .wav samples using AudioKit.
I've tried so far:
Using AKSampler (with the underlying AVAudioUnitSampler) – it worked fine, but I can't figure out how to control the ADSR envelope here – calling stop will stop the note immediately.
Another way is to use an AKSamplePlayer for each sample and play it, manually setting the rate so it plays the right note. I could (possibly?) then connect an AKAmplitudeEnvelope to each sample player. But if I want to play 5 notes of the same sample simultaneously, I would need 5 instances of AKSamplePlayer, which seems like a waste of resources.
I also tried to find a way to just push raw audio samples to the AudioKit output buffer, doing the mixing and sample interpolation myself (in C, probably?). But I didn't find out how to do it :(
What is the right way to make a multi-sampled instrument using AudioKit? I feel like it must be a fairly simple task.
Thanks to mahal tertin, it's pretty easy to use AKAUPresetBuilder!
You can create the .aupreset file somewhere in a tmp directory and then load this instrument with AKSampler.
The only thing worth noting is that by default AKAUPresetBuilder will generate samples with trigger mode set to trigger, which will ignore note-off events. So you should set it explicitly.
For example:
let sampleC4 = AKAUPresetBuilder.generateDictionary(
    rootNote: 60,
    filename: pathToC4WavSample,
    startNote: 48,
    endNote: 65)
sampleC4["triggerMode"] = "hold"

let sampleC5 = AKAUPresetBuilder.generateDictionary(
    rootNote: 72,
    filename: pathToC5WavSample,
    startNote: 66,
    endNote: 83)
sampleC5["triggerMode"] = "hold"

AKAUPresetBuilder.createAUPreset(
    dict: [sampleC4, sampleC5],
    path: pathToAUPresetFilename,
    instrumentName: "My Instrument",
    attack: 0,
    release: 0.2)
and then create a sampler and start AudioKit:
sampler = AKSampler()
try sampler.loadInstrument(atPath: pathToAUPresetFilename)
AudioKit.output = sampler
AudioKit.start()
and then use this to start playing note:
sampler.play(noteNumber: MIDINoteNumber(63), velocity: MIDIVelocity(120), channel: 0)
and this to stop, respecting release parameter:
sampler.stop(noteNumber: MIDINoteNumber(63), channel: 0)
Probably the best way would be to embed your wav files into an EXS or SoundFont format, making use of tools in that realm to accomplish the ADSR, for instance. Otherwise you'll kind of have to have an instrument for each sample. A sketch of loading such an instrument follows.
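For illustration, here's a hedged sketch that skips AudioKit and drives the underlying AVAudioUnitSampler directly (the engine wiring and the file path are my own assumptions, not part of the original answer); loadInstrument(at:) accepts EXS24 and SoundFont files, and the sampler then applies the envelope baked into the instrument:
import AVFoundation

let engine = AVAudioEngine()
let sampler = AVAudioUnitSampler()
engine.attach(sampler)
engine.connect(sampler, to: engine.mainMixerNode, format: nil)

do {
    // The path is illustrative; .exs (EXS24) and .sf2 (SoundFont) both load here
    try sampler.loadInstrument(at: URL(fileURLWithPath: "/path/to/MyInstrument.exs"))
    try engine.start()
    sampler.startNote(63, withVelocity: 120, onChannel: 0)
} catch {
    print(error)
}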

iOS ffmpeg how to run a command to trim remote url video?

I was initially using the AVFoundation libraries to trim video, but they have a limitation: they can't do it for remote URLs and only work with local URLs.
So after further research I found the ffmpeg library, which can be included in an Xcode project for iOS.
I have tested the following commands to trim a remote video on command line:
ffmpeg -y -ss 00:00:01.000 -i "http://i.imgur.com/gQghRNd.mp4" -t 00:00:02.000 -async 1 cut.mp4
which will trim the .mp4 from the 1 second mark to the 3 second mark. This works perfectly via the command line on my Mac.
I have successfully been able to compile and include the ffmpeg library in an Xcode project, but I am not sure how to proceed further.
Now I am trying to figure out how to run this command on an iOS app using the ffmpeg libraries. How can I do this?
If you can point me to some helpful direction, I would really appreciate it! If I can get it resolved using your solution, I will award a bounty (in 2 days when it gives me the option).
I have some ideas about this; however, I have very limited experience on iOS and am not sure whether my approach is the best way:
As far as I know, it is generally impossible to run command-line tools on iOS. So you will have to write some code linked against the ffmpeg libs.
Here are all the jobs that need to be done:
Open the input file and initialize some ffmpeg context.
Get the video stream and seek to the timestamp you want. This may be complicated; see the ffmpeg tutorial for some help, or check this to seek precisely and deal with the troublesome key frames.
Decode frames until one matches the end timestamp.
In parallel with the above, encode the frames to a new file as output.
The examples in the ffmpeg source are very good for learning how to do this.
Some maybe-useful code:
av_register_all();
avformat_network_init();

// NULL-initialize so avformat_open_input allocates the context for us
AVFormatContext* fmt_ctx = NULL;
avformat_open_input(&fmt_ctx, "http://i.imgur.com/gQghRNd.mp4", NULL, NULL);
avformat_find_stream_info(fmt_ctx, NULL);

AVCodec* dec;
int video_stream_index = av_find_best_stream(fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, &dec, 0);
AVCodecContext* dec_ctx = avcodec_alloc_context3(NULL);
avcodec_parameters_to_context(dec_ctx, fmt_ctx->streams[video_stream_index]->codecpar);
// If there is audio you need, it should be decoded/encoded too.
avcodec_open2(dec_ctx, dec, NULL);
// decode initiation done

// frame_target (or timestamp_target) is whatever start point you computed
av_seek_frame(fmt_ctx, video_stream_index, frame_target, AVSEEK_FLAG_FRAME);
// or av_seek_frame(fmt_ctx, video_stream_index, timestamp_target, AVSEEK_FLAG_ANY)
// and most of the time you may need AVSEEK_FLAG_BACKWARD, skipping some following frames too.

AVPacket packet;
AVFrame* frame = av_frame_alloc();
int got_frame, ret;
int frame_decoded = 0; // count decoded frames so the loop stops at the end point
while (av_read_frame(fmt_ctx, &packet) >= 0 && frame_decoded < second_needed * fps) {
    if (packet.stream_index == video_stream_index) {
        got_frame = 0;
        ret = avcodec_decode_video2(dec_ctx, frame, &got_frame, &packet);
        // This is the old ffmpeg decode/encode API; deprecated, but still working for now.
        if (got_frame) {
            frame_decoded++;
            // encode the frame here
        }
    }
    av_packet_unref(&packet);
}

iOS WebAudio only works on headphones

I've been running into an issue for a while now where, on some iOS devices, my WebAudio system only seems to work with headphones, whereas on other devices (exact same OS, model, etc.) the audio plays perfectly fine through the speakers or headphones. I've searched for a solution but haven't found anything on this exact issue. The only thing I can think of is that maybe it's an audio channel issue or something.
How can I fix this?
@Alastair is correct: the mute toggle switch does mute WebAudio, but it does not mute HTML5 audio tags. Thanks to his work I managed to find a workaround for the web which enables WebAudio to play even when the mute toggle switch is on. I'd post this as a comment on his reply, but I don't have the reputation.
In order to play WebAudio you must also play at least one WebAudio sound source node and one HTML5 audio tag during a user action. It is fine if these sounds are short bits of silence. I found that this self-contained code works without any extra files needed:
EDIT 11/29/19:
Removed vestigial TypeScript typedefs, thanks @Joep. I also realized the code below is woefully out of date and janky; just consider it an example. Editing this post prompted me to create an open-source solution for this. You can see a demo of it here: https://spencer-evans.com/share/github/unmute/ and check out the repo here: https://github.com/swevans/unmute
/**
 * PLEASE DON'T USE THIS AS IT IS, THIS IS JUST EXAMPLE CODE.
 * If you want a drop-in solution I have a script on GitHub
 * Demo:
 * @see https://spencer-evans.com/share/github/unmute/
 * Github Repo:
 * @see https://github.com/swevans/unmute
 */
// The original snippet assumed a shared AudioContext; created here so the example is self-contained
var myContext = new (window.AudioContext || window.webkitAudioContext)();
var isWebAudioUnlocked = false;
var isHTMLAudioUnlocked = false;

function unlock() {
    if (isWebAudioUnlocked && isHTMLAudioUnlocked) return;

    // Unlock WebAudio - create a short silent buffer and play it
    // This will allow us to play web audio at any time in the app
    var buffer = myContext.createBuffer(1, 1, 22050); // a single silent sample
    var source = myContext.createBufferSource();
    source.buffer = buffer;
    source.connect(myContext.destination);
    source.onended = function() {
        console.log("WebAudio unlocked!");
        isWebAudioUnlocked = true;
        if (isWebAudioUnlocked && isHTMLAudioUnlocked) {
            console.log("WebAudio unlocked and playable w/ mute toggled on!");
            window.removeEventListener("mousedown", unlock);
        }
    };
    source.start();

    // Unlock HTML5 Audio - load a data url of short silence and play it
    // This will allow us to play web audio when the mute toggle is on
    var silenceDataURL = "data:audio/mp3;base64,//MkxAAHiAICWABElBeKPL/RANb2w+yiT1g/gTok//lP/W/l3h8QO/OCdCqCW2Cw//MkxAQHkAIWUAhEmAQXWUOFW2dxPu//9mr60ElY5sseQ+xxesmHKtZr7bsqqX2L//MkxAgFwAYiQAhEAC2hq22d3///9FTV6tA36JdgBJoOGgc+7qvqej5Zu7/7uI9l//MkxBQHAAYi8AhEAO193vt9KGOq+6qcT7hhfN5FTInmwk8RkqKImTM55pRQHQSq//MkxBsGkgoIAABHhTACIJLf99nVI///yuW1uBqWfEu7CgNPWGpUadBmZ////4sL//MkxCMHMAH9iABEmAsKioqKigsLCwtVTEFNRTMuOTkuNVVVVVVVVVVVVVVVVVVV//MkxCkECAUYCAAAAFVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVVV";
    var tag = document.createElement("audio");
    tag.controls = false;
    tag.preload = "auto";
    tag.loop = false;
    tag.src = silenceDataURL;
    tag.onended = function() {
        console.log("HTMLAudio unlocked!");
        isHTMLAudioUnlocked = true;
        if (isWebAudioUnlocked && isHTMLAudioUnlocked) {
            console.log("WebAudio unlocked and playable w/ mute toggled on!");
            window.removeEventListener("mousedown", unlock);
        }
    };
    var p = tag.play();
    if (p) p.then(function() { console.log("play success"); }, function(reason) { console.log("play failed", reason); });
}
window.addEventListener("mousedown", unlock);
This is likely because the iPhone's side switch is on "mute". It's very confusing - HTML5 <audio> tags still play fine when the phone is muted, but WebAudio does not. Why? Who knows. But it's a restriction I currently haven't found a way around.
If the iPhone's mute switch is on, meaning that the iPhone is muted, whatever is played through the Web Audio API will be muted.
Unfortunately there is no way to check through JavaScript whether that physical switch (located on the left edge towards the top of the iPhone) is on or off.
This issue is completely independent from the fact that in iOS Safari audio has to be started by a user action to be unmuted. There are some tricks to work around that fact, including the one suggested here by Spencer, where you use "any action or a specific action" started by the user to "play" a silent audio file, allowing subsequently played audio files to play unmuted.
I had the same issue and finally understood the problem.
Indeed, the WebView doesn't play sound on the internal speakers if the phone is muted.
When I dug deeper I found a workaround :)
original post => https://stackoverflow.com/a/37874619/8064246
do {
    try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayback)
    //print("AVAudioSession Category Playback OK")
    do {
        try AVAudioSession.sharedInstance().setActive(true)
        //print("AVAudioSession is Active")
    } catch let error as NSError {
        //print(error.localizedDescription)
    }
} catch let error as NSError {
    //print(error.localizedDescription)
}

Play default iOS system sound with Monotouch

I want to play the keyboard 'click' sound when pressing buttons in my app.
How do I access this sound clip with Monotouch? I don't want to pass my own sound using AudioToolbox SystemSound.FromFile(). So far all my searches have led to this solution or Objective-C code using 'AudioServicesCreateSystemSoundID' which I'm having trouble translating to C#.
With iOS 7.0 and higher, Stephane Delcroix's answer seems not to work anymore...
But you can easily find the path to all the sounds in this nice project:
https://github.com/TUNER88/iOSSystemSoundsLibrary
Here is the code I used under iOS 7 (take care, it might not work in the simulator!):
private const string NotificationSoundPath = @"/System/Library/Audio/UISounds/New/Fanfare.caf";

public static void TriggerSoundAndViber()
{
    SystemSound notificationSound = SystemSound.FromFile(NotificationSoundPath);
    notificationSound.AddSystemSoundCompletion(SystemSound.Vibrate.PlaySystemSound);
    notificationSound.PlaySystemSound();
}
Also, the using() construct in the answer below caused trouble in my case... it seems it released the sound too early: I can only hear it (and even then not completely, together with the vibration) with a breakpoint on PlaySystemSound().
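One way around that (a sketch on my part; the field name and the dispose-on-completion pattern are illustrative, not from the original answers) is to keep the SystemSound referenced until playback has finished and only then dispose of it:
private static SystemSound notificationSound; // keep the sound alive while it plays

public static void PlayNotification()
{
    notificationSound = SystemSound.FromFile(NotificationSoundPath);
    // Dispose only after playback completes, instead of wrapping in using()
    notificationSound.AddSystemSoundCompletion(() => notificationSound.Dispose());
    notificationSound.PlaySystemSound();
}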
Well, that's not a 1:1 port of the code in Playing system sound without importing your own, but this should do the work:
var path = NSBundle.FromIdentifier("com.apple.UIKit").PathForResource("Tock", "aiff");
using (var systemSound = new SystemSound(NSUrl.FromFilename(path))) {
    systemSound.PlaySystemSound();
}
SystemSound is defined in MonoTouch.AudioToolbox. Make sure to also look at MonoTouch Play System Sound and MonoTouch: Playing sound
