Liquidsoap backup playlist

I am using Liquidsoap for a community radio station. When silence is detected, Liquidsoap starts playing a playlist.
My issue is that when Liquidsoap detects silence it starts the backup playlist, then goes back to normal once sound comes back; the next time it detects silence it plays the backup playlist again, but this time it continues from where it left off last time. I just want the playlist to play from the beginning each time. Any ideas, please? My script is below.
#!/home/ubuntu/.opam/system/bin/liquidsoap
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
# myplaylist
myplaylist = playlist("~/backup_playlist/playlist/Emergency-list.m3u",mode="normal")
backup_playlist = audio_to_stereo(myplaylist)
blank = once(single("~/blank_7_s.mp3"))
#Live local talk stream
live_local = input.http("http://test.com:8382/main.mp3")
#Live remote talk stream
live_remote = input.harbor("live_remote", port=8383, password="test", buffer=2.0)
# Talk over stream using microphone mount.
mic = input.harbor("mic", port=8383, password="test", buffer=2.0)
# If something goes wrong, we'll play this
security = single("~/backup_playlist/test.mp3")
radio = fallback(track_sensitive=false, [strip_blank(max_blank=120., live_remote), strip_blank(max_blank=120., live_local), backup_playlist, security])
radio = smooth_add(delay=0.65, p=0.15, normal=radio, special=strip_blank(max_blank=2., mic))
# Stream it out
output.icecast(%mp3(bitrate=64), host="localhost", port=8382, password="test", mount="listen.mp3", genre="Talk", description="test Station Australia", $

If you do not strictly need a playlist, an easy workaround would be to keep an array of songs and, when you fall back, pick a random song from that array; a single song will always start from the beginning. Note that I am not familiar with Liquidsoap, so I do not know exactly how to do this or whether it would work, and this is more of a workaround than a solution. I will keep looking for a better solution, but I hope this helps for now!
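For what it's worth, a more direct fix may be possible in Liquidsoap itself. The following is an untested sketch, assuming a 1.x-era Liquidsoap where playlist() registers server commands under its id (here id="backup", giving a "backup.reload" command) and fallback accepts a transitions list, one transition function per source. The idea is to reload the playlist in the transition into the backup source, so it restarts from the top; the id and the transition wiring here are illustrative, not taken from the original script:

```liquidsoap
# Give the playlist a fixed id so its server command names are predictable.
myplaylist = playlist(id="backup", mode="normal",
                      "~/backup_playlist/playlist/Emergency-list.m3u")
backup_playlist = audio_to_stereo(myplaylist)

# When we fall back to the backup playlist, reload it first so that
# playback restarts from the first entry.
def restart_backup(old, new) =
  ignore(server.execute("backup.reload"))
  new
end

# One transition per source, in the same order as the sources list;
# the pass-through transitions just return the new source unchanged.
radio = fallback(track_sensitive=false,
                 transitions=[fun (a, b) -> b,
                              fun (a, b) -> b,
                              restart_backup,
                              fun (a, b) -> b],
                 [strip_blank(max_blank=120., live_remote),
                  strip_blank(max_blank=120., live_local),
                  backup_playlist,
                  security])
```

Since this is untested, it may be worth first issuing backup.reload by hand over the telnet/server interface to confirm that it actually resets the playlist position before wiring it into a transition.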

Related

Roblox Studio Sound max distance not working

I am trying to create a script so that when one part touches another, it plays a sound from a parent called Speaker. I have set the max distance to 50 studs, but it's not working.
Here is the script:
local announce = game.Workspace.Announce
local vehicleSeat = game.Workspace["AXP Series(tong's mod)"].VehicleSeat
local speaker = game.Workspace.Speaker
local sound = game.SoundService.Sound

vehicleSeat.Touched:Connect(function(otherPart)
    if otherPart == announce then
        sound.Parent = speaker
        sound:Play()
    end
end)
Based on this DevForum conversation, it sounds like you need to double check that the Sound is a child of a Part, not a Model. When the sound is Parented to a Model, you can hear the sound everywhere.
This is confirmed in the Sound documentation:
A sound placed in a BasePart or an Attachment will emit its sound from that part's BasePart.Position or the attachment's Attachment.WorldPosition
...
A sound is considered "global" if it is not parented to a BasePart or an Attachment. In this case, the sound will play at the same volume throughout the entire place.

AudioKit 5 - player started when in a disconnected state

I'm trying to use AudioKit 5 to dynamically create a player with a mixer and attach it to a main mixer. I'd like the resulting chain to look like:
AudioPlayer -> Mixer(for player) -> Mixer(for output) -> AudioEngine.output
My example repo is here: https://github.com/hoopes/AK5Test1
You can see in the main file here (https://github.com/hoopes/AK5Test1/blob/main/AK5Test1/AK5Test1App.swift) that there are three functions.
The first works, where an mp3 is played on a Mixer that is created when the controller class is created.
The second works, where a newly created AudioPlayer is hooked directly to the outputMixer.
However, the third, where I try to set up the chain above, does not, and crashes with the "player started when in a disconnected state" error. I've copied the function here:
/** Try to add a mixer with a player to the main mixer */
func doesNotWork() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    localMixer.addInput(p2)
    outputMixer.addInput(localMixer)
    playMp3(p: p2)
}
Where playMp3 just plays an example mp3 on the AudioPlayer.
I'm not sure how I'm misusing the Mixer. In my actual application, I have a longer chain of mixers/boosters/etc, and getting the same error, which led me to create the simple test app.
If anyone can offer advice, I'd love to hear it. Thanks so much!
In your case, you can swap outputMixer.addInput(localMixer) and localMixer.addInput(p2), and then it works.
Once you have started the engine, work backwards from the output when making your audio chain connections. Your problem was that you attached a player to a mixer that was still disconnected from the output: you need to first attach the mixer to the output, and only then attach the player to that mixer.
The advice I wound up getting from an AudioKit contributor was to do everything possible to create all audio objects that you need up front, and dynamically change their volume to "connect" and "disconnect", so to speak.
Imagine you have a piano app (a contrived example, but hopefully gets the point across) - instead of creating a player when a key is pressed, connecting it, and disconnecting/disposing when the note is complete, create a player for every key on startup, and deal with them dynamically - this prevents any weirdness with "disconnected state", etc.
This has been working pretty flawlessly for me since.

Google cloud speech very inaccurate and misses words on clean audio

I am using Google cloud speech through Python and finding many transcriptions are inaccurate and missing several words. This is a simple script I'm using to return a transcript of an audio file, in this case 'out307.wav':
import io

from google.cloud import speech

client = speech.SpeechClient()

with io.open('out307.wav', 'rb') as audio_file:
    content = audio_file.read()

audio = speech.types.RecognitionAudio(content=content)
config = speech.types.RecognitionConfig(
    enable_word_time_offsets=True,
    language_code='en-US',
    audio_channel_count=1)
response = client.recognize(config, audio)

for result in response.results:
    alternative = result.alternatives[0]
    print(u'Transcript: {}'.format(alternative.transcript))
This returns the following transcript:
to do this the tensions and suspicions except
This is very far from what the audio actually says (I've uploaded it at https://vocaroo.com/i/s1zdZ0SOH1Ki). The audio is a .wav and very clear, with no background noise. This is worse than average: in some cases it will get the transcription fully correct on a 10-second audio file, or it may miss just a couple of words. Is there anything I can do to improve results?
This is weird: I tried your audio file with your code and got the same result, but if I change the language_code to "en-UK" I am able to get the full response.
I'm working for Google Cloud, and I have created a public issue for you here; you can track the updates there.

Read HLS Playlist information to dynamically change the preferredBitRate of an Item

I'm working on a video app. We are changing from regular mp4 files to HLS; one of the many reasons for the change is that we have much more control over the bandwidth usage of videos (we load lots of other stuff in our player, so we need to optimize the experience as much as we can).
So, AVFoundation introduced in iOS 10 the ability to control the bandwidth using:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:self.urlAsset];
playerItem.preferredForwardBufferDuration = 30.0;
playerItem.preferredPeakBitRate = 200000.0; // Remember this line
There's also a configuration introduced in iOS 11 to set the maximum resolution of the item with preferredMaximumResolution, so we're using it, but we still need a solution for iOS 10 devices.
Now we have control over preferredPeakBitRate, which is nice, but we have a problem: not all the HLS sources are generated by us. Say we want to set a maximum resolution of 480p when you're not connected to a wifi network; today I have no way to achieve that, because I won't always know how much bandwidth the 480p rendition of the selected HLS playlist needs.
One thing I was thinking about is reading the information inside the m3u8 file, so that I at least know which quality sources my player can show and how much bandwidth each one needs.
One way to do this would be to download the m3u8 playlist as plain text and use a regex to read and process the data. I'm trying to avoid that; I think this should be far less difficult.
I cannot read this information from the tracks, because a) I can't find the information there, and b) the tracks are replaced dynamically when changing quality (yes, one track for every quality level).
So I don't know how I can get this information. I've searched Google and Stack Overflow and I can't find it. Can anyone help me?
Here's an example of what I want to do. I have this example playlist:
#EXTM3U
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=314000,RESOLUTION=228x128,CODECS="mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_192k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=478000,RESOLUTION=400x224,CODECS="avc1.42001e,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_400k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=691000,RESOLUTION=480x270,CODECS="avc1.42001e,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_600k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1120000,RESOLUTION=640x360,CODECS="avc1.4d001f,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_1000k.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1661000,RESOLUTION=960x540,CODECS="avc1.4d001f,mp4a.40.2"
test-hls-1-16a709300abeb08713a5cada91ab864e_hls_duplex_1500k.m3u8
And I just want to have that information available in an array inside my code, something like this:
NSArray<ZZMetadata *> *metadataArray = self.urlAsset.bandwidthMetadata;
NSLog(@"Metadata info: %@", metadataArray);
And print something like this:
<__NSArrayM 0x123456789> (
    <ZZMetadata 0x234567890> {
        trackId: 1
        neededBandwidth: 314000
        resolution: 228x128
        codecs: ...
        ...
    }
    <ZZMetadata 0x345678901> {
        trackId: 2
        neededBandwidth: 478000
        resolution: 400x224
    }
    ...
)
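For what it's worth, the regex route the question hopes to avoid is actually quite small in practice, since the #EXT-X-STREAM-INF attribute list is just KEY=value pairs (with quoted values allowed to contain commas) followed by the variant URI on the next line. Here is a minimal sketch, shown in Python for brevity (the same logic ports directly to Objective-C with NSRegularExpression); parse_master_playlist is a hypothetical helper name, and only the attributes appearing in the example playlist above are handled:

```python
import re

def parse_master_playlist(text):
    """Collect #EXT-X-STREAM-INF attributes plus the URI that follows each one."""
    variants = []
    lines = [line.strip() for line in text.splitlines() if line.strip()]
    for i, line in enumerate(lines):
        if not line.startswith('#EXT-X-STREAM-INF:'):
            continue
        attr_text = line.split(':', 1)[1]
        # Values are either quoted (may contain commas) or run to the next comma.
        attrs = dict(re.findall(r'([A-Z0-9-]+)=("[^"]*"|[^,]*)', attr_text))
        attrs = {k: v.strip('"') for k, v in attrs.items()}
        variants.append({
            'bandwidth': int(attrs.get('BANDWIDTH', 0)),
            'resolution': attrs.get('RESOLUTION'),
            'codecs': attrs.get('CODECS'),
            'uri': lines[i + 1] if i + 1 < len(lines) else None,
        })
    return variants
```

Mapping the resulting dictionaries onto a ZZMetadata-style class is then straightforward.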

How to get all of the HLS variants in a master manifest from an AVAsset or AVPlayerItem?

Given an HLS manifest with multiple variants/renditions:
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=1612430,CODECS="avc1.4d0020,mp4a.40.5",RESOLUTION=640x360
a.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=3541136,CODECS="avc1.4d0020,mp4a.40.5",RESOLUTION=960x540
b.m3u8
#EXT-X-STREAM-INF:PROGRAM-ID=1,BANDWIDTH=5086455,CODECS="avc1.640029,mp4a.40.5",RESOLUTION=1280x720
c.m3u8
Is it possible to get an array of the three variants (with the attributes such as bandwidth and resolution) from either the AVAsset or AVPlayerItem?
I am able to get the currently playing AVPlayerItemTrack by using KVO on the AVPlayerItem, but again, it's only the track that's actively being played not the full list of variants.
I'm interested in knowing whether the asset is being played at its highest possible quality, so that I can decide whether the user has enough bandwidth to start a simultaneous secondary video stream.
To know which variant you are currently playing, you can observe AVPlayerItemNewAccessLogEntryNotification; by looking at the AVPlayerItemAccessLogEvent entries in the access log, you can tell the current bitrate and any change in bitrate.
AVPlayerItemAccessLog *accessLog = [((AVPlayerItem *)notif.object) accessLog];
AVPlayerItemAccessLogEvent *lastEvent = accessLog.events.lastObject;
if (lastEvent.indicatedBitrate != self.previousBitrate)
{
    self.bitrate = lastEvent.indicatedBitrate;
}
As far as knowing the entire list of available bitrates goes, you can simply make a GET request for the master m3u8 playlist and parse it. You will only need to do it once, so there's not much overhead.
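Combining the two ideas above (parse the master playlist once to learn the variant BANDWIDTH values, and watch the access log for the current indicated bitrate), the final decision is a small comparison. A hedged sketch of just that decision logic, in Python for illustration; the function name and the 5% tolerance are assumptions, not anything from AVFoundation:

```python
def is_playing_highest_variant(indicated_bitrate, variant_bandwidths,
                               tolerance=0.05):
    """Return True when the access log's indicated bitrate matches the
    top variant's declared BANDWIDTH within a small relative tolerance
    (indicated bitrates rarely equal the declared value exactly)."""
    if not variant_bandwidths:
        return False
    top = max(variant_bandwidths)
    return abs(indicated_bitrate - top) <= tolerance * top
```

If this returns True, the player is already consuming the top rendition, which suggests there may not be spare bandwidth for a second stream.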
New in iOS 15, there’s AVAssetVariant
