Roblox Studio Sound max distance not working - Lua

I am trying to create a script so that when a part touches another it plays a sound from a parent called Speaker. I have set the max distance to 50 studs, but it's not working.
Here is the script:
local announce = game.Workspace.Announce
local vehicleSeat = game.Workspace["AXP Series(tong's mod)"].VehicleSeat
local speaker = game.Workspace.Speaker
local sound = game.SoundService.Sound

vehicleSeat.Touched:Connect(function(otherPart)
    if otherPart == announce then
        sound.Parent = speaker
        sound:Play()
    end
end)

Based on this DevForum conversation, it sounds like you need to double-check that the Sound is a child of a Part, not a Model. When the sound is parented to a Model, you can hear it everywhere.
This is confirmed in the Sound documentation:
A sound placed in a BasePart or an Attachment will emit its sound from that part's BasePart.Position or the attachment's Attachment.WorldPosition
...
A sound is considered "global" if it is not parented to a BasePart or an Attachment. In this case, the sound will play at the same volume throughout the entire place.
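As a minimal sketch of that fix (assuming Speaker in the question is, or is changed to be, a BasePart), parent the Sound to the part and constrain its rolloff before playing; a Linear RollOffMode makes the 50-stud limit behave as a clear cutoff, which the default Inverse mode may not:

-- Sketch only: Speaker must be a Part (BasePart), not a Model
local speaker = game.Workspace.Speaker
local sound = game.SoundService.Sound

sound.Parent = speaker                      -- emits from speaker.Position
sound.RollOffMaxDistance = 50               -- audible range of 50 studs
sound.RollOffMode = Enum.RollOffMode.Linear -- fades to silence at the max distance
sound:Play()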

Related

How to play alarm tone in xamarin ios when local notification fires at specific time

I have an app in Xamarin iOS in which I want to play an alarm tone when the notification fires and stop it when the user dismisses the notification. How can I achieve this?
Take a look at UNNotificationSound; it is used to customize the sound played upon delivery of a notification.
For local notifications, assign the sound object to the sound property of your UNMutableNotificationContent object.
Sample code
var content = new UNMutableNotificationContent();
content.Title = "Title";
content.Sound = UNNotificationSound.GetCriticalSound("a.aiff");
var trigger = UNTimeIntervalNotificationTrigger.CreateTrigger(5, false);
var request = UNNotificationRequest.FromIdentifier("a", content, trigger);
UNUserNotificationCenter.Current.AddNotificationRequest(request, null);
Notice
Place the sound file in one of the following locations:
The /Library/Sounds directory of the app’s container directory.
The /Library/Sounds directory of one of the app’s shared group container directories.
The main bundle of the current executable.
The sound must be in one of the following audio data formats:
Linear PCM
MA4 (IMA/ADPCM)
µLaw
aLaw
You can package the audio data in an aiff, wav, or caf file.
Sound files must be less than 30 seconds in length.
If the sound file is longer than 30 seconds, the system plays the default sound instead.

AudioKit 5 - player started when in a disconnected state

I'm trying to use AudioKit 5 to dynamically create a player with a mixer and attach it to a main mixer. I'd like the resulting chain to look like:
AudioPlayer -> Mixer(for player) -> Mixer(for output) -> AudioEngine.output
My example repo is here: https://github.com/hoopes/AK5Test1
You can see in the main file here (https://github.com/hoopes/AK5Test1/blob/main/AK5Test1/AK5Test1App.swift) that there are three functions.
The first works, where an mp3 is played on a Mixer that is created when the controller class is created.
The second works, where a newly created AudioPlayer is hooked directly to the outputMixer.
However, the third, where I try to set up the chain above, does not, and crashes with the "player started when in a disconnected state" error. I've copied the function here:
/** Try to add a mixer with a player to the main mixer */
func doesNotWork() {
    let p2 = AudioPlayer()
    let localMixer = Mixer()
    localMixer.addInput(p2)
    outputMixer.addInput(localMixer)
    playMp3(p: p2)
}
Where playMp3 just plays an example mp3 on the AudioPlayer.
I'm not sure how I'm misusing the Mixer. In my actual application, I have a longer chain of mixers/boosters/etc., and I get the same error, which led me to create this simple test app.
If anyone can offer advice, I'd love to hear it. Thanks so much!
In your case, you can swap the order of outputMixer.addInput(localMixer) and localMixer.addInput(p2), and then it works.
Once you have started the engine, work backwards from the output when making your audio chain connections. Your problem was that you attached a player to a mixer that was still disconnected from the output. You need to connect the mixer to the output first, and only then attach the player to that mixer.
The advice I wound up getting from an AudioKit contributor was to do everything possible to create all audio objects that you need up front, and dynamically change their volume to "connect" and "disconnect", so to speak.
Imagine you have a piano app (a contrived example, but hopefully gets the point across) - instead of creating a player when a key is pressed, connecting it, and disconnecting/disposing when the note is complete, create a player for every key on startup, and deal with them dynamically - this prevents any weirdness with "disconnected state", etc.
This has been working pretty flawlessly for me since.

Liquidsoap backup playlist

I am using Liquidsoap for a community radio station. When silence is detected, Liquidsoap starts playing a playlist.
My issue is that when Liquidsoap detects silence, it starts the backup playlist, then goes back to normal once sound comes back. The next time it detects silence, it plays the backup playlist again, but this time it continues from where it left off last time. I just want the playlist to play from the beginning each time. Any ideas, please? My script is below.
#!/home/ubuntu/.opam/system/bin/liquidsoap
set("log.file",false)
set("log.stdout",true)
set("log.level",3)
# myplaylist
myplaylist = playlist("~/backup_playlist/playlist/Emergency-list.m3u",mode="normal")
backup_playlist = audio_to_stereo(myplaylist)
blank = once(single("~/blank_7_s.mp3"))
#Live local talk stream
live_local = input.http("http://test.com:8382/main.mp3")
#Live remote talk stream
live_remote=input.harbor("live_remote",port=8383,password="test",buffer=2.0)
# Talk over stream using microphone mount.
mic=input.harbor("mic",port=8383,password="test",buffer=2.0)
# If something goes wrong, we'll play this
security = single("~/backup_playlist/test.mp3")
radio = fallback(track_sensitive=false, [strip_blank(max_blank=120.,live_remote), strip_blank(max_blank=120.,live_local), backup_playlist , security])
radio = smooth_add(delay=0.65, p=0.15, normal=radio, special=strip_blank(max_blank=2.,mic))
# Stream it out
output.icecast(%mp3(bitrate=64), host="localhost", port=8382, password="test", mount="listen.mp3", genre="Talk", description="test Station Australia", $
If you do not need an actual playlist, an easy way would be to keep an array of songs and, when you fall back, pick a random song from that array; it will then start from the beginning. Note that I do not know exactly how to do this or whether it would work, as I am not familiar with Liquidsoap, and this is more of a workaround than a solution. I will work on finding a better one, but I hope this helps for now!

iOS: How to set specific channels of a USB device for an audio player? AVFoundation

I'm working with AVAudioPlayer and AVAudioSession. I have an iPad and an audio interface (sound card).
This audio interface has 4 outputs (2 stereo pairs) and a Lightning cable, and it is powered by the iDevice; it works excellently.
I've coded a simple play()/stop() AVAudioPlayer that works fine, BUT I need to assign specific channels of the audio interface (1-2 & 3-4). My idea is to send two audio files (A & B) to each output/channel pair (1-2 or 3-4).
I've read the AVAudioPlayer documentation, and it says that channelAssignments is for assigning channels to an audio player.
The problem is: I've created an AVAudioSession that gets the data of the device plugged into the USB port (the sound card), and I got:
let route = AVAudioSession.sharedInstance().currentRoute
for port in route.outputs {
    if port.portType == AVAudioSessionPortUSBAudio {
        let portChannels = port.channels
        let sessionOutputs = route.outputs
        let dataSource = port.dataSources
        dataText.text = String(portChannels) + "\n" + String(sessionOutputs) + "\n" + String(dataSource)
    }
}
Log:
outputs
Which data must I take and use to send the audio files with play()?
Wow - I had no idea that AVAudioPlayer had been developed at all since AVPlayer came out in iOS 4. Yet here we are in 2016, and AVAudioPlayer has channelAssignments while the fancy streaming, video playing with subtitles AVPlayer does not.
But I don't think you will be able to play two files A and B through one AVAudioPlayer as each player can only open one file. That leaves
creating two players (player(A) and player(B)) and setting the channelAssignments of each to one half of the audio device's output channels, dealing with the joys of synchronising the players, or
creating a four-channel file, AB, and playing it through one player, setting channelAssignments to the full four channels you found above, dealing with the joys of slightly non-standard audio files.
Sanity check: is your session.outputNumberOfChannels returning 4?
Personally, when I do this kind of thing I create a 4-channel remote IO audio unit, as I've found the higher-level APIs cause too much heartache once you do anything a little unusual. I also use AVAudioSessionCategoryMultiRoute because I don't have any > 2 channel sound cards, so I have to cobble together the headphone jack plus a USB sound card to get 4 channels, but you shouldn't need this.
Despite not having procedural output (like remoteIO audio units), you may also be able to use AVAudioEngine to do what you want.

How to record and play back audio using media.newRecording

I've created 3 squares. I want the first square to record and save the recording. Second square to stop the recording. Third square to get the recording and play it back.
I'm able to get a response when I use the app on my computer, but the results aren't very clear (I don't have a mic). When I test on my phone I don't get any response.
How can I make this app work?
function startRecording()
    print( "startRec tap" )
    recordPath = system.pathForFile( "recordings.wav", system.DocumentsDirectory )
    recording = media.newRecording( recordPath )
    recording:startTuner()
    recording:startRecording()
    proof1 = display.newText( "proof1", 80, 300, nil, 20 )
end

function stopRecording()
    print( "stopRec tap" )
    proof2 = display.newText( "proof2", 280, 300, nil, 20 )
    recording:stopRecording()
    recording:stopTuner()
end

function playRecording()
    print( "Play back recording" )
    findRecording = audio.loadSound( "recordings.wav", system.DocumentsDirectory )
    audio.play( findRecording )
    proof3 = display.newText( "proof3", 400, 300, nil, 20 )
end
UPDATE: I've loaded a .wav file with actual sound into system.DocumentsDirectory. Results: it works in the Corona simulator (I hear sound), but it doesn't work on my phone (no sound). I believe the problem is that my phone can't access the files in system.DocumentsDirectory.
Following is the code I've edited to test; sample.wav is audio that has actual sound. Does anyone have any insights?
function playRecording()
    print( "Play back recording" )
    findRecording = audio.loadSound( "sample.wav", system.DocumentsDirectory )
    audio.play( findRecording )
    proof3 = display.newText( "proof3", 400, 300, nil, 20 )
end
Thank you for your time
There are several things you should be checking:
RECORD_AUDIO permission in config file
IsRecording is true after start and before stop
size of the sound file created (a quick way to check this from code is sketched after this list)
if a file was created, copy it to your PC and see if it plays anything
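For that file-size check, here is a minimal sketch, assuming the same recordings.wav name and DocumentsDirectory location used in the question (Corona bundles LuaFileSystem, so lfs should be available):

local lfs = require( "lfs" )

-- Build the same path the recording was written to
local recordPath = system.pathForFile( "recordings.wav", system.DocumentsDirectory )
-- lfs.attributes returns nil when the file does not exist
local size = lfs.attributes( recordPath, "size" )
print( "recording size (bytes):", size or "file not found" )

If this prints zero or "file not found", the recording step is the problem; if it prints a reasonable size, focus on playback instead.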
Update:
If you know for a fact that all of these are true, i.e. you have a sound file, then here are more questions to check:
maybe the playback is not finding the file.
Is it working in simulator?
Are you able to hear any sound from other (non-Corona) games or apps on the device?
Are you able to hear sound when you play one of the Corona demo apps that has sound on your device?
Update 2:
If all of the above pass, then "I believe the problem is that my phone can't access the files in system.DocumentsDirectory" is not likely true, unless there are config settings in the demo app that differ from your app, such as some permissions.
Copy the Corona example that works into a new project and start adding code from your app. Test often: you might eventually add some app code that causes it to stop working, and then you'll see what was wrong in your original app. Or you'll end up with everything you need in your modified/extended app and never know what the problem was, but you can then drop the original source you had been working on.
