I am recording an incoming stream to MP3 files in 20-second parts on the sender device. After that I upload these files to Google Drive (using rclone). Then I download them to the receiver device, wait a while (buffering) on the receiver side, and start playing the files with VLC from the command line. The problem: when the player skips to the next MP3 file, there is a silence of about 0.1 seconds. I tried concatenating the MP3 files into a single file, but the same gap occurred.
How can I handle this problem?
Here is the relevant part of the code:
import threading
import time

import vlc

def Sound(self):
    # Update the playlist file continuously in the background.
    t1 = threading.Thread(target=self.read_playlist)
    t1.start()
    vlc_instance = vlc.Instance()
    player = vlc_instance.media_player_new()
    i = 0
    while True:
        path = self.playlist[i].strip()
        media = vlc_instance.media_new(path)
        player.set_media(media)
        duration = self.len_mp3(path)
        player.play()
        time.sleep(duration)  # wait until this clip should have finished
        i += 1
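The loop above relies on a len_mp3 helper that is not shown. One hedged way to implement it, assuming ffprobe (part of FFmpeg) is installed; ffprobe_duration_cmd and parse_duration are hypothetical helper names, not code from the original post:

```python
import subprocess

def ffprobe_duration_cmd(path):
    # Ask ffprobe to print only the container duration, in seconds.
    return ["ffprobe", "-v", "error",
            "-show_entries", "format=duration",
            "-of", "default=noprint_wrappers=1:nokey=1", path]

def parse_duration(stdout):
    # ffprobe prints a bare float such as "20.035917\n".
    return float(stdout.strip())

def len_mp3(path):
    out = subprocess.run(ffprobe_duration_cmd(path),
                         capture_output=True, text=True, check=True).stdout
    return parse_duration(out)
```

Reading the real duration this way keeps the sleep-based scheduling from drifting when segments are not exactly 20 seconds long.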
Mr. Brad, I am very late with my feedback, sorry about that. The problem was solved with your advice; here is what I did:
First I create HLS segments with this command:
ffmpeg -f alsa -i plughw:1,0 -c:a libmp3lame -f segment -strftime 1 -segment_time 1 -segment_format mpegts path/%Y%m%d%H%M%S.ts
This creates ".ts" files, each one second long, named by timestamp.
On the receiver side I download these ".ts" files to my device. While downloading them, I wait before creating the ".m3u8" file. For example, with a buffer time of 3 minutes, I start the download process and wait 3 minutes before creating the ".m3u8" file.
After 3 minutes I create the ".m3u8" file manually and start mpv (via its Python interface) to play it. I then update the ".m3u8" file every second on the receiver side.
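A minimal sketch of how the receiver could build that playlist, assuming one-second segments; write_m3u8 is a hypothetical helper, not code from the original post:

```python
def write_m3u8(segment_names, target_duration=1):
    # Build a live (non-ended) HLS playlist listing the segments
    # downloaded so far; the player keeps polling it for updates.
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        "#EXT-X-TARGETDURATION:%d" % target_duration,
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for name in segment_names:
        lines.append("#EXTINF:%.1f," % target_duration)
        lines.append(name)
    # No #EXT-X-ENDLIST tag: the stream is still live.
    return "\n".join(lines) + "\n"

playlist = write_m3u8(["20240101120000.ts", "20240101120001.ts"])
```

Rewriting this file every second (for example from a timer thread) is what keeps mpv pulling the newly downloaded segments.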
I am trying to play two audio files pre_loop.flac and loop.flac using VLC from the command line, repeating the last item only (in an infinite loop).
Between the files, there should be no noticeable silence.
I have tried the following command:
cvlc pre_loop.flac loop.flac :repeat
This plays both files seamlessly but only once (neither repeating any item nor terminating the process).
I also cannot use two separate commands like
cvlc --play-and-exit pre_loop.flac && cvlc --repeat loop.flac
because the silence between the commands is too long.
Any suggestions?
I am developing a MIDI player by referring to the following web page:
http://twocentstudios.com/2017/02/20/bouncing-midi-to-audio-on-ios/
I don't do any recording; I just want to play an SMF file.
However, when I call setPreload(true), I get "ASSERTION FAILED: Preroll mode set during render" and my app hangs.
I searched for "Preroll mode set during render" but couldn't find any useful information.
Can someone help?
EDIT:
Hi @dspr,
The percussion sounds even if I don't call AudioUnitSetProperty(kAUMIDISynthProperty_EnablePreload, 1). I think this is because the bank for percussion is automatically assigned to channel 10. However, in this state, the piano, guitar, and other instruments do not sound.
AVAudioUnitMIDIInstrument needs kAUMIDISynthProperty_EnablePreload to work out which tone is assigned to which track in the SMF file, right?
Which sequence does AVAudioUnitMIDIInstrument use to preload SMF files?
(1) AudioUnitSetProperty(kAUMIDISynthProperty_EnablePreload, 1) on AVAudioUnitMIDISynth
(2) << How to preload? >>
(3) AudioUnitSetProperty(kAUMIDISynthProperty_EnablePreload, 0) on AVAudioUnitMIDISynth
(4) Start AVAudioSequencer
The MIDI player uses the kAUMIDISynthProperty_EnablePreload property of MIDISynth for that purpose. See Apple's comment about it below, and note the sentence at the end: "It should only be used prior to MIDI playback, and must be set back to 0 before attempting to start playback."
/*!
    @constant   kAUMIDISynthProperty_EnablePreload
    @discussion Scope:      Global
                Value Type: UInt32
                Access:     Write
        Setting this property to 1 puts the MIDISynth in a mode where it will attempt to load
        instruments from the bank or file when it receives a program change message. This
        is used internally by the MusicSequence. It should only be used prior to MIDI playback,
        and must be set back to 0 before attempting to start playback.
*/
EDIT: frankly, I'm a little reserved about your link:
"One strategy I haven’t tried would be to pitch shift the MIDI up one octave, play it back at 2x, record it at 88.2kHz, then downsample to 44.1kHz. AVAudioSession presumably can’t go past 48kHz though."
Clearly, the person who wrote that has a very poor understanding of audio and sampling. Playing a MIDI song transposed one octave up at double tempo is really not equivalent to playing the same song recorded as audio at double speed, whatever sample rate you record at (88.2 kHz or anything else). As a simple example, what happens if the file contains a drum set? A snare drum (note 40) would become a Chinese cymbal (note 52) played at half speed.
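The drum-set problem comes from the General MIDI percussion convention: on channel 10, note numbers select instruments rather than pitches, so transposing up an octave swaps instruments. The mapping below is a small excerpt of the standard GM Level 1 percussion key map:

```python
# Excerpt of the General MIDI percussion key map (channel 10):
GM_PERCUSSION = {
    38: "Acoustic Snare",
    40: "Electric Snare",
    49: "Crash Cymbal 1",
    52: "Chinese Cymbal",
}

def transpose(note, semitones=12):
    # Transposing a melodic part shifts its pitch; on channel 10 it
    # silently swaps one percussion instrument for another.
    return note + semitones

original = GM_PERCUSSION[40]            # Electric Snare
shifted = GM_PERCUSSION[transpose(40)]  # Chinese Cymbal
```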
As far as I understand the post, the described hack exists solely to make a recording. So if you simply want to play your MIDI file back, you can certainly find a simpler and better example.
I use the Google Speech-to-Text API to get subtitles from audio, but when the audio is too long (typically more than 60 minutes), it fails with too many retries. It says: google.api_core.exceptions.GoogleAPICallError: None Too many retries, giving up.
Can someone help?
I have tried many times; when the audio file is shorter than about 60 minutes, it works fine.
import time

from google.cloud import speech
from google.cloud.speech import enums, types

client = speech.SpeechClient()

# Point the request at the audio already uploaded to Cloud Storage.
audio = types.RecognitionAudio(uri=gcs_uri)
config = types.RecognitionConfig(
    encoding=enums.RecognitionConfig.AudioEncoding.OGG_OPUS,
    sample_rate_hertz=48000,
    language_code='en-US',
    enable_word_time_offsets=True,
    enable_automatic_punctuation=True)

# Detect speech in the audio file.
operation = client.long_running_recognize(config, audio)
print('Waiting for operation to complete...')

# Get the feedback from the Google Cloud API.
operation.add_done_callback(callback)
time.sleep(30)

# Check progress every 30 seconds.
percentile(operation, 30)
response = operation.result(timeout=None)
This exception is thrown by the operation.result() call, which has an internal retry counter that overflows. Try polling operation.done() before calling operation.result(); operation.done() is a non-blocking call.
Hopefully this will be fixed in future releases of the google.cloud.speech library.
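A sketch of that polling approach, assuming only that the operation object exposes the done() and result() methods mentioned above; wait_for_operation and the interval values are illustrative, not part of the library:

```python
import time

def wait_for_operation(operation, poll_interval=30, timeout=7200):
    # Poll the non-blocking done() flag instead of blocking inside
    # result(), so result() is only called once the work has finished.
    waited = 0
    while not operation.done():
        if waited >= timeout:
            raise TimeoutError("operation did not finish in %d s" % timeout)
        time.sleep(poll_interval)
        waited += poll_interval
    return operation.result()
```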
I have successfully used the asynchronous file transcription tool with both Python and the gcloud tool. However, when I try either option with audio files of my own in various formats, I get error messages. For example, with a 45-second MP3 I have been trying this command:
python long.py gs://audio_001/pt2.mp3
The error message I get back is below.
"Waiting for operation to complete...
Traceback (most recent call last):
File "long.py", line 99, in <module>
transcribe_gcs(args.path)
File "long.py", line 80, in transcribe_gcs
response = operation.result(timeout=300)
File "C:\Python27\lib\site-packages\google\api_core\future\polling.py", line
120, in result
raise self._exception
google.api_core.exceptions.GoogleAPICallError: None Unexpected state: Long-
running operation had neither response nor error set."
I renamed the asynchronous script and increased the timeout to 300 seconds.
Please advise.
It's likely the audio file type you're using isn't encoded in a supported format [1]. I've tried to process MP3 files myself, but it always returns no results or an index error (in both the standard and long-running transcription methods). I'd recommend converting the file to FLAC or WAV; that works for me.
[1] https://cloud.google.com/speech-to-text/docs/encoding#audio-encodings
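One hedged way to do that conversion from Python, assuming ffmpeg is installed on the PATH; the helper names are mine, not from the answer:

```python
import subprocess

def to_flac_cmd(src, dst, sample_rate=48000):
    # Re-encode the source as FLAC, a lossless format the
    # Speech-to-Text API accepts, at the sample rate declared
    # in the RecognitionConfig.
    return ["ffmpeg", "-i", src, "-ar", str(sample_rate), "-c:a", "flac", dst]

def convert_to_flac(src, dst):
    subprocess.run(to_flac_cmd(src, dst), check=True)
```

The GCS URI in the request would then point at the uploaded .flac file instead of the .mp3.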
I'm using Audio Queue to record audio from the iPhone's mic and stop recording when silence is detected (no audio input for 10 seconds), but I want to discard the silence from the audio file.
In the AudioInputCallback function I use the following code to detect silence:
AudioQueueLevelMeterState meters[1];
UInt32 dlen = sizeof(meters);
OSStatus status = AudioQueueGetProperty(inAQ, kAudioQueueProperty_CurrentLevelMeterDB, meters, &dlen);
if (meters[0].mPeakPower < _threshold) {
    // NSLog(@"Silence detected");
}
But how do I remove these packets? Or is there a better option?
Instead of removing the packets from the AudioQueue, you can delay the write by writing to a buffer first. The buffer can easily be made available to the callback through inUserData.
When you finish recording, if the last 10 seconds are not silent, write the buffer out to the file; otherwise, just free the buffer.
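The buffering idea, stripped of the Core Audio plumbing, can be sketched like this; the chunk representation and the trim_trailing_silence helper are illustrative, not part of the Audio Queue API:

```python
def trim_trailing_silence(chunks, threshold):
    # Each chunk is a (peak_power_db, data) pair captured by the
    # input callback. Walk back from the end and drop every trailing
    # chunk whose peak power is below the silence threshold.
    end = len(chunks)
    while end > 0 and chunks[end - 1][0] < threshold:
        end -= 1
    return chunks[:end]

recorded = [(-12.0, b"audio1"), (-10.0, b"audio2"),
            (-60.0, b"quiet1"), (-58.0, b"quiet2")]
kept = trim_trailing_silence(recorded, threshold=-40.0)  # keeps the first two
```

Only the kept chunks would then be written to the output file.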
After the file is recorded and closed, simply open it and truncate the sample data you are not interested in (note: you can use the AudioFile/ExtAudioFile APIs to properly update any dependent chunk/header sizes).