I'm trying to set up GaussianBlurBackgroundProcessor (I used this post as a starting point, but instead of Node.js I'm loading the *.min.js builds in a PHP page). It works on my local video, but when I connect to a room, remote participants see my video "clean" (without the blur).
Has anyone had the same problem?
I'm using the minified versions of:
twilio-video.js 2.22.1
twilio-video-processors.js 1.0.2
This is the code:
[...]
const TWVideo = Twilio.Video;
const bg = new Twilio.VideoProcessors.GaussianBlurBackgroundProcessor({
assetsPath: '',
maskBlurRadius: 5,
blurFilterRadius: 25,
});
bg.loadModel();
const localVideo = TWVideo.createLocalVideoTrack().then(track => {
let video = document.getElementById('local-media').firstElementChild;
setProcessor(bg, track);
video.appendChild(track.attach());
$('#local-media').find('video').css('width', '200px');
});
TWVideo.connect(room_token, {
name: roomName
}).then(room => {
window.room = activeRoom = room;
log('Connected to Room '+ roomName);
room.participants.forEach(participantConnected);
room.on('participantConnected', participantConnected);
room.on('participantDisconnected', participantDisconnected);
room.once('disconnected', error => room.participants.forEach(participantDisconnected));
room.on('reconnecting', error => {
assert.equal(room.state, 'reconnecting');
if (error.code === 53001) {
console.log('Reconnecting your signaling connection!', error.message);
}
else if (error.code === 53405) {
console.log('Reconnecting your media connection!', error.message);
}
});
room.on('reconnected', () => {
console.log('Reconnected your signaling and media connections!');
assert.equal(room.state, 'connected');
});
room.on('participantReconnected', remoteParticipant => {
console.log("${remoteParticipant.identity} has reconnected the signaling connection to the Room!");
assert.equals(remoteParticipant.state, 'connected');
})
});
[...]
Thanks!
The issue is that you are creating a local video track and applying the blur to it, but you're not using that track when you connect to the room. I would create the local video track and audio track first, then pass them in when you connect, like this:
Video.connect(roomToken, {
name: roomName,
tracks: [localVideo, localAudio]
}).then(…);
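For completeness, here's a minimal sketch of that flow, using the same Twilio.Video / Twilio.VideoProcessors globals as in the question (the assetsPath value and element IDs are placeholders from the question, not a prescribed setup):

const { createLocalVideoTrack, createLocalAudioTrack, connect } = Twilio.Video;

const blur = new Twilio.VideoProcessors.GaussianBlurBackgroundProcessor({
    assetsPath: '',        // wherever the processor assets are hosted
    maskBlurRadius: 5,
    blurFilterRadius: 25,
});

// Load the model and create both local tracks before connecting,
// so the processed video track is the one that gets published.
Promise.all([blur.loadModel(), createLocalVideoTrack(), createLocalAudioTrack()])
    .then(([, videoTrack, audioTrack]) => {
        videoTrack.addProcessor(blur);   // blur is applied to the outgoing track
        document.getElementById('local-media').appendChild(videoTrack.attach());
        return connect(room_token, {
            name: roomName,
            tracks: [videoTrack, audioTrack],   // publish the processed track
        });
    })
    .then(room => log('Connected to Room ' + room.name));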
Related
I'm making a playlist app in React, where each Track component in the playlist has an <audio> element:
const AudioPlayer = ({ track }: { track: Track }): JSX.Element => {
const { src, title } = track;
return (
<audio id={"audio_" + title}>
<source src={src} />
Your browser does not support the <code>audio</code> element.
</audio>
);
};
In my Redux store, when a user clicks a Track, I find the audio element associated with the track and call audioElement.play():
const playTrack = (track: Track) => {
//get all the audio elements for all the playlist tracks
allAudioElems.current = tracks
.map((t) => "audio_" + t.title)
.map((id) => document.getElementById(id) as HTMLAudioElement)
.filter((e) => e !== null);
if (allAudioElems.current) {
allAudioElems.current.forEach((element) => {
//if the audio elements id matches the active track title, then play
if (element.id === "audio_" + track.title) {
element.play();
setCurrentAudio(element);
} else {
element.pause();
}
});
}
setCurrentTrack(track.title);
setIsPlayingAction(true);
setIsPlaying(true);
};
This all works fine on desktop in Chrome/Safari/Firefox, but fails on iOS Safari. Enabling controls or autoPlay doesn't seem to help either.
I know that .play() returns a promise, but the error message is very vague when I catch the .play() error:
element.play().then((e)=>{
console.log("played audio!");
}).catch((error)=>{
console.log(error)
});
This gives the error:
Unhandled Promise Rejection: AbortError: The operation was aborted
So what's one to do to get audio working on iOS?
I have implemented tracking functionality in our app using BackgroundGeolocation, but it is not working in the background. The scenario is described below:
Case 1:
BackgroundGeolocation fires only once, and only while the app is in the foreground, never in the background. These are the links we used for BackgroundGeolocation:
https://www.joshmorony.com/adding-background-geolocation-to-an-ionic-2-application/
https://www.freakyjolly.com/ionic-3-get-background-device-location-using-cordova-and-ionic-native-geolocation-plugins/
I have tried both of the links above but got the same result. I'm really stuck on this location issue in Ionic 3, and the Ionic community hasn't been able to help with my queries.
loadLocation(){
// foreground Tracking
var self = this;
//get the user's current location and update it whenever the user moves
self.watch = this.geolocation.watchPosition(options).filter((p: any) => p.code === undefined).subscribe((position: Geoposition) => {
//self.subscription = self.watch.subscribe((position) => {
var latLng = new google.maps.LatLng(position.coords.latitude, position.coords.longitude);
console.log('lat fire1.....', +position.coords.latitude);
console.log('long fire.....', +position.coords.longitude);
//center the map on the new position and set the zoom
self.map.setCenter(latLng);
self.map.setZoom(16);
//updateDriverLocation: store the driver's updated location
self.updateDriverLocation(position.coords.latitude, position.coords.longitude);
//moveMarker: move the map marker to the new position
self.moveMarker(position);
}, (err) => {
console.log(err);
});
// Background Tracking
let config = {
desiredAccuracy: 0,
stationaryRadius: 20,
distanceFilter: 10,
debug: true,
interval: 2000
};
this.backgroundGeolocation.configure(config).subscribe((location) => {
alert('dddddd');
console.log('BackgroundGeolocation: ' + location.latitude + ',' + location.longitude);
//updateDriverLocation: store the driver's updated location
self.updateDriverLocation(location.latitude,location.longitude);
self.moveMarker(location);
}, (err) => {
console.log(err);
});
// Turn ON the background-geolocation system.
this.backgroundGeolocation.start();
}
I have tried everything but nothing has worked. Please help.
We have built a web application whose core purpose is to arrange meetings/sessions on the web. User A (the meeting coordinator) arranges a meeting/session, and all the other participants (B, C, D, etc.) join it. I have used Twilio group video calls to achieve this.
I have the below use case.
We want to pitch-shift User A's (the meeting coordinator's) voice, so that all the other participants receive the pitch-shifted voice in the group video. We have analyzed AWS Polly with Twilio, but it doesn't match our use case.
So please advise: is there any service in Twilio to achieve this scenario,
(or)
would it be possible to intercept the Twilio group call and pass the pitch-shifted voice to the other participants?
Sample Code Used
initAudio();
function initAudio() {
analyser1 = audioContext.createAnalyser();
analyser1.fftSize = 1024;
analyser2 = audioContext.createAnalyser();
analyser2.fftSize = 1024;
if (!navigator.getUserMedia)
navigator.getUserMedia = navigator.webkitGetUserMedia || navigator.mozGetUserMedia;
if (!navigator.getUserMedia)
return(alert("Error: getUserMedia not supported!"));
navigator.getUserMedia({ audio: true }, function(stream){
gotStream(stream);
}, function(){ console.log('Error getting Microphone stream'); });
if ((typeof MediaStreamTrack === 'undefined')||(!MediaStreamTrack.getSources)){
console.log("This browser does not support MediaStreamTrack, so doesn't support selecting sources.\n\nTry Chrome Canary.");
} else {
MediaStreamTrack.getSources(gotSources);
}
}
function gotStream (stream) {
audioInput = audioContext.createMediaStreamSource(stream);
outputMix = audioContext.createGain();
dryGain = audioContext.createGain();
wetGain = audioContext.createGain();
effectInput = audioContext.createGain();
audioInput.connect(dryGain);
audioInput.connect(effectInput);
dryGain.connect(outputMix);
wetGain.connect(outputMix);
audioOutput = audioContext.createMediaStreamDestination();
outputMix.connect(audioOutput);
outputMix.connect(analyser2);
crossfade(1.0);
changeEffect();
}
function crossfade (value) {
var gain1 = Math.cos(value * 0.5 * Math.PI);
var gain2 = Math.cos((1.0 - value) * 0.5 * Math.PI);
dryGain.gain.value = gain1;
wetGain.gain.value = gain2;
}
function createPitchShifter () {
effect = new Jungle( audioContext );
effect.output.connect( wetGain );
effect.setPitchOffset(1);
return effect.input;
}
function changeEffect () {
if (currentEffectNode)
currentEffectNode.disconnect();
if (effectInput)
effectInput.disconnect();
var effect = 'pitch';
switch (effect) {
case 'pitch':
currentEffectNode = createPitchShifter();
break;
}
audioInput.connect(currentEffectNode);
}
I am facing the following error while adding the LocalAudioTrack to a room:
var mediaStream = new Twilio.Video.LocalAudioTrack(audioOutput.stream);
room.localParticipant.publishTrack(mediaStream, {
name: 'adminaudio'
});
ERROR:
Uncaught (in promise) TypeError: Failed to execute 'addTrack' on 'MediaStream': parameter 1 is not of type 'MediaStreamTrack'.
Twilio developer evangelist here.
There is nothing within Twilio itself that pitch shifts voices.
If you are building this in a browser, then you could use the Web Audio API to take the input from the user's microphone and pitch shift it, then provide the resultant audio stream to the Video API instead of the original mic stream.
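As a rough sketch of that idea (not official Twilio code, and assuming the room and audioOutput variables from the question's snippet): route the Web Audio graph into a MediaStreamAudioDestinationNode and publish one of its audio tracks. Note that a Twilio LocalAudioTrack (or publishTrack) expects a MediaStreamTrack, not a MediaStream, which is exactly what the error above is complaining about.

// audioOutput is the MediaStreamAudioDestinationNode created in gotStream(),
// i.e. the node that carries the processed (pitch-shifted) mix.
var processedTrack = audioOutput.stream.getAudioTracks()[0];   // a MediaStreamTrack

// Publish the processed track instead of the raw microphone track.
room.localParticipant.publishTrack(processedTrack, { name: 'adminaudio' });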
The comments in the above answer are SO helpful! I've been researching this for a couple of weeks, posted to twilio-video.js to no avail, and finally just the right phrasing pulled this up on Stack Overflow!
To summarize, and to add what I've found to work (since it's hard to follow all 27 questions/comments/code excerpts):
when connecting to Twilio:
const room = await Video.connect(twilioToken, {
name: roomName,
tracks: localTracks,
audio: false, // if you don't want to hear the normal voice at all, you can hide this and add the shifted track upon participant connections
video: true,
logLevel: "debug",
});
upon a new (remote) participant connection:
const stream = new MediaStream([audioTrack.mediaStreamTrack]);
const audioContext = new AudioContext();
const source = audioContext.createMediaStreamSource(stream);
// destination node that will carry the processed (pitch-shifted) audio
const audioOutput = audioContext.createMediaStreamDestination();
console.log("using PitchShift.js");
var pitchShift = PitchShift(audioContext);
if (isFinite(pitchVal)) {
pitchShift.transpose = pitchVal;
console.log("gain is " + pitchVal);
}
pitchShift.wet.value = 1;
pitchShift.dry.value = 0.5;
try {
audioOutput.stream.getAudioTracks()[0]?.applyConstraints({
echoCancellation: true,
noiseSuppression: true,
});
} catch (e) {
console.log("tried to constrain audio track " + e);
}
var biquadFilter = audioContext.createBiquadFilter();
// Create a compressor node
var compressor = audioContext.createDynamicsCompressor();
compressor.threshold.setValueAtTime(-50, audioContext.currentTime);
compressor.knee.setValueAtTime(40, audioContext.currentTime);
compressor.ratio.setValueAtTime(12, audioContext.currentTime);
compressor.attack.setValueAtTime(0, audioContext.currentTime);
compressor.release.setValueAtTime(0.25, audioContext.currentTime);
//biquadFilter.type = "lowpass";
if (isFinite(freqVal)) {
biquadFilter.frequency.value = freqVal;
console.log("gain is " + freqVal);
}
if (isFinite(gainVal)) {
biquadFilter.gain.value = gainVal;
console.log("gain is " + gainVal);
}
source.connect(compressor);
compressor.connect(biquadFilter);
biquadFilter.connect(pitchShift);
pitchShift.connect(audioOutput);
const localAudioWarpedTracks = new Video.LocalAudioTrack(audioOutput.stream.getAudioTracks()[0]);
const audioElement2 = document.createElement("audio");
document.getElementById("audio_div").appendChild(audioElement2);
localAudioWarpedTracks.attach(audioElement2);
I want to play multiple audio files simultaneously on iOS.
On the click of a button I create multiple instances of an Audio file and put them into an array.
let audio = new Audio('path.wav')
audio.play().then(() => {
audio.pause();
possibleAudiosToPlay.push(audio);
});
After a while I play them all:
possibleAudiosToPlay.forEach(el => {
el.currentTime = 0;
el.play();
});
While this plays all the audio files, when a new one begins it stops the previous one (on iOS).
Apple's developer guide says this isn't possible at all with HTML5 audio:
Playing multiple simultaneous audio streams is also not supported.
But can this be achieved with the Web Audio API?
There isn't anything written about it in Apple's developer guide.
Yes, you can do it with the Web Audio API. You have to create an AudioBufferSourceNode for each of your audio sources, since each source node can only be played once (you can't stop it and play it again).
const AudioContext = window.AudioContext || window.webkitAudioContext;
const ctx = new AudioContext();
const audioPaths = [
"path/to/audio_file1.wav",
"path/to/audio_file2.wav",
"path/to/audio_file3.wav"
];
let promises = [];
// utility function to load an audio file and resolve it as a decoded audio buffer
function getBuffer(url, audioCtx) {
return new Promise((resolve, reject) => {
if (!url) {
reject("Missing url!");
return;
}
if (!audioCtx) {
reject("Missing audio context!");
return;
}
let xhr = new XMLHttpRequest();
xhr.open("GET", url);
xhr.responseType = "arraybuffer";
xhr.onload = function() {
let arrayBuffer = xhr.response;
audioCtx.decodeAudioData(arrayBuffer, decodedBuffer => {
resolve(decodedBuffer);
});
};
xhr.onerror = function() {
reject("An error occurred.");
};
xhr.send();
});
}
audioPaths.forEach(p => {
promises.push(getBuffer(p, ctx));
});
// Once all your sounds are loaded, create an AudioBufferSource for each one and start sound
Promise.all(promises).then(buffers => {
buffers.forEach(b => {
let source = ctx.createBufferSource();
source.buffer = b;
source.connect(ctx.destination);
source.start();
})
});
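If a sound needs to be replayed later, the same point applies: keep the decoded AudioBuffer around and create a fresh AudioBufferSourceNode each time. A small sketch, reusing the ctx and promises variables from above (the playBuffer helper is just illustrative, not part of any API):

// The decoded AudioBuffer can be reused; only the source node is one-shot.
function playBuffer(buffer) {
    let source = ctx.createBufferSource();
    source.buffer = buffer;
    source.connect(ctx.destination);
    source.start();
    return source;   // keep a reference if you need to call stop() later
}

// e.g. retrigger the first sound whenever it's needed
Promise.all(promises).then(buffers => {
    playBuffer(buffers[0]);
});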
I do not know if this is possible but I might as well give it a chance and ask.
I'm building an Electron app and I'd like to know if it is possible to have no more than a single instance running at a time.
I have found this gist but I'm not sure how to use it. Can someone shed some light or share a better idea?
var preventMultipleInstances = function(window) {
var socket = (process.platform === 'win32') ? '\\\\.\\pipe\\myapp-sock' : path.join(os.tmpdir(), 'myapp.sock');
net.connect({path: socket}, function () {
var errorMessage = 'Another instance of ' + pjson.productName + ' is already running. Only one instance of the app can be open at a time.'
dialog.showMessageBox(window, {'type': 'error', message: errorMessage, buttons: ['OK']}, function() {
window.destroy()
})
}).on('error', function (err) {
if (process.platform !== 'win32') {
// try to unlink older socket if it exists, if it doesn't,
// ignore ENOENT errors
try {
fs.unlinkSync(socket);
} catch (e) {
if (e.code !== 'ENOENT') {
throw e;
}
}
}
net.createServer(function (connection) {}).listen(socket);
});
}
There is a new API now: requestSingleInstanceLock
const { app } = require('electron')
let myWindow = null
const gotTheLock = app.requestSingleInstanceLock()
if (!gotTheLock) {
app.quit()
} else {
app.on('second-instance', (event, commandLine, workingDirectory) => {
// Someone tried to run a second instance, we should focus our window.
if (myWindow) {
if (myWindow.isMinimized()) myWindow.restore()
myWindow.focus()
}
})
// Create myWindow, load the rest of the app, etc...
app.on('ready', () => {
})
}
Use the makeSingleInstance function in the app module; there's even an example in the docs.
In case you need the code:
let mainWindow = null;
//to make singleton instance
const isSecondInstance = app.makeSingleInstance((commandLine, workingDirectory) => {
// Someone tried to run a second instance, we should focus our window.
if (mainWindow) {
if (mainWindow.isMinimized()) mainWindow.restore()
mainWindow.focus()
}
})
if (isSecondInstance) {
app.quit()
}