I have an application that processes data from Bluetooth and sends it to a web service. Recently there was a request to add sounds to the application. Now, when the application processes batches of data while the player is playing continuously, after a few seconds I get an "Application is not responding" exception and the process is terminated. In the logs I can see many ForcedStackTrace exceptions logged after this exception.
The sounds are played in a separate thread. If the app doesn't play sounds, or plays only short sounds, everything works fine. Is there any way to avoid this exception? Why is it happening?
InputStream mediaStream = null;
try {
    // Load the audio resource from the application bundle
    mediaStream = getClass().getResourceAsStream(relativePath);
    getLogger().log("setting player _ " + _audioType);
    setPlayer(Manager.createPlayer(mediaStream, _audioType));
    _currentPlayer.addPlayerListener(this);
    _currentPlayer.setLoopCount(1);
    _currentPlayer.realize();
    VolumeControl vc = (VolumeControl) _currentPlayer.getControl("VolumeControl");
    if (vc != null) {
        vc.setLevel(_voumeLevel);
    }
    _currentPlayer.prefetch();
    _currentPlayer.start();
} catch (Exception e) {
    // Don't swallow the exception silently; at least log it
    getLogger().log("failed to start player: " + e);
}
(crossposted from BB forums)
Resolved by implementing my own PlayerManager which, running in a separate thread, plays items from a queue rather than having many threads each using the inner Player implementation.
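For reference, here is a rough sketch of what such a queue-based manager can look like. This is only an assumption about the approach, not the original code; the PlayerManager and queueSound names and the hard-coded "audio/mpeg" content type are placeholders.

import java.io.InputStream;
import java.util.Vector;

import javax.microedition.media.Manager;
import javax.microedition.media.Player;

/** Plays queued sounds one at a time on a single worker thread. */
public class PlayerManager extends Thread {

    private final Vector queue = new Vector(); // of String resource paths
    private volatile boolean running = true;

    /** Called from any thread; returns immediately. */
    public void queueSound(String resourcePath) {
        synchronized (queue) {
            queue.addElement(resourcePath);
            queue.notify(); // wake the worker thread
        }
    }

    public void shutdown() {
        running = false;
        synchronized (queue) {
            queue.notify();
        }
    }

    public void run() {
        while (running) {
            String path;
            synchronized (queue) {
                while (running && queue.isEmpty()) {
                    try {
                        queue.wait();
                    } catch (InterruptedException ie) {
                        return;
                    }
                }
                if (!running) {
                    return;
                }
                path = (String) queue.elementAt(0);
                queue.removeElementAt(0);
            }
            playBlocking(path);
        }
    }

    /** Plays one sound to completion before the next item is taken. */
    private void playBlocking(String path) {
        Player player = null;
        InputStream in = null;
        try {
            in = getClass().getResourceAsStream(path);
            player = Manager.createPlayer(in, "audio/mpeg");
            player.realize();
            player.prefetch();
            player.start();
            while (player.getState() == Player.STARTED) {
                Thread.sleep(100); // crude wait for end of playback
            }
        } catch (Exception e) {
            // Log and move on to the next queued sound.
        } finally {
            if (player != null) {
                player.close();
            }
            try {
                if (in != null) {
                    in.close();
                }
            } catch (Exception ignored) {
            }
        }
    }
}

The rest of the application then just calls queueSound(...) and returns immediately, so only one Player instance is ever active and the event thread never blocks on playback.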
In short
We have a mobile app that streams fairly high volumes of data to and from a server through various bidirectional streams. The streams need to be closed on occasion (for example when the app is backgrounded). They are then reopened as needed. Sometimes when this happens, something goes wrong:
From what I can tell, the stream is up and running on the device's side (the status of both the GRPCProtocall and the GRXWriter involved is either started or paused)
The device sends data on the stream fine (the server receives the data)
The server seems to send data back to the device fine (the server's Stream.Send calls return as successful)
On the device, the result handler for data received on the stream is never called
More detail
Our code is heavily simplified below, but this should hopefully provide enough detail to indicate what we're doing. A bidirectional stream is managed by a Switch class:
class Switch {

    /** The protocall over which we send and receive data */
    var protocall: GRPCProtoCall?

    /** The writer object that writes data to the protocall. */
    var writer: GRXBufferedPipe?

    /** A static GRPCProtoService as per the .proto */
    static let service = APPDataService(host: Settings.grpcHost)

    /** A response handler. APPData is the datatype defined by the .proto. */
    func rpcResponse(done: Bool, response: APPData?, error: Error?) {
        NSLog("Response received")
        // Handle response...
    }

    func start() {
        // Create a (new) instance of the writer
        // (A writer cannot be used on multiple protocalls)
        self.writer = GRXBufferedPipe()
        // Set up the protocall
        self.protocall = Switch.service.rpcToStream(withRequestWriter: self.writer!,
                                                    eventHandler: self.rpcResponse(done:response:error:))
        // Start the stream
        self.protocall?.start()
    }

    func stop() {
        // Stop the writer if it is started.
        if self.writer?.state == .started || self.writer?.state == .paused {
            self.writer?.finishWithError(nil)
        }
        // Stop the protocall if it is started.
        if self.protocall?.state == .started || self.protocall?.state == .paused {
            self.protocall?.cancel()
        }
        self.protocall = nil
    }

    private var needsRestart: Bool {
        if let protocall = self.protocall {
            if protocall.state == .notStarted || protocall.state == .finished {
                // protocall exists, but isn't running.
                return true
            } else if writer?.state == .notStarted || writer?.state == .finished {
                // writer isn't running.
                return true
            } else {
                // protocall and writer are running.
                return false
            }
        } else {
            // protocall doesn't exist.
            return true
        }
    }

    func restartIfNeeded() {
        guard self.needsRestart else { return }
        self.stop()
        self.start()
    }

    func write(data: APPData) {
        self.writer?.writeValue(data)
    }
}
Like I said, heavily simplified, but it shows how we start, stop, and restart streams, and how we check whether a stream is healthy.
When the app is backgrounded, we call stop(). When it is foregrounded and we need the stream again, we call start(). And we periodically call restartIfNeeded(), e.g. when screens that use the stream come into view.
As I mentioned above, what happens occasionally is that our response handler (rpcResponse) stops getting called when the server writes data to the stream. The stream appears to be healthy (the server receives the data we write to it, and protocall.state is neither .notStarted nor .finished), but not even the log on the first line of the response handler is executed.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
Second question: How do we debug this? Everything we could think of that we can query for a status tells us that the stream is up and running, but it feels like the objc gRPC library keeps a lot of its mechanics hidden from us. Is there a way to see whether responses from the server do reach us but fail to trigger our response handler?
Third question: As per the code above, we use the GRXBufferedPipe provided by the library. Its documentation advises against using it in production because it doesn't have a push-back mechanism. To our understanding, the writer is only used to feed data to the gRPC core in a synchronised, one-at-a-time fashion, and since the server receives data from us fine, we don't think this is an issue. Are we wrong though? Is the writer also involved in feeding data received from the server to our response handler? I.e. if the writer broke due to overload, could that manifest as a problem reading data from the stream rather than writing to it?
UPDATE: Over a year after asking this, we have finally found a deadlock bug in our server-side code that was causing this behaviour on the client side. The streams appeared to hang because no communication sent by the client was handled by the server, and vice versa, but the streams were actually alive and well. The accepted answer provides good advice on how to manage these bidirectional streams, which I believe is still valuable (it helped us a lot!), but the issue was actually due to a programming error.
Also, for anyone running into this type of issue, it might be worth investigating whether you're experiencing this known issue where a channel gets silently dropped when iOS changes its network. This readme provides instructions for using Apple's CFStream API rather than TCP sockets as a possible fix for that issue.
First question: Are we managing the streams correctly, or is our way of stopping and restarting streams prone to errors? If so, what is the correct way of doing something like this?
From what I can tell by looking at your code, the start() function seems to be right. In the stop() function, you do not need to call cancel() on self.protocall; the call will already be finished by the preceding self.writer.finishWithError(nil).
needsRestart is where it gets a bit messy. First, you are not supposed to poll or set the state of protocall yourself; that state is managed internally. Second, setting those states does not close your stream; it only pauses a writer, and if the app is in the background, pausing a writer is effectively a no-op. If you want to close a stream, you should use finishWithError to terminate the call, and maybe start a new call later when needed.
Second question: How do we debug this?
One way is to turn on gRPC logging (the GRPC_TRACE and GRPC_VERBOSITY environment variables). Another way is to set a breakpoint at the point where the gRPC Objective-C library receives a gRPC message from the server.
Third question: Is the writer also involved in feeding data received from server to our response handler?
No. If you create a buffered pipe and feed it as the request of your call, it only feeds data to be sent to the server. The receiving path is handled by another writer (which is in fact your protocall object).
I don't see where the usage of GRXBufferedPipe in production is discouraged. The known drawback of this utility is that if you pause the writer but keep writing data to it with writeValue, you end up buffering a lot of data without being able to flush it, which may cause memory issues.
We're trying to make a multipeer connection between two devices using the Multipeer Connectivity (MPC) framework in a libgdx game.
What we've generally done successfully:
Devices connect and the session is established correctly.
After the session is established, nearBrowser and nearAdvertiser stop doing their work.
Then we transition to the game scene. In the new scene one device can send a message to another.
The didReceiveData method of the session delegate is called, and there we get the right messages on both devices.
After this we send libgdx a message to update the content (on the main gdx thread).
BUT after a while, when a device receives data, it immediately crashes. Sometimes it happens on the 10th receive, sometimes after the 200th. The crash appears only on the device that received the message, and it doesn't matter how long the devices have been connected. The crash appears after all methods have finished their work with the data, so we don't know exactly where the error happens.
// MCSession delegate method
public void didReceiveData(MCSession session, NSData data, MCPeerID peerID) {
    // Here we build userInfoData from the received NSData
    DispatchQueue.getMainQueue().async(new Runnable() {
        @Override
        public void run() {
            NSNotificationCenter.getDefaultCenter().postNotification(new NSString("didReceiveData"), null, userInfoData);
        }
    });
}
// Register observer in NSNotificationCenter
// NSNotificationCenter.getDefaultCenter().addObserver(this, Selector.register("updateDataWithNotification:"), new NSString("didReceiveData"), null);

// This method is called when the device has received new data
@Method
private void updateDataWithNotification(NSNotification notification) {
    userInfoDict = notification.getUserInfo();
    data = (NSData) userInfoDict.get(new NSString("data"));
    strBytes = new String(data.getBytes());
    // I'm not sure this Gdx.app.postRunnable is really needed
    Gdx.app.postRunnable(new Runnable() {
        @Override
        public void run() {
            SBGlobalMessanger.getInstance().readBluetoothMessage(BluetoothData.RC_MESSAGE, strBytes);
        }
    });
}
The questions are:
Where is the bug? And how can we fix it?
The problem was in the RoboVM plugin. In debug mode it produced a build that crashed; after making a release build the bug disappeared. The thing I have learned after working with RoboVM + libgdx is that if you have a strange bug, try a release build. It seems this kind of bug was eliminated with the latest release of RoboVM 1.3 (I haven't tried it out yet).
I just started testing this very simple audio recording application, built with MonoTouch, on actual iPhone devices today. I encountered an issue with what seems to be the re-use of the AVAudioRecorder and AVPlayer objects after their first use, and I am wondering how I might solve it.
Basic Overview
The application consists of the following three sections :
List of Recordings (TableViewController)
Recording Details (ViewController)
New Recording (ViewController)
Workflow
When creating a recording, the user would click the "Add" button from the List of Recordings area and the application pushes the New Recording View Controller.
Within the New Recording Controller, the following variables are available:
AVAudioRecorder recorder;
AVPlayer player;
Each is initialized prior to its use:
//Initialized during the ViewDidLoad event
recorder = AVAudioRecorder.Create(audioPath, audioSettings, out error);
and
//Initialized in the "Play" event
player = new AVPlayer(audioPath);
Each of these works as intended on the initial load of the New Recording Controller area; however, any further attempts do not seem to work (no audio playback).
The Details area also has a playback portion to allow the user to play back any recordings, but much like the New Recording Controller, playback doesn't function there either.
Disposal
They are both disposed of as follows (upon exiting / leaving the view):
if (recorder != null)
{
    recorder.Dispose();
    recorder = null;
}

if (player != null)
{
    player.Dispose();
    player = null;
}
I have also attempted to remove any observers that could possibly keep any of the objects "alive" in the hope that this would solve the issue, and have ensured they are each instantiated with each display of the New Recording area; however, I still receive no audio playback after the initial recording session.
I would be happy to provide more code if necessary. (This is using MonoTouch 6.0.6)
After further investigation, I determined that the issue was being caused by the AudioSession as both recording and playback were occurring within the same controller.
The two solutions that I determined were as follows:
Solution 1 (AudioSessionCategory.PlayAndRecord)
//A single declaration of this will allow both AVAudioRecorders and AVPlayers
//to perform alongside each other.
AudioSession.Category = AudioSessionCategory.PlayAndRecord;
//Upon noticing very quiet playback, I added this second line, which allowed
//playback to come through the main phone speaker
AudioSession.OverrideCategoryDefaultToSpeaker = true;
Solution 2 (AudioSessionCategory.RecordAudio & AudioSessionCategory.MediaPlayback)
void YourRecordingMethod()
{
    // This sets the session to record audio explicitly
    AudioSession.Category = AudioSessionCategory.RecordAudio;
    MyRecorder.Record();
}

void YourPlaybackMethod()
{
    // This sets the session for playback only
    AudioSession.Category = AudioSessionCategory.MediaPlayback;
    YourAudioPlayer.Play();
}
For some additional information on usage of the AudioSession, visit Apple's AudioSession Development Area.
In my application, I have a BrowserField2 added to a MainScreen, and a media player based on "Streaming media - Start to finish". I am trying to open the media player from the browser using extended JavaScript. My plan is that when the user clicks certain links in the web page, I call an extended JavaScript function with parameters such as the URL of the video to stream. This function in turn pushes the media player screen with the URL passed in. The media player works very well and streams video when used standalone, but it doesn't play video when coupled with the BrowserField via extended JavaScript.
I suspect the issue is synchronization with the event thread, or something else threading-related. I push the screen containing the media player using a Runnable, and the screen is displayed. But when I click the play button (which starts some threads to fetch the video and play it), nothing happens and my application freezes. I am not able to figure out the exact problem. I would appreciate it if someone could pinpoint the problem.
Thank you.
Some relevant code listings as below:
public void extendJavaScript() throws Exception
{
    ScriptableFunction playVideo = new ScriptableFunction()
    {
        public Object invoke(Object thiz, Object[] args) throws Exception
        {
            openMediaPlayer(args[0].toString());
            return Boolean.FALSE;
        }
    };
    _bf2.extendScriptEngine("bb.playVideo", playVideo);
}
private void openMediaPlayer(final String url) {
    UiApplication.getUiApplication().invokeAndWait(new Runnable() {
        public void run() {
            PlayerScreen _playerScreen = new PlayerScreen(url + ";deviceside=true");
            UiApplication.getUiApplication().pushScreen(_playerScreen);
        }
    });
}
Never mind, I got it resolved. It turned out that the video I was trying to access from the web page was in an incompatible format, which was throwing an error and freezing the media player.
I'm developing an application using the BlackBerry plug-in for Eclipse, and I am getting the following error when making a call to a web service after deploying my application to a production server and handset... it works in my local simulator and development environment. (I can't hook my simulator directly to my production environment.)
Uncaught exception: Application app(150) is not responding; process terminated
The call is being made from another thread.
The thread is passed to my CustomThreadManager to run
ClientChangeThread thread = new ClientChangeThread();
CustomThreadManager.Start(thread, true);
CustomThreadManager
ProgressPopup _progress = null;

if (showProgress) {
    _progress = new ProgressPopup("Loading...");
    _progress.Open();
}

thread.start();

while (thread.isRunning())
{
    try
    {
        CustomThread.sleep(300);
        if (showProgress) {
            _progress.doPaint();
        }
    }
    catch (InterruptedException e)
    {
        Dialog.alert("Error contacting webservice\n" + e.getMessage());
        Functions.moveBack();
    }
}

if (showProgress)
    _progress.Close();
Some calls work while others don't.
The web service returns results fairly quickly, so I'm not sure whether the web service is too slow or whether there is a problem with the threading.
Any help appreciated.
Thread.sleep() does not release any locks. This means your code to update the progress bar in the while-loop is holding the UI event lock, and prevents other UI updates from happening until the while loop terminates -- in this case when thread.isRunning() returns false.
You can use UiApplication.invokeLater(Runnable, long, boolean) to schedule a repeating UI update that will only hold the event lock while the Runnable is executing.
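For illustration, here is a rough sketch of that approach. It reuses the ClientChangeThread and ProgressPopup classes from the question; the helper method name and the 300 ms interval are assumptions, not part of the original code.

import net.rim.device.api.ui.UiApplication;

// Assumes the ClientChangeThread and ProgressPopup classes from the question.
void startClientChange(final ClientChangeThread thread, final ProgressPopup progress) {
    final UiApplication app = UiApplication.getUiApplication();

    progress.Open();
    thread.start();

    // Schedule a repeating update every 300 ms on the event thread.
    // The event lock is only held while this Runnable is executing,
    // so other UI updates can still be processed between runs.
    final int[] updateId = new int[1];
    updateId[0] = app.invokeLater(new Runnable() {
        public void run() {
            if (thread.isRunning()) {
                progress.doPaint();
            } else {
                // Worker finished: stop the repeating update and close the popup.
                app.cancelInvokeLater(updateId[0]);
                progress.Close();
            }
        }
    }, 300, true);
}

This keeps the web service call entirely off the event thread and lets the scheduler release the event lock between repaints, which avoids tripping the "Application is not responding" watchdog.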