I'm calling Application.getApplication().requestForeground() from a background class that extends Application, but the activate() function is never triggered. activate() is defined in the same background class:
public void activate() {
    System.out.println("==Activate==");
}

public void setupBackgroundApplication() {
    Application.getApplication().requestForeground();
}
How can I get this activate() function to trigger?
I think the problem may be that there are two different concepts here:
Application, which is the base class of all BlackBerry Java applications (UI and background apps)
UiApplication, which is the base class of BlackBerry Java UI applications.
If your application is a subclass of Application:
public class MyApplication extends Application {
then, calling requestForeground() isn't going to magically give it a user interface.
My guess is that you need one of two solutions:
If you want one application, change it to extend UiApplication. You will then have a single application that moves between foreground and background.
You could use two applications: one that always runs in the background, and another that is purely a UI application. Your background code could then launch the UI application with the ApplicationManager APIs.
I am writing iOS plugins for an existing iOS framework.
In the framework, there is already a method like:
public func present(from parentController: UIViewController, key: String, completionHandler handler: @escaping (Bool, String) -> Void) {
}
I understand roughly how to communicate between Unity and a Swift framework.
But I am facing a challenge: I have a basic UI with a canvas and buttons, and when the user taps a button, it should call the framework method above.
I know how to call a Unity method on a button tap. But how can I call the method above from Unity on a button tap, given that Unity is not aware of UIViewController?
Please note: the iOS side is a framework, not an app, so the framework is not aware of the app's UIKit objects.
I think the problem you are having is that you are trying to code 1:1 Swift operations in Unity when you might want to think about encapsulation.
It might be easier to have a single proxy/middleman component in your Swift code that performs operations on behalf of your Unity code. In this way much of the complexity and the iOS-specific aspects are hidden (encapsulated) away from your Unity code. After all, all your Unity code might care about is that a certain window be shown; it does not care how that is done.
So you might expose a very simple contract to Unity like (pseudo code):
ShowWindow (string name, callback allDone);
...then in your Unity code:
PluginManager pluginManager; // a set of predefined [DllImport]s
void OnZapRobotClicked()
{
pluginManager.ShowWindow ("ZapRobot", () => { Debug.Log("Done"); } );
}
When ShowWindow is called in your Swift code, you use the appropriate Swift APIs to display the appropriate view with the appropriate controller. How that is done exactly is no different to showing windows in Swift at the best of times.
Now instead of having to code up a zillion [DllImport]s Unity-side just to show a single window, you simply have one method per feature that you want to expose, ShowWindow being the very first. You might have other methods like SendMessage, LoadSettings, SaveSettings, DisablePlugin.
Depending on how you are designing your plugin system, you might want to consider making this main proxy plugin the router that directs commands from your C# code to the appropriate plugin Swift-side.
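The router idea can be sketched in plain Java (the class and command names here are hypothetical, and the registered handlers stand in for the real Swift-side implementations): one facade receives named commands and dispatches them to registered handlers, so the calling side never touches plugin internals.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

// Hypothetical sketch of the proxy/router pattern: one component
// maps command names to handlers, so callers only know the contract
// route(command, argument), not the plugin internals.
public class PluginRouter {
    private final Map<String, Function<String, String>> handlers = new HashMap<>();

    // A plugin registers one handler per feature it exposes.
    public void register(String command, Function<String, String> handler) {
        handlers.put(command, handler);
    }

    // The caller's entire contract: send a named command with an argument.
    public String route(String command, String argument) {
        Function<String, String> handler = handlers.get(command);
        if (handler == null) {
            return "unknown command: " + command;
        }
        return handler.apply(argument);
    }

    public static void main(String[] args) {
        PluginRouter router = new PluginRouter();
        // "ShowWindow" stands in for the real Swift-side window code.
        router.register("ShowWindow", name -> "showing window: " + name);
        System.out.println(router.route("ShowWindow", "ZapRobot"));
    }
}
```

Adding a new feature then means registering one more handler, not wiring up another [DllImport] per call site.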
I am building an iOS framework that collects information about various events in an iOS app and performs local and remote analysis. Some of these events can't be tested outside of an app: for example, view controller transitions. To test these events, I built a test iOS app and would like to use Xcode UI tests to:
Initiate a view controller transition by, say, tapping a button that pushes a VC to a navigation controller, or by switching to a different tab in a tab controller
Verify that the framework was able to detect the view controller transitions and generate the necessary events (e.g., send them to the server)
The problem is that Xcode UI tests run outside the app, so I can't access any shared objects to verify that the framework is functioning properly. I also can't insert any mock objects, because, again, I don't have access to the framework loaded in the app's process.
I went as far as trying to load the framework in a kind of test mode and have it update a label with some results that the UI test would then be able to read. But even that is difficult, because XCUIElement doesn't give me the actual text of the element; I can only query for elements with some predefined text. I can sort of work with that, but it seems like a very roundabout approach for something that should be rather trivial.
Are there any better alternatives for testing events that can only be simulated in a running app, and then verifying that my framework was able to detect and process them? In addition to view controller transitions, I am also interested in touch interactions, accelerometer events, and other features that can't be simulated outside of an app. Naturally, I can simulate these events for unit testing, but I would also like to automatically test that when these events occur in an actual app, the proper responses are generated in my framework.
You could try using SBTUITestTunnel. This library extends the functionality of UI Tests adding some features that might come in handy in cases like yours.
Network monitoring
The library allows your test target to collect network calls invoked in the app. Usage is pretty straightforward:
func testThatNetworkAfterEvent() {
// Use SBTUITunneledApplication instead of XCUIApplication
let app = SBTUITunneledApplication()
app.launchTunnelWithOptions([SBTUITunneledApplicationLaunchOptionResetFilesystem]) {
// do additional setup before the app launches
// i.e. prepare stub request, start monitoring requests
}
app.monitorRequestsWithRegex("(.*)myserver(.*)") // monitor all requests containing myserver
// 1. Interact with UI tapping elements that generate your events
// 2. Wait for events to be sent. This could be determined from the UI (a UIActivityIndicator somewhere in your app?) or, if you have no other option, with NSThread.sleepForTimeInterval
// 3. Once ready flush calls and get the list of requests
let requests: [SBTMonitoredNetworkRequest] = app.monitoredRequestsFlushAll()
for request in requests {
let requestBody = request.request!.HTTPBody // HTTP Body in POST request?
let responseJSON = request.responseJSON
let requestTime = request.requestTime // How long did the request take?
}
}
The nice thing about the network monitoring is that all the testing code and logic are contained in your test target.
Custom block of code
For other use cases, where you need custom code in the app target to be conveniently invoked from the test target, you can do that as well.
Register a block of code in the app target
SBTUITestTunnelServer.registerCustomCommandNamed("myCustomCommandKey") {
injectedObject in
// this block will be invoked from app.performCustomCommandNamed()
return "any object you like"
}
and invoke it from the test target
func testThatNetworkAfterEvent() {
let app = ....
// at the right time
let objFromBlock = app.performCustomCommandNamed("myCustomCommandKey", object: someObjectToInject)
}
Currently you can only inject data from the test target into the app target. If you need the other direction, you could store the data in NSUserDefaults and fetch it using SBTUITunneledApplication's userDefaultsObjectForKey() method.
I personally don't like the idea of mixing standard and test code in your app's target, so I'd advise using this only when really needed.
EDIT
I've updated the library, and starting from version 0.9.23 you can now pass any object back from the block to the test target. No need for workarounds anymore!
This isn't much of an automated way to get things done, but you can use breakpoints as an intermediate step, at least until you get things running automatically. You can add a symbolic breakpoint that hits on UI methods such as [UIWindow setRootViewController:], viewWillAppear, etc.
You can explicitly define modules, conditions, and actions, so that whenever someone views your view controller, some action that might be helpful to you is performed.
If you can print something from your framework using a simple "po myEvent" upon some viewDidAppear, you're good to go. I've never tried using fancier methods as actions, but those can be done as well.
I am trying to print out (e.g., via NSLog) information about events fired in an iPhone application. For example, as a user executes a scenario, I want to track all the methods called and the events triggered by the user.
Is there any way to do that using the category-extension and method-swizzling features of Objective-C to inject logging code? Right now, I have defined categories for UIWindow, UIApplication, and UIView. I am not sure where the best place to track is, though; e.g., [UIWindow sendEvent:]? Should I observe objc_msgSend()?
I basically want to instrument a project by injecting code via category extensions, changing the source code as little as possible.
I want to use the UiApplication.activate() method in BlackBerry. In Android, we use the onResume() method; how do I use UiApplication.activate() in the same way in BlackBerry?
There is not much information available.
It's difficult to answer this question because of the comment below Peter's answer. Activity#onResume() in Android is not normally used simply "to refresh ... UI data on the spot after making changes".
Android
onResume() is called by the Android OS when a user comes (back) to an Activity; this usually happens after leaving it ... either because another Activity was displayed in front of it, or because the user left your application and came back to it (going home and back, to the phone and back, etc.)
BlackBerry
In BlackBerry, Application#activate() is called when the user returns to your app. This callback happens at the app level. An Android app is (normally) made up of many Activities, and onResume() gets called separately for each Activity in your app as the user returns to that individual Activity.
Although not identical, a similar construct in BlackBerry is the Screen class. One app may have many Screens, as Android apps have many Activities. So, if you're looking for the most similar thing to onResume(), I would try, as Peter suggested:
Screen#onExposed()
Screen#onUiEngineAttached(boolean)
To get closest to onResume(), you probably need both of those methods, because Screen#onExposed() does not get called the first time your Screen is shown (while Activity#onResume() does). You would override these methods in your own classes that extend Screen (or MainScreen, etc.)
If your problem is just determining when to refresh your UI, you'll need to explain more about what kind of UI objects (Fields) you're using, and when you get new data to display.
Update:
Here is some sample code for how you might build your main Screen class, to try to mimic Android's onResume() callback:
public final class MyScreen extends MainScreen {

    public MyScreen() {
        setTitle("MyTitle");
    }

    protected void onExposed() {
        super.onExposed();
        onResume();
    }

    protected void onUiEngineAttached(boolean attached) {
        super.onUiEngineAttached(attached);
        if (attached) {
            onResume();
        }
    }

    private void onResume() {
        // TODO: put your Android-like processing here
        System.out.println("onResume()");
    }
}
There is nothing really to know about activate(). If you look at the documentation
http://www.blackberry.com/developers/docs/7.1.0api/net/rim/device/api/system/Application.html
it says:
"The system invokes this method when it brings this application to the foreground."
So if your application has been pushed to the background (for example, by a phone call), and then the user clicks on your icon to look at your application again, then activate() will be called. By contrast, deactivate() will be called when the system pushes your application to the background (for example, when a phone call is received).
Note that activate() is also called when the application is first started.
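That lifecycle can be modeled with a small plain-Java stub (the class is hypothetical; a real app would override net.rim's Application.activate() and deactivate() rather than define its own), which just records when each callback fires:

```java
// Hypothetical model of the BlackBerry foreground/background callbacks.
// A real app overrides Application.activate()/deactivate(); this stub
// only illustrates the order in which the system invokes them.
public class LifecycleModel {
    private final StringBuilder log = new StringBuilder();

    public void start()      { activate(); }   // activate() also fires at first start
    public void background() { deactivate(); } // e.g. an incoming phone call
    public void foreground() { activate(); }   // user clicks the app icon again

    protected void activate()   { log.append("activate;"); }
    protected void deactivate() { log.append("deactivate;"); }

    public String getLog() { return log.toString(); }
}
```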
The question really is what do you do in onResume() that you need to replicate in the BlackBerry code. If you can tell us this, we might be able to suggest what is the best way to achieve the result you want.
Update
Given that you appear to be using onResume() to update a UI, there is, unfortunately, no single simple way of doing this for the entire application. The method you would use depends on what is being updated.
But be aware that most Fields are updated automatically when you change their contents. To give a simple example, if you have an EditField that contains data, and you use the
.setText("new data");
method, this will automatically repaint that Field on the screen for you.
I expect that you have a screen that you have populated from the database or data source, and in activate() you want to refresh this data. So you will have to go through each of the screen's Fields and use the associated set... method to update the contents.
This is slightly problematic, because activate() is called for the Application, not the screen; you might have multiple screens on the stack, and you really need to update all of them. There are various ways to do this, involving, say, your screens registering themselves to be updated, or your activate() method searching the display stack for the screens on it.
But possibly a simpler approach is to use each screen's onExposed() method to automatically update the contents. This method is called any time the screen is hidden and then shown again, which is exactly what happens after the application is foregrounded. It also happens when the screen is hidden by another screen being pushed on top of it, or even by the user pressing the menu key. So if the update is time-consuming (for example, it requires a database lookup), you might not want to update every time onExposed() is invoked, but instead try to restrict the update frequency.
This onExposed() approach does not remove the requirement to update the contents of each Field separately, but it might make the update easier to implement.
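The frequency-restriction idea can be sketched as a small plain-Java helper (the class name is hypothetical, and the caller would supply the current time, e.g. System.currentTimeMillis(), from onExposed()):

```java
// Hypothetical helper to throttle expensive refreshes triggered by
// onExposed(): the real refresh (e.g. a database lookup) only runs
// if enough time has passed since the last one.
public class ThrottledRefresher {
    private final long minIntervalMillis;
    private boolean hasRefreshed = false;
    private long lastRefreshMillis;

    public ThrottledRefresher(long minIntervalMillis) {
        this.minIntervalMillis = minIntervalMillis;
    }

    // Call from onExposed(); returns true only when the expensive
    // refresh should actually run this time.
    public boolean shouldRefresh(long nowMillis) {
        if (!hasRefreshed || nowMillis - lastRefreshMillis >= minIntervalMillis) {
            hasRefreshed = true;
            lastRefreshMillis = nowMillis;
            return true;
        }
        return false;
    }
}
```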
Further update
Note Nate's answer; Nate has experience with both Android and BlackBerry, so he can better relate to your problem.
But if you know that the screen has been updated (say, you have just processed a network request that relates to that screen's content), then you should go through each of the Fields individually, "set"ting the updated value at that time; don't worry about using onExposed(). One design approach that accommodates this is to separate the screen construction from the screen population, so you can call the population code from multiple places (note that you do need to be on the event thread when you update the Fields).
But in this circumstance, you might find it easier and faster to create a new Screen, push that, and pop the old screen (which has the old values).
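The construction-versus-population split can be sketched in plain Java (the class and field names are hypothetical; the string fields stand in for real UI Fields and their set... methods):

```java
// Hypothetical screen that separates construction (done once) from
// population (re-runnable whenever fresh data arrives, e.g. from
// activate() or after a network request completes).
public class AccountScreen {
    private String nameField = "";
    private String balanceField = "";

    public AccountScreen(String name, String balance) {
        // construction: build the layout once...
        populate(name, balance); // ...then fill in the initial data
    }

    // population: safe to call again with fresh data at any time
    public void populate(String name, String balance) {
        nameField = name;        // stands in for nameField.setText(name)
        balanceField = balance;  // stands in for balanceField.setText(balance)
    }

    public String render() {
        return nameField + ": " + balanceField;
    }
}
```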
I'm currently developing a BlackBerry application where I need to be able to open the application by clicking a link in an email or web page. The link would contain a string of text that would also need to be made available to the application at runtime.
The iPhone OS allows you to do this quite easily through custom protocols (e.g., appname://some-other-text). Is there similar functionality available in the BlackBerry SDK, or is this going to turn into a pipe dream?
I have done something like this by registering a custom BrowserContentProvider (using a unique, custom MIME type). You then use a URL that returns a web page with the custom MIME type, which triggers your BrowserContentProvider implementation. Part of this implementation can be code that launches your application (or brings it to the foreground if it is already running).
There's another class called ContentHandler that you may want to look into as well. I haven't used it, but it appears to be able to spawn custom handlers based on certain filename-matching conditions.