I'm trying to figure out how I can switch a current call to output audio via the loudspeaker (as if the person were to press "speaker" in the Phone app). Ideally, I want to accept the call on loudspeaker; but one step at a time!
I have searched through various headers, in both private and public frameworks, but I can't find the appropriate method to call in order to switch to the loudspeaker.
Originally, I expected it would be present in CTCall.h, but nothing useful (in this respect) is in there.
Does anyone know where the appropriate method is found?
Many thanks :)
Several ideas:
1) Look at system wide event generation.
You can programmatically tap the required button (the "speaker" or "Answer" button).
Here are a couple of questions regarding this:
Simulating System Wide Touch Events on iOS
Simulating System Wide Touch Events in iOS without jailbreaking the device
How to send a touch event to iPhone OS?
You may be interested in googling more about GSEvent, which is the key to event simulation.
2) Go to the Simulator SDK folder (/Applications/Xcode.app/Contents/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator6.1.sdk) and do something like this:
grep -R "Speaker" ./
The idea is to search for "speaker" in the binaries (vs. the header files). Most private APIs aren't documented in header files (that's part of the reason why they are private).
I believe a couple of interesting hits are:
TelephoneUI private framework
AudioToolbox framework (by the way, it even has kAudioSessionOverrideAudioRoute_Speaker in its .h files)
AVFoundation framework
IOKit* framework (it has kIOAudioOutputPortSubTypeExternalSpeaker in .h files)
and so on.
The next step would be to disassemble them. Most of these frameworks have A LOT of interesting APIs which aren't defined in .h files. It's useful to browse through them to check whether you find anything interesting on this subject.
If you don't want to bother with disassembling, you can get extracted headers from here:
https://github.com/nst/iOS-Runtime-Headers/tree/master/PrivateFrameworks
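As a side note, for audio that your own app owns, the modern public equivalent of that audio-route override is AVAudioSession's overrideOutputAudioPort. Here is a minimal Swift sketch; it won't redirect the system Phone app's in-call audio (which is what the question is really after, and almost certainly needs the private APIs above), but it shows the speaker-route override those constants point at:

import AVFoundation

// Sketch only: routes *your own app's* audio session to the built-in loudspeaker.
// Redirecting the Phone app's call audio is a different (private-API) problem.
func routeToLoudspeaker() {
    let session = AVAudioSession.sharedInstance()
    do {
        try session.setCategory(.playAndRecord, options: [])
        try session.setActive(true)
        try session.overrideOutputAudioPort(.speaker)
    } catch {
        print("Audio route override failed: \(error)")
    }
}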
I am working on a project with SwiftUI. It originally started as a new "App" project (Xcode: File, New, Project, "App"), but I was later asked to put it into a pod as a framework. I did that successfully (Xcode: File, New, Project, "Framework"), however I am unsure what the differences are and why I would want to do that. To me they look very similar, except that I'm unable to launch my project as a framework in the simulator. Luckily, SwiftUI offers the canvas preview window, but it is a bit finicky when it comes to certain button interactions, which is why I want to use the simulator.
Two places of confusion:
What is the difference between an app and a framework project?
Why is it more advantageous to have my project as a framework?
An App is a standalone application that can be launched and run. For example, all of the apps that you have on your phone are just that -- apps. You tap on them and they launch and run, presenting a user interface, accepting input, etc.
A framework is something else entirely. It's a collection of code that is bundled together into a package that is used by another framework or by an app. Some frameworks are provided by the system -- for example, SwiftUI is a framework that it sounds like you're using in your app. Other frameworks are provided by 3rd parties. For example, you can find many frameworks via CocoaPods or the Swift Package Manager -- Alamofire is a common example. Also, you can make your own frameworks and use them in your own code as a form of organization and separation of responsibilities.
Why is it more advantageous to have my project as a framework?
It is not -- they are two almost completely different concepts (besides both ultimately being collections of code and resources). If you intend to build an app that is launchable on someone's device, your only choice is to make an app. If you intend to make a collection of reusable code for use in your or someone else's app, then you would make a framework.
Excellent answer (and upvoted) by @jnpdx. Let me give you a physical example:
(1) Create a project in Xcode that is a framework. Call it "MyAppKit". Inside it create, well, basically anything - a View, UIView, or more likely a function that will be shared by several views. (Let's go with that.)
public func setLoginName(_ login: String) -> String {
    return "Hello, " + login + "!"
}
Pretty simple. Call it, pass in something, and it returns a string saying hello. Please note the public piece. It matters. (And there's much more there. This is a simple example.)
(2) Now we get to your app or apps. Let's say you have two apps that need to use this (again, very simple) code. One is SwiftUI, one is UIKit. (It doesn't matter except for syntax.) Since my forte is UIKit I'll use that. (And it can be several dozen apps too.)
import MyAppKit
let myLoginMessage = setLoginName("World")
Pretty much, it's "Hello, World!"
Again, this is really a nonsensical example. But it should get you started on what the difference is in Xcode between a Framework project and an App project.
This seems like a basic request, but I can't find the answer to it anywhere. I want to wrap some existing iOS code that I wrote in an Appcelerator module. That's it. Important points:
I am NOT wrapping a pre-existing 3rd party iOS SDK.
I wrote the iOS code being wrapped.
Code is verified as working within xcode.
There are no .a files. There are 2x .h files and 2x .m files though.
There are no UI elements in the iOS code as it is only designed to connect the native bluetooth hardware to the app.
I have created a generic appcelerator iOS module project, built it, and successfully called the generic ID function within my app.
I cannot figure out how to successfully edit the generic module so that it utilizes my code. Every attempt results in it refusing to compile, and it's maddening.
I do not have access to Hyperloop.
Once I can successfully build the wrapped module, I would call an initialization function which triggers a native bluetooth hardware search. Once connected, there are functions within the module to send commands to the hardware and receive data back. This is the official documentation I've followed so far:
http://docs.appcelerator.com/platform/latest/#!/guide/iOS_Module_Quick_Start
That helped me build the blank module, include it in the app, and ensure that it worked by calling the built in test property. From there it stops short of actually telling me what I need to know. These are the closest things I've found so far, while still not being what I need:
http://docs.appcelerator.com/platform/latest/#!/guide/iOS_Module_Project-section-43288810_iOSModuleProject-AddaThird-PartyFramework
appcelerator module for existing ios project sdk
Heck, I still don't even know if I can do this within studio or if I have to edit the generic module in Xcode. Help! :) Many thanks in advance.
So, first of all, this is not best practice and may cause problems in the future when the SDK changes and your module still relies on outdated core APIs.
Regarding your question, you could either create a new component that subclasses the existing class, e.g.
@interface TiMyModuleListViewProxy : TiUIListViewProxy
@end
and call it with
var myList = MyModule.createListView();
or you write a category to extend the existing API with your own logic, e.g.
@interface TiUIListViewProxy (MyListView)
- (void)setSomethingElse:(id)value;
@end

@implementation TiUIListViewProxy (MyListView)
- (void)setSomethingElse:(id)value
{
    // Set the value of "somethingElse" now
}
@end
I would prefer the second option since it matches a better Objective-C code style, but please still be aware of the possible core changes that might affect your implementation in the future. Thanks!
In this project I have to develop an iOS application which reads .psl files and arranges the data into the relevant sections. For example: the inbox messages from the .psl file go into the app's inbox folder, and so on.
Can anyone guide me regarding the steps, how the project should proceed, and the workflow of this whole process?
The first thing you're going to have to tackle is figuring out how to get the file onto the phone. If you're getting it from the web, you could register as a shareable target for that file type, or you could potentially integrate the Dropbox API or something similar.
Once you have the file, you'll have to develop something to parse it and use it as a data file. Depending on the size and complexity of the file there will be different possible approaches to this, and you'll need to figure out what's going to be performant for you.
Then you'll build view controllers that leverage your model and make awesome things happen on the phone.
Your question is extremely general, so this is a very general answer. To me, the immediate critical questions are: how to get the file onto the phone, and how to read the file format without loading the whole thing into RAM at once?
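If it helps, here is a minimal Swift sketch of the "don't load the whole file into RAM" idea, reading in fixed-size chunks with FileHandle. The .psl format itself is unknown here, so parseChunk is a hypothetical placeholder for whatever record parsing you end up writing:

import Foundation

// Sketch: stream the file in 64 KB chunks instead of reading it all at once.
// parseChunk is a hypothetical hook for the actual .psl record parsing.
func readPSLFile(at url: URL, parseChunk: (Data) -> Void) throws {
    let handle = try FileHandle(forReadingFrom: url)
    defer { handle.closeFile() }

    let chunkSize = 64 * 1024
    while true {
        let chunk = handle.readData(ofLength: chunkSize)
        if chunk.isEmpty { break }
        parseChunk(chunk)
    }
}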
I've been searching for 2 days for a way to prevent my app from running on a jailbroken device, and I got it working. The problem is that I can still hook my class using Theos and override the jailbreak check function.
Do you have any proven idea, framework, library, or something else?
You can use dyld for that.
_dyld_image_count returns the number of dynamic libraries loaded into your application's address space. Then you can iterate over them using _dyld_get_image_name, checking each dynamic library's path. That way you can determine whether the CydiaSubstrate library, or any dynamic library with an unknown path, has been loaded into your application.
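A minimal Swift sketch of that dyld check (the same functions are available from C/Objective-C via mach-o/dyld.h). The substrings below are just example suspicious paths; adjust them to whatever you consider untrusted:

import MachO

// Sketch: walk every image dyld has loaded and flag paths that look like
// injected tweak libraries. The substrings here are examples, not a complete list.
func suspiciousLibraryLoaded() -> Bool {
    for index in 0..<_dyld_image_count() {
        guard let cPath = _dyld_get_image_name(index) else { continue }
        let path = String(cString: cPath)
        if path.contains("MobileSubstrate") || path.contains("CydiaSubstrate") {
            return true
        }
    }
    return false
}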
Of course, with a jailbreak even those functions can be hooked, and I don't think you can do much about it. Arxan claims it can do something about it, but even if it detects something, you can always hook any function it uses for detection. CydiaSubstrate tweaks are always one step ahead because they're loaded before main is called. Thus they can hook everything they want in a constructor and you can't do anything about it.
Without a jailbreak, the only way to load a malicious library is to modify and re-sign your app so that it links against the library. Without a jailbreak you can't hook C functions, so _dyld_get_image_name will be able to detect that library.
I want to play instrument 49 in iOS for varying pitches [B2-E5] for varying durations.
I have been using the Load Preset Demo as a reference. The vibraphone.aupreset does not reference any files. So, I had presumed that:
I would be able to find and change the instrument in the aupreset file (unsuccessful so far)
there is some way to tell the MIDI interface to turn notes on and off without generating *.mid files.
Here's what I did:
Duplicated the audio-related code from the project,
removed the trombone-related files and code (calling loadPresetTwo: in place of loadPresetOne: in init, as opposed to viewDidLoad),
added a note sequence and a timer to turn off the previous note and turn on the next note.
Build. Run. I hear sound on the simulator.
There is NO sound coming from my iPhone.
I have triple-checked the code that I copied, as well as where the calls are taking place. It's all there. The difference is that the trombone-related files and code are absent. Perhaps there is some dependency that I'm not aware of. Perhaps this is a problem rooted in architectural differences between the simulator (running on a remote Mac VM) and the iPhone. Perhaps I can only speculate because I don't know enough about the problem to understand what questions to ask.
Any thoughts or suggested tests would be great!
Thanks.
MusicPlayer + MusicSequence + MusicTrack works. It was much easier than trying to guess what code in the demo was doing.
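For anyone who finds this later, this is roughly the shape of that approach, a minimal Swift sketch assuming the sampler/AUGraph setup from the Load Preset Demo is already in place. Program numbers in a program-change message are zero-based, so "instrument 49" becomes data1 = 48, and B2-E5 corresponds to MIDI notes 47-76:

import AudioToolbox

// Sketch: build a one-track sequence and play it. Routing the sequence into a
// sampler (AUGraph/AVAudioEngine, as in the Load Preset Demo) is assumed elsewhere.
var sequence: MusicSequence?
NewMusicSequence(&sequence)

var track: MusicTrack?
MusicSequenceNewTrack(sequence!, &track)

// Program change: select instrument 49 (zero-based data1 = 48) on channel 0.
var programChange = MIDIChannelMessage(status: 0xC0, data1: 48, data2: 0, reserved: 0)
MusicTrackNewMIDIChannelEvent(track!, 0, &programChange)

// A few notes in the B2-E5 range (MIDI 47-76), each half a beat long.
var beat: MusicTimeStamp = 0
for note: UInt8 in [47, 59, 64, 76] {
    var message = MIDINoteMessage(channel: 0, note: note, velocity: 90,
                                  releaseVelocity: 0, duration: 0.5)
    MusicTrackNewMIDINoteEvent(track!, beat, &message)
    beat += 0.5
}

var player: MusicPlayer?
NewMusicPlayer(&player)
MusicPlayerSetSequence(player!, sequence)
MusicPlayerPreroll(player!)
MusicPlayerStart(player!)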