How to parse a service on Apple Watch in Swift? - ios

I am working in Swift and I am new to Apple Watch programming. I am building a demo iOS app that displays images in a feed, and to get those images I have to call and parse a web service.
I know how to implement this on the iPhone, but I don't understand how to do it on the Apple Watch, or even whether it is possible there at all.
If it is not possible, how should I show the data from the service on the Apple Watch? Is there another way to display the data on the Apple Watch?
Can anyone explain this clearly, please?
Thanks in advance.

To consume web services on the Apple Watch you will need to use the NSURLSession API. You can also use AFNetworking, a popular, easy-to-use framework; it will make the requests and parse the data for you.
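For instance, here is a minimal sketch using URLSession (the Swift name for NSURLSession) from a WatchKit extension; the feed URL and the FeedItem model are hypothetical placeholders:

    import Foundation

    struct FeedItem: Codable {
        let title: String
        let imageURL: String
    }

    // Fetch and decode a hypothetical JSON feed; the completion runs on the main queue.
    func fetchFeed(completion: @escaping ([FeedItem]) -> Void) {
        guard let url = URL(string: "https://example.com/feed.json") else { return }
        URLSession.shared.dataTask(with: url) { data, _, error in
            guard let data = data, error == nil else {
                DispatchQueue.main.async { completion([]) }
                return
            }
            let items = (try? JSONDecoder().decode([FeedItem].self, from: data)) ?? []
            DispatchQueue.main.async { completion(items) }
        }.resume()
    }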

I can't really explain why Alamofire is returning this error; it did the same for me.
In the end that pushed me toward a better design anyway, since the watchOS guidelines say you should avoid long-running operations on the watch, because the app's lifetime can be short.
I built a bridge around WatchConnectivity where the watch sends a message and the iPhone replies with the response asynchronously. The only catch is that the app on the iPhone must be running (that is, not terminated).
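Roughly, the bridge looks like this (a sketch of both sides; the "request"/"items" keys and payloads are my own placeholders, and it assumes the WCSession has already been activated):

    import WatchConnectivity

    // Watch side: ask the paired iPhone to do the network work and reply asynchronously.
    func requestFeedFromPhone(completion: @escaping ([String: Any]) -> Void) {
        guard WCSession.default.isReachable else { return }
        WCSession.default.sendMessage(["request": "feed"], replyHandler: { reply in
            completion(reply)
        }, errorHandler: { error in
            print("WCSession error: \(error)")
        })
    }

    // iPhone side, in your WCSessionDelegate: perform the request,
    // then hand the result back through the reply handler.
    func session(_ session: WCSession, didReceiveMessage message: [String: Any],
                 replyHandler: @escaping ([String: Any]) -> Void) {
        guard message["request"] as? String == "feed" else { return }
        // ...fetch and parse the service here, then:
        replyHandler(["items": ["img1.jpg", "img2.jpg"]])
    }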

Related

Video call and chat using WebRTC and Pubnub in native iOS Swift

I'm working on a doctor-patient appointment app in a native iOS Swift project. I want to implement live video calling with chat, using WebRTC and a PubNub signaling server. I'm totally new to this and don't know how to implement it. I have seen some Objective-C code, but I still didn't understand it. Please help if any of you have implemented the same in Swift.
Highly recommend you reach out to a service such as Vonage (https://www.vonage.com), which can provide a HIPAA-compliant WebRTC video service for you.
Once you have a set of keys on Vonage, you can use PubNub to move the video session details around the channel in question.
Traditionally I use a JSON object model that looks like this:
channel: "UUID of medical session"
messageID: "MessageID on your platform"
messageType: "videoInvite"
sender: "Hilaj"
sessionDetails: "session JWT and/or session token"
timestamp: "1597347054"
This means you can send text-based messages as well as video invites on the same channel and write the events to logs.
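In Swift, that shape could be modeled with a Codable struct (an illustrative sketch; the field names simply mirror the model above):

    import Foundation

    struct SessionMessage: Codable {
        let channel: String         // UUID of the medical session
        let messageID: String       // message ID on your platform
        let messageType: String     // e.g. "videoInvite" or a plain text message
        let sender: String
        let sessionDetails: String  // session JWT and/or session token
        let timestamp: String       // Unix epoch seconds, as a string
    }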
I have implemented this in Swift (and previously in Objective-C), but it is completely proprietary.
You are also going to have to download the Google WebRTC framework.
Taking a quick look at PubNub, it looks like it just does signaling, so you will still need an actual WebRTC media server (e.g. Janus). There are others, and I'm not really sure which ones support PubNub.
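To give a feel for the client side, here is a rough sketch assuming the GoogleWebRTC pod (exact API names can differ between versions); the resulting offer/answer and ICE candidates are what you would exchange over your PubNub channel:

    import WebRTC

    let factory = RTCPeerConnectionFactory()
    let config = RTCConfiguration()
    // A public STUN server as a placeholder; add your own STUN/TURN servers.
    config.iceServers = [RTCIceServer(urlStrings: ["stun:stun.l.google.com:19302"])]
    let constraints = RTCMediaConstraints(mandatoryConstraints: nil,
                                          optionalConstraints: nil)
    // Pass a real RTCPeerConnectionDelegate here to receive ICE candidates and streams.
    let peerConnection = factory.peerConnection(with: config,
                                                constraints: constraints,
                                                delegate: nil)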

Vidyo.io Integration in Swift4

I'm new to iOS. I'm working on a video-call project in Swift, using the Vidyo.io SDK for video calls and message chat. But I have some questions:
If my app is killed or my phone is locked, how can I receive a call notification?
Some SDKs have VoIP support for call notifications in the locked state. Does Vidyo.io support VoIP? If yes, how can I implement it?
The Vidyo.io documentation has methods for using the camera and microphone, customizing the UI, etc. Can all of these methods be used from Swift?
If anyone has good tutorials or helpful materials, please share.
You can find a Vidyo.io sample app built with Swift here: https://github.com/Vidyo/customview-swift-ios
Vidyo.io is a CPaaS focused on video chat. You can use the service for voice only, but you probably have better options if that is what you want to achieve.

iOS SDK can you mute or cancel incoming calls

I found various threads here about how muting or canceling incoming calls (or messages) with the iOS SDK is not possible, because Apple doesn't want an app to access system-level settings. More precisely, it is not possible with the official tools, which means that if you somehow manage to do it, your app will not be accepted into the iTunes Store.
Well, I have been asked to assess the feasibility of exactly such an app. Namely, my client has seen these two apps:
https://itunes.apple.com/us/app/lifesaver-distracted-driving/id874231222?mt=8
https://itunes.apple.com/us/app/at-t-drivemode/id907208943?mt=8
And they are sure that an app with essentially the same functionality can be made.
So here I am, asking: how did these two apps succeed at the impossible, and how did they manage to get into the iTunes Store if muting the phone is not an Apple-approved option? I am not really asking for source code (although I certainly wouldn't reject examples); rather, I am asking for pointers to the class, book, or documentation I should look up to figure out whether this is possible. Apple's CTCall and CT* classes did not seem to help me much.
Apple added the CallKit framework in iOS 10 to allow app developers to do this sort of thing, among other things. For docs, see:
https://developer.apple.com/reference/callkit
It is now possible to detect and block unwanted phone calls on iOS 10 and above.
See the CallKit framework:
The CallKit framework (CallKit.framework) lets VoIP apps integrate with the iPhone UI and give users a great experience. Use this framework to let users view and answer incoming VoIP calls on the lock screen and manage contacts from VoIP calls in the Phone app's Favorites and Recents views.
CallKit also introduces app extensions that enable call blocking and caller identification. You can create an app extension that can associate a phone number with a name or tell the system when a number should be blocked.
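For the blocking part, the core of a Call Directory extension is quite small; a minimal sketch (the phone numbers are placeholders):

    import CallKit

    final class CallDirectoryHandler: CXCallDirectoryProvider {
        override func beginRequest(with context: CXCallDirectoryExtensionContext) {
            // Placeholder numbers; entries must be added in ascending order.
            let blocked: [CXCallDirectoryPhoneNumber] = [1_408_555_1111, 1_408_555_2222]
            for number in blocked {
                context.addBlockingEntry(withNextSequentialPhoneNumber: number)
            }
            context.completeRequest()
        }
    }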

Is there inter-app communication when apps start in iOS?

I am relatively new to iOS app development and I'm trying to figure out some of the more abstract things. How do apps know when other apps start? The closest example I can think of is when music is playing in the background and you open another app that has sound, and the music stops. Is the new app taking authority, or is there inter-app communication? If there is communication, how does it work? Is it a message that could be accessed, or what?
Sorry if that didn't make much sense; I tried to explain it as best I could. I couldn't find anything about this on Apple's developer website. Thanks in advance!
There is no inter-app communication. Each app lives in its own world, and as far as your app is concerned, it is the only app on the phone. Communication happens when an app talks to the system and the system talks to your app. The way the system sends messages to your app is usually through the delegate of the framework you're working with.
In your example, opening your app to play audio sends a message to the system, and the system tells the other app to stop audio playback. That other app has no idea it was your app that initiated the stop. Another example is the AppDelegate: the system sends your app messages such as application:didFinishLaunchingWithOptions:, where you do custom initialization of the app, or applicationWillResignActive:, which is normally sent when the user presses the home button or receives a phone call, so that is where you might want to save your game, and so on.
In fact, the iOS system is quite complicated, much too complicated to explain in detail here, so I highly suggest reading the Apple Developer documentation. Some of it can be a little dry, but it does its best to be accessible even to absolute beginners.
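To make the audio example concrete: your app never hears from the other app directly; it observes an interruption notification posted by the system. A sketch:

    import AVFoundation

    // The system, not the other app, tells you that your audio session was interrupted.
    NotificationCenter.default.addObserver(
        forName: AVAudioSession.interruptionNotification,
        object: AVAudioSession.sharedInstance(),
        queue: .main
    ) { notification in
        guard let rawType = notification.userInfo?[AVAudioSessionInterruptionTypeKey] as? UInt,
              let type = AVAudioSession.InterruptionType(rawValue: rawType) else { return }
        switch type {
        case .began:
            // Another app took the audio session; pause playback here.
            break
        case .ended:
            // Interruption over; optionally resume playback.
            break
        @unknown default:
            break
        }
    }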

Does Apple provide an API for SIRI?

Is it possible that Apple does, or will, provide an API for Siri? It would be great if I could be sipping my coffee and say:
User: Hey Siri, could you please open Angry Birds, level 4, and throw the first bird for me? Make sure you at least hit one green pig or it's coming out of your paycheck.
Siri: Yes, sure, I will do that for you.
Is this possible? And do you think Apple will provide this to us?
THIS IS NO LONGER ACCURATE:
There is no API, and there is no indication of that changing anytime soon. There are private headers that you can look at by decompiling the SDK. This is a great synopsis:
Quora
You can be clever like Remember The Milk (RTM), though; this is as close as it gets:
http://www.rememberthemilk.com/services/siri/
In iOS 10, Apple announced an API for Siri called SiriKit. However, you can only use it via an app extension, and only if your app implements one of the following types of services:
Audio or video calling
Messaging
Payments
Searching photos
Workouts
Ride booking
Climate and radio
SiriKit is a way for you to make your content available through Siri. It also lets you add support for your services to the Maps app. To support SiriKit, you use the Intents framework and Intents UI framework to implement one or more extensions that you then include inside your iOS app. When the user requests specific types of services through Siri or Maps, the system uses your extensions to provide those services.
This means SiriKit cannot be used for the scenario mentioned in the question and in ways that many of us would like.
Source: Apple Docs for SiriKit
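For the domains listed above, the extension implements one of the Intents handler protocols; a minimal sketch for the Messaging domain:

    import Intents

    // Minimal handler for the Messaging domain; a real extension would also
    // resolve recipients and content before handling.
    final class SendMessageHandler: NSObject, INSendMessageIntentHandling {
        func handle(intent: INSendMessageIntent,
                    completion: @escaping (INSendMessageIntentResponse) -> Void) {
            // Hand the message to your app's messaging service here (placeholder).
            completion(INSendMessageIntentResponse(code: .success, userActivity: nil))
        }
    }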
When the iPhone was first released, there was absolutely no public talk from Apple about custom app development. The delayed release of the SDK gave them plenty of time to get public feedback on the iPhone user experience and make the SDK ready for public use.
It seems likely that they're taking a similar approach with Siri.
Not yet. If you want it, file a feature request at bugreport.apple.com, and briefly describe what you want it for. The more people ask for it, the more likely it is to happen.
