iOS 15 has a new Shared with You feature in Apple's apps: Safari, Apple News, Music, Movies, etc. For example, if a friend shares an Apple News link with me in Messages, then when I open Apple News and scroll the home page I get a 'Shared with You' section with those articles. It also shows the name of the friend who shared each article with me.
Is there any API or resource I can use to provide the same experience in my app? I found https://developer.apple.com/videos/play/wwdc2021/10187/, but that video doesn't explain how to implement 'Shared with You'. Please share ideas for the implementation. Thanks in advance.
With iOS 16 it will be possible to implement Shared with You support for your app.
Check out the WWDC22 session "Add Shared with You to your app".
Well, I spent a bunch of time implementing Shared with You for iOS 16. There is not much information available beyond the WWDC video listed in the previous answer, but I did get it working. Here was my approach.
Make sure your device is running at least iOS 16.0.3. It didn't start working until I installed this version and did the rest of the following.
Follow these instructions: https://www.avanderlee.com/swift/shared-with-you/
When implementing universal links, you just have to return true (in your app delegate) once you recognize the universal link as one of the links from Messages that contains Shared with You content. You do NOT need to implement SWCollaborationMetadata; the Apple docs kind of imply that you might need to, but you do not.
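As a rough illustration only (the helper methods and the example.com host below are placeholders, not part of the Shared with You API), the universal-link handling can be as simple as:

import UIKit

class AppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication,
                     continue userActivity: NSUserActivity,
                     restorationHandler: @escaping ([UIUserActivityRestoring]?) -> Void) -> Bool {
        // Universal links arrive as a browsing-web user activity.
        guard userActivity.activityType == NSUserActivityTypeBrowsingWeb,
              let url = userActivity.webpageURL else {
            return false
        }
        if isSharedWithYouLink(url) {
            // Recognized as a link your app shared via Messages: returning true is enough.
            return true
        }
        return handleUniversalLink(url)   // your normal deep-link routing
    }

    private func isSharedWithYouLink(_ url: URL) -> Bool {
        // Placeholder: match against the links your app actually shares.
        return url.host == "example.com"
    }

    private func handleUniversalLink(_ url: URL) -> Bool {
        // Placeholder for your existing universal-link handling.
        return true
    }
}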
In the SharedWithYouMonitor (as referenced in item 2 above), set up an array that can hold all of your highlights. You will need this array in the view that displays the Shared with You content. Append each highlight to the array in the delegate method highlightCenter(_:highlightsDidChange:).
@Published var sharedItems = [SharedWithYouModel]()
I set up a simple model for the highlights, something like:
import Foundation
import SharedWithYou

@available(iOS 16.0, *)
struct SharedWithYouModel: Identifiable {
    var id = UUID()
    let swHighlight: SWHighlight
    let url: URL
}
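Putting that together, here is a minimal sketch of what the SharedWithYouMonitor could look like (SWHighlightCenter and its delegate method come from the SharedWithYou framework; the class and property names are just my own, based on the article in item 2):

import SwiftUI
import SharedWithYou

@available(iOS 16.0, *)
final class SharedWithYouMonitor: NSObject, ObservableObject, SWHighlightCenterDelegate {
    @Published var sharedItems = [SharedWithYouModel]()

    private let highlightCenter = SWHighlightCenter()

    override init() {
        super.init()
        highlightCenter.delegate = self
    }

    // Called by the system whenever the list of Shared with You highlights changes.
    func highlightCenter(_ highlightCenter: SWHighlightCenter,
                         highlightsDidChange highlights: [SWHighlight]) {
        sharedItems = highlights.map { SharedWithYouModel(swHighlight: $0, url: $0.url) }
    }
}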
In your view where you will show the SharedWithYou content, mark it as:
@available(iOS 16.0, *)
struct SharedWithYou: View {
    @Environment(\.horizontalSizeClass) var sizeClass
    @StateObject var monitor = SharedWithYouMonitor()
    var gridItemLayout = [GridItem(.adaptive(minimum: 100, maximum: 150), spacing: 30)]

    var body: some View {
        ZStack {
            ScrollView {
                Text("ContentOne")
                    .foregroundColor(Color(hex: 0x0DB9D6))
                LazyVGrid(columns: gridItemLayout, spacing: horizontalSpace) {
                    ForEach(monitor.sharedItems) { item in
                        SharedWithYouView(.........
Other: I used the SWAttributionView suggested in item 2. Make sure your users have Shared with You turned on for your app in Settings > Messages > Shared with You. You can send content to yourself in Messages to test. It seems that content received in Messages will only show up if the recipient pins the content in Messages and/or has the sender in their Contacts. I guess this is for privacy control.
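For reference, a rough sketch of how SWAttributionView can be wrapped for SwiftUI (the wrapper itself is my own; only SWAttributionView and its highlight property come from the framework):

import SwiftUI
import SharedWithYou

@available(iOS 16.0, *)
struct AttributionView: UIViewRepresentable {
    let highlight: SWHighlight

    func makeUIView(context: Context) -> SWAttributionView {
        let view = SWAttributionView()
        view.highlight = highlight   // shows who shared the item, like in Apple's apps
        return view
    }

    func updateUIView(_ uiView: SWAttributionView, context: Context) {
        uiView.highlight = highlight
    }
}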
I have a SwiftUI app (Xcode 14 / iOS 15+) that tries to receive .fmp12 files sent from the FileMaker app (using the share sheet). I got to the point where my app is shown in the share sheet and gets launched. What I cannot achieve is access to the file.
Following some tutorials and Apple's documentation, I added "com.filemaker.fmp12" as an imported type identifier in my Info.plist and added it to Document Types.
first strategy: DocumentGroup
To access the file I have first tried to use a DocumentGroup in SwiftUI based on Apple's documentation here: https://developer.apple.com/documentation/swiftui/documentgroup
@main
struct KioskBridgeApp: App {
    var body: some Scene {
        DocumentGroup(viewing: FMP12File.self) { file in
            KioskBridgeView(file: file.fileURL?.absoluteString ?? "undefined")
        }
    }
}
While this works when I send an fmp12 file with the files app, it does not work when I send it from FileMaker: Then it always starts with the Document Browser.
The document browser also opens when I start the app without sending anything to it and I could not find a hint how to suppress that.
So I developed the feeling that I might be on the wrong track entirely with DocumentGroup here, and so I tried strategy number 2:
second strategy: App delegate
Using an application delegate, as suggested here: https://stackoverflow.com/a/46572886, together with the adaptor described in SwiftUI app life cycle iOS14 where to put AppDelegate code? and https://developer.apple.com/documentation/swiftui/uiapplicationdelegateadaptor:
class MyAppDelegate: NSObject, UIApplicationDelegate {
    func application(_ application: UIApplication, open url: URL, sourceApplication: String?, annotation: Any) -> Bool {
        print("application opened with \(url.absoluteString)")
        return true
    }
}

@main
struct DelegateTestApp: App {
    @UIApplicationDelegateAdaptor(MyAppDelegate.self) var appDelegate

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}
But this delegate is never called, not even when I open my app using the files app.
I am fairly new to iOS development, so even after hours of reading I am not sure that either of those strategies is the right approach. Any help would be highly appreciated.
While I have not figured out how to make the first two strategies work, more research finally led me to the rather simple solution of .onOpenURL.
This was a useful source to get there: https://betterprogramming.pub/swiftui-3-ways-to-observe-apps-life-cycle-in-swiftui-e9be79e75fef.
So my code to catch the file sent to the app is now:
@main
struct KioskBridgeApp: App {
    @State var openedUrl: URL? = nil

    var body: some Scene {
        WindowGroup {
            KioskBridgeView(openedUrl: $openedUrl)
                .onOpenURL { url in
                    openedUrl = url
                }
        }
    }
}
Quite clear once one knows it. After that I also found a Stack Overflow question on the topic that I had overlooked so far: Handle incoming custom URL types with SwiftUI while app is closed.
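One more note: depending on how the file is handed over, the URL delivered to .onOpenURL may be security-scoped, so I read the file defensively like this (just a sketch, not verified against FileMaker specifically):

import Foundation

func readReceivedFile(at url: URL) -> Data? {
    // startAccessingSecurityScopedResource() returns false if the URL
    // is not security-scoped; in that case the file can be read directly.
    let needsScopedAccess = url.startAccessingSecurityScopedResource()
    defer {
        if needsScopedAccess {
            url.stopAccessingSecurityScopedResource()
        }
    }
    return try? Data(contentsOf: url)
}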
We used the following steps to integrate PiP (Picture in Picture) for a WebRTC video call:
We enabled the Audio, AirPlay, and Picture in Picture background mode capability in our project.
We added an entitlements file with the entitlement for accessing the camera while multitasking (see Accessing the Camera While Multitasking).
From the documentation link, we followed:
Provision Your App
After your account has permission to use the entitlement, you can create a new provisioning profile with it by following these steps:
Log in to your Apple Developer Account.
Go to Certificates, Identifiers & Profiles.
Generate a new provisioning profile for your app.
Select the Multitasking Camera Access entitlement from the additional entitlements for your account.
We also followed the documentation below, but we don't have any hint on how to add the video render layer view into this SampleBufferVideoCallView:
https://developer.apple.com/documentation/avkit/adopting_picture_in_picture_for_video_calls?changes=__8
Also, RTCMTLVideoView is backed by MTKView, which isn't supported, so we used WebRTC's default render view RTCEAGLVideoView, which renders into a GLKView.
Our PiP integration code with WebRTC (iOS, Swift):
class SampleBufferVideoCallView: UIView {
    override class var layerClass: AnyClass {
        AVSampleBufferDisplayLayer.self
    }

    var sampleBufferDisplayLayer: AVSampleBufferDisplayLayer {
        return layer as! AVSampleBufferDisplayLayer
    }
}

func startPIP() {
    if #available(iOS 15.0, *) {
        let sampleBufferVideoCallView = SampleBufferVideoCallView()

        let pipVideoCallViewController = AVPictureInPictureVideoCallViewController()
        pipVideoCallViewController.preferredContentSize = CGSize(width: 1080, height: 1920)
        pipVideoCallViewController.view.addSubview(sampleBufferVideoCallView)

        // Remote video is currently rendered with WebRTC's GL-based view.
        let remoteVideoRenderer = RTCEAGLVideoView()
        remoteVideoRenderer.contentMode = .scaleAspectFill
        remoteVideoRenderer.frame = viewUser.frame
        viewUser.addSubview(remoteVideoRenderer)

        let pipContentSource = AVPictureInPictureController.ContentSource(
            activeVideoCallSourceView: self.viewUser,
            contentViewController: pipVideoCallViewController)

        let pipController = AVPictureInPictureController(contentSource: pipContentSource)
        pipController.canStartPictureInPictureAutomaticallyFromInline = true
        pipController.delegate = self
    } else {
        // Fallback on earlier versions
    }
}
How can we add the viewUser GLKView into pipContentSource, and how can we integrate the remote video buffer view into SampleBufferVideoCallView?
Is it possible this way, or is there another way to render the video buffer into an AVSampleBufferDisplayLayer?
Apple Code-Level Support gave the following advice when asked about this problem:
In order to make recommendations, we'd need to know more about the code you’ve tried to render the video.
As discussed in the article you referred to, to provide PiP support you must first provide a source view to display inside the video-call view controller -- you need to add a UIView to AVPictureInPictureVideoCallViewController. The system supports displaying content from an AVPlayerLayer or AVSampleBufferDisplayLayer depending on your needs. MTKView/GLKView isn’t supported. Video-calling apps need to display the remote view, so use AVSampleBufferDisplayLayer to do so.
In order to handle the drawing in your source view, you could gain access to the buffer stream before it is turned into a GLKView, and feed it to the content of the AVPictureInPictureViewController. For example, you can create CVPixelBuffers from the video feed frames, then from those, create CMSampleBuffers. Once you have the CMSampleBuffers, you can begin providing these to the AVSampleBufferDisplayLayer for display. Have a look at the methods defined there to see how this is done. There's some archived ObjC sample code, AVGreenScreenPlayer, that you might look at to help you get started using AVSampleBufferDisplayLayer (note: it's Mac code, but the AVSampleBufferDisplayLayer APIs are the same across platforms).
In addition, for implementing PiP support you'll want to provide delegate methods for AVPictureInPictureControllerDelegate, and for AVSampleBufferDisplayLayer AVPictureInPictureSampleBufferPlaybackDelegate. See the recent WWDC video What's new in AVKit for more information about the AVPictureInPictureSampleBufferPlaybackDelegate delegates which are new in iOS 15.
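Based on that advice, here is a rough sketch of how the frames could be bridged. It assumes the Google WebRTC iOS SDK's RTCVideoRenderer protocol and that frames arrive as RTCCVPixelBuffer; the class name and the displayLayer property are my own:

import AVFoundation
import WebRTC

final class SampleBufferRenderer: NSObject, RTCVideoRenderer {
    weak var displayLayer: AVSampleBufferDisplayLayer?

    func setSize(_ size: CGSize) {}

    func renderFrame(_ frame: RTCVideoFrame?) {
        guard let pixelBuffer = (frame?.buffer as? RTCCVPixelBuffer)?.pixelBuffer,
              let layer = displayLayer else { return }

        // Wrap the CVPixelBuffer in a CMSampleBuffer so AVSampleBufferDisplayLayer can display it.
        var formatDescription: CMVideoFormatDescription?
        CMVideoFormatDescriptionCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                                     imageBuffer: pixelBuffer,
                                                     formatDescriptionOut: &formatDescription)
        guard let format = formatDescription else { return }

        var timing = CMSampleTimingInfo(duration: .invalid,
                                        presentationTimeStamp: CMClockGetTime(CMClockGetHostTimeClock()),
                                        decodeTimeStamp: .invalid)
        var sampleBuffer: CMSampleBuffer?
        CMSampleBufferCreateReadyWithImageBuffer(allocator: kCFAllocatorDefault,
                                                 imageBuffer: pixelBuffer,
                                                 formatDescription: format,
                                                 sampleTiming: &timing,
                                                 sampleBufferOut: &sampleBuffer)
        guard let buffer = sampleBuffer else { return }

        DispatchQueue.main.async {
            layer.enqueue(buffer)
        }
    }
}

An instance of this renderer would be added to the remote WebRTC video track in place of (or alongside) RTCEAGLVideoView, with displayLayer pointing at sampleBufferVideoCallView.sampleBufferDisplayLayer.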
Not sure if this will solve the problem, though.
I am looking to add multiple languages to my app, so I have created the different String files and added them to my localisations. I have a settings page in my app where I want users to be able to select the language the app is in, and have the whole app change.
VStack {
    Button("English") {
        // set app to English
    }
    Button("Francais") {
        // set app to French
    }
    Button("Cymru") {
        // set app to Welsh
    }
    .....
}
I've found the modifier .environment(\.locale, .init(identifier: "en")), but I believe this modifier needs to be added to every view, and I was wondering if there is an easier way to do this? I also want the language choice to be saved to UserDefaults.
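Something along these lines is what I'm imagining (just a sketch: the "appLanguage" key is a placeholder, and I'm not sure how far the environment locale reaches at runtime, e.g. for strings loaded manually from a Bundle):

import SwiftUI

@main
struct LocalizedApp: App {
    // @AppStorage persists the value in UserDefaults; "appLanguage" is a placeholder key.
    @AppStorage("appLanguage") private var appLanguage = "en"

    var body: some Scene {
        WindowGroup {
            ContentView()
                // Applying the modifier once at the root propagates the locale to every child view.
                .environment(\.locale, Locale(identifier: appLanguage))
        }
    }
}

struct ContentView: View {
    @AppStorage("appLanguage") private var appLanguage = "en"

    var body: some View {
        VStack {
            Button("English") { appLanguage = "en" }
            Button("Francais") { appLanguage = "fr" }
            Button("Cymru") { appLanguage = "cy" }
        }
    }
}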
I have created an instance of UIAccessibilityElement in order to provide a set of custom actions together with some additional information (i.e. accessibilityLabel + accessibilityHint).
The problem is that VoiceOver doesn't announce the existence of the custom actions. They are there, and they work, but they don't get announced. The custom actions' hints are not announced either.
Any ideas?
Code to generate the element is below:
private lazy var accessibilityOverviewElement: UIAccessibilityElement = {
    let element = UIAccessibilityElement(accessibilityContainer: self)
    element.accessibilityLabel = viewModel.accessibilityOverviewTitle
    element.accessibilityHint = viewModel.accessibilityOverviewHint
    element.isAccessibilityElement = true

    let close = UIAccessibilityCustomAction(
        name: viewModel.accessibilityCloseActionTitle,
        target: self,
        selector: #selector(self.accessibilityDidClose))
    close.accessibilityHint = viewModel.accessibilityCloseActionHint

    let expand = UIAccessibilityCustomAction(
        name: viewModel.accessibilityExpandActionTitle,
        target: self,
        selector: #selector(self.accessibilityDidExpand))
    expand.accessibilityHint = viewModel.accessibilityExpandActionHint

    element.accessibilityCustomActions = [close, expand]
    return element
}()
I compute the element's frame in viewDidLayoutSubviews()
override func viewDidLayoutSubviews() {
    super.viewDidLayoutSubviews()

    var frame = view.bounds
    frame.size.height = SleepAidMinifiedPlayerViewController.defaultHeight
    accessibilityOverviewElement.accessibilityFrameInContainerSpace = frame
}
Finally, I need to be able to enable/disable accessibility, since this view controller slides in from the bottom and hides, but it is not completely removed from the view hierarchy (so VoiceOver still focuses on its elements):
func setAccessibility(enabled isEnabled: Bool) {
    view.accessibilityElements = isEnabled ? [accessibilityOverviewElement, /* + other accessible elements */].compactMap { $0 } : []
}
Thanks!
Any ideas?
I already created a radar about this problem: VoiceOver doesn't read out the custom actions anymore - Nov 4, 2019 at 5:01 PM – FB7426771.
Description: "Natively, in iOS 13, VoiceOver doesn't announce available actions even if they're present: example in the alarms settings, select an alarm and no actions is read out (it's OK in iOS 12) while they exist.
Moreover, if I create an element in an app with custom actions, they won't be announced in iOS 13 but they can be used if I know they're here (up and down swipe to get them).
However, if i use an older app targeting iOS 12, my elements containing custom actions are perfectly spelled out with the "actions available" announced with an iOS 12 device while the iOS 13 device does announce them 'sometimes'.
Please correct this huge turning back in the next iOS 13.3 version because it's extremely penalizing for the VoiceOver users."
No answer since, but it's important that a solution be delivered in a future version: I'm looking forward to seeing this correction in the release notes.
However, your implementation itself should make your app work as desired; that's not the problem in my view ⇒ there are many useful examples (code + illustrations) if you need further explanations about some VoiceOver implementations.
Run your app under iOS 12 and notice that it works, while that's not the case under iOS 13. 😰
⚠️ ⬛️◼️🔳▪️ EDIT ▪️🔳◼️⬛️ ⚠️ (2020/03/17)
The problem is that VoiceOver doesn't announce the existence of custom actions. They are there, they work, but don't get announced. Also, custom actions' hint is not being announced as well.
Even though you didn't mention the iOS version you're working with, I think this is iOS 13, because this weird behavior was introduced quietly in that version: no WWDC videos or info on the Apple website. 😤
This dedicated a11y site mentioned this modification ⟹ "iOS 13 introduced a new custom actions behavior: the "actions available" announcement isn't always present anymore.
It was previously offered to every element containing custom actions but, now, it will occur when you navigate to another element that contains a different set of actions.
The purpose is to prevent repetitive announcements on elements where the same actions are present as the previous element." 🤓
Take a look at this SO answer that highlights a response from a Technical Support Incident about this subject. 😉
Conclusion: if you need the custom actions announced on every element where they're implemented, use iOS 12; otherwise you'll have to work with this new behavior, which wasn't explained anywhere and is definitely not efficient for VoiceOver users ⟹ Apple Technical Support claims that's the way it works from now on. 😰
⚠️ ⬛️◼️🔳▪️ EDIT ▪️🔳◼️⬛️ ⚠️ (2022/11/15)
I don't have this problem anymore, even in iOS 15. 🥳
If you're still in the same bad situation in iOS 16, I suggest checking that Settings > Accessibility > VoiceOver > Verbosity > Actions is set to Speak on your device to make it work as expected (⟹ source). 👍
However, I've had no news from Apple regarding my TSI. 😵💫
We would like to enable a feature that allows a model to be viewed in our ARKit app via a deep link from a web page.
Has anyone discovered a way to detect whether a device is ARKit compatible using the user agent string or any other browser-based mechanism?
Thanks!
Apple seems to use the following code to show/hide the "Visit this page on iOS 12 to try AR Quick Look" message on https://developer.apple.com/arkit/gallery/:
(function () {
    var isRelAR = false;
    var a = document.createElement('a');
    if (a.relList.supports('ar')) {
        isRelAR = true;
    }
    document.documentElement.classList.add(isRelAR ? 'relar' : 'no-relar');
})();
The interesting part of course being
var isRelAR = false;
var a = document.createElement('a');
if (a.relList.supports('ar')) {
    isRelAR = true;
}
Act accordingly based on the value of isRelAR.
Safari doesn’t expose any of the required hardware information for that.
If you already have a companion iOS app for your website, another option might be to still provide some non-AR experience for your content, so that the website has something to link to in all cases.
For example, AR furniture catalogs seem to be a thing now. But if the device isn’t ARKit capable, you could still provide a 3D model of each furniture piece linked from your website, letting the user spin it around and zoom in on it with touch gestures instead of placing it in AR.
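For instance, a minimal sketch of such a non-AR fallback viewer using SceneKit (all names here are placeholders; SCNView's built-in camera controls provide the spin and zoom gestures):

import SceneKit
import UIKit

final class ModelViewerViewController: UIViewController {
    private let modelURL: URL   // a bundled .usdz or .scn file (placeholder)

    init(modelURL: URL) {
        self.modelURL = modelURL
        super.init(nibName: nil, bundle: nil)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func viewDidLoad() {
        super.viewDidLoad()

        let sceneView = SCNView(frame: view.bounds)
        sceneView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
        sceneView.allowsCameraControl = true          // rotate and zoom with touch gestures
        sceneView.autoenablesDefaultLighting = true
        sceneView.scene = try? SCNScene(url: modelURL, options: nil)
        view.addSubview(sceneView)
    }
}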