A SwiftUI SiriKit NSUserActivity Tutorial

In this chapter, an example project will be created that uses the Photo domain of SiriKit to allow the user, via Siri voice commands, to search for and display a photo taken on a specified date. In the process of designing this app, the tutorial will also demonstrate the use of the NSUserActivity class to allow processing of the intent to be transferred from the Intents Extension to the main iOS app.

About the SiriKit Photo Search Project

The project created in this tutorial is going to take the form of an app that uses the SiriKit Photo Search domain to locate photos in the Photo library. Specifically, the app will allow the user to use Siri to search for photos taken on a specific date. In the event that photos matching the date criteria are found, the main app will be launched and used to display the first photo taken on the chosen day.

Creating the SiriPhoto Project

Begin this tutorial by launching Xcode and selecting the options to create a new Multiplatform App project named SiriPhoto.

Enabling the Siri Entitlement

Once the main project has been created the Siri entitlement must be enabled for the project. Select the SiriPhoto target located at the top of the Project Navigator panel (marked A in Figure 44-1) so that the main panel displays the project settings. From within this panel, select the Signing & Capabilities tab (B) followed by the SiriPhoto target entry (C):

Figure 44-1

Click on the “+ Capability” button (D) to display the dialog shown in Figure 44-2 below. Enter Siri into the filter bar, select the result and press the keyboard enter key to add the capability to the project:

Figure 44-2

Seeking Siri Authorization

In addition to enabling the Siri entitlement, the app must also seek authorization from the user to integrate the app with Siri. This is a two-step process which begins with the addition of an entry to the Info.plist file of the iOS app target for the NSSiriUsageDescription key with a corresponding string value explaining how the app makes use of Siri.

Select the Info.plist file located within the iOS folder in the project navigator panel as shown in Figure 44-3:

Figure 44-3

Once the file is loaded into the editor, locate the bottom entry in the list of properties and hover the mouse pointer over the item. When the ‘+’ button appears, click on it to add a new entry to the list. From within the drop-down list of available keys, locate and select the Privacy – Siri Usage Description option as shown in Figure 44-4:

Figure 44-4

Within the value field for the property, enter a message to display to the user when requesting permission to integrate the app with Siri. For example:

Siri support is used to search for and display photo library images.

Repeat the above steps to add a Privacy – Photo Library Usage Description entry set to the following message so that the app is able to request photo library access permission from the user:

This app accesses your photo library to search and display photos.
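
If the Info.plist file is viewed as source code, these two privacy entries (stored under the NSSiriUsageDescription and NSPhotoLibraryUsageDescription keys respectively) will appear along the following lines:

<key>NSSiriUsageDescription</key>
<string>Siri support is used to search for and display photo library images.</string>
<key>NSPhotoLibraryUsageDescription</key>
<string>This app accesses your photo library to search and display photos.</string>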

In addition to adding the Siri usage description key, a call also needs to be made to the requestSiriAuthorization() class method of the INPreferences class. Ideally, this call should be made the first time that the app runs, not only so that authorization can be obtained, but also so that the user learns that the app includes Siri support. For the purposes of this project, the call will be made within an onChange() modifier that monitors scenePhase changes within the app declaration located in the SiriPhotoApp.swift file as follows:

import SwiftUI
import Intents
 
@main
struct SiriPhotoApp: App {
    
    @Environment(\.scenePhase) private var scenePhase
    
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .onChange(of: scenePhase) { phase in
            INPreferences.requestSiriAuthorization({status in
                // Handle errors here
            })
        }
    }
}

Before proceeding, compile and run the app on an iOS device or simulator. When the app loads, a dialog will appear requesting authorization to use Siri. Select the OK button in the dialog to provide authorization.

Adding an Image Asset

The completed app will need an image to display when no matching photo is found for the search criteria. This image is named image-missing.png and can be found in the project_images folder of the source code download archive available from the following URL:

https://www.ebookfrenzy.com/code/SwiftUI-iOS14-CodeSamples.zip

Within the Xcode project navigator, locate and select the Assets.xcassets file located in the Shared folder. In a separate Finder window, locate the project_images folder from the sample code and drag and drop the image into the asset catalog as shown in Figure 44-5 below:

Figure 44-5

Adding the Intents Extension to the Project

With some of the initial work on the iOS app complete, it is now time to add the Intents Extension to the project. Select Xcode’s File -> New -> Target… menu option to display the template selection screen. From the range of available templates, select the Intents Extension option as shown in Figure 44-6:

Figure 44-6

With the Intents Extension template selected, click on the Next button and enter SiriPhotoIntent into the Product Name field. Before clicking on the Finish button, turn off the Include UI Extension option and make sure that the Starting Point is set to None since this extension will not be based on the Messaging domain. When prompted to do so, enable the build scheme for the Intents Extension by clicking on the Activate button in the resulting panel.

Reviewing the Default Intents Extension

The files for the Intents Extension are located in the SiriPhotoIntent folder which will now be accessible from within the Project Navigator panel. Within this folder are an Info.plist file and a file named IntentHandler.swift.

The IntentHandler.swift file contains the IntentHandler class declaration which currently only contains a stub handler() method.

Modifying the Supported Intents

Currently we have an app which is intended to search for photos but for which no supported intents have been declared. Clearly some changes need to be made to implement the required functionality.

The first step is to configure the Info.plist file for the SiriPhotoIntent extension. Select this file and unfold the NSExtension settings until the IntentsSupported array is visible:

Figure 44-7

Currently the array does not contain any supported intents. Add a photo search intent to the array by clicking on the + button indicated by the arrow in the above figure and entering INSearchForPhotosIntent into the newly created Item 0 value field. On completion of these steps the array should match that shown in Figure 44-8:

Figure 44-8
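
For reference, when the Info.plist file is viewed as source code, the modified IntentsSupported entry should read as follows:

<key>IntentsSupported</key>
<array>
    <string>INSearchForPhotosIntent</string>
</array>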

Modifying the IntentHandler Implementation

The IntentHandler class now needs to be updated to add support for Siri photo search intents. Edit the IntentHandler.swift file and change the class declaration so it reads as follows:

import Intents
import Photos
 
class IntentHandler: INExtension, INSearchForPhotosIntentHandling {
 
    override func handler(for intent: INIntent) -> Any {
        
        return self
    }
}

The only method currently implemented within the IntentHandler.swift file is the handler method. This method is the entry point into the extension and is called by SiriKit when the user indicates that the SiriPhoto app is to be used to perform a task. When calling this method, SiriKit expects in return a reference to the object responsible for handling the intent. Since this will be the responsibility of the IntentHandler class, the handler method simply returns a reference to itself.

Implementing the Resolve Methods

SiriKit is aware of a range of parameters which can be used to specify photo search criteria. These parameters consist of the photo creation date, the geographical location where the photo was taken, the people appearing in the photo and the album in which it resides. For each of these parameters, SiriKit will call a specific resolve method on the IntentHandler instance. Each method is passed the current intent object and is required to notify Siri whether or not the parameter is required and, if so, whether the intent contains a valid property value. The methods are also passed a completion handler reference which must be called to notify Siri of the response.

The first method called by Siri is the resolveDateCreated method which should now be implemented in the IntentHandler.swift file as follows:

func resolveDateCreated(for
    intent: INSearchForPhotosIntent,
    with completion: @escaping
        (INDateComponentsRangeResolutionResult) -> Void) {
 
    if let dateCreated = intent.dateCreated {
        completion(INDateComponentsRangeResolutionResult.success(
            with: dateCreated))
    } else {
        completion(INDateComponentsRangeResolutionResult.needsValue())
    }
}

The method verifies that the dateCreated property of the intent object contains a value. In the event that it does, the completion handler is called indicating to Siri that the date requirement has been successfully met within the intent. In this situation, Siri will call the next resolve method in the sequence.

If no date has been provided the completion handler is called indicating the property is still needed. On receiving this response, Siri will ask the user to provide a date for the photo search. This process will repeat until either a date is provided or the user abandons the Siri session.

The SiriPhoto app is only able to search for photos by date. The remaining resolver methods can, therefore, be implemented simply to return notRequired results to Siri. This will let Siri know that values for these parameters do not need to be obtained from the user. Remaining within the IntentHandler.swift file, implement these methods as follows:

func resolveAlbumName(for intent: INSearchForPhotosIntent, 
    with completion: @escaping (INStringResolutionResult) -> Void) {
    completion(INStringResolutionResult.notRequired())
}
 
func resolvePeopleInPhoto(for intent: 
     INSearchForPhotosIntent, with completion: @escaping ([INPersonResolutionResult]) -> Void) {
    completion([INPersonResolutionResult.notRequired()])
}
 
func resolveLocationCreated(for intent: 
    INSearchForPhotosIntent, with completion: @escaping (INPlacemarkResolutionResult) -> Void) {
        completion(INPlacemarkResolutionResult.notRequired())
}

With these methods implemented, the resolution phase of the intent handling process is now complete.

Implementing the Confirmation Method

When Siri has gathered the necessary information from the user, a call is made to the confirm method of the intent handler instance. The purpose of this call is to provide the handler with an opportunity to check that everything is ready to handle the intent. In the case of the SiriPhoto app, there are no special requirements so the method can be implemented to reply with a ready status:

func confirm(intent: INSearchForPhotosIntent, 
    completion: @escaping (INSearchForPhotosIntentResponse) -> Void)
{
    let response = INSearchForPhotosIntentResponse(code: .ready, 
        userActivity: nil)
    completion(response)
}

Handling the Intent

The next step in implementing the extension is to handle the intent. After the confirm method indicates that the extension is ready, Siri calls the handle method. This method is, once again, passed the intent object and a completion handler to be called when the intent has been handled by the extension. Implement this method now so that it reads as follows:

func handle(intent: INSearchForPhotosIntent, completion: @escaping
    (INSearchForPhotosIntentResponse) -> Void) {
    
    let activityType = "com.ebookfrenzy.siriphotointent"
    let activity = NSUserActivity(activityType: activityType)
    
    let response = INSearchForPhotosIntentResponse(code:
        INSearchForPhotosIntentResponseCode.continueInApp,
                                             userActivity: activity)
    
    if intent.dateCreated != nil {
        let calendar = Calendar(identifier: .gregorian)
        
        if let startComponents = intent.dateCreated?.startDateComponents,
            let endComponents = intent.dateCreated?.endDateComponents {
            
            if let startDate = calendar.date(from:
                startComponents),
                let endDate = calendar.date(from:
                    endComponents) {
                
                response.searchResultsCount = 
                   photoSearchFrom(startDate, to: endDate)
            }
        }
    }
    completion(response)
}

The above code requires some explanation. The method is responsible for constructing the intent response object containing the NSUserActivity object which will be handed off to the SiriPhoto app. The method begins by creating a new NSUserActivity instance configured with a type as follows:

let activityType = "com.ebookfrenzy.siriphotointent"
let activity = NSUserActivity(activityType: activityType)

The activity type can be any string value but generally takes the form of the app or extension name and company reverse domain name. Later in the chapter, this type name will need to be added as a supported activity type to the Info.plist file for the SiriPhoto app and referenced in the App declaration so that SiriPhoto knows which intent triggered the app launch.

Next, the method creates a new intent response instance and configures it with a code to let Siri know that the intent handling will be continued within the main SiriPhoto app. The intent response is also initialized with the NSUserActivity instance created previously:

let response = INSearchForPhotosIntentResponse(code:
                    INSearchForPhotosIntentResponseCode.continueInApp,
                               userActivity: activity)

The code then converts the start and end dates from DateComponents objects to Date objects and calls a method named photoSearchFrom(to:) to confirm that photo matches are available for the specified date range. The photoSearchFrom(to:) method (which will be implemented next) returns a count of the matching photos. This count is then assigned to the searchResultsCount property of the response object, which is then returned to Siri via the completion handler:

    if let startComponents = intent.dateCreated?.startDateComponents,
        let endComponents = intent.dateCreated?.endDateComponents {

        if let startDate = calendar.date(from: startComponents),
            let endDate = calendar.date(from: endComponents) {

            response.searchResultsCount = photoSearchFrom(startDate,
                                                          to: endDate)
        }
    }
    completion(response)

If the extension returns a zero count via the searchResultsCount property of the response object, Siri will notify the user that no photos matched the search criteria. If one or more photo matches were found, Siri will launch the main SiriPhoto app and pass it the NSUserActivity object.

The final step in implementing the extension is to add the photoSearchFrom(to:) method to the IntentHandler.swift file:

func photoSearchFrom(_ startDate: Date, to endDate: Date) -> Int {
 
    let fetchOptions = PHFetchOptions()
 
    fetchOptions.predicate = NSPredicate(format: "creationDate > %@ AND creationDate < %@", startDate as CVarArg, endDate as CVarArg)
    let fetchResult = PHAsset.fetchAssets(with: PHAssetMediaType.image, 
                           options: fetchOptions)
    return fetchResult.count
}

The method makes use of the standard iOS Photos Framework to perform a search of the Photo library. It begins by creating a PHFetchOptions object. A predicate is then initialized and assigned to the fetchOptions instance specifying that the search is looking for photos taken between the start and end dates. Finally, the search for matching images is initiated and the resulting count of matching photos is returned.

Testing the App

Though there is still some work to be completed for the main SiriPhoto app, the Siri extension functionality is now ready to be tested. Within Xcode, make sure that SiriPhotoIntent is selected as the current target and click on the run button. When prompted for a host app, select Siri and click the run button. When Siri has started listening, say the following:

“Find a photo with SiriPhoto”

Siri will respond by asking for the day for which you would like to find a photo. After you specify a date, Siri will either launch the SiriPhoto app if photos exist for that day, or state that no photos could be found. Note that the first time a photo is requested, the privacy dialog will appear seeking permission to access the photo library.

Provide permission when prompted and then repeat the photo search request.

Adding a Data Class to SiriPhoto

When SiriKit launches the SiriPhoto app in response to a successful photo search, it will pass the app an NSUserActivity instance. The app will need to handle this activity and use the intent response it contains to extract the matching photo from the library. The photo image will, in turn, need to be stored as a published observable property so that the content view is always displaying the latest photo. These tasks will be performed in a new Swift class declaration named PhotoHandler.

Add this new class to the project by right-clicking on the Shared folder in the project navigator panel and selecting the New File… menu option. In the template selection panel, choose the Swift File option before clicking on the Next button. Name the new class PhotoHandler and click on the Create button. With the PhotoHandler.swift file loaded into the code editor, modify it as follows:

import SwiftUI
import Combine
import Intents
import Photos
 
class PhotoHandler: ObservableObject {
    
    @Published var image: UIImage?
    var userActivity: NSUserActivity
    
    init (userActivity: NSUserActivity) {
        
        self.userActivity = userActivity
        self.image = UIImage(named: "image-missing")
        
    }
}

The above changes declare an observable class containing UIImage and NSUserActivity properties. The image property is declared as being published and will be observed by the content view later in the tutorial.

The class initializer stores the NSUserActivity instance passed through when the class is instantiated and assigns the missing image icon to the image property so that it will be displayed if there is no matching image from SiriKit.

Next, the class needs a method which can be called by the app to extract the photo from the library. Remaining in the PhotoHandler.swift file, add this method to the class as follows:

func handleActivity() {
    
    let intent = userActivity.interaction?.intent
        as! INSearchForPhotosIntent
    
    if (intent.dateCreated?.startDateComponents) != nil {
        let calendar = Calendar(identifier: .gregorian)
        let startDate = calendar.date(from:
            (intent.dateCreated?.startDateComponents)!)
        let endDate = calendar.date(from:
            (intent.dateCreated?.endDateComponents)!)
        getPhoto(startDate!, endDate!)
    }
}

The handleActivity() method extracts the intent from the user activity object and then converts the start and end dates to Date objects. These dates are then passed to the getPhoto() method which now also needs to be added to the class:

func getPhoto(_ startDate: Date, _ endDate: Date){
    
    let fetchOptions = PHFetchOptions()
    
    fetchOptions.predicate = NSPredicate(
         format: "creationDate > %@ AND creationDate < %@", 
                  startDate as CVarArg, endDate as CVarArg)
    let fetchResult = PHAsset.fetchAssets(with:
        PHAssetMediaType.image, options: fetchOptions)
    
    let imgManager = PHImageManager.default()
    
    if let firstObject = fetchResult.firstObject {
        imgManager.requestImage(for: firstObject as PHAsset,
                                targetSize: CGSize(width: 500, 
                                                    height: 500),
                                contentMode: 
                                     PHImageContentMode.aspectFill,
                                options: nil,
                                resultHandler: { (image, _) in
                                    self.image = image
        })
    }
}

The getPhoto() method performs the same steps used by the intent handler to search the Photo library based on the search date parameters. Once the search results have returned, however, the PHImageManager instance is used to retrieve the image from the library and assign it to the published image variable.

Designing the Content View

The user interface for the app is going to consist of a single Image view on which will be displayed the first photo taken on the day chosen by the user via Siri voice commands. Edit the ContentView.swift file and modify it so that it reads as follows:

import SwiftUI
 
struct ContentView: View {
 
    @StateObject var photoHandler: PhotoHandler
    
    var body: some View {
        Image(uiImage: photoHandler.image!)
            .resizable()
            .aspectRatio(contentMode: .fit)
            .padding()
    }
}
 
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView(photoHandler: PhotoHandler(userActivity: 
              NSUserActivity(activityType: "Placeholder")))
    }
}

The changes simply add a PhotoHandler state object variable declaration, the image property of which is used to display an image on an Image view. The preview declaration is then adapted to pass a PhotoHandler instance to the content view initialized with a placeholder NSUserActivity object. Steps also need to be taken to pass a placeholder PhotoHandler instance to the content view within the SiriPhotoApp.swift file as follows:

import SwiftUI
import Intents
 
@main
struct SiriPhotoApp: App {
 
    @Environment(\.scenePhase) private var scenePhase
    var photoHandler: PhotoHandler = 
        PhotoHandler(userActivity: NSUserActivity(activityType: "Placeholder"))
    
    var body: some Scene {
        WindowGroup {
            ContentView(photoHandler: photoHandler)
        }
        .onChange(of: scenePhase) { phase in
            INPreferences.requestSiriAuthorization({status in
                // Handle errors here
            })
        }
    }
}

When previewed, the ContentView layout should be rendered as shown in the figure below:

Figure 44-9

Adding Supported Activity Types to SiriPhoto

When the intent handler was implemented earlier in the chapter, the NSUserActivity object containing the photo search information was configured with an activity type string. In order for the SiriPhoto app to receive the activity, the type must be declared using the NSUserActivityTypes property in the app’s iOS Info.plist file. Within the project navigator panel, select the Info.plist file located in the iOS folder. Hover the mouse pointer over the last entry in the property list and click on the ‘+’ button to add a new property. In the Key field, enter NSUserActivityTypes and change the Type setting to Array as shown in Figure 44-10:

Figure 44-10

Click on the ‘+’ button indicated by the arrow above to add a new item to the array. Set the value for Item 0 to com.ebookfrenzy.siriphotointent so that it matches the type assigned to the user activity instance:

Figure 44-11
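
If the Info.plist file is viewed as source code, the new entry will appear as follows:

<key>NSUserActivityTypes</key>
<array>
    <string>com.ebookfrenzy.siriphotointent</string>
</array>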

Handling the NSUserActivity Object

The intent handler in the extension has instructed Siri to continue the intent handling process by launching the main SiriPhoto app. When the app is launched by Siri, it will be passed the NSUserActivity object for the session containing the intent object. This activity can be accessed from within the App declaration by adding the onContinueUserActivity() modifier to the ContentView, passing through the activity type and defining the actions to be performed. Within the SiriPhotoApp.swift file, implement these changes as follows:

import SwiftUI
import Intents
 
@main
struct SiriPhotoApp: App {
    
    @Environment(\.scenePhase) private var scenePhase
    var photoHandler: PhotoHandler = PhotoHandler(userActivity: 
        NSUserActivity(activityType: "Placeholder"))
    
    var body: some Scene {
        WindowGroup {
            ContentView(photoHandler: photoHandler)
                .onContinueUserActivity(
                       "com.ebookfrenzy.siriphotointent", 
                perform: { userActivity in
                    photoHandler.userActivity = userActivity
                    photoHandler.handleActivity()
                })
        }
.
.

The declaration begins by creating a placeholder PhotoHandler instance which can be passed to the ContentView in the event that the app is not launched by a supported activity type, for example when the user taps the app icon on the device home screen.

Next, the onContinueUserActivity() modifier is configured to only detect the activity type associated with the SiriPhotoIntent. If the type is detected, the NSUserActivity object passed to the app is assigned to the placeholder PhotoHandler instance and the handleActivity() method called to fetch the photo from the library. Because the content view is observing the image property, the Image view will update to display the extracted photo image.

Testing the Completed App

Run the SiriPhotoIntent extension, perform a photo search and, assuming photos are available for the selected day, wait for the main SiriPhoto app to load. When the app has loaded, the first photo taken on the specified date should appear within the Image view:

Figure 44-12

Summary

This chapter has worked through the creation of a simple app designed to use SiriKit to locate a photo taken on a particular date. The example has demonstrated the creation of an Intents Extension and the implementation of the intent handler methods necessary to interact with the Siri environment, including resolving missing parameters in the Siri intent. The project also explored the use of the NSUserActivity class to transfer the intent from the extension to the main iOS app.

Customizing the SiriKit Intent User Interface

Each SiriKit domain will default to a standard user interface layout to present information to the user during the Siri session. In the previous chapter, for example, the standard user interface was used by SiriKit to display the message recipients and content to the user before sending the message. The default appearance can, however, be customized by making use of an Intent UI app extension. This UI Extension provides a way to control the appearance of information when it is displayed within the Siri interface. It also allows an extension to present additional information that would not normally be displayed by Siri or to present information using a visual style that reflects the design theme of the main app.

Adding the Intents UI Extension

When the Intents Extension was added to the SiriDemo project in the previous chapter, the option to include an Intents UI Extension was disabled. Now that we are ready to create a customized user interface for the intent, select the Xcode File -> New -> Target… menu option and add an Intents UI Extension to the project. Name the product SiriDemoIntentUI and, when prompted to do so, activate the build scheme for the new extension.

Modifying the UI Extension

SiriKit provides two mechanisms for performing this customization, each of which involves implementing a method in the intent UI view controller class file. The simpler and less flexible option involves the use of the configure method. For greater control, the configureView method is available.

Using the configure Method

The files for the Intent UI Extension added above can be found within the Project navigator panel under the SiriDemoIntentUI folder.

Included within the SiriDemoIntentUI extension is a storyboard file named MainInterface.storyboard. For those unfamiliar with how user interfaces were built prior to the introduction of SwiftUI, this is an Interface Builder file. When the configure method is used to customize the user interface, this scene is used to display additional content which will appear directly above the standard SiriKit provided UI content. This layout is sometimes referred to as the Siri Snippet.

Although not visible by default, at the top of the message panel presented by Siri is the area represented by the UI Extension. Specifically, this displays the scene defined in the MainInterface.storyboard file of the SiriDemoIntentUI extension folder. The lower section of the panel is the default user interface provided by Siri for this particular SiriKit domain.

To provide a custom user interface using the UI Extension, the user interface needs to be implemented in the MainInterface.storyboard file and the configure method added to the IntentViewController.swift file. The IntentViewController class in this file is a subclass of UIViewController and configured such that it implements the INUIHostedViewControlling protocol.

The UI Extension is only used when information is being presented to the user in relation to an intent type that has been declared as supported in the UI Extension’s Info.plist file. When the extension is used, the configure method of the IntentViewController is called and passed an INInteraction object containing both the NSUserActivity and intent objects associated with the current Siri session. This allows context information about the session to be extracted and displayed to the user via the custom user interface defined in the MainInterface.storyboard file.

To add content above the “To:” line, therefore, we just need to implement the configure method and add some views to the UIView instance in the storyboard file. These views can be added either via Interface Builder or programmatically with the configure method.
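
As an illustration, a minimal configure method implementation might take the following general form. This is only a sketch and assumes that a Label has already been added to the storyboard scene and connected to an outlet named contentLabel:

func configure(with interaction: INInteraction,
               context: INUIHostedViewContext,
               completion: @escaping (CGSize) -> Void) {
 
    // Extract the intent from the interaction and use it to populate the custom view
    if let intent = interaction.intent as? INSendMessageIntent {
        contentLabel.text = intent.content
    }
 
    // Tell Siri how much space the custom content needs above the default UI
    completion(CGSize(width: 300, height: 70))
}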

For more advanced configuration, however, the configureView() approach provides far greater flexibility, and is the focus of this chapter.

Using the configureView Method

Unlike the configure method, the configureView method allows each section of the default user interface to be replaced with custom content and view layout.

SiriKit considers the default layout to be a vertical stack in which each row is represented by a parameter. For each layer of the stack (starting at the top and finishing at the bottom of the layout) the configureView method is called, passed information about the corresponding parameters and given the opportunity to provide a custom layout to be displayed within the corresponding stack row of the Siri user interface. The method is also passed a completion handler to be called with the appropriate configuration information to be passed back to Siri.

The parameters passed to the method take the form of INParameter instances. It is the responsibility of the configureView method to find out if a parameter is one for which it wants to provide a custom layout. It does this by creating local INParameter instances of the type it is interested in and comparing these to the parameters passed to the method. Parameter instances are created by combining the intent class type with a specific key path representing the parameter (each type of intent has its own set of path keys which can be found in the documentation for that class). If the method needs to confirm that the passed parameter relates to the content of a send message intent, for example, the code would read as follows:

func configureView(for parameters: Set<INParameter>, of interaction: 
   INInteraction, interactiveBehavior: INUIInteractiveBehavior, context: 
    INUIHostedViewContext, completion: @escaping (Bool, Set<INParameter>, 
      CGSize) -> Void) {
 
    let content = INParameter(for: INSendMessageIntent.self, 
               keyPath: #keyPath(INSendMessageIntent.content))
 
    if parameters == [content] {
       // Configure ViewController before calling completion handler
   }
.
.
}

When creating a custom layout, it is likely that the method will need to access the data contained within the parameter. In the above code, for example, it might be useful to extract the message content from the parameter and incorporate it into the custom layout. This is achieved by calling the parameterValue method of the INInteraction object which is also passed to the configureView method. Each parameter type has associated with it a set of properties. In this case, the property for the message content is named, appropriately enough, content and can be accessed as follows:

.
.
let content = INParameter(for: INSendMessageIntent.self, 
               keyPath: #keyPath(INSendMessageIntent.content))
 
if parameters == [content] {
   let contentString = interaction.parameterValue(for: content)
}
.
.

When the configureView method is ready to provide Siri with a custom layout, it calls the provided completion handler, passing through a Boolean true value, the original parameters and a CGSize object defining the size of the layout as it is to appear in the corresponding row of the Siri user interface stack, for example:

completion(true, parameters, size)

If the default Siri content is to be displayed for the specified parameters instead of a custom user interface, the completion handler is called with a false value and a zero CGSize object:

completion(false, parameters, CGSize.zero)

In addition to calling the configureView method for each parameter, Siri will first make a call to the method to request a configuration for no parameters. By default, the method should check for this condition and call the completion handler as follows:

if parameters.isEmpty {
    completion(false, [], CGSize.zero)
}

The foundation for the custom user interface for each parameter is the View contained within the intent UI MainInterface.storyboard file. Once the configureView method has identified the parameters it can dynamically add views to the layout, or make changes to existing views contained within the scene.
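
For example, a view could be added to the scene programmatically from within the configureView method. The following is a minimal sketch in which the frame values and label text are arbitrary:

let label = UILabel(frame: CGRect(x: 10, y: 10, width: 250, height: 21))
label.text = "Custom Siri snippet content"
label.textColor = UIColor.white
// self.view is the root view defined in the MainInterface.storyboard scene
self.view.addSubview(label)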

Designing the Siri Snippet

The previous section covered a considerable amount of information, much of which will become clearer by working through an example.

Begin by selecting the MainInterface.storyboard file belonging to the SiriDemoIntentUI extension. While future releases of Xcode will hopefully allow the snippet to be declared using SwiftUI, this currently involves working with Interface Builder to add components, configure layout constraints and set up outlets.

The first step is to add a Label to the layout canvas. Display the Library by clicking on the button marked A in Figure 43-1 below and drag and drop a Label object from the Library (B) onto the layout canvas as indicated by the arrow:

Figure 43-1

Next, the Label needs to be constrained so that it has a 5-point margin between it and the leading, trailing and top edges of the parent view. With the Label selected in the canvas, click on the Add New Constraints button located in the bottom right-hand corner of the editor to display the menu shown in Figure 43-2 below:

Figure 43-2

Enter 5 into the top, left and right boxes and click on the I-beam icons next to each value so that they are displayed in solid red instead of dashed lines before clicking on the Add 3 Constraints button.

Before proceeding to the next step, establish an outlet connection from the Label component to a variable in the IntentViewController.swift file named contentLabel. This will allow the view controller to change the text displayed on the Label to reflect the intent content parameter. This is achieved using the Assistant Editor which is displayed by selecting the Xcode Editor -> Assistant menu option. Once displayed, Ctrl-click on the Label in the canvas and drag the resulting line to a position in the Assistant Editor immediately above the viewDidLoad() declaration:

Figure 43-3

On releasing the line, the dialog shown in Figure 43-4 will appear. Enter contentLabel into the Name field and click on Connect to establish the outlet.

Figure 43-4

Ctrl-click on the snippet background view and drag to immediately beneath the newly declared contentLabel outlet, this time creating an outlet named contentView:

Figure 43-5

On completion of these steps, the outlets should appear in the IntentViewController.swift file as follows:

class IntentViewController: UIViewController, INUIHostedViewControlling {
    
    @IBOutlet weak var contentLabel: UILabel!
    @IBOutlet weak var contentView: UIView!
.
.

Implementing a configureView Method

Next, edit the configureView method located in the IntentViewController.swift file to extract the content and recipients from the intent, and to modify the Siri snippet for the content parameter as follows:

func configureView(for parameters: Set<INParameter>, of interaction: 
    INInteraction, interactiveBehavior: INUIInteractiveBehavior, context: 
    INUIHostedViewContext, completion: @escaping (Bool, Set<INParameter>, 
     CGSize) -> Void) {
 
    var size = CGSize.zero
    
    let content = INParameter(for: INSendMessageIntent.self, keyPath:
        #keyPath(INSendMessageIntent.content))
 
    let recipients = INParameter(for: INSendMessageIntent.self,
                        keyPath: #keyPath(INSendMessageIntent.recipients))
    
    let recipientsValue = interaction.parameterValue(
           for: recipients) as! Array<INPerson>
 
    if parameters == [content] {
        let contentValue = interaction.parameterValue(for: content)
        
        self.contentLabel.text = contentValue as? String
        self.contentLabel.textColor = UIColor.white
        self.contentView.backgroundColor = UIColor.brown
        size = CGSize(width: 100, height: 70)
    }
    completion(true, parameters, size)
}

The code begins by declaring a variable in which to store the required size of the Siri snippet before the content and recipients parameters are extracted from the intent. If the parameters include message content, it is applied to the Label widget in the snippet. The background of the snippet view is set to brown, the text color to white, and the dimensions to 100 x 70 points.

The recipients parameter takes the form of an array of INPerson objects, from which the recipients’ display names can be extracted. Code now needs to be added to iterate through each recipient in the array, adding each name to a string to be displayed on the contentLabel view. Code will also be added to use a different font and text color on the label and to change the background color of the view. Since the recipients list requires less space, the height of the view is set to 30 points:

.
.
    if parameters == [content] {
        let contentValue = interaction.parameterValue(for: content)
        self.contentLabel.text = contentValue as? String
        self.contentView.backgroundColor = UIColor.brown
        size = CGSize(width: 100, height: 70)      
    } else if recipientsValue.count > 0 {
        var recipientStr = "To:"
        var first = true
            
        for name in recipientsValue {
            let separator = first ? " " : ", "
            first = false
            recipientStr += separator + name.displayName
        }
            
        self.contentLabel.font = UIFont(name: "Arial-BoldItalicMT", size: 20.0)
        self.contentLabel.text = recipientStr
        self.contentLabel.textColor = UIColor.white
        self.contentView.backgroundColor = UIColor.blue
        size = CGSize(width: 100, height: 30)
    } else if parameters.isEmpty {
        completion(false, [], CGSize.zero)
        return
    }
    completion(true, parameters, size)
.
.

Note that the above additions to the configureView() method also include a check for empty parameters, in which case a false value is returned together with a zeroed CGSize object indicating that there is nothing to display.

Testing the Extension

To test the extension, begin by changing the run target menu to the SiriDemoIntentUI target as shown in Figure 43-6 below:

Figure 43-6

Next, display the menu again, this time selecting the Edit Scheme… menu option:

Figure 43-7

In the resulting dialog select the Run option from the left-hand panel and enter the following into the Siri Intent Query box before clicking on the Close button:

Use SiriDemo to tell John and Kate I’ll be 10 minutes late.

Compile and run the Intents UI Extension and verify that the recipient row now appears with a blue background and a 30-point height and uses a larger italic font, while the content appears with a brown background and a 70-point height:

Figure 43-8

Summary

While the default user interface provided by SiriKit for the various domains will be adequate for some apps, most intent extensions will need to be customized to present information in a way that matches the style and theme of the associated app, or to provide additional information not supported by the default layout. The default UI can be replaced by adding an Intent UI extension to the app project. The UI extension provides two options for configuring the user interface presented by Siri. The simpler of the two involves the use of the configure method to present a custom view above the default Siri user interface layout. A more flexible approach involves the implementation of the configureView method. SiriKit associates each line of information displayed in the default layout with a parameter. When implemented, the configureView method will be called for each of these parameters and provided with the option to return a custom View containing the layout and information to be used in place of the default user interface element.

A SwiftUI SiriKit Tutorial

The previous chapter covered much of the theory associated with integrating Siri into an iOS app. This chapter will review the example Siri messaging extension that is created by Xcode when a new Intents Extension is added to a project. This will not only show a practical implementation of the topics covered in the previous chapter, but will also provide some more detail on how the integration works. The next chapter will cover the steps required to make use of a UI Extension within an app project.

Creating the Example Project

Begin by launching Xcode and creating a new Multiplatform App project named SiriDemo.

Enabling the Siri Entitlement

Once the main project has been created the Siri entitlement must be enabled for the project. Select the SiriDemo target located at the top of the Project Navigator panel (marked A in Figure 42-1) so that the main panel displays the project settings. From within this panel, select the Signing & Capabilities tab (B) followed by the SiriDemo target entry (C):

Figure 42-1

Click on the “+ Capability” button (D) to display the dialog shown in Figure 42-2. Enter Siri into the filter bar, select the result and press the keyboard enter key to add the capability to the project:

Figure 42-2

If Siri is not listed as an option, you will need to pay to join the Apple Developer program as outlined in the chapter entitled “Joining the Apple Developer Program”.

Seeking Siri Authorization

In addition to enabling the Siri entitlement, the app must also seek authorization from the user to integrate the app with Siri. This is a two-step process which begins with the addition of an entry to the Info.plist file of the iOS app target for the NSSiriUsageDescription key with a corresponding string value explaining how the app makes use of Siri.

Select the Info.plist file located within the iOS folder in the project navigator panel as shown in Figure 42-3:

Figure 42-3

Once the file is loaded into the editor, locate the bottom entry in the list of properties and hover the mouse pointer over the item. When the plus button appears, click on it to add a new entry to the list. From within the drop-down list of available keys, locate and select the Privacy – Siri Usage Description option as shown in Figure 42-4:

Figure 42-4

Within the value field for the property, enter a message to display to the user when requesting permission to integrate the app with Siri. For example:

Siri support is used to send and review messages.

In addition to adding the Siri usage description key, a call also needs to be made to the requestSiriAuthorization() class method of the INPreferences class. Ideally, this call should be made the first time that the app runs, not only so that authorization can be obtained, but also so that the user learns that the app includes Siri support. For the purposes of this project, the call will be made within the onChange() modifier based on the scenePhase changes within the app declaration located in the SiriDemoApp.swift file as follows:

import SwiftUI
import Intents
 
@main
struct SiriDemoApp: App {
    
    @Environment(\.scenePhase) private var scenePhase
    
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .onChange(of: scenePhase) { phase in
            INPreferences.requestSiriAuthorization({status in
                // Handle errors here
            })
        }
    }
}

Before proceeding, compile and run the app on an iOS device or simulator. When the app loads, a dialog will appear requesting authorization to use Siri. Select the OK button in the dialog to provide authorization.

Adding the Intents Extension

The next step is to add the Intents Extension to the project ready to begin the SiriKit integration. Select the Xcode File -> New -> Target… menu option and add an Intents Extension to the project. Name the product SiriDemoIntent, set the Starting Point menu to Messaging and make sure that the Include UI Extension option is turned off (this will be added in the next chapter) before clicking on the Finish button. When prompted to do so, activate the build scheme for the Intents Extension.

Supported Intents

In order to work with Siri, an extension must specify the intent types it is able to support. These declarations are made in the Info.plist files of the extension folders. Within the Project Navigator panel, select the Info.plist file located in the SiriDemoIntent folder and unfold the NSExtension -> NSExtensionAttributes section. This will show that the IntentsSupported key has been assigned an array of intent class names:

Figure 42-5

Note that entries are available for intents that are supported and intents that are supported but restricted when the lock screen is enabled. It might be wise, for example, for a payment-based intent to be restricted when the screen is locked. As currently configured, the extension supports all of the messaging intent types without restrictions. To support a different domain, change these intents or add additional intents accordingly. For example, a photo search extension might only need to specify INSearchForPhotosIntent as a supported intent. When the Intents UI Extension is added in the next chapter, it too will contain an Info.plist file with these supported intent value declarations. Note that the intents supported by the Intents UI Extension can be a subset of those declared for the Intents Extension. This allows the UI Extension to be used only for certain intent types.
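
Viewed as property list source code, these settings will typically resemble the following (the exact entries may vary slightly between Xcode versions):

<key>IntentsRestrictedWhileLocked</key>
<array/>
<key>IntentsSupported</key>
<array>
    <string>INSendMessageIntent</string>
    <string>INSearchForMessagesIntent</string>
    <string>INSetMessageAttributeIntent</string>
</array>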

Trying the Example

Before exploring the structure of the project it is worth running the app and experiencing the Siri integration. The example simulates searching for and sending messages, so can be safely used without any messages actually being sent.

Make sure that the SiriDemoIntent option is selected as the run target in the toolbar as illustrated in Figure 42-6 and click on the run button.

Figure 42-6

When prompted, select Siri as the app within which the extension is to run. When Siri launches experiment with phrases such as the following:

“Send a message with SiriDemo.”

“Send a message to John with SiriDemo.”

“Use SiriDemo to say Hello to John and Kate.”

“Find Messages with SiriDemo.”

If Siri indicates that SiriDemo has not yet been set up, tap the button located on the Siri screen to open the SiriDemo app. Once the app has launched, press and hold the home button to relaunch Siri and try the above phrases again.

In each case, all of the work involved in understanding the phrases and converting them into structured representations of the request is performed by Siri. All the intent handler needs to do is work with the resulting intent object.

Specifying a Default Phrase

A useful option when repeatedly testing SiriKit behavior is to configure a phrase to be passed to Siri each time the app is launched from within Xcode. This avoids having to repeatedly speak to Siri each time the app is relaunched. To specify the test phrase, select the SiriDemoIntent run target in the Xcode toolbar and select Edit scheme… from the resulting menu as illustrated in Figure 42-7:

Figure 42-7

In the scheme panel, select the Run entry in the left-hand panel followed by the Info tab in the main panel. Within the Info settings, enter a query phrase into the Siri Intent Query text box before closing the panel:

Figure 42-8

Run the extension once again and note that the phrase is automatically passed to Siri to be handled:

Figure 42-9

Reviewing the Intent Handler

The Intent Handler is declared in the IntentHandler.swift file in the SiriDemoIntent folder. Load the file into the editor and note that the class declares that it supports a range of intent handling protocols for the messaging domain:

class IntentHandler: INExtension, INSendMessageIntentHandling, 
  INSearchForMessagesIntentHandling, INSetMessageAttributeIntentHandling {
.
.
}

The above code declares the class as supporting all three of the intents available in the messaging domain.

As an alternative to listing all of the protocol names individually, the above code could have achieved the same result by referencing the INMessagesDomainHandling protocol which encapsulates all three protocols.
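
In other words, the declaration could equally have been written as follows:

class IntentHandler: INExtension, INMessagesDomainHandling {
.
.
}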

If this template were to be re-purposed for a different domain, these protocol declarations would need to be replaced. For a payment extension, for example, the declaration might read as follows:

class IntentHandler: INExtension, INSendPaymentIntentHandling, 
    INRequestPaymentIntentHandling {
.
.
}

The class also contains the handler method, resolution methods for the intent parameters and the confirm method. The resolveRecipients method is of particular interest since it demonstrates the use of the resolution process to provide the user with a range of options from which to choose when a parameter is ambiguous.
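
The general pattern used for recipient resolution resembles the following abbreviated sketch (not the verbatim template source), in which the contact matching logic is left as a placeholder:

func resolveRecipients(for intent: INSendMessageIntent, with completion:
    @escaping ([INSendMessageRecipientResolutionResult]) -> Void) {
 
    if let recipients = intent.recipients, !recipients.isEmpty {
        var resolutionResults = [INSendMessageRecipientResolutionResult]()
 
        for recipient in recipients {
            // Replace this with real contact matching logic
            let matchingContacts = [recipient]
 
            switch matchingContacts.count {
            case 2...Int.max:
                // Multiple matches - ask Siri to have the user pick one
                resolutionResults +=
                    [INSendMessageRecipientResolutionResult.disambiguation(
                        with: matchingContacts)]
            case 1:
                // Exactly one match - the recipient is resolved
                resolutionResults +=
                    [INSendMessageRecipientResolutionResult.success(
                        with: recipient)]
            default:
                // No matches - ask Siri to request a recipient
                resolutionResults +=
                    [INSendMessageRecipientResolutionResult.needsValue()]
            }
        }
        completion(resolutionResults)
    } else {
        // No recipients were provided, so prompt for a value
        completion([INSendMessageRecipientResolutionResult.needsValue()])
    }
}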

The implementation also contains multiple handle methods for performing tasks for message search, message send and message attribute change intents. Take some time to review these methods before proceeding.

Summary

This chapter has provided a walk-through of the sample messaging-based extension provided by Xcode when creating a new Intents Extension. This has highlighted the steps involved in adding both Intents and UI Extensions to an existing project, and enabling and seeking SiriKit integration authorization for the project. The chapter also outlined the steps necessary for the extensions to declare supported intents and provided an opportunity to gain familiarity with the methods that make up a typical intent handler. The next chapter will outline the mechanism for implementing and configuring a UI Extension.

An Introduction to SwiftUI and SiriKit

Although Siri has been part of iOS for a number of years, it was not until the introduction of iOS 10 that some of the power of Siri was made available to app developers through SiriKit. Initially limited to particular categories of app, SiriKit has since been extended to allow Siri functionality to be built into apps of any type.

The purpose of SiriKit is to allow key areas of application functionality to be accessed via voice commands through the Siri interface. An app designed to send messages, for example, may be integrated into Siri to allow messages to be composed and sent using voice commands. Similarly, a time management app might use SiriKit to allow entries to be made in the Reminders app.

This chapter will provide an overview of SiriKit and outline the ways in which apps are configured to integrate SiriKit support.

Siri and SiriKit

Most iOS users will no doubt be familiar with Siri, Apple’s virtual digital assistant. Pressing and holding the home button, or saying “Hey Siri” launches Siri and allows a range of tasks to be performed by speaking in a conversational manner. Selecting the playback of a favorite song, asking for turn-by-turn directions to a location or requesting information about the weather are all examples of tasks that Siri can perform in response to voice commands.

When an app integrates with SiriKit, Siri handles all of the tasks associated with communicating with the user and interpreting the meaning and context of the user’s words. Siri then packages up the user’s request into an intent and passes it to the iOS app. It is then the responsibility of the iOS app to verify that enough information has been provided in the intent to perform the task and to instruct Siri to request any missing information. Once the intent contains all of the necessary data, the app performs the requested task and notifies Siri of the results. These results will be presented either by Siri or within the iOS app itself.

SiriKit Domains

When initially introduced in iOS 10, SiriKit could only be used by apps to perform tasks that fit into narrowly defined categories, also referred to as domains. Siri can be used by apps when performing tasks that fit into one or more of the following domains:

  • Messaging
  • Notes and Lists
  • Payments
  • Visual Codes
  • Photos
  • Workouts
  • Ride Booking
  • CarPlay
  • Car Commands
  • VoIP Calling
  • Restaurant Reservations
  • Media

If your app fits into one of these domains then this is still the recommended approach to performing Siri integration. If, on the other hand, your app does not have a matching domain, SiriKit can now be integrated using custom Siri Shortcuts.

Siri Shortcuts

Siri Shortcuts allow frequently performed activities within an app to be stored as a shortcut and triggered via Siri using a pre-defined phrase. If a user regularly checked a specific stock price within a financial app, for example, that task could be saved as a shortcut and performed at any time via Siri voice command without the need to manually launch the app. Although lacking the power and flexibility of SiriKit domain-based integration, Siri Shortcuts provide a way for key features to be made accessible via Siri for apps that would otherwise be unable to provide any Siri integration.

An app can provide an “Add to Siri” button that allows a particular task to be configured as a shortcut. Alternatively, an app can make shortcut suggestions by donating actions to Siri. The user can review any shortcut suggestions within the Shortcuts app and choose those to be added as shortcuts.

Based on user behavior patterns, Siri will also suggest shortcuts to the user in the Siri Suggestions and Search panel that appears when making a downward swiping motion on the device home screen.

Siri Shortcuts will be covered in detail in the chapters entitled “An Overview of Siri Shortcut App Integration” and “A SwiftUI Siri Shortcut Tutorial”. Be sure to complete this chapter before looking at the Siri Shortcut chapters. Much of the content in this chapter applies equally to SiriKit domains and Siri Shortcuts.

SiriKit Intents

Each domain allows a predefined set of tasks, or intents, to be requested by the user for fulfillment by an app. An intent represents a specific task of which Siri is aware and which SiriKit expects an integrated iOS app to be able to perform. The Messaging domain, for example, includes intents for sending and searching for messages, while the Workout domain contains intents for choosing, starting and finishing workouts. When the user makes a request of an app via Siri, the request is placed into an intent object of the corresponding type and passed to the app for handling.

In the case of Siri Shortcuts, a SiriKit integration is implemented by using a custom intent combined with an intents definition file describing how the app will interact with Siri.

How SiriKit Integration Works

Siri integration is performed via the iOS extension mechanism. Extensions are added as targets to the app project within Xcode in the same way as other extension types. SiriKit provides two types of extension, the key one being the Intents Extension. This extension contains an intent handler which is subclassed from the INExtension class of the Intents framework and contains the methods called by Siri during the process of communicating with the user. It is the responsibility of the intent handler to verify that Siri has collected all of the required information from the user, and then to execute the task defined in the intent.

The second extension type is the UI Extension. This extension is optional and comprises a storyboard file and a subclass of the IntentViewController class. When provided, Siri will use this UI when presenting information to the user. This can be useful for including additional information within the Siri user interface or for bringing the branding and theme of the main iOS app into the Siri environment.

When the user makes a request of an app via Siri, the first method to be called is the handler(for:) method of the intent handler class contained in the Intents Extension. This method is passed the current intent object and returns a reference to the object that will serve as the intent handler. This can either be the intent handler class itself or another class that has been configured to implement one or more intent handling protocols.
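
For illustration, a minimal intent handler class might look something like the following sketch. The class name IntentHandler matches the file Xcode generates when an Intents Extension target is added; the photo search protocol is assumed purely as an example:

import Intents
 
class IntentHandler: INExtension {
 
    // Siri calls this method first, passing the current intent. The returned
    // object serves as the intent handler and must adopt the handling
    // protocol for the intent type (for example,
    // INSearchForPhotosIntentHandling for a photo search intent). Here the
    // extension class itself is returned.
    override func handler(for intent: INIntent) -> Any {
        return self
    }
}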

The intent handler declares the types of intent it is able to handle and must then implement all of the protocol methods required to support those particular intent types. These methods are then called as part of a sequence of phases that make up the intent handling process as illustrated in Figure 41-1:

Figure 41-1

The first step after Siri calls the handler method involves calls to a series of methods to resolve the parameters associated with the intent.

Resolving Intent Parameters

Each intent domain type has associated with it a group of parameters that are used to provide details about the task to be performed by the app. While many parameters are mandatory, some are optional. The intent to send a message must, for example, contain a valid recipient parameter in order for a message to be sent. A number of parameters for a Photo search intent, on the other hand, are optional. A user might, for example, want to search for photos containing particular people, regardless of the date that the photos were taken.

When working with Siri domains, Siri knows all of the possible parameters for each intent type, and for each parameter Siri will ask the app extension’s intent handler to resolve the parameter via a corresponding method call. If Siri already has a parameter, it will ask the intent handler to verify that the parameter is valid. If Siri does not yet have a value for a parameter it will ask the intent handler if the parameter is required. If the intent handler notifies Siri that the parameter is not required, Siri will not ask the user to provide it. If, on the other hand, the parameter is needed, Siri will ask the user to provide the information.

Consider, for example, a photo search app called CityPicSearch that displays all the photos taken in a particular city. The user might begin by saying the following:

“Hey Siri. Find photos using CityPicSearch.”

From this sentence, Siri will infer that a photo search using the CityPicSearch app has been requested. Siri will know that CityPicSearch has been integrated with SiriKit and that the app has registered that it supports the INSearchForPhotosIntent intent type. Siri also knows that the INSearchForPhotosIntent intent allows photos to be searched for based on date created, people in the photo, the location of the photo and the photo album in which the photo resides. What Siri does not know, however, is which of these parameters the CityPicSearch app actually needs to perform the task. To find out this information, Siri will call the resolve method for each of these parameters on the app’s intent handler. In each case the intent handler will respond indicating whether or not the parameter is required. In this case, the intent handler’s resolveLocationCreated method will return a status indicating that the parameter is mandatory. On receiving this notification, Siri will request the missing information from the user by saying:

“Find pictures from where?”

The user will then provide a location which Siri will pass to the app by calling resolveLocationCreated once again, including the selection in the intent object. The app will verify the validity of the location and indicate to Siri that the parameter is valid. This process will repeat for each parameter supported by the intent type until all necessary parameter requirements have been satisfied.
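
Expressed in code, the resolution step for the location parameter described above might take a form similar to the following sketch, implemented within the intent handler of an app adopting the INSearchForPhotosIntentHandling protocol:

func resolveLocationCreated(for intent: INSearchForPhotosIntent,
        with completion: @escaping (INPlacemarkResolutionResult) -> Void) {
 
    if let location = intent.locationCreated {
        // Siri already has a location, so confirm that it is acceptable.
        completion(INPlacemarkResolutionResult.success(with: location))
    } else {
        // The location is mandatory for this app, so ask Siri to prompt
        // the user for it.
        completion(INPlacemarkResolutionResult.needsValue())
    }
}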

Techniques are also available to assist Siri and the user clarify ambiguous parameters. The intent handler can, for example, return a list of possible options for a parameter which will then be presented to the user for selection. If the user were to ask an app to send a message to “John”, the resolveRecipients method would be called by Siri. The method might perform a search of the contacts list and find multiple entries where the contact’s first name is John. In this situation the method could return a list of contacts with the first name of John. Siri would then ask the user to clarify which “John” is the intended recipient by presenting the list of matching contacts.
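
A sketch of how such a disambiguation might be implemented for a messaging app appears below. The searchContacts(named:) function is a hypothetical stand-in for a real contacts lookup and is included only so that the example is self-contained:

// Hypothetical stand-in for a search of the user's contacts database.
func searchContacts(named name: String) -> [INPerson] {
    // A real implementation would query the Contacts framework here.
    return []
}
 
func resolveRecipients(for intent: INSendMessageIntent,
        with completion: @escaping ([INPersonResolutionResult]) -> Void) {
 
    guard let recipients = intent.recipients, !recipients.isEmpty else {
        completion([INPersonResolutionResult.needsValue()])
        return
    }
 
    let results = recipients.map { recipient -> INPersonResolutionResult in
        let matches = searchContacts(named: recipient.displayName)
 
        switch matches.count {
        case 0:
            return .unsupported()
        case 1:
            return .success(with: matches[0])
        default:
            // More than one contact matches the name, so ask Siri to have
            // the user choose between them.
            return .disambiguation(with: matches)
        }
    }
    completion(results)
}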

Once the parameters have either been resolved or indicated as not being required, Siri will call the confirm method of the intent handler.

The Confirm Method

The confirm method is implemented within the extension intent handler and is called by Siri when all of the intent parameters have been resolved. This method provides the intent handler with an opportunity to make sure that it is ready to handle the intent. If the confirm method reports a ready status, Siri calls the handle method.
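
Continuing the photo search example, a minimal confirm method might simply report a ready status, as in the following sketch:

func confirm(intent: INSearchForPhotosIntent,
             completion: @escaping (INSearchForPhotosIntentResponse) -> Void) {
 
    // Checks could be performed here, for example that photo library access
    // has been authorized, before reporting readiness to Siri.
    completion(INSearchForPhotosIntentResponse(code: .ready,
                                               userActivity: nil))
}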

The Handle Method

The handle method is where the activity associated with the intent is performed. Once the task is completed, a response is passed to Siri. The form of the response will depend on the type of activity performed. For example, a photo search activity will return a count of the number of matching photos, while a send message activity will indicate whether the message was sent successfully.

The handle method may also return a continueInApp response. This tells Siri that the remainder of the task is to be performed within the main app. On receiving this response, Siri will launch the app, passing in an NSUserActivity object. NSUserActivity is a class that enables the status of an app to be saved and restored. In iOS 10 and later, the NSUserActivity class has an additional property that allows an INInteraction object to be stored along with the app state. Siri uses this interaction property to store the INInteraction object for the session and passes it to the main iOS app. The interaction object, in turn, contains a copy of the intent object which the app can extract to continue processing the activity. A custom NSUserActivity object can be created by the extension and passed to the iOS app. Alternatively, if no custom object is specified, SiriKit will create one by default.

A photo search intent, for example, would need to use the continueInApp response and user activity object so that photos found during the search can be presented to the user (SiriKit does not currently provide a mechanism for displaying the images from a photo search intent within the Siri user interface).
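
A minimal sketch of such a handle method for the photo search intent is shown below. The search itself is omitted; the response simply tells Siri to continue within the main app, passing along a user activity object:

func handle(intent: INSearchForPhotosIntent,
            completion: @escaping (INSearchForPhotosIntentResponse) -> Void) {
 
    // Package the session into a user activity object so that the main app
    // can continue the search when Siri launches it.
    let activityType = NSStringFromClass(INSearchForPhotosIntent.self)
    let userActivity = NSUserActivity(activityType: activityType)
 
    let response = INSearchForPhotosIntentResponse(code: .continueInApp,
                                                   userActivity: userActivity)
 
    // The count of matching photos would be assigned here once the search
    // has been performed.
    response.searchResultsCount = 0
    completion(response)
}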

It is important to note that an intent handler class may contain more than one handle method to handle different intent types. A messaging app, for example, would typically have different handler methods for send message and message search intents.

Custom Vocabulary

Clearly Siri has a broad knowledge of vocabulary in a wide range of languages. It is quite possible, however, that your app or app users might use certain words or terms which have no meaning or context for Siri. These terms can be added to your app so that they are recognized by Siri. These custom vocabulary terms are categorized as either user-specific or global.

User-specific terms are terms that only apply to an individual user. This might be a photo album with an unusual name or the nicknames the user has entered for contacts in a messaging app. User-specific terms are registered with Siri from within the main iOS app (not the extension) at application runtime using the setVocabularyStrings(_:of:) method of the INVocabulary class and must be provided in the form of an ordered list with the most commonly used terms listed first. User-specific custom vocabulary terms may only be specified for contact and contact group names, photo tag and album names, workout names and CarPlay car profile names. When calling the setVocabularyStrings(_:of:) method with the ordered list, the category type specified must be one of the following:

  • contactName
  • contactGroupName
  • photoTag
  • photoAlbumName
  • workoutActivityName
  • carProfileName
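
By way of illustration, the following sketch registers a set of user-specific photo album names from within the main app (the album names shown are placeholders):

import Intents
 
let albumNames = NSOrderedSet(array: ["Vacation 2022", "Family Reunion"])
INVocabulary.shared().setVocabularyStrings(albumNames, of: .photoAlbumName)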

Global vocabulary terms are specific to your app but apply to all app users. These terms are supplied with the app bundle in the form of a property list file named AppIntentVocabulary.plist. These terms are only applicable to workout and ride-sharing names.

The Siri User Interface

Each SiriKit domain has a standard user interface layout that is used by default to convey information to the user during the Siri integration. The Ride Booking extension, for example, will display information such as the destination and price. These default user interfaces can be customized by adding an intent UI app extension to the project. This topic is covered in the chapter entitled “Customizing the SiriKit Intent User Interface”. In the case of a Siri Shortcut, the same technique can be used to customize the user interface that appears within Siri when the shortcut is used.

Summary

SiriKit brings some of the power of Siri to third-party apps, allowing the functionality of an app to be accessed by the user using the Siri virtual assistant interface. Siri integration was originally only available when performing tasks that fall into narrowly defined domains such as messaging, photo searching and workouts. This has now been broadened to provide support for apps of just about any type. Siri integration uses the standard iOS extensions mechanism. The Intents Extension is responsible for interacting with Siri, while the optional UI Extension provides a way to control the appearance of any results presented to the user within the Siri environment.

All of the interaction with the user is handled by Siri, with the results structured and packaged into an intent. This intent is then passed to the intent handler of the Intents Extension via a series of method calls designed to verify that all the required information has been gathered. The intent is then handled, the requested task performed and the results presented to the user either via Siri or the main iOS app.

A SwiftUI Core Data and CloudKit Tutorial

Using the CoreDataDemo project created in the chapter entitled “A SwiftUI Core Data Tutorial”, this chapter will demonstrate how to add CloudKit support to an Xcode project and migrate from Core Data to CloudKit-based storage. This chapter assumes that you have read the chapter entitled “An Overview of SwiftUI Core Data and CloudKit Storage”.

Enabling CloudKit Support

Begin by launching Xcode and opening the CoreDataDemo project. Once the project has loaded into Xcode, the first step is to add the iCloud capability to the app. Select the CoreDataDemo target located at the top of the Project Navigator panel (marked A in Figure 46-1) so that the main panel displays the project settings. From within this panel, select the Signing & Capabilities tab (B) followed by the CoreDataDemo target entry (C):

Figure 46-1

Click on the “+” button (D) to display the dialog shown in Figure 46-2. Enter iCloud into the filter bar, select the result and press the keyboard enter key to add the capability to the project:

Figure 46-2

If iCloud is not listed as an option, you will need to pay to join the Apple Developer program as outlined in the chapter entitled “Joining the Apple Developer Program”. If you are already a member, use the steps outlined in the chapter entitled “Installing Xcode 13 and the iOS 15 SDK” to ensure you have created a Developer ID Application certificate.

Within the iCloud entitlement settings, make sure that the CloudKit service is enabled before clicking on the “+” button indicated by the arrow in Figure 46-3 below to add an iCloud container for the project:

Figure 46-3

After clicking the “+” button, the dialog shown in Figure 46-4 will appear containing a text field into which you will need to enter the container identifier. This entry should uniquely identify the container within the CloudKit ecosystem, generally includes your organization identifier (as defined when the project was created), and should be set to something similar to iCloud.com.yourcompany.CoreDataDemo.

Figure 46-4

Once you have entered the container name, click the OK button to add it to the app entitlements. Returning to the Signing & Capabilities screen, make sure that the new container is selected:

Figure 46-5

Enabling Background Notifications Support

When the app is running on multiple devices and a data change is made in one instance of the app, CloudKit will use remote notifications to notify other instances of the app to update to the latest data. To enable background notifications, repeat the above steps, this time adding the Background Modes entitlement. Once the entitlement has been added, review the settings and make sure that Remote notifications mode is enabled as highlighted in Figure 46-6:

Figure 46-6

Now that the necessary entitlements have been enabled for the app, all that remains is to make some minor code changes to the project.

Switching to the CloudKit Persistent Container

Locate the Persistence.swift file in the project navigator panel and select it so that it loads into the code editor. Within the init() function, change the container creation call from NSPersistentContainer to NSPersistentCloudKitContainer as follows:

.
.
let container: NSPersistentCloudKitContainer
.
.
init() {
    container = NSPersistentCloudKitContainer(name: "Products")
 
    container.loadPersistentStores { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Container load failed: \(error)")
        }
    }
}

Since multiple instances of the app could potentially change the same data at the same time, we also need to configure the view context so that changes made by other instances are automatically merged into the local context as follows:

init() {
    container = NSPersistentCloudKitContainer(name: "Products")
 
    container.loadPersistentStores { (storeDescription, error) in
        if let error = error as NSError? {
            fatalError("Container load failed: \(error)")
        }
    }
    container.viewContext.automaticallyMergesChangesFromParent = true
}
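
Optionally, a merge policy may also be assigned to the view context so that, when the same record has been modified on two devices, one set of changes takes precedence. This line is not part of the project steps in this chapter and is shown only as a possible refinement:

container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy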

Testing the App

CloudKit storage can be tested on physical devices, simulators, or a mixture of both. All test devices and simulators must be signed in to iCloud using your Apple developer account and have the iCloud Drive option enabled. Once these requirements have been met, run the CoreDataDemo app and add some product entries. Next, run the app on another device or simulator and check that the newly added products appear. This confirms that the data is being stored in and retrieved from iCloud.

With both app instances running, enter a new product in one instance and check that it appears in the other. Note that a bug in the simulator means that you may need to place the app in the background and then restore it before the new data will appear.

Reviewing the Saved Data in the CloudKit Console

Once some product entries have been added to the database, return to the Signing & Capabilities screen for the project (Figure 46-1) and click on the CloudKit Console button. This will launch the default web browser on your system and load the CloudKit Dashboard portal. Enter your Apple developer login and password and, once the dashboard has loaded, the home screen will provide the range of options illustrated in Figure 46-7:

Figure 46-7

Select the CloudKit Database option and, on the resulting web page, select the container for your app from the drop-down menu (marked A in Figure 46-8 below). Since the app is still in development and has not been published to the App Store, make sure that menu B is set to Development and not Production:

Figure 46-8

Next, we can query the records stored in the app container’s private database. Set the row of menus (C) to Private Database, com.apple.coredata.cloudkit.zone, and Query Records respectively. Finally, set the Record Type menu to CD_Product and the Fields menu to All:

Figure 46-9

Clicking on the Query Records button should display a list of all the product items saved in the database as illustrated in Figure 46-10:

Figure 46-10

If, instead of a list of database entries, you see a message which reads “Field ‘recordName’ is not marked queryable”, follow the steps in the next section.

Fixing the recordName Problem

When attempting to query the database, the error message shown below may appear instead of the query results:

Figure 46-11

To resolve this problem, select the Indexes option in the navigation panel (marked A in Figure 46-12) followed by CD_Product record type (B):

Figure 46-12

Within the list of indexes for the CD_Product record type, click on the Add Basic Index button located at the bottom of the list:

Figure 46-13

Within the new index row, select the recordName field and set the index type to Queryable:

Figure 46-14

After adding the new index, click on the Save Changes button at the top of the index list before returning to the Records screen. Repeat the steps to configure and perform the query. Instead of the error message, the database records should now be listed.

Filtering and Sorting Queries

The queries we have been running so far are returning all of the records in the database. Queries may also be performed based on sorting and filtering criteria by clicking in the “Add filter or sort to query” field. Clicking in this field will display a menu system that will guide you through setting up the criteria. In Figure 46-15, for example, the menu system is being used to set up a filtered query based on the CD_name field:

Figure 46-15

Similarly, Figure 46-16 shows the completed filter and query results:

Figure 46-16

The same technique can be used to sort the results in ascending or descending order. You can also combine multiple criteria in a single query. To edit or remove a query criterion, left-click on it and select the appropriate menu option.

Editing and Deleting Records

In addition to querying the records in the database, the CloudKit Console also allows records to be edited and deleted. To edit or delete a record, locate it in the query list and click on the entry in the name column as highlighted below:

Figure 46-17

Once the record has been selected, the Record Details panel shown in Figure 46-18 will appear. In addition to displaying detailed information about the record, this panel also allows the record to be modified or deleted.

Figure 46-18

Adding New Records

To add a new record to a database, click on the “+” located at the top of the query results list and select the Create New Record option:

Figure 46-19

When the New Record panel appears (Figure 46-20) enter the new data before clicking the Save button:

Figure 46-20

Viewing Telemetry Data

To view telemetry data, select the Telemetry tab at the top of the console as indicated in Figure 46-21, or select the home screen Telemetry option (Figure 46-7):

Figure 46-21

Within the telemetry screen, select the container, environment, timescale, and database type options:

Figure 46-22

Hovering the mouse pointer over a graph will display a key explaining the metric represented by the different line colors:

Figure 46-23

The console also provides a menu to display data for different operation types:

Figure 46-24

By default, telemetry data is displayed for database activity. This can be changed to display data relating to notifications or database usage using the menu shown in Figure 46-25:

Figure 46-25

Summary

The first step in adding CloudKit support to an Xcode SwiftUI project is to add the iCloud capability, enabling both the CloudKit service and remote notifications, and configuring a container to store the databases associated with the app. The migration from Core Data to CloudKit is simply a matter of changing the code to use NSPersistentCloudKitContainer instead of NSPersistentContainer and re-building the project.

CloudKit databases can be queried, modified, managed, and monitored from within the CloudKit Console.

An Overview of SwiftUI Core Data and CloudKit Storage

CloudKit provides a way for apps to store data in cloud-based databases using iCloud storage so that the data is accessible across multiple devices, users, and apps.

Although initially provided with a dedicated framework that allows code to be written to directly create, manage and access iCloud-based databases, the recommended approach is now to use CloudKit in conjunction with Core Data.

This chapter will provide a high-level introduction to the various elements that make up CloudKit, and explain how those correspond to Core Data.

An Overview of CloudKit

The CloudKit Framework provides applications with access to the iCloud servers hosted by Apple and provides an easy-to-use way to store, manage and retrieve data and other asset types (such as large binary files, videos, and images) in a structured way. This provides a platform for users to store private data and access it from multiple devices, and also for the developer to provide data that is publicly available to all the users of an application.

The first step in learning to use CloudKit is to gain an understanding of the key components that constitute the CloudKit framework. Keep in mind that we won’t be directly working with these components when using Core Data with CloudKit. We will, instead, continue to work with the Core Data elements covered in the previous chapters using a CloudKit-enabled version of the Persistent Container. This container will handle all of the work of mapping these Core Data components to their equivalents within the CloudKit ecosystem.

While it is theoretically possible to implement CloudKit-based Core Data storage without this knowledge, this information will be useful when using the CloudKit Console. Basic knowledge of how CloudKit works will also be invaluable if you decide to explore more advanced topics in the future such as CloudKit sharing and subscriptions.

CloudKit Containers

Each CloudKit-enabled application has at least one container on iCloud. The container for an application is represented in CloudKit by the CKContainer class and it is within these containers that the databases reside. Containers may also be shared between multiple applications. When working with Core Data, the container can be thought of as the equivalent of the Managed Object Model.

CloudKit Public Database

Each cloud container contains a single public database. This is the database in which data needed by all users of an application is stored. A map application, for example, might have a set of data about locations and routes that apply to all users of the application. This data would be stored within the public database of the application’s cloud container.

CloudKit Private Databases

Private cloud databases are used to store data that is private to each specific user. Each cloud container, therefore, will contain one private database for each user of the application.

Data Storage Quotas

Data and assets stored in the public cloud database of an app count against the storage quota of the app. Anything stored in a private database, on the other hand, is counted against the iCloud quota of the corresponding user. Applications should, therefore, try to minimize the amount of data stored in private databases to avoid users having to unnecessarily purchase additional iCloud storage space.

At the time of writing, each application is provided with 1PB of free iCloud storage for public data for all of its users.

Apple also imposes limits on the volume of data transfers and the number of queries per second that are included in the free tier. While official documentation on these quotas and corresponding pricing is hard to find, it is unlikely that the average project will encounter these restrictions.

CloudKit Records

Data is stored in both the public and private databases in the form of records. Records are represented by the CKRecord class and are essentially dictionaries of key-value pairs where keys are used to reference the data values stored in the record. When data is stored via CloudKit using Core Data, these records are represented by Core Data managed objects.

The overall concept of an application cloud container, private and public databases, zones, and records can be visualized as illustrated in Figure 45-1:

Figure 45-1

CloudKit Record IDs

Each CloudKit record has associated with it a unique record ID represented by the CKRecordID class. If a record ID is not specified when a record is first created, one is provided for it automatically by the CloudKit framework.

CloudKit References

CloudKit references are implemented using the CKReference class and provide a way to establish relationships between different records in a database. A reference is established by creating a CKReference instance for an originating record and assigning to it the record to which the relationship is to be targeted. The CKReference object is then stored in the originating record as a key-value pair field. A single record can contain multiple references to other records.

Once a record is configured with a reference pointing to a target record, that record is said to be owned by the target record. When the owner record is deleted, all records that refer to it are also deleted and so on down the chain of references (a concept referred to as cascading deletes).

Record Zones

CloudKit record zones (CKRecordZone) provide a mechanism for relating groups of records within a private database. Unless a record zone is specified when a record is saved to the cloud it is placed in the default zone of the target database. Custom zones can be added to private databases and used to organize related records and perform tasks such as writing to multiple records simultaneously in a single transaction. Each record zone has associated with it a unique record zone ID (CKRecordZoneID) which must be referenced when adding new records to a zone. All of the records within a public database are considered to be in the public default zone.

The CloudKit record zone translates to the Core Data persistent container. When working with Core Data in the previous chapter, persistent containers were created as instances of the NSPersistentContainer class. When integrating Core Data with CloudKit, however, we will be using the NSPersistentCloudKitContainer class instead. In terms of modifying code to use Core Data with CloudKit, this usually simply involves substituting NSPersistentCloudKitContainer for NSPersistentContainer.

CloudKit Console

The CloudKit Console is a web-based portal that provides an interface for managing the CloudKit options and storage for applications. The console can be accessed via the following URL:

https://icloud.developer.apple.com/dashboard/

Alternatively, the CloudKit Console can be accessed via the button located in the iCloud section of the Xcode Signing & Capabilities panel for a project as shown in Figure 45-2:

Figure 45-2

Access to the dashboard requires a valid Apple developer login and password and, once loaded into a browser window, will appear providing access to the CloudKit containers associated with your team account.

Once one or more containers have been created, the console provides the ability to view data, add, update, query, and delete records, modify the database schema, view subscriptions and configure new security roles. It also provides an interface for migrating data from a development environment over to a production environment in preparation for an application to go live in the App Store.

The Logs and Telemetry options provide an overview of CloudKit usage by the currently selected container, including operations performed per second, average data request size and error frequency, and log details of each transaction.

In the case of data access through the CloudKit Console, it is important to be aware that private user data cannot be accessed using the dashboard interface. Only data stored in the public database and the private databases belonging to the developer account used to log in to the console can be viewed and modified.

CloudKit Sharing

Clearly, a CloudKit record contained within the public database of an app is accessible to all users of that app. Situations might arise, however, where a user wants to share with others specific records contained within a private database. This was made possible with the introduction of CloudKit sharing.

CloudKit Subscriptions

CloudKit subscriptions allow users to be notified when a change occurs within the cloud databases belonging to an installed app. Subscriptions use the standard iOS push notifications infrastructure and can be triggered based on a variety of criteria such as when records are added, updated, or deleted. Notifications can also be further refined using predicates so that notifications are based on data in a record matching certain criteria. When a notification arrives, it is presented to the user in the same way as other notifications through an alert or a notification entry on the lock screen.

Summary

This chapter has covered a number of the key classes and elements that make up the data storage features of the CloudKit framework. Each application has its own cloud container which, in turn, contains a single public cloud database in addition to one private database for each application user. Data is stored in databases in the form of records using key-value pair fields. Larger data such as videos and photos are stored as assets which, in turn, are stored as fields in records. Records stored in private databases can be grouped into record zones and records may be associated with each other through the creation of relationships. Each application user has an iCloud user id and a corresponding user record both of which can be obtained using the CloudKit framework. In addition, CloudKit user discovery can be used to obtain, subject to permission having been given, a list of IDs for those users in the current user’s address book who have also installed and run the app.

Finally, the CloudKit Dashboard is a web-based portal that provides an interface for managing the CloudKit options and storage for applications.

A SwiftUI Core Data Tutorial

Now that we have explored the concepts of Core Data it is time to put that knowledge to use by creating an example app project. In this project tutorial, we will be creating a simple inventory app that uses Core Data to persistently store the names and quantities of products. This will include the ability to add, delete, and search for database entries.

Creating the CoreDataDemo Project

Launch Xcode, select the option to create a new project and choose the Multiplatform App template before clicking the Next button. On the project options screen, name the project CoreDataDemo and choose an organization identifier that will uniquely identify your app (this will be important when we add CloudKit support to the project in a later chapter).

Note that the options screen includes a Use Core Data setting as highlighted in Figure 44-1. This setting does the work of setting up the project for Core Data support and generates code to implement a simple app that demonstrates Core Data in action. Instead of using this template, this tutorial will take you through the steps of manually adding Core Data support to a project so that you have a better understanding of how Core Data works. For this reason, make sure the Use Core Data option is turned off before clicking the Next button:

Figure 44-1

Select a suitable location in which to save the project before clicking on the Finish button.

Defining the Entity Description

For this example, the entity takes the form of a data model designed to hold the names and quantities that will make up the product inventory. Right-click on the Shared folder within the project navigator and select the New File… option from the menu when it appears. Within the template dialog, select the Data Model entry located in the Core Data section as shown in Figure 44-2, then click the Next button:

Figure 44-2

Name the file Products and click on the Create button to generate the file. Once the file has been created, it will appear within the entity editor as shown below:

Figure 44-3

To add a new entity to the model, click on the Add Entity button marked A in Figure 44-3 above. Xcode will add a new entity (named Entity) to the model and list it beneath the Entities heading (B). Click on the new entity and change the name to Product:

Figure 44-4

Now that the entity has been created, the next step is to add the name and quantity attributes. To add the first attribute, click on the + button located beneath the Attributes section of the main panel. Name the new attribute name and change the Type to String as shown in Figure 44-5:

Figure 44-5

Repeat these steps to add a second attribute of type String named quantity. Upon completion of these steps, the attributes panel should match Figure 44-6:

Figure 44-6

Creating the Persistence Controller

The next requirement for our project is a persistence controller class in which to create and initialize an NSPersistentContainer instance. Right-click once again on the Shared folder in the project navigator and select the New File… menu option. Select the Swift File template option and save it as Persistence.swift. With the new file loaded into the code editor, modify it so that it reads as follows:

import CoreData
 
struct PersistenceController {
    static let shared = PersistenceController()
    
    let container: NSPersistentContainer
 
    init() {
        container = NSPersistentContainer(name: "Products")
        
        container.loadPersistentStores { (storeDescription, error) in
            if let error = error as NSError? {
                fatalError("Container load failed: \(error)")
            }
        }
    }
}

Setting up the View Context

Now that we have created a persistent controller we can use it to obtain a reference to the view context. An ideal place to perform this task is within the CoreDataDemoApp.swift file. To make the context accessible to the views that will make up the app, we will insert it into the view hierarchy as an environment object as follows:

import SwiftUI
 
@main
struct CoreDataDemoApp: App {
    
    let persistenceController = PersistenceController.shared
    
    var body: some Scene {
        WindowGroup {
            ContentView()
                .environment(\.managedObjectContext, 
                             persistenceController.container.viewContext)
        }
    }
}

Preparing the ContentView for Core Data

Before we start adding views to design the app user interface, the following initial changes are required within the ContentView.swift file:

import SwiftUI
import CoreData
 
struct ContentView: View {
    
    @State var name: String = ""
    @State var quantity: String = ""
    
    @Environment(\.managedObjectContext) private var viewContext
    
    @FetchRequest(entity: Product.entity(), sortDescriptors: [])
    private var products: FetchedResults<Product>
    
    var body: some View {
.
.

In addition to importing the CoreData library, we have also declared two state properties in which the product name and quantity will be stored as they are entered by the user. We have also gained access to the view context environment object that was created in the CoreDataDemoApp.swift file.

The @FetchRequest property wrapper is also used to declare a variable named products in which Core Data will place the latest product data retrieved from the database.

Designing the User Interface

With most of the preparatory work complete, we can now begin designing the layout of the main content view. Remaining in the ContentView.swift file, modify the body of the ContentView structure so that it reads as follows:

.
.
   var body: some View {
        NavigationView {
            VStack {
                TextField("Product name", text: $name)
                TextField("Product quantity", text: $quantity)
                
                HStack {
                    Spacer()
                    Button("Add") {
                        
                    }
                    Spacer()
                    Button("Clear") {
                        name = ""
                        quantity = ""
                    }
                    Spacer()
                }
                .padding()
                .frame(maxWidth: .infinity)
                
                List {
                    ForEach(products) { product in
                        HStack {
                            Text(product.name ?? "Not found")
                            Spacer()
                            Text(product.quantity ?? "Not found")
                        }
                    }
                }
                .navigationTitle("Product Database")
            }
            .padding()
            .textFieldStyle(RoundedBorderTextFieldStyle())
        }
    }
.
.

The layout initially consists of two TextField views, two Buttons, and a List which should render within the preview canvas as follows:

Figure 44-7

Saving Products

More code changes are now required so that data entered into the product name and quantity text fields is saved by Core Data into persistent storage when the Add button is clicked. Edit the ContentView.swift file once again to add this functionality:

.
.
   var body: some View {
        NavigationView {
            VStack {
                TextField("Product name", text: $name)
                TextField("Product quantity", text: $quantity)
                
                HStack {
                    Spacer()
                    Button("Add") {
                        addProduct()
                    }
                    Spacer()
                    Button("Clear") {
                        name = ""
                        quantity = ""
                    }
.
.
            .padding()
            .textFieldStyle(RoundedBorderTextFieldStyle())
        }
    }
    
    private func addProduct() {
        
        withAnimation {
            let product = Product(context: viewContext)
            product.name = name
            product.quantity = quantity
            
            saveContext()
        }
    }
    
    private func saveContext() {
        do {
            try viewContext.save()
        } catch {
            let error = error as NSError
            fatalError("An error occurred: \(error)")
        }
    }
}
.
.

The first change configured the Add button to call a function named addProduct() which was declared as follows:

private func addProduct() {
    
    withAnimation {
        let product = Product(context: viewContext)
        product.name = name
        product.quantity = quantity
        
        saveContext()
    }
}

The addProduct() function creates a new Product entity instance and assigns the current content of the product name and quantity state properties to the corresponding entity attributes. A call is then made to the following saveContext() function:

private func saveContext() {
    do {
        try viewContext.save()
    } catch {
        let error = error as NSError
        fatalError("An error occurred: \(error)")
    }
}

The saveContext() function uses a “do … try … catch” construct to save the current viewContext to persistent storage. For testing purposes, a fatal error is triggered to terminate the app if the save action fails. More comprehensive error handling would typically be required for a production-quality app.

Saving the data will cause the latest data to be fetched and assigned to the products data variable. This, in turn, will cause the List view to update with the latest products. To make this update visually appealing, the code in the addProduct() function is placed in a withAnimation call.

Testing the addProduct() Function

Compile and run the app on a device or simulator, enter a few product and quantity entries, and verify that those entries appear in the List view as they are added. After entering information into the text fields, check that clicking on the Clear button clears the current entries.

At this point in the tutorial, the running app should resemble that shown in Figure 44-8 after some products have been added:

Figure 44-8

To make the list more organized, the product items need to be sorted in ascending alphabetical order based on the name attribute. To implement this, add a sort descriptor to the @FetchRequest definition as outlined below. This requires the creation of an NSSortDescriptor instance configured with the name attribute declared as the key and the ascending property set to true:

@FetchRequest(entity: Product.entity(), 
           sortDescriptors: [NSSortDescriptor(key: "name", ascending: true)])
private var products: FetchedResults<Product>

When the app is now run, the list of products will be sorted in ascending alphabetic order.

Deleting Products

Now that the app has a mechanism for adding product entries to the database, we need a way to delete entries that are no longer needed. For this project, we will use the same steps demonstrated in the chapter entitled “SwiftUI Lists and Navigation”. This will allow the user to delete entries by swiping on the list item and tapping the delete button. Beneath the existing addProduct() function, add a new function named deleteProducts() that reads as follows:

private func deleteProducts(offsets: IndexSet) {
    withAnimation {
        offsets.map { products[$0] }.forEach(viewContext.delete)
        saveContext()
    }
}

When the method is called, it is passed a set of offsets within the List entries representing the positions of the items selected by the user for deletion. The above code loops through these entries, calling the viewContext delete() function for each item to be deleted. Once the deletions are complete, the changes are saved to the database via a call to our saveContext() function.

Now that we have added the deleteProducts() function, the List view can be modified to call it via the onDelete() modifier:

.
.
       List {
            ForEach(products) { product in
                HStack {
                    Text(product.name ?? "Not found")
                    Spacer()
                    Text(product.quantity ?? "Not found")
                }
            }
            .onDelete(perform: deleteProducts)
        }
        .navigationTitle("Product Database")
.
.

Run the app and verify both that performing a leftward swipe on a list item reveals the delete option and that clicking it removes the item from the list.

Figure 44-9

Adding the Search Function

The final feature to be added to the project will allow us to search the database for products that match the text entered into the name text field. The results will appear in a list contained within a second view named ResultsView. When it is called from ContentView, ResultsView will be passed the current value of the name state property and a reference to the viewContext object.

Begin by adding the ResultsView structure to the ContentView.swift file as follows:

struct ResultsView: View {
    
    var name: String
    var viewContext: NSManagedObjectContext
    @State var matches: [Product]?
 
    var body: some View {
       
        return VStack {
            List {
                ForEach(matches ?? []) { match in
                    HStack {
                        Text(match.name ?? "Not found")
                        Spacer()
                        Text(match.quantity ?? "Not found")
                    }
                }
            }
            .navigationTitle("Results")   
        }
    }
}

In addition to the name and viewContext parameters, the declaration also includes a state property named matches into which will be placed the matching product search results which, in turn, will be displayed within the List view.

We now need to add some code to perform the search and will do so by applying a task() modifier to the VStack container view. This will ensure that the search is performed asynchronously and that all of the view’s properties have been initialized before the search is executed:

.
.
    return VStack {
        List {
      
            ForEach(matches ?? []) { match in
                HStack {
                    Text(match.name ?? "Not found")
                    Spacer()
                    Text(match.quantity ?? "Not found")
                }
            }
        }
        .navigationTitle("Results")
       
    }
    .task {
        let fetchRequest: NSFetchRequest<Product> = Product.fetchRequest()
        
        fetchRequest.entity = Product.entity()
        fetchRequest.predicate = NSPredicate(
            format: "name CONTAINS %@", name
        )
        matches = try? viewContext.fetch(fetchRequest)
    }
.
.

So that the search finds all products that contain the specified text, the predicate is configured using the CONTAINS keyword. This provides more flexibility than performing exact match searches using the LIKE keyword by finding partial matches.
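
Note that, as written, the comparison is case-sensitive. If a case and diacritic insensitive match is preferred, the [cd] modifier may be appended to the keyword, for example:

fetchRequest.predicate = NSPredicate(
    format: "name CONTAINS[cd] %@", name
)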

The code in the closure of the task() modifier obtains an NSFetchRequest instance from the Product entity and assigns it an NSPredicate instance configured to find matches between the name variable and the name product entity attribute. The fetch request is then passed to the fetch() method of the view context, and the results assigned to the matches state object. This, in turn, will cause the List to be re-rendered with the matching products.

The last task before testing the search feature is to add a navigation link to ResultsView, keeping in mind that ResultsView is expecting to be passed the name state object and a reference to viewContext. This needs to be positioned between the Add and Clear buttons as follows:

.
.
   HStack {
        Spacer()
        Button("Add") {
            addProduct()
        }
        Spacer()
        NavigationLink(destination: ResultsView(name: name, 
                       viewContext: viewContext)) {
            Text("Find")
        }
        Spacer()
        Button("Clear") {
            name = ""
            quantity = ""
        }
        Spacer()
    } 
.
.

Check the preview canvas to confirm that the navigation link appears as shown in Figure 44-10:

Figure 44-10

Testing the Completed App

Run the app once again and add some additional products, preferably with some containing the same word. Enter the common word into the name text field and click on the Find link. The ResultsView screen should appear with a list of matching items. Figure 44-11, for example, illustrates a search performed on the word “Milk”:

Figure 44-11

Summary

In this chapter, we have used Core Data to provide persistent database storage within an app project. Topics covered include the creation of a Core Data entity model and the configuration of entity attributes. Steps were also taken to initialize a persistent container from which we obtained the view context. The project also used the @FetchRequest property wrapper configured to store entries in alphabetical order and also made use of the view context to add, delete, and search for database entries. In implementing the search behavior, we used an NSFetchRequest instance configured with an NSPredicate object and passed that to the fetch() method of the view context to find matching results.

An Introduction to Core Data and SwiftUI

A common requirement when developing iOS apps is to store data in some form of structured database. One option is to directly manage data using an embedded database system such as SQLite. While this is a perfectly good approach for working with SQLite in many cases, it does require knowledge of SQL and can lead to some complexity in terms of writing code and maintaining the database structure. This complexity is further compounded by the non-object-oriented nature of the SQLite API functions. In recognition of these shortcomings, Apple introduced the Core Data Framework. Core Data is essentially a framework that places a wrapper around the SQLite database (and other storage environments) enabling the developer to work with data in terms of Swift objects without requiring any knowledge of the underlying database technology.

We will begin this chapter by defining some of the concepts that comprise the Core Data model before providing an overview of the steps involved in working with this framework. Once these topics have been covered, the next chapter will work through a SwiftUI Core Data tutorial.

The Core Data Stack

Core Data consists of several framework objects that integrate to provide the data storage functionality. This stack can be visually represented as illustrated in Figure 43-1:

Figure 43-1

As we can see from Figure 43-1, the app sits on top of the stack and interacts with the managed data objects handled by the managed object context. Of particular significance in this diagram is the fact that although the lower levels in the stack perform a considerable amount of the work involved in providing Core Data functionality, the application code does not interact with them directly.

Before moving on to the more practical areas of working with Core Data it is important to spend some time explaining the elements that comprise the Core Data stack in a little more detail.

Persistent Container

The persistent container handles the creation of the Core Data stack and is designed to be easily subclassed to add additional application-specific methods to the base Core Data functionality. Once initialized, the persistent container instance provides access to the managed object context.

Managed Objects

Managed objects are the objects that are created by your application code to store data. A managed object may be thought of as a row or a record in a relational database table. For each new record to be added, a new managed object must be created to store the data. Similarly, retrieved data will be returned in the form of managed objects, one for each record matching the defined retrieval criteria. Managed objects are instances of the NSManagedObject class, or a subclass thereof. These objects are contained and maintained by the managed object context.

Managed Object Context

Core Data-based applications never interact directly with the persistent store. Instead, the application code interacts with the managed objects contained in the managed object context layer of the Core Data stack. The context maintains the status of the objects in relation to the underlying data store and manages the relationships between managed objects defined by the managed object model. All interactions with the underlying database are held temporarily within the context until the context is instructed to save the changes, at which point the changes are passed down through the Core Data stack and written to the persistent store.

Managed Object Model

So far we have focused on the management of data objects but have not yet looked at how the data models are defined. This is the task of the Managed Object Model which defines a concept referred to as entities.

Much as a class description defines a blueprint for an object instance, entities define the data model for managed objects. In essence, an entity is analogous to the schema that defines a table in a relational database. As such, each entity has a set of attributes associated with it that define the data to be stored in managed objects derived from that entity. For example, a Contacts entity might contain name, address, and phone number attributes.

In addition to attributes, entities can also contain relationships, fetched properties, and fetch requests:

  • Relationships – In the context of Core Data, relationships are the same as those in other relational database systems in that they refer to how one data object relates to another. Core Data relationships can be one-to-one, one-to-many, or many-to-many.
  • Fetched property – This provides an alternative to defining relationships. Fetched properties allow properties of one data object to be accessed from another data object as though a relationship had been defined between those entities. Fetched properties lack the flexibility of relationships and are referred to by Apple’s Core Data documentation as “weak, one-way relationships” best suited to “loosely coupled relationships”.
  • Fetch request – A predefined query that can be referenced to retrieve data objects based on defined predicates. For example, a fetch request can be configured into an entity to retrieve all contact objects where the name field matches “John Smith”.

Persistent Store Coordinator

The persistent store coordinator is responsible for coordinating access to multiple persistent object stores. As an iOS developer, you will never directly interact with the persistent store coordinator and will very rarely need to develop an application that requires more than one persistent object store. When multiple stores are required, the coordinator presents these stores to the upper layers of the Core Data stack as a single store.

Persistent Object Store

The term persistent object store refers to the underlying storage environment in which data are stored when using Core Data. Core Data supports three disk-based and one memory-based persistent store. Disk-based options consist of SQLite, XML, and binary. By default, iOS will use SQLite as the persistent store. In practice, the type of store being used is transparent to you as the developer. Regardless of your choice of persistent store, your code will make the same calls to the same Core Data APIs to manage the data objects required by your application.

Defining an Entity Description

Entity descriptions may be defined from within the Xcode environment. When a new project is created with the option to include Core Data, a template file will be created named <projectname>.xcdatamodeld. Xcode also provides a way to manually add entity description files to existing projects. Selecting this file in the Xcode project navigator panel will load the model into the entity editing environment as illustrated in Figure 43-2:

Figure 43-2

Create a new entity by clicking on the Add Entity button located in the bottom panel. The new entity will appear as a text box in the Entities list. By default, this will be named Entity. Double-click on this name to change it.

To add attributes to the entity, click on the Add Attribute button located in the bottom panel, or use the + button located beneath the Attributes section. In the Attributes panel, name the attribute and specify the type and any other options that are required.

Repeat the above steps to add more attributes and additional entities.

The Xcode entity editor also allows relationships to be established between entities. Assume, for example, two entities named Contacts and Sales. To establish a relationship between the two tables select the Contacts entity and click on the + button beneath the Relationships panel. In the detail panel, name the relationship, specify the destination as the Sales entity, and any other options that are required for the relationship. Once the relationship has been established it is, perhaps, best viewed graphically by selecting the Table, Graph option in the Editor Style control located in the bottom panel:

Figure 43-3

Initializing the Persistent Container

The persistent container is initialized by creating a new NSPersistentContainer instance, passing through the name of the model to be used, and then making a call to the loadPersistentStores method of that object as follows:

let persistentContainer: NSPersistentContainer
 
persistentContainer = NSPersistentContainer(name: "DemoData")
persistentContainer.loadPersistentStores { (storeDescription, error) in
    if let error = error as NSError? {
        fatalError("Container load failed: \(error)")
    }
}
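
Although SQLite is used by default, the store type can be changed by replacing the persistent store description before the loadPersistentStores call is made. The following sketch, for example, switches the container to the in-memory store, an approach sometimes useful for unit testing:

// Set before calling loadPersistentStores on the container.
let description = NSPersistentStoreDescription()
description.type = NSInMemoryStoreType
persistentContainer.persistentStoreDescriptions = [description]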

Obtaining the Managed Object Context

Since many of the Core Data methods require the managed object context as an argument, the next step after defining entity descriptions often involves obtaining a reference to the context. This can be achieved by accessing the viewContext property of the persistent container instance:

let managedObjectContext = persistentContainer.viewContext
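
New managed objects are created within this context. Assuming, for example, an entity named Contact for which Xcode has generated an NSManagedObject subclass, a new instance would be created as follows:

// Create a new managed object for the (assumed) Contact entity within the
// view context obtained above.
let contact = Contact(context: managedObjectContext)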

Setting the Attributes of a Managed Object

As previously discussed, entities and the managed objects from which they are instantiated contain data in the form of attributes. Once a managed object instance has been created as outlined above, those attribute values can be used to store the data before the object is saved. Assuming a managed object named contact with attributes named name, address and phone respectively, the values of these attributes may be set as follows before saving the object to storage:

contact.name = "John Smith" 
contact.address = "1 Infinite Loop" 
contact.phone = "555-564-0980"
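
For reference, a managed object of this kind is typically created by instantiating the Xcode-generated class for the entity within a managed object context. The following sketch assumes the data model defines a Contact entity for which Xcode has generated a Contact class:

// Create a new Contact managed object within the context obtained earlier.
let contact = Contact(context: managedObjectContext)

contact.name = "John Smith"
contact.address = "1 Infinite Loop"
contact.phone = "555-564-0980"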

Saving a Managed Object

Once a managed object instance has been created and configured with the data to be stored it can be saved to storage using the save() method of the managed object context as follows:

do {
    try viewContext.save()
} catch {
    let error = error as NSError
    fatalError("An error occured: \(error)")
}
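
A common refinement (not part of the chapter text) is to check whether the context actually contains unsaved changes before calling save(), for example:

func saveContext(_ context: NSManagedObjectContext) {
    // Only perform the save if there is something to write.
    guard context.hasChanges else { return }

    do {
        try context.save()
    } catch {
        let error = error as NSError
        fatalError("An error occurred: \(error)")
    }
}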

Fetching Managed Objects

Once managed objects are saved into the persistent object store those objects and the data they contain will likely need to be retrieved. One way to fetch data from Core Data storage is to use the @FetchRequest property wrapper when declaring a variable in which to store the data. The following code, for example, declares a variable named customers which will be automatically updated as data is added to or removed from the database:

@FetchRequest(entity: Customer.entity(), sortDescriptors: [])
private var customers: FetchedResults<Customer>
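
Since FetchedResults behaves like a standard collection, the fetched objects can be used directly to drive a view. The following sketch (assuming a Customer entity with an optional name string attribute) displays the results in a List:

import SwiftUI
import CoreData

struct CustomerListView: View {

    @FetchRequest(entity: Customer.entity(), sortDescriptors: [])
    private var customers: FetchedResults<Customer>

    var body: some View {
        List(customers, id: \.self) { customer in
            Text(customer.name ?? "Unknown")
        }
    }
}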

The @FetchRequest property wrapper may also be configured to sort the fetched results. In the following example, the customer data stored in the customers variable will be sorted alphabetically in ascending order based on the name entity attribute:

@FetchRequest(entity: Customer.entity(), 
        sortDescriptors: [NSSortDescriptor(key: "name", ascending: true)])
private var customers: FetchedResults<Customer>

Retrieving Managed Objects based on Criteria

The preceding example retrieved all of the managed objects from the persistent object store. More often than not only managed objects that match specified criteria are required during a retrieval operation. This is performed by defining a predicate that dictates criteria that a managed object must meet to be eligible for retrieval. For example, the following code configures a @FetchRequest property wrapper declaration with a predicate to extract only those managed objects where the name attribute matches “John Smith”:

@FetchRequest(
  entity: Customer.entity(),
  sortDescriptors: [],
  predicate: NSPredicate(format: "name LIKE %@", "John Smith")
) 
private var customers: FetchedResults<Customer>

The above example will maintain the customers variable so that it always contains the entries that match the specified predicate criteria. It is also possible to perform one-time fetch operations by creating NSFetchRequest instances, configuring them with the entity and predicate settings, and then passing them to the fetch() method of the managed object context. For example:

@State var matches: [Customer]?
let fetchRequest: NSFetchRequest<Customer> = Customer.fetchRequest()
 
fetchRequest.entity = Customer.entity()
fetchRequest.predicate = NSPredicate(
    format: "name LIKE %@", "John Smith"
)
 
matches = try? viewContext.fetch(fetchRequest)
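
The following sketch shows where such a one-time fetch might be performed within a view, in this case using the onAppear() modifier together with explicit error handling (the Customer entity and the environment context are assumptions carried over from the earlier examples):

import SwiftUI
import CoreData

struct MatchListView: View {

    @Environment(\.managedObjectContext) private var viewContext
    @State private var matches: [Customer] = []

    var body: some View {
        List(matches, id: \.self) { customer in
            Text(customer.name ?? "Unknown")
        }
        .onAppear {
            let request: NSFetchRequest<Customer> = Customer.fetchRequest()
            request.predicate = NSPredicate(format: "name LIKE %@",
                                            "John Smith")
            do {
                matches = try viewContext.fetch(request)
            } catch {
                print("Fetch failed: \(error)")
            }
        }
    }
}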

Summary

The Core Data Framework stack provides a flexible alternative to directly managing data using SQLite or other data storage mechanisms. By providing an object-oriented abstraction layer on top of the data, the task of managing data storage is made significantly easier for the SwiftUI application developer. Now that the basics of Core Data have been covered, the next chapter, entitled “A SwiftUI Core Data Tutorial”, will work through the creation of an example application.

A SwiftUI DocumentGroup Tutorial

The previous chapter provided an introduction to the DocumentGroup scene type provided with SwiftUI and explored the architecture that makes it possible to add document browsing and management to apps.

This chapter will demonstrate how to take the standard Xcode Multiplatform Document App template and modify it to work with image files instead of plain text documents. On completion of the tutorial, the app will allow image files to be opened, modified using a sepia filter and then saved back to the original file.

Creating the ImageDocDemo Project

Begin by launching Xcode and create a new project named ImageDocDemo using the Multiplatform Document App template.

Modifying the Info.plist File

Since the app will be working with image files instead of plain text, some changes need to be made to the type identifiers declared in the Info.plist file. To make these changes, select the ImageDocDemo entry at the top of the project navigator window (marked A in Figure 40-1), followed by the ImageDocDemo (iOS) target (B) before clicking on the Info tab (C).

Figure 40-1

Scroll down to the Document Types section within the Info screen and change the Types field from com.example.plain-text to com.ebookfrenzy.image:

Figure 40-2

Next, locate the Imported Type Identifiers section and make the following changes:

  • Description – Example Image
  • Identifier – com.ebookfrenzy.image
  • Conforms To – public.image
  • Extensions – png

Once these changes have been made, the settings should match those shown in Figure 40-3:

Figure 40-3

Adding an Image Asset

If the user decides to create a new document instead of opening an existing one, a sample image will be displayed from the project asset catalog. For this purpose, the cascadefalls.png file located in the project_images folder of the sample code archive will be added to the asset catalog. If you have not already downloaded the source code, it can be obtained from the following URL: https://www.ebookfrenzy.com/retail/swiftui-ios14/

Once the image file has been located in a Finder window, select the Assets.xcassets entry in the Xcode project navigator and drag and drop the image as shown in Figure 40-4:

Figure 40-4

Modifying the ImageDocDemoDocument.swift File

Although we have changed the type identifiers to support images instead of plain text, the document declaration is still implemented for handling text-based content. Select the ImageDocDemoDocument.swift file to load it into the editor and begin by modifying the UTType extension so that it reads as follows:

extension UTType {
    static var exampleImage: UTType {
        UTType(importedAs: "com.ebookfrenzy.image")
    }
}

Next, locate the readableContentTypes variable and modify it to use the new UTType:

static var readableContentTypes: [UTType] { [.exampleImage] }

With the necessary type changes made, the next step is to modify the structure to work with images instead of string data. Remaining in the ImageDocDemoDocument.swift file, change the text variable from a string to an image and modify the first initializer to use the cascadefalls image:

.
.
struct ImageDocDemoDocument: FileDocument {
    
    var image: UIImage = UIImage()
 
    init() {
        if let image = UIImage(named: "cascadefalls") {
            self.image = image
        }
    }
.
.

Moving on to the second init() method, make the following modifications to decode image data instead of string data:

init(configuration: ReadConfiguration) throws {
    guard let data = configuration.file.regularFileContents,
          let decodedImage: UIImage = UIImage(data: data)
    else {
        throw CocoaError(.fileReadCorruptFile)
    }
    image = decodedImage
}

Finally, modify the fileWrapper() method to encode the image to Data format so that it can be saved to the document:

func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
    let data = image.pngData()!
    return .init(regularFileWithContents: data)
}
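
Note that the forced unwrapping assumes that pngData() will always succeed. If preferred, the method could instead throw when encoding fails; the following variation is a sketch rather than part of the tutorial steps:

func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
    // Throw rather than crash if the image cannot be encoded as PNG data.
    guard let data = image.pngData() else {
        throw CocoaError(.fileWriteUnknown)
    }
    return .init(regularFileWithContents: data)
}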

Designing the Content View

Before performing some initial tests on the project so far, the content view needs to be modified to display an image instead of text content. We will also take this opportunity to add a Button view to the layout to apply the sepia filter to the image. Edit the ContentView.swift file and modify it so that it reads as follows:

import SwiftUI
 
struct ContentView: View {
    
    @Binding var document: ImageDocDemoDocument
 
    var body: some View {
        VStack {
            Image(uiImage: document.image)
                .resizable()
                .aspectRatio(contentMode: .fit)
                .padding()
            Button(action: {
                
            }, label: {
                Text("Filter Image")
            })
            .padding()
        }
    }
}

With the changes made, run the app on a device or simulator, use the browser to navigate to a suitable location and then click on the Create Document item. The app will create a new image document containing the sample image from the asset catalog and then display it in the content view:

Figure 40-5

Tap the back arrow in the top left-hand corner to return to the browser where the new document should be listed with an icon containing a thumbnail image:

Figure 40-6

Filtering the Image

The final step in this tutorial is to apply the sepia filter to the image when the Button in the content view is tapped. This will make use of the CoreImage Framework and involves converting the UIImage to a CIImage, applying the sepia tone filter, and then converting the result back to a UIImage. Edit the ContentView.swift file and make the following changes:

import SwiftUI
import CoreImage
import CoreImage.CIFilterBuiltins
 
struct ContentView: View {
    
    @Binding var document: ImageDocDemoDocument
    @State private var ciFilter = CIFilter.sepiaTone()
    
    let context = CIContext()
    
    var body: some View {
        VStack {
            Image(uiImage: document.image)
                .resizable()
                .aspectRatio(contentMode: .fit)
                .padding()
            Button(action: {
                filterImage()
            }, label: {
                Text("Filter Image")
            })
            .padding()
        }
    }
    
    func filterImage() {
        ciFilter.intensity = Float(1.0)
 
        let ciImage = CIImage(image: document.image)
        
        ciFilter.setValue(ciImage, forKey: kCIInputImageKey)
        
        guard let outputImage = ciFilter.outputImage else { return }
 
        if let cgImage = context.createCGImage(outputImage, 
                                       from: outputImage.extent) {
            document.image = UIImage(cgImage: cgImage)
        }
    }
}

Testing the App

Run the app once again and either create a new image document, or select the existing image to display the content view. Within the content view, tap the Filter Image button and wait while the sepia filter is applied to the image. Tap the back arrow to return to the browser where the thumbnail image will now appear in sepia tones. Select the image to load it into the content view and verify that the sepia changes were indeed saved to the document.

Summary

This chapter has demonstrated how to modify the Xcode Document App template to work with different content types. This involved changing the type identifiers, modifying the document declaration and adapting the content view to handle image content.

An Overview of SwiftUI DocumentGroup Scenes

The chapter entitled SwiftUI Architecture introduced the concept of SwiftUI scenes and explained that the SwiftUI framework, in addition to allowing you to build your own scenes, also includes two pre-built scene types in the form of WindowGroup and DocumentGroup. So far, the examples in this book have made exclusive use of the WindowGroup scene. This chapter will introduce the DocumentGroup scene and explain how it can be used to build document-based apps in SwiftUI.

Documents in Apps

If you have used iOS for an appreciable amount of time, the chances are good that you will have encountered the built-in Files app. The Files app provides a way to browse, select and manage the documents stored on the local device file system and in iCloud storage, in addition to third-party providers such as Google Drive. Documents in this context can include just about any file type including plain text, image, data and binary files. Figure 39-1 shows a typical browsing session within the iOS Files app:

Figure 39-1

The purpose of the DocumentGroup scene is to allow the same capabilities provided by the Files app to be built into SwiftUI apps, in addition to the ability to create new files.

Document support can be built into an app with relatively little work. In fact, Xcode includes a project template specifically for this task which performs much of the setup work for you. Before attempting to work with DocumentGroups, however, there are some basic concepts which first need to be covered. A good way to traverse this learning curve is to review the Document App project template generated by Xcode.

Creating the DocDemo App

Begin by launching Xcode and creating a new project using the Multiplatform Document App template option as shown in Figure 39-2 below:

Figure 39-2

Click the Next button, name the project DocDemo and save the project to a suitable location.

The DocumentGroup Scene

The DocumentGroup scene contains most of the infrastructure necessary to provide app users with the ability to create, delete, move, rename and select files and folders from within an app. An initial document group scene is declared by Xcode within the DocDemoApp.swift file as follows:

import SwiftUI
 
@main
struct DocDemoApp: App {
    var body: some Scene {
        DocumentGroup(newDocument: DocDemoDocument()) { file in
            ContentView(document: file.$document)
        }
    }
}

As currently implemented, the first scene presented to the user when the app starts will be the DocumentGroup user interface which will resemble Figure 39-1 above. Passed through to the DocumentGroup is a DocDemoDocument instance which, along with some additional configuration settings, contains the code to create, read and write files. When a user either selects an existing file, or creates a new one, the content view is displayed and passed the DocDemoDocument instance for the selected file from which the content may be extracted and presented to the user:

ContentView(document: file.$document)

The DocDemoDocument.swift file generated by Xcode is designed to support plain text files and may be used as the basis for supporting other file types. Before exploring this file in detail, we first need to understand file types.

Declaring File Type Support

A key step in implementing document support is declaring the file types which the app supports. The DocumentGroup user interface uses this information to ensure that only files of supported types are selectable when browsing. A user browsing documents in an app which only supports image files, for example, would see documents of other types (such as plain text) grayed out and unselectable within the document list. This can be separated into the following components:

Document Content Type Identifier

Defining the types of file supported by an app begins by declaring a document content type identifier. This is declared using Uniform Type Identifier (UTI) syntax which typically takes the form of a reverse domain name combined with a common type identifier. A document identifier for an app which supports plain text files, for example, might be declared as follows:

com.ebookfrenzy.plain-text

Handler Rank

The document content type may also declare a handler rank value. This value declares to the system how the app relates to the file type. If the app uses its own custom file type, this should be set to Owner. If the app is to be opened as the default app for files of this type, the value should be set to Default. If, on the other hand, the app can handle files of this type but is not intended to be the default handler a value of Alternate should be used. Finally, None should be used if the app is not to be associated with the file type.

Type Identifiers

Having declared a document content type identifier, this identifier must have associated with it a list of specific data types to which it conforms. This is achieved using type identifiers. These type identifiers can be chosen from an extensive list of built-in types provided by Apple and are generally prefixed with “public.”. For example the UTI for a plain text document is public.plain-text, while that for any type of image file is public.image. Similarly, if an app only supports JPEG image files, the public.jpeg UTI would be used.

Each of the built-in UTI types has associated with it a UTType equivalent which can be used when working with types programmatically. The public.plain-text UTI, for example, has a UTType instance named plainText, while the UTType instance for public.mpeg-4 is named mpeg4Movie. A full list of supported UTType declarations can be found at the following URL:

https://developer.apple.com/documentation/uniformtypeidentifiers/uttype/system_declared_types
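
As an illustration, the following sketch shows a few of these built-in UTType equivalents being used programmatically, including a conformance check:

import UniformTypeIdentifiers

let textType = UTType.plainText    // public.plain-text
let imageType = UTType.image       // public.image
let jpegType = UTType.jpeg         // public.jpeg

// JPEG conforms to the more general image type:
print(jpegType.conforms(to: imageType))   // true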

Filename Extensions

In addition to declaring the type identifiers, filename extensions for which support is provided may also be specified (for example .txt, .png, .doc, .mydata etc.). Note that many of the built-in type identifiers are already configured to support associated file types. The public.png type, for example, is pre-configured to recognize .png filename extensions.

The extension declared here will also be appended to the filename of any new documents created by the app.

Custom Type Document Content Identifiers

When working with proprietary data formats (perhaps your app has its own database format), it is also possible to declare your own document content identifier without using one of the common identifiers. A document type identifier for a custom type might, therefore, be declared as follows:

com.ebookfrenzy.mydata

Exported vs. Imported Type Identifiers

When a built-in type is used (such as public.image), it is said to be an imported type identifier (since it is imported into the app from the range of identifiers already known to the system). A custom type identifier, on the other hand, is described as an exported type identifier because it originates from within the app and is exported to the system so that the browser can recognize files of that type as being associated with the app.
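
In code, this distinction is reflected in the way UTType instances are created. As a sketch, using the example identifiers from this chapter:

import UniformTypeIdentifiers

extension UTType {
    // An imported type identifier based on a built-in system type.
    static var exampleText: UTType {
        UTType(importedAs: "com.example.plain-text")
    }

    // An exported type identifier for a custom, app-defined type.
    static var myData: UTType {
        UTType(exportedAs: "com.ebookfrenzy.mydata")
    }
}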

Configuring File Type Support in Xcode

All of the above settings are configured within the project’s Info.plist file. Although these changes can be made with the Xcode property list editor, a better option is to access the settings via the Xcode Info screen of the app target. To review the settings for the example project using this approach, select the DocDemo entry at the top of the project navigator window (marked A in Figure 39-3), followed by the DocDemo (iOS) target (B) before clicking on the Info tab (C).

Figure 39-3

Scroll down to the Document Types section within the Info screen and note that Xcode has created a single document content type identifier set to com.example.plain-text with the handler rank set to Default:

Figure 39-4

Next, scroll down to the Imported Type Identifiers section where we can see that our document content type identifier (com.example.plain-text) has been declared as conforming to the public.plain-text type with a single filename extension of exampletext:

Figure 39-5

Type identifiers for custom types are declared in the Exported Type Identifiers section of the Info screen. For example, a custom binary file type might be declared as conforming to public.data, while file names for this type might have a mydata filename extension:

Figure 39-6

Note that in both cases, icons may be added to represent the files within the document browser user interface.

The Document Structure

When the example project was created, Xcode generated a file named DocDemoDocument.swift, an instance of which is passed to ContentView within the App declaration. As generated, this file reads as follows:

import SwiftUI
import UniformTypeIdentifiers
 
extension UTType {
    static var exampleText: UTType {
        UTType(importedAs: "com.example.plain-text")
    }
}
 
struct DocDemoDocument: FileDocument {
    var text: String
 
    init(text: String = "Hello, world!") {
        self.text = text
    }
 
    static var readableContentTypes: [UTType] { [.exampleText] }
 
    init(configuration: ReadConfiguration) throws {
        guard let data = configuration.file.regularFileContents,
              let string = String(data: data, encoding: .utf8)
        else {
            throw CocoaError(.fileReadCorruptFile)
        }
        text = string
    }
    
    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        let data = text.data(using: .utf8)!
        return .init(regularFileWithContents: data)
    }
}

The structure conforms to the FileDocument protocol and begins by declaring a new UTType named exampleText which imports our com.example.plain-text identifier. This is then referenced in the readableContentTypes array to indicate which types of file can be opened by the app:

extension UTType {
    static var exampleText: UTType {
        UTType(importedAs: "com.example.plain-text")
    }
}
.
.
    static var readableContentTypes: [UTType] { [.exampleText] }
.
.

The structure also includes two initializers, the first of which will be called when the creation of a new document is requested by the user and simply configures a sample text string as the initial data:

init(text: String = "Hello, world!") {
    self.text = text
}

The second initializer, on the other hand, is called when the user opens an existing document and is passed a ReadConfiguration instance:

init(configuration: ReadConfiguration) throws {
    guard let data = configuration.file.regularFileContents,
          let string = String(data: data, encoding: .utf8)
    else {
        throw CocoaError(.fileReadCorruptFile)
    }
    text = string
}

The ReadConfiguration instance holds the content of the file in Data format which may be accessed via the regularFileContents property. Steps are then taken to decode this data and convert it to a String so that it can be displayed to the user. The exact steps to decode the data will depend on how the data was originally encoded within the fileWrapper() method. In this case, the method is designed to work with String data:

func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
    let data = text.data(using: .utf8)!
    return .init(regularFileWithContents: data)
}

The fileWrapper() method is passed a WriteConfiguration instance for the selected file and is expected to return a FileWrapper instance initialized with the data to be written. In order for the content to be written to the file it must first be converted to data and stored in a Data object. In this case the text String value is simply encoded to data. The steps involved to achieve this in your own apps will depend on the type of content being stored in the document.
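
For example, if a document stored structured data instead of plain text, the decoding and encoding steps might use JSON. The following is a hypothetical sketch (the Note type and NoteDocument structure are not part of the template):

import SwiftUI
import UniformTypeIdentifiers

struct Note: Codable {
    var title = "Untitled"
    var body = ""
}

struct NoteDocument: FileDocument {

    static var readableContentTypes: [UTType] { [.json] }

    var note = Note()

    init() {}

    init(configuration: ReadConfiguration) throws {
        guard let data = configuration.file.regularFileContents else {
            throw CocoaError(.fileReadCorruptFile)
        }
        note = try JSONDecoder().decode(Note.self, from: data)
    }

    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        // Encode the Note value to JSON data before writing.
        let data = try JSONEncoder().encode(note)
        return .init(regularFileWithContents: data)
    }
}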

The Content View

As we saw earlier in the chapter, the ContentView is passed an instance of the DocDemoDocument structure from within the App declaration:

ContentView(document: file.$document)

In the case of the DocDemo example, the ContentView binds to this property and references it as the content for a TextEditor view:

.
.
struct ContentView: View {
    @Binding var document: DocDemoDocument
 
    var body: some View {
        TextEditor(text: $document.text)
    }
}
.
.

When the view appears it will display the current string assigned to the text property of the document instance and, as the user edits the text, the changes will be stored. When the user navigates back to the document browser, a call to the fileWrapper() method will be triggered automatically and the changes saved to the document.

Running the Example App

Having explored the internals of the example DocDemo app, the final step is to experience the app in action. With this in mind, compile and run the app on a device or simulator and, once running, select the Browse tab located at the bottom of the screen:

Figure 39-7

Navigate to a suitable location either on the device or within your iCloud storage and click on the Create Document entry as shown in Figure 39-8:

Figure 39-8

The new file will be created and the content loaded into the ContentView. Edit the sample text and return to the document browser where the document (named untitled) will now be listed. Open the document once again so that it loads into the ContentView and verify that the changes were saved.

Summary

The SwiftUI DocumentGroup scene allows the document browsing and management capabilities available within the built-in Files app to be integrated into apps with relatively little effort. The core element of DocumentGroup implementation is the document declaration which acts as the interface between the document browser and views that make up the app and is responsible for encoding and decoding document content. In addition, the Info.plist file for the app must include information about the types of files the app is able to support.