A SwiftUI SiriKit NSUserActivity Tutorial

In this chapter, an example project will be created that uses the Photo domain of SiriKit to allow the user, via Siri voice commands, to search for and display a photo taken on a specified date. In the process of designing this app, the tutorial will also demonstrate the use of the NSUserActivity class to allow processing of the intent to be transferred from the Intents Extension to the main iOS app.

About the SiriKit Photo Search Project

The project created in this tutorial is going to take the form of an app that uses the SiriKit Photo Search domain to locate photos in the Photo library. Specifically, the app will allow the user to use Siri to search for photos taken on a specific date. In the event that photos matching the date criteria are found, the main app will be launched and used to display the first photo taken on the chosen day.

Creating the SiriPhoto Project

Begin this tutorial by launching Xcode and selecting the options to create a new Multiplatform App project named SiriPhoto.

Enabling the Siri Entitlement

Once the main project has been created the Siri entitlement must be enabled for the project. Select the SiriPhoto target located at the top of the Project Navigator panel (marked A in Figure 44-1) so that the main panel displays the project settings. From within this panel, select the Signing & Capabilities tab (B) followed by the SiriPhoto target entry (C):

Figure 44-1

Click on the “+ Capability” button (D) to display the dialog shown in Figure 44-2 below. Enter Siri into the filter bar, select the result and press the keyboard enter key to add the capability to the project:

Figure 44-2

Seeking Siri Authorization

In addition to enabling the Siri entitlement, the app must also seek authorization from the user to integrate the app with Siri. This is a two-step process which begins with the addition of an entry to the Info.plist file of the iOS app target for the NSSiriUsageDescription key with a corresponding string value explaining how the app makes use of Siri.

Select the Info.plist file located within the iOS folder in the project navigator panel as shown in Figure 44-3:

Figure 44-3

Once the file is loaded into the editor, locate the bottom entry in the list of properties and hover the mouse pointer over the item. When the ‘+’ button appears, click on it to add a new entry to the list. From within the drop-down list of available keys, locate and select the Privacy – Siri Usage Description option as shown in Figure 44-4:

Figure 44-4

Within the value field for the property, enter a message to display to the user when requesting permission to use Siri. For example:

Siri support is used to search for and display photo library images.

Repeat the above steps to add a Privacy – Photo Library Usage Description entry set to the following so that the app is able to request photo library access permission from the user:

This app accesses your photo library to search and display photos.

In addition to adding the Siri usage description key, a call also needs to be made to the requestSiriAuthorization class method of the INPreferences class. Ideally, this call should be made the first time that the app runs, not only so that authorization can be obtained, but also so that the user learns that the app includes Siri support. For the purposes of this project, the call will be made within the onChange() modifier based on the scenePhase changes within the app declaration located in the SiriPhotoApp.swift file as follows:

import SwiftUI
import Intents
 
@main
struct SiriPhotoApp: App {
    
    @Environment(\.scenePhase) private var scenePhase
    
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .onChange(of: scenePhase) { phase in
            INPreferences.requestSiriAuthorization({status in
                // Handle errors here
            })
        }
    }
}

Before proceeding, compile and run the app on an iOS device or simulator. When the app loads, a dialog will appear requesting authorization to use Siri. Select the OK button in the dialog to provide authorization.

Adding an Image Asset

The completed app will need an image to display when no matching photo is found for the search criteria. This image is named image-missing.png and can be found in the project_images folder of the source code download archive available from the following URL:

https://www.ebookfrenzy.com/code/SwiftUI-iOS14-CodeSamples.zip

Within the Xcode project navigator, locate and select the Assets.xcassets file located in the Shared folder. In a separate Finder window, locate the project_images folder from the sample code and drag and drop the image into the asset catalog as shown in Figure 44-5 below:

Figure 44-5

Adding the Intents Extension to the Project

With some of the initial work on the iOS app complete, it is now time to add the Intents Extension to the project. Select Xcode’s File -> New -> Target… menu option to display the template selection screen. From the range of available templates, select the Intents Extension option as shown in Figure 44-6:

Figure 44-6

With the Intents Extension template selected, click on the Next button and enter SiriPhotoIntent into the Product Name field. Before clicking on the Finish button, turn off the Include UI Extension option and make sure that the Starting Point is set to None since this extension will not be based on the Messaging domain. When prompted to do so, enable the build scheme for the Intents Extension by clicking on the Activate button in the resulting panel.

Reviewing the Default Intents Extension

The files for the Intents Extension are located in the SiriPhotoIntent folder which will now be accessible from within the Project Navigator panel. Within this folder are an Info.plist file and a file named IntentHandler.swift.

The IntentHandler.swift file contains the IntentHandler class declaration which currently only contains a stub handler() method.

Modifying the Supported Intents

Currently we have an app which is intended to search for photos but for which no supported intents have been declared. Clearly some changes need to be made to implement the required functionality.

The first step is to configure the Info.plist file for the SiriPhotoIntent extension. Select this file and unfold the NSExtension settings until the IntentsSupported array is visible:

Figure 44-7

Currently the array does not contain any supported intents. Add a photo search intent to the array by clicking on the + button indicated by the arrow in the above figure and entering INSearchForPhotosIntent into the newly created Item 0 value field. On completion of these steps the array should match that shown in Figure 44-8:

Figure 44-8

Modifying the IntentHandler Implementation

The IntentHandler class now needs to be updated to add support for Siri photo search intents. Edit the IntentHandler.swift file and change the class declaration so it reads as follows:

import Intents
import Photos
 
class IntentHandler: INExtension, INSearchForPhotosIntentHandling {
 
    override func handler(for intent: INIntent) -> Any {
        
        return self
    }
}

The only method currently implemented within the IntentHandler.swift file is the handler method. This method is the entry point into the extension and is called by SiriKit when the user indicates that the SiriPhoto app is to be used to perform a task. When calling this method, SiriKit expects in return a reference to the object responsible for handling the intent. Since this will be the responsibility of the IntentHandler class, the handler method simply returns a reference to itself.

Implementing the Resolve Methods

SiriKit is aware of a range of parameters which can be used to specify photo search criteria. These parameters consist of the photo creation date, the geographical location where the photo was taken, the people in the photo, and the album in which it resides. For each of these parameters, SiriKit will call a specific resolve method on the IntentHandler instance. Each method is passed the current intent object and is required to notify Siri whether or not the parameter is required and, if so, whether the intent contains a valid property value. The methods are also passed a completion handler reference which must be called to notify Siri of the response.

The first method called by Siri is the resolveDateCreated method which should now be implemented in the IntentHandler.swift file as follows:

func resolveDateCreated(for intent: INSearchForPhotosIntent,
    with completion: @escaping
        (INDateComponentsRangeResolutionResult) -> Void) {
 
    if let dateCreated = intent.dateCreated {
        completion(INDateComponentsRangeResolutionResult.success(
            with: dateCreated))
    } else {
        completion(INDateComponentsRangeResolutionResult.needsValue())
    }
}

The method verifies that the dateCreated property of the intent object contains a value. In the event that it does, the completion handler is called indicating to Siri that the date requirement has been successfully met within the intent. In this situation, Siri will call the next resolve method in the sequence.

If no date has been provided the completion handler is called indicating the property is still needed. On receiving this response, Siri will ask the user to provide a date for the photo search. This process will repeat until either a date is provided or the user abandons the Siri session.

The SiriPhoto app is only able to search for photos by date. The remaining resolver methods can, therefore, be implemented simply to return notRequired results to Siri. This will let Siri know that values for these parameters do not need to be obtained from the user. Remaining within the IntentHandler.swift file, implement these methods as follows:

func resolveAlbumName(for intent: INSearchForPhotosIntent, 
    with completion: @escaping (INStringResolutionResult) -> Void) {
    completion(INStringResolutionResult.notRequired())
}
 
func resolvePeopleInPhoto(for intent: 
     INSearchForPhotosIntent, with completion: @escaping ([INPersonResolutionResult]) -> Void) {
    completion([INPersonResolutionResult.notRequired()])
}
 
func resolveLocationCreated(for intent: 
    INSearchForPhotosIntent, with completion: @escaping (INPlacemarkResolutionResult) -> Void) {
        completion(INPlacemarkResolutionResult.notRequired())
}

With these methods implemented, the resolution phase of the intent handling process is now complete.

Implementing the Confirmation Method

When Siri has gathered the necessary information from the user, a call is made to the confirm method of the intent handler instance. The purpose of this call is to provide the handler with an opportunity to check that everything is ready to handle the intent. In the case of the SiriPhoto app, there are no special requirements so the method can be implemented to reply with a ready status:

func confirm(intent: INSearchForPhotosIntent, 
    completion: @escaping (INSearchForPhotosIntentResponse) -> Void)
{
    let response = INSearchForPhotosIntentResponse(code: .ready, 
        userActivity: nil)
    completion(response)
}

Handling the Intent

The next step in implementing the extension is to handle the intent. After the confirm method indicates that the extension is ready, Siri calls the handle method. This method is, once again, passed the intent object and a completion handler to be called when the intent has been handled by the extension. Implement this method now so that it reads as follows:

func handle(intent: INSearchForPhotosIntent, completion: @escaping
    (INSearchForPhotosIntentResponse) -> Void) {
    
    let activityType = "com.ebookfrenzy.siriphotointent"
    let activity = NSUserActivity(activityType: activityType)
    
    let response = INSearchForPhotosIntentResponse(code:
        INSearchForPhotosIntentResponseCode.continueInApp,
                                             userActivity: activity)
    
    if intent.dateCreated != nil {
        let calendar = Calendar(identifier: .gregorian)
        
        if let startComponents = intent.dateCreated?.startDateComponents,
            let endComponents = intent.dateCreated?.endDateComponents {
            
            if let startDate = calendar.date(from:
                startComponents),
                let endDate = calendar.date(from:
                    endComponents) {
                
                response.searchResultsCount = 
                   photoSearchFrom(startDate, to: endDate)
            }
        }
    }
    completion(response)
}

The above code requires some explanation. The method is responsible for constructing the intent response object containing the NSUserActivity object which will be handed off to the SiriPhoto app. The method begins by creating a new NSUserActivity instance configured with a type as follows:

let activityType = "com.ebookfrenzy.siriphotointent"
let activity = NSUserActivity(activityType: activityType)

The activity type can be any string value but generally takes the form of the app or extension name and company reverse domain name. Later in the chapter, this type name will need to be added as a supported activity type to the Info.plist file for the SiriPhoto app and referenced in the App declaration so that SiriPhoto knows which intent triggered the app launch.

Next, the method creates a new intent response instance and configures it with a code to let Siri know that the intent handling will be continued within the main SiriPhoto app. The intent response is also initialized with the NSUserActivity instance created previously:

let response = INSearchForPhotosIntentResponse(code:
                    INSearchForPhotosIntentResponseCode.continueInApp,
                               userActivity: activity)

The code then converts the start and end dates from DateComponents objects to Date objects and calls a method named photoSearchFrom(to:) to confirm that photo matches are available for the specified date range. The photoSearchFrom(to:) method (which will be implemented next) returns a count of the matching photos. This count is then assigned to the searchResultsCount property of the response object, which is then returned to Siri via the completion handler:

if let startComponents = intent.dateCreated?.startDateComponents,
    let endComponents = intent.dateCreated?.endDateComponents {
 
    if let startDate = calendar.date(from: startComponents),
        let endDate = calendar.date(from: endComponents) {
 
        response.searchResultsCount = photoSearchFrom(startDate,
                                                      to: endDate)
    }
}
completion(response)

If the extension returns a zero count via the searchResultsCount property of the response object, Siri will notify the user that no photos matched the search criteria. If one or more photo matches were found, Siri will launch the main SiriPhoto app and pass it the NSUserActivity object.

The final step in implementing the extension is to add the photoSearchFrom(to:) method to the IntentHandler.swift file:

func photoSearchFrom(_ startDate: Date, to endDate: Date) -> Int {
 
    let fetchOptions = PHFetchOptions()
 
    fetchOptions.predicate = NSPredicate(format: "creationDate > %@ AND creationDate < %@", startDate as CVarArg, endDate as CVarArg)
    let fetchResult = PHAsset.fetchAssets(with: PHAssetMediaType.image, 
                           options: fetchOptions)
    return fetchResult.count
}

The method makes use of the standard iOS Photos Framework to perform a search of the Photo library. It begins by creating a PHFetchOptions object. A predicate is then initialized and assigned to the fetchOptions instance specifying that the search is looking for photos taken between the start and end dates. Finally, the search for matching images is initiated, and the resulting count of matches is returned.

Testing the App

Though there is still some work to be completed for the main SiriPhoto app, the Siri extension functionality is now ready to be tested. Within Xcode, make sure that SiriPhotoIntent is selected as the current target and click on the run button. When prompted for a host app, select Siri and click the run button. When Siri has started listening, say the following:

“Find a photo with SiriPhoto”

Siri will respond by seeking the day for which you would like to find a photo. After you specify a date, Siri will either launch the SiriPhoto app if photos exist for that day, or state that no photos could be found. Note that the first time a photo is requested the privacy dialog will appear seeking permission to access the photo library.

Provide permission when prompted and then repeat the photo search request.

Adding a Data Class to SiriPhoto

When SiriKit launches the SiriPhoto app in response to a successful photo search, it will pass the app an NSUserActivity instance. The app will need to handle this activity and use the intent response it contains to extract the matching photo from the library. The photo image will, in turn, need to be stored as a published observable property so that the content view is always displaying the latest photo. These tasks will be performed in a new Swift class declaration named PhotoHandler.

Add this new class to the project by right-clicking on the Shared folder in the project navigator panel and selecting the New File… menu option. In the template selection panel, choose the Swift File option before clicking on the Next button. Name the new class PhotoHandler and click on the Create button. With the PhotoHandler.swift file loaded into the code editor, modify it as follows:

import SwiftUI
import Combine
import Intents
import Photos
 
class PhotoHandler: ObservableObject {
    
    @Published var image: UIImage?
    var userActivity: NSUserActivity
    
    init (userActivity: NSUserActivity) {
        
        self.userActivity = userActivity
        self.image = UIImage(named: "image-missing")
        
    }
}

The above changes declare an observable class containing UIImage and NSUserActivity properties. The image property is declared as being published and will be observed by the content view later in the tutorial.

The class initializer stores the NSUserActivity instance passed through when the class is instantiated and assigns the missing image icon to the image property so that it will be displayed if there is no matching image from SiriKit.

Next, the class needs a method which can be called by the app to extract the photo from the library. Remaining in the PhotoHandler.swift file, add this method to the class as follows:

func handleActivity() {
    
    let intent = userActivity.interaction?.intent
        as! INSearchForPhotosIntent
    
    if (intent.dateCreated?.startDateComponents) != nil {
        let calendar = Calendar(identifier: .gregorian)
        let startDate = calendar.date(from:
            (intent.dateCreated?.startDateComponents)!)
        let endDate = calendar.date(from:
            (intent.dateCreated?.endDateComponents)!)
        getPhoto(startDate!, endDate!)
    }
}

The handleActivity() method extracts the intent from the user activity object and then converts the start and end dates to Date objects. These dates are then passed to the getPhoto() method which now also needs to be added to the class:

func getPhoto(_ startDate: Date, _ endDate: Date){
    
    let fetchOptions = PHFetchOptions()
    
    fetchOptions.predicate = NSPredicate(
         format: "creationDate > %@ AND creationDate < %@", 
                  startDate as CVarArg, endDate as CVarArg)
    let fetchResult = PHAsset.fetchAssets(with:
        PHAssetMediaType.image, options: fetchOptions)
    
    let imgManager = PHImageManager.default()
    
    if let firstObject = fetchResult.firstObject {
        imgManager.requestImage(for: firstObject as PHAsset,
                                targetSize: CGSize(width: 500, 
                                                    height: 500),
                                contentMode: 
                                     PHImageContentMode.aspectFill,
                                options: nil,
                                resultHandler: { (image, _) in
                                    self.image = image
        })
    }
}

The getPhoto() method performs the same steps used by the intent handler to search the Photo library based on the search date parameters. Once the search results have returned, however, the PHImageManager instance is used to retrieve the image from the library and assign it to the published image variable.

Designing the Content View

The user interface for the app is going to consist of a single Image view on which will be displayed the first photo taken on the day chosen by the user via Siri voice commands. Edit the ContentView.swift file and modify it so that it reads as follows:

import SwiftUI
 
struct ContentView: View {
 
    @StateObject var photoHandler: PhotoHandler
    
    var body: some View {
        Image(uiImage: photoHandler.image!)
            .resizable()
            .aspectRatio(contentMode: .fit)
            .padding()
    }
}
 
struct ContentView_Previews: PreviewProvider {
    static var previews: some View {
        ContentView(photoHandler: PhotoHandler(userActivity: 
              NSUserActivity(activityType: "Placeholder")))
    }
}

The changes simply add a PhotoHandler state object variable declaration, the image property of which is used to display an image on an Image view. The preview declaration is then adapted to pass a PhotoHandler instance to the content view initialized with a placeholder NSUserActivity object. Steps also need to be taken to pass a placeholder PhotoHandler instance to the content view within the SiriPhotoApp.swift file as follows:

import SwiftUI
import Intents
 
@main
struct SiriPhotoApp: App {
 
    @Environment(\.scenePhase) private var scenePhase
    var photoHandler: PhotoHandler = 
        PhotoHandler(userActivity: NSUserActivity(activityType: "Placeholder"))
    
    var body: some Scene {
        WindowGroup {
            ContentView(photoHandler: photoHandler)
        }
        .onChange(of: scenePhase) { phase in
            INPreferences.requestSiriAuthorization({status in
                // Handle errors here
            })
        }
    }
}

When previewed, the ContentView layout should be rendered as shown in the figure below:

Figure 44-9

Adding Supported Activity Types to SiriPhoto

When the intent handler was implemented earlier in the chapter, the NSUserActivity object containing the photo search information was configured with an activity type string. In order for the SiriPhoto app to receive the activity, the type must be declared using the NSUserActivityTypes property in the app’s iOS Info.plist file. Within the project navigator panel, select the Info.plist file located in the iOS folder. Hover the mouse pointer over the last entry in the property list and click on the ‘+’ button to add a new property. In the Key field, enter NSUserActivityTypes and change the Type setting to Array as shown in Figure 44-10:

Figure 44-10

Click on the ‘+’ button indicated by the arrow above to add a new item to the array. Set the value for Item 0 to com.ebookfrenzy.siriphotointent so that it matches the type assigned to the user activity instance:

Figure 44-11

Handling the NSUserActivity Object

The intent handler in the extension has instructed Siri to continue the intent handling process by launching the main SiriPhoto app. When the app is launched by Siri it will be passed the NSUserActivity object for the session containing the intent object. This object can be accessed from within the App declaration by adding the onContinueUserActivity() modifier to the ContentView, passing through the activity type and defining the actions to be performed. Within the SiriPhotoApp.swift file, implement these changes as follows:

import SwiftUI
 
@main
struct SiriPhotoApp: App {
    
    var photoHandler: PhotoHandler = PhotoHandler(userActivity: 
        NSUserActivity(activityType: "Placeholder"))
    
    var body: some Scene {
        WindowGroup {
            ContentView(photoHandler: photoHandler)
                .onContinueUserActivity(
                       "com.ebookfrenzy.siriphotointent", 
                perform: { userActivity in
                    photoHandler.userActivity = userActivity
                    photoHandler.handleActivity()
                })
        }
.
.

The declaration begins by creating a placeholder PhotoHandler instance which can be passed to the ContentView in the event that the app is not launched by a supported activity type, or is launched by the user tapping on the app icon on the device home screen.

Next, the onContinueUserActivity() modifier is configured to only detect the activity type associated with the SiriPhotoIntent. If the type is detected, the NSUserActivity object passed to the app is assigned to the placeholder PhotoHandler instance and the handleActivity() method is called to fetch the photo from the library. Because the content view is observing the image property, the Image view will update to display the extracted photo image.

Testing the Completed App

Run the SiriPhotoIntent extension, perform a photo search and, assuming photos are available for the selected day, wait for the main SiriPhoto app to load. When the app has loaded, the first photo taken on the specified date should appear within the Image view:

Figure 44-12

Summary

This chapter has worked through the creation of a simple app designed to use SiriKit to locate a photo taken on a particular date. The example has demonstrated the creation of an Intents Extension and the implementation of the intent handler methods necessary to interact with the Siri environment, including resolving missing parameters in the Siri intent. The project also explored the use of the NSUserActivity class to transfer the intent from the extension to the main iOS app.

Customizing the SiriKit Intent User Interface

Each SiriKit domain will default to a standard user interface layout to present information to the user during the Siri session. In the previous chapter, for example, the standard user interface was used by SiriKit to display the message recipients and content to the user before sending the message. The default appearance can, however, be customized by making use of an Intent UI app extension. This UI Extension provides a way to control the appearance of information when it is displayed within the Siri interface. It also allows an extension to present additional information that would not normally be displayed by Siri or to present information using a visual style that reflects the design theme of the main app.

Adding the Intents UI Extension

When the Intents Extension was added to the SiriDemo project in the previous chapter, the option to include an Intents UI Extension was disabled. Now that we are ready to create a customized user interface for the intent, select the Xcode File -> New -> Target… menu option and add an Intents UI Extension to the project. Name the product SiriDemoIntentUI and, when prompted to do so, activate the build scheme for the new extension.

Modifying the UI Extension

SiriKit provides two mechanisms for performing this customization, each of which involves implementing a method in the intent UI view controller class file. A simpler and less flexible option involves the use of the configure method. For greater control, the configureView method is available.

Using the configure Method

The files for the Intent UI Extension added above can be found within the Project navigator panel under the SiriDemoIntentUI folder.

Included within the SiriDemoIntentUI extension is a storyboard file named MainInterface.storyboard. For those unfamiliar with how user interfaces were built prior to the introduction of SwiftUI, this is an Interface Builder file. When the configure method is used to customize the user interface, this scene is used to display additional content which will appear directly above the standard SiriKit provided UI content. This layout is sometimes referred to as the Siri Snippet.

Although not visible by default, at the top of the message panel presented by Siri is the area represented by the UI Extension. Specifically, this displays the scene defined in the MainInterface.storyboard file of the SiriDemoIntentUI extension folder. The lower section of the panel is the default user interface provided by Siri for this particular SiriKit domain.

To provide a custom user interface using the UI Extension, the user interface needs to be implemented in the MainInterface.storyboard file and the configure method added to the IntentViewController.swift file. The IntentViewController class in this file is a subclass of UIViewController and configured such that it implements the INUIHostedViewControlling protocol.

The UI Extension is only used when information is being presented to the user in relation to an intent type that has been declared as supported in the UI Extension’s Info.plist file. When the extension is used, the configure method of the IntentViewController is called and passed an INInteraction object containing both the NSUserActivity and intent objects associated with the current Siri session. This allows context information about the session to be extracted and displayed to the user via the custom user interface defined in the MainInterface.storyboard file.

To add content above the “To:” line, therefore, we just need to implement the configure method and add some views to the UIView instance in the storyboard file. These views can be added either via Interface Builder or programmatically with the configure method.
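Though not used beyond this section, a minimal sketch of what a configure method implementation might look like appears below. The messageLabel outlet is hypothetical and assumes a Label has been added to the storyboard scene and connected to the view controller:

func configure(with interaction: INInteraction,
               context: INUIHostedViewContext,
               completion: @escaping (CGSize) -> Void) {
 
    // Extract the intent from the interaction and use its content to
    // populate the hypothetical messageLabel added to the storyboard scene.
    if let intent = interaction.intent as? INSendMessageIntent {
        messageLabel.text = intent.content
    }
 
    // Report the size needed to display the snippet within the Siri interface.
    completion(CGSize(width: 300, height: 70))
}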

For more advanced configuration, however, the configureView() approach provides far greater flexibility, and is the focus of this chapter.

Using the configureView Method

Unlike the configure method, the configureView method allows each section of the default user interface to be replaced with custom content and view layout.

SiriKit considers the default layout to be a vertical stack in which each row is represented by a parameter. For each layer of the stack (starting at the top and finishing at the bottom of the layout) the configureView method is called, passed information about the corresponding parameters and given the opportunity to provide a custom layout to be displayed within the corresponding stack row of the Siri user interface. The method is also passed a completion handler to be called with the appropriate configuration information to be passed back to Siri.

The parameters passed to the method take the form of INParameter instances. It is the responsibility of the configureView method to find out if a parameter is one for which it wants to provide a custom layout. It does this by creating local INParameter instances of the type it is interested in and comparing these to the parameters passed to the method. Parameter instances are created by combining the intent class type with a specific key path representing the parameter (each type of intent has its own set of key paths which can be found in the documentation for that class). If the method needs to confirm that the passed parameter relates to the content of a send message intent, for example, the code would read as follows:

func configureView(for parameters: Set<INParameter>, of interaction: 
   INInteraction, interactiveBehavior: INUIInteractiveBehavior, context: 
    INUIHostedViewContext, completion: @escaping (Bool, Set<INParameter>, 
      CGSize) -> Void) {
 
    let content = INParameter(for: INSendMessageIntent.self, 
               keyPath: #keyPath(INSendMessageIntent.content))
 
    if parameters == [content] {
        // Configure ViewController before calling completion handler
    }
.
.
}

When creating a custom layout, it is likely that the method will need to access the data contained within the parameter. In the above code, for example, it might be useful to extract the message content from the parameter and incorporate it into the custom layout. This is achieved by calling the parameterValue method of the INInteraction object which is also passed to the configureView method. Each parameter type has associated with it a set of properties. In this case, the property for the message content is named, appropriately enough, content and can be accessed as follows:

.
.
let content = INParameter(for: INSendMessageIntent.self, 
               keyPath: #keyPath(INSendMessageIntent.content))
 
if parameters == [content] {
   let contentString = interaction.parameterValue(for: content)
}
.
.

When the configureView method is ready to provide Siri with a custom layout, it calls the provided completion handler, passing through a Boolean true value, the original parameters and a CGSize object defining the size of the layout as it is to appear in the corresponding row of the Siri user interface stack, for example:

completion(true, parameters, size)

If the default Siri content is to be displayed for the specified parameters instead of a custom user interface, the completion handler is called with a false value and a zero CGSize object:

completion(false, parameters, CGSize.zero)

In addition to calling the configureView method for each parameter, Siri will first make a call to the method to request a configuration for no parameters. By default, the method should check for this condition and call the completion handler as follows:

if parameters.isEmpty {
    completion(false, [], CGSize.zero)
}

The foundation for the custom user interface for each parameter is the View contained within the intent UI MainInterface.storyboard file. Once the configureView method has identified the parameters it can dynamically add views to the layout, or make changes to existing views contained within the scene.

Designing the Siri Snippet

The previous section covered a considerable amount of information, much of which will become clearer by working through an example.

Begin by selecting the MainInterface.storyboard file belonging to the SiriDemoIntentUI extension. While future releases of Xcode will hopefully allow the snippet to be declared using SwiftUI, this currently involves working with Interface Builder to add components, configure layout constraints and set up outlets.

The first step is to add a Label to the layout canvas. Display the Library by clicking on the button marked A in Figure 43-1 below and drag and drop a Label object from the Library (B) onto the layout canvas as indicated by the arrow:

Figure 43-1

Next, the Label needs to be constrained so that it has a 5-point margin between the leading, trailing and top edges of the parent view. With the Label selected in the canvas, click on the Add New Constraints button located in the bottom right-hand corner of the editor to display the menu shown in Figure 43-2 below:

Figure 43-2

Enter 5 into the top, left and right boxes and click on the I-beam icons next to each value so that they are displayed in solid red instead of dashed lines before clicking on the Add 3 Constraints button.

Before proceeding to the next step, establish an outlet connection from the Label component to a variable in the IntentViewController.swift file named contentLabel. This will allow the view controller to change the text displayed on the Label to reflect the intent content parameter. This is achieved using the Assistant Editor which is displayed by selecting the Xcode Editor -> Assistant menu option. Once displayed, Ctrl-click on the Label in the canvas and drag the resulting line to a position in the Assistant Editor immediately above the viewDidLoad() declaration:

Figure 43-3

On releasing the line, the dialog shown in Figure 43-4 will appear. Enter contentLabel into the Name field and click on Connect to establish the outlet.

Figure 43-4

Ctrl-click on the snippet background view and drag to immediately beneath the newly declared contentLabel outlet, this time creating an outlet named contentView:

Figure 43-5

On completion of these steps, the outlets should appear in the IntentViewController.swift file as follows:

class IntentViewController: UIViewController, INUIHostedViewControlling {
    
    @IBOutlet weak var contentLabel: UILabel!
    @IBOutlet weak var contentView: UIView!
.
.

Implementing a configureView Method

Next, edit the configureView method located in the IntentViewController.swift file to extract the content and recipients from the intent, and to modify the Siri snippet for the content parameter as follows:

func configureView(for parameters: Set<INParameter>, of interaction: 
    INInteraction, interactiveBehavior: INUIInteractiveBehavior, context: 
    INUIHostedViewContext, completion: @escaping (Bool, Set<INParameter>, 
     CGSize) -> Void) {
 
    var size = CGSize.zero
    
    let content = INParameter(for: INSendMessageIntent.self, keyPath:
        #keyPath(INSendMessageIntent.content))
 
    let recipients = INParameter(for: INSendMessageIntent.self,
                        keyPath: #keyPath(INSendMessageIntent.recipients))
    
    let recipientsValue = interaction.parameterValue(
           for: recipients) as! Array<INPerson>
 
    if parameters == [content] {
        let contentValue = interaction.parameterValue(for: content)
        
        self.contentLabel.text = contentValue as? String
        self.contentLabel.textColor = UIColor.white
        self.contentView.backgroundColor = UIColor.brown
        size = CGSize(width: 100, height: 70)
    }
    completion(true, parameters, size)
}

The code begins by declaring a variable in which to contain the required size of the Siri snippet before the content and recipients are extracted from the intent parameters. If the parameters include message content, it is applied to the Label widget in the snippet. The background of the snippet view is set to brown, the text color to white, and the dimensions to 100 x 70 points.

The recipients parameter takes the form of an array of INPerson objects, from which the recipients’ display names can be extracted. Code now needs to be added to iterate through each recipient in the array, adding each name to a string to be displayed on the contentLabel view. Code will also be added to use a different font and text color on the label and to change the background color of the view. Since the recipients list requires less space, the height of the view is set to 30 points:

.
.
    if parameters == [content] {
        let contentValue = interaction.parameterValue(for: content)
        self.contentLabel.text = contentValue as? String
        self.contentView.backgroundColor = UIColor.brown
        size = CGSize(width: 100, height: 70)      
    } else if recipientsValue.count > 0 {
        var recipientStr = "To:"
        var first = true
            
        for name in recipientsValue {
            let separator = first ? " " : ", "
            first = false
            recipientStr += separator + name.displayName
        }
            
        self.contentLabel.font = UIFont(name: "Arial-BoldItalicMT", size: 20.0)
        self.contentLabel.text = recipientStr
        self.contentLabel.textColor = UIColor.white
        self.contentView.backgroundColor = UIColor.blue
        size = CGSize(width: 100, height: 30)
    } else if parameters.isEmpty {
        // Return a false value and a zero size, then exit so that the
        // completion handler is not called a second time below.
        completion(false, [], CGSize.zero)
        return
    }
    completion(true, parameters, size)
.
.

Note that the above additions to the configureView() method also include a check for empty parameters, in which case a false value is returned together with a zeroed CGSize object indicating that there is nothing to display.

Testing the Extension

To test the extension, begin by changing the run target menu to the SiriDemoIntentUI target as shown in Figure 43-6 below:

Figure 43-6

Next, display the menu again, this time selecting the Edit Scheme… menu option:

Figure 43-7

In the resulting dialog select the Run option from the left-hand panel and enter the following into the Siri Intent Query box before clicking on the Close button:

Use SiriDemo to tell John and Kate I’ll be 10 minutes late.

Compile and run the Intents UI Extension and verify that the recipient row now appears with a blue background, a 30-point height and uses a larger italic font while the content appears with a brown background and a 70-point height:

Figure 43-8

Summary

While the default user interface provided by SiriKit for the various domains will be adequate for some apps, most intent extensions will need to be customized to present information in a way that matches the style and theme of the associated app, or to provide additional information not supported by the default layout. The default UI can be replaced by adding an Intent UI extension to the app project. The UI extension provides two options for configuring the user interface presented by Siri. The simpler of the two involves the use of the configure method to present a custom view above the default Siri user interface layout. A more flexible approach involves the implementation of the configureView method. SiriKit associates each line of information displayed in the default layout with a parameter. When implemented, the configureView method will be called for each of these parameters and provided with the option to return a custom View containing the layout and information to be used in place of the default user interface element.

A SwiftUI SiriKit Tutorial

The previous chapter covered much of the theory associated with integrating Siri into an iOS app. This chapter will review the example Siri messaging extension that is created by Xcode when a new Intents Extension is added to a project. This will not only show a practical implementation of the topics covered in the previous chapter, but will also provide some more detail on how the integration works. The next chapter will cover the steps required to make use of a UI Extension within an app project.

Creating the Example Project

Begin by launching Xcode and creating a new Multiplatform App project named SiriDemo.

Enabling the Siri Entitlement

Once the main project has been created the Siri entitlement must be enabled for the project. Select the SiriDemo target located at the top of the Project Navigator panel (marked A in Figure 42-1) so that the main panel displays the project settings. From within this panel, select the Signing & Capabilities tab (B) followed by the SiriDemo target entry (C):

Figure 42-1

Click on the “+ Capability” button (D) to display the dialog shown in Figure 42-2. Enter Siri into the filter bar, select the result and press the keyboard enter key to add the capability to the project:

Figure 42-2

If Siri is not listed as an option, you will need to pay to join the Apple Developer program as outlined in the chapter entitled “Joining the Apple Developer Program”.

Seeking Siri Authorization

In addition to enabling the Siri entitlement, the app must also seek authorization from the user to integrate the app with Siri. This is a two-step process which begins with the addition of an entry to the Info.plist file of the iOS app target for the NSSiriUsageDescription key with a corresponding string value explaining how the app makes use of Siri.

Select the Info.plist file located within the iOS folder in the project navigator panel as shown in Figure 42-3:

Figure 42-3

Once the file is loaded into the editor, locate the bottom entry in the list of properties and hover the mouse pointer over the item. When the plus button appears, click on it to add a new entry to the list. From within the drop-down list of available keys, locate and select the Privacy – Siri Usage Description option as shown in Figure 42-4:

Figure 42-4

Within the value field for the property, enter a message to display to the user when requesting permission to use Siri. For example:

Siri support is used to send and review messages.

In addition to adding the Siri usage description key, a call also needs to be made to the requestSiriAuthorization() class method of the INPreferences class. Ideally, this call should be made the first time that the app runs, not only so that authorization can be obtained, but also so that the user learns that the app includes Siri support. For the purposes of this project, the call will be made within the onChange() modifier based on the scenePhase changes within the app declaration located in the SiriDemoApp.swift file as follows:

import SwiftUI
import Intents
 
@main
struct SiriDemoApp: App {
    
    @Environment(\.scenePhase) private var scenePhase
    
    var body: some Scene {
        WindowGroup {
            ContentView()
        }
        .onChange(of: scenePhase) { phase in
            INPreferences.requestSiriAuthorization({status in
                // Handle errors here
            })
        }
    }
}

Before proceeding, compile and run the app on an iOS device or simulator. When the app loads, a dialog will appear requesting authorization to use Siri. Select the OK button in the dialog to provide authorization.

Adding the Intents Extension

The next step is to add the Intents Extension to the project ready to begin the SiriKit integration. Select the Xcode File -> New -> Target… menu option and add an Intents Extension to the project. Name the product SiriDemoIntent, set the Starting Point menu to Messaging and make sure that the Include UI Extension option is turned off (this will be added in the next chapter) before clicking on the Finish button. When prompted to do so, activate the build scheme for the Intents Extension.

Supported Intents

In order to work with Siri, an extension must specify the intent types it is able to support. These declarations are made in the Info.plist files of the extension folders. Within the Project Navigator panel, select the Info.plist file located in the SiriDemoIntent folder and unfold the NSExtension -> NSExtensionAttributes section. This will show that the IntentsSupported key has been assigned an array of intent class names:

Figure 42-5

Note that entries are available for intents that are supported and intents that are supported but restricted when the lock screen is enabled. It might be wise, for example, for a payment based intent to be restricted when the screen is locked. As currently configured, the extension supports all of the messaging intent types without restrictions. To support a different domain, change these intents or add additional intents accordingly. For example, a photo search extension might only need to specify INSearchForPhotosIntent as a supported intent. When the Intents UI Extension is added in the next chapter, it too will contain an Info.plist file with these supported intent value declarations. Note that the intents supported by the Intents UI Extension can be a subset of those declared in the Intents Extension. This allows the UI Extension to be used only for certain intent types.

Trying the Example

Before exploring the structure of the project it is worth running the app and experiencing the Siri integration. The example simulates searching for and sending messages, so can be safely used without any messages actually being sent.

Make sure that the SiriDemoIntent option is selected as the run target in the toolbar as illustrated in Figure 42-6 and click on the run button.

Figure 42-6

When prompted, select Siri as the app within which the extension is to run. When Siri launches experiment with phrases such as the following:

“Send a message with SiriDemo.”

“Send a message to John with SiriDemo.”

“Use SiriDemo to say Hello to John and Kate.”

“Find Messages with SiriDemo.”

If Siri indicates that SiriDemo has not yet been set up, tap the button located on the Siri screen to open the SiriDemo app. Once the app has launched, press and hold the home button to relaunch Siri and try the above phrases again.

In each case, all of the work involved in understanding the phrases and converting them into structured representations of the request is performed by Siri. All the intent handler needs to do is work with the resulting intent object.

Specifying a Default Phrase

A useful option when repeatedly testing SiriKit behavior is to configure a phrase to be passed to Siri each time the app is launched from within Xcode. This avoids having to repeatedly speak to Siri each time the app is relaunched. To specify the test phrase, select the SiriDemoIntent run target in the Xcode toolbar and select Edit scheme… from the resulting menu as illustrated in Figure 42-7:

Figure 42-7

In the scheme panel, select the Run entry in the left-hand panel followed by the Info tab in the main panel. Within the Info settings, enter a query phrase into the Siri Intent Query text box before closing the panel:

Figure 42-8

Run the extension once again and note that the phrase is automatically passed to Siri to be handled:

Figure 42-9

Reviewing the Intent Handler

The Intent Handler is declared in the IntentHandler.swift file in the SiriDemoIntent folder. Load the file into the editor and note that the class declares that it supports a range of intent handling protocols for the messaging domain:

class IntentHandler: INExtension, INSendMessageIntentHandling, 
  INSearchForMessagesIntentHandling, INSetMessageAttributeIntentHandling {
.
.
}

The above declaration declares the class as supporting all three of the intents available in the messaging domain.

As an alternative to listing all of the protocol names individually, the above code could have achieved the same result by referencing the INMessagesDomainHandling protocol which encapsulates all three protocols.
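Had that approach been taken, the class declaration would be expected to read along these lines:

class IntentHandler: INExtension, INMessagesDomainHandling {
.
.
}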

If this template were to be re-purposed for a different domain, these protocol declarations would need to be replaced. For a payment extension, for example, the declaration might read as follows:

class IntentHandler: INExtension, INSendPaymentIntentHandling, 
    INRequestPaymentIntentHandling {
.
.
}

The class also contains the handler method, resolution methods for the intent parameters and the confirm method. The resolveRecipients method is of particular interest since it demonstrates the use of the resolution process to provide the user with a range of options from which to choose when a parameter is ambiguous.
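Although the full implementation is best reviewed directly in the IntentHandler.swift file, the following condensed sketch illustrates the general pattern such a method might follow (the contact matching step is app-specific and is represented here by a placeholder):

func resolveRecipients(for intent: INSendMessageIntent, with completion:
    @escaping ([INSendMessageRecipientResolutionResult]) -> Void) {
 
    guard let recipients = intent.recipients, !recipients.isEmpty else {
        // No recipients were provided, so ask Siri to prompt the user.
        completion([INSendMessageRecipientResolutionResult.needsValue()])
        return
    }
 
    var results = [INSendMessageRecipientResolutionResult]()
 
    for recipient in recipients {
        // Placeholder for app-specific contact matching logic.
        let matches = [recipient]
 
        switch matches.count {
        case 0:
            // No matching contact could be found.
            results.append(.unsupported())
        case 1:
            // Exactly one match, so accept the recipient.
            results.append(.success(with: recipient))
        default:
            // Multiple matches, so ask Siri to have the user choose one.
            results.append(.disambiguation(with: matches))
        }
    }
    completion(results)
}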

The implementation also contains multiple handle methods for performing tasks for message search, message send and message attribute change intents. Take some time to review these methods before proceeding.

Summary

This chapter has provided a walk-through of the sample messaging-based extension provided by Xcode when creating a new Intents Extension. This has highlighted the steps involved in adding both Intents and UI Extensions to an existing project, and enabling and seeking SiriKit integration authorization for the project. The chapter also outlined the steps necessary for the extensions to declare supported intents and provided an opportunity to gain familiarity with the methods that make up a typical intent handler. The next chapter will outline the mechanism for implementing and configuring a UI Extension.

An Introduction to SwiftUI and SiriKit

Although Siri has been part of iOS for a number of years, it was not until the introduction of iOS 10 that some of the power of Siri was made available to app developers through SiriKit. Initially limited to particular categories of app, SiriKit has since extended to allow Siri functionality to be built into apps of any type.

The purpose of SiriKit is to allow key areas of application functionality to be accessed via voice commands through the Siri interface. An app designed to send messages, for example, may be integrated into Siri to allow messages to be composed and sent using voice commands. Similarly, a time management app might use SiriKit to allow entries to be made in the Reminders app.

This chapter will provide an overview of SiriKit and outline the ways in which apps are configured to integrate SiriKit support.

Siri and SiriKit

Most iOS users will no doubt be familiar with Siri, Apple’s virtual digital assistant. Pressing and holding the home button, or saying “Hey Siri” launches Siri and allows a range of tasks to be performed by speaking in a conversational manner. Selecting the playback of a favorite song, asking for turn-by-turn directions to a location or requesting information about the weather are all examples of tasks that Siri can perform in response to voice commands.

When an app integrates with SiriKit, Siri handles all of the tasks associated with communicating with the user and interpreting the meaning and context of the user’s words. Siri then packages up the user’s request into an intent and passes it to the iOS app. It is then the responsibility of the iOS app to verify that enough information has been provided in the intent to perform the task and to instruct Siri to request any missing information. Once the intent contains all of the necessary data, the app performs the requested task and notifies Siri of the results. These results will be presented either by Siri or within the iOS app itself.

SiriKit Domains

When initially introduced, SiriKit could only be used with apps to perform tasks that fit into narrowly defined categories, also referred to as domains. With the release of iOS 10, Siri could only be used by apps when performing tasks that fit into one or more of the following domains:

  • Messaging
  • Notes and Lists
  • Payments
  • Visual Codes
  • Photos
  • Workouts
  • Ride Booking
  • CarPlay
  • Car Commands
  • VoIP Calling
  • Restaurant Reservations
  • Media

If your app fits into one of these domains then this is still the recommended approach to performing Siri integration. If, on the other hand, your app does not have a matching domain, SiriKit can now be integrated using custom Siri Shortcuts.

Siri Shortcuts

Siri Shortcuts allow frequently performed activities within an app to be stored as a shortcut and triggered via Siri using a pre-defined phrase. If a user regularly checked a specific stock price within a financial app, for example, that task could be saved as a shortcut and performed at any time via Siri voice command without the need to manually launch the app. Although lacking the power and flexibility of SiriKit domain-based integration, Siri Shortcuts provide a way for key features to be made accessible via Siri for apps that would otherwise be unable to provide any Siri integration.

An app can provide an “Add to Siri” button that allows a particular task to be configured as a shortcut. Alternatively, an app can make shortcut suggestions by donating actions to Siri. The user can review any shortcut suggestions within the Shortcuts app and choose those to be added as shortcuts.

Based on user behavior patterns, Siri will also suggest shortcuts to the user in the Siri Suggestions and Search panel that appears when making a downward swiping motion on the device home screen.

Siri Shortcuts will be covered in detail in the chapters entitled “An Overview of Siri Shortcut App Integration” and “A SwiftUI Siri Shortcut Tutorial”. Be sure to complete this chapter before looking at the Siri Shortcut chapters. Much of the content in this chapter applies equally to SiriKit domains and Siri Shortcuts.

SiriKit Intents

Each domain allows a predefined set of tasks, or intents, to be requested by the user for fulfillment by an app. An intent represents a specific task of which Siri is aware and which SiriKit expects an integrated iOS app to be able to perform. The Messaging domain, for example, includes intents for sending and searching for messages, while the Workout domain contains intents for choosing, starting and finishing workouts. When the user makes a request of an app via Siri, the request is placed into an intent object of the corresponding type and passed to the app for handling.
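For example, a messaging app integrated with the Messaging domain might receive a send message request in the form of an INSendMessageIntent object and read the values gathered by Siri from that object’s properties. The following is a rough sketch within a hypothetical intent handler rather than a complete implementation:

func handle(intent: INSendMessageIntent, completion: @escaping
    (INSendMessageIntentResponse) -> Void) {
 
    // The structured values gathered by Siri are exposed as intent properties.
    let messageText = intent.content ?? ""
    let recipientCount = intent.recipients?.count ?? 0
 
    // A real app would send the message here using its own messaging logic.
    print("Sending \"\(messageText)\" to \(recipientCount) recipient(s)")
 
    completion(INSendMessageIntentResponse(code: .success,
                                           userActivity: nil))
}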

In the case of Siri Shortcuts, a SiriKit integration is implemented by using a custom intent combined with an intents definition file describing how the app will interact with Siri.

How SiriKit Integration Works

Siri integration is performed via the iOS extension mechanism. Extensions are added as targets to the app project within Xcode in the same way as other extension types. SiriKit provides two types of extension, the key one being the Intents Extension. This extension contains an intent handler which is subclassed from the INExtension class of the Intents framework and contains the methods called by Siri during the process of communicating with the user. It is the responsibility of the intent handler to verify that Siri has collected all of the required information from the user, and then to execute the task defined in the intent.

The second extension type is the UI Extension. This extension is optional and comprises a storyboard file and a subclass of the IntentViewController class. When provided, Siri will use this UI when presenting information to the user. This can be useful for including additional information within the Siri user interface or for bringing the branding and theme of the main iOS app into the Siri environment.

When the user makes a request of an app via Siri, the first method to be called is the handler(for:) method of the intent handler class contained in the Intents Extension. This method is passed the current intent object and returns a reference to the object that will serve as the intent handler. This can either be the intent handler class itself or another class that has been configured to implement one or more intent handling protocols.
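
The skeleton intent handler generated by Xcode for a new Intents Extension typically takes a form similar to the following minimal sketch, in which the extension class itself is returned as the handler:

import Intents
 
class IntentHandler: INExtension {
 
    override func handler(for intent: INIntent) -> Any {
        // Return an object capable of handling the intent. In the simplest
        // case the extension class itself serves as the intent handler.
        return self
    }
}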

The intent handler declares the types of intent it is able to handle and must then implement all of the protocol methods required to support those particular intent types. These methods are then called as part of a sequence of phases that make up the intent handling process as illustrated in Figure 41-1:

Figure 41-1

The first step after Siri calls the handler method involves calls to a series of methods to resolve the parameters associated with the intent.

Resolving Intent Parameters

Each intent domain type has associated with it a group of parameters that are used to provide details about the task to be performed by the app. While many parameters are mandatory, some are optional. The intent to send a message must, for example, contain a valid recipient parameter in order for a message to be sent. A number of parameters for a Photo search intent, on the other hand, are optional. A user might, for example, want to search for photos containing particular people, regardless of the date that the photos were taken.

When working with Siri domains, Siri knows all of the possible parameters for each intent type, and for each parameter Siri will ask the app extension’s intent handler to resolve the parameter via a corresponding method call. If Siri already has a parameter, it will ask the intent handler to verify that the parameter is valid. If Siri does not yet have a value for a parameter it will ask the intent handler if the parameter is required. If the intent handler notifies Siri that the parameter is not required, Siri will not ask the user to provide it. If, on the other hand, the parameter is needed, Siri will ask the user to provide the information.

Consider, for example, a photo search app called CityPicSearch that displays all the photos taken in a particular city. The user might begin by saying the following:

“Hey Siri. Find photos using CityPicSearch.”

From this sentence, Siri will infer that a photo search using the CityPicSearch app has been requested. Siri will know that CityPicSearch has been integrated with SiriKit and that the app has registered that it supports the INSearchForPhotosIntent intent type. Siri also knows that the INSearchForPhotosIntent intent allows photos to be searched for based on date created, people in the photo, the location of the photo and the photo album in which the photo resides. What Siri does not know, however, is which of these parameters the CityPicSearch app actually needs to perform the task. To find out this information, Siri will call the resolve method for each of these parameters on the app’s intent handler. In each case the intent handler will respond indicating whether or not the parameter is required. In this case, the intent handler’s resolveLocationCreated method will return a status indicating that the parameter is mandatory. On receiving this notification, Siri will request the missing information from the user by saying:

“Find pictures from where?”

The user will then provide a location which Siri will pass to the app by calling resolveLocationCreated once again, including the selection in the intent object. The app will verify the validity of the location and indicate to Siri that the parameter is valid. This process will repeat for each parameter supported by the intent type until all necessary parameter requirements have been satisfied.
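
Expressed as a rough sketch (the validation logic shown here is purely illustrative and assumes the INSearchForPhotosIntentHandling protocol), the CityPicSearch intent handler’s location resolution method might resemble the following:

func resolveLocationCreated(for intent: INSearchForPhotosIntent,
    with completion: @escaping (INPlacemarkResolutionResult) -> Void) {
 
    if let location = intent.locationCreated {
        // Siri has supplied a location, so accept it as valid
        completion(INPlacemarkResolutionResult.success(with: location))
    } else {
        // The location is mandatory for this app, so ask Siri to prompt
        // the user for it ("Find pictures from where?")
        completion(INPlacemarkResolutionResult.needsValue())
    }
}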

Techniques are also available to assist Siri and the user in clarifying ambiguous parameters. The intent handler can, for example, return a list of possible options for a parameter which will then be presented to the user for selection. If the user were to ask an app to send a message to “John”, the resolveRecipients method would be called by Siri. The method might perform a search of the contacts list and find multiple entries where the contact’s first name is John. In this situation the method could return a list of contacts with the first name of John. Siri would then ask the user to clarify which “John” is the intended recipient by presenting the list of matching contacts.
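
A resolveRecipients implementation along those lines might be sketched as follows. Note that the matchingContacts(named:) helper is hypothetical and stands in for whatever contact search logic the app would actually perform:

func resolveRecipients(for intent: INSendMessageIntent,
    with completion: @escaping ([INSendMessageRecipientResolutionResult]) -> Void) {
 
    guard let recipients = intent.recipients, !recipients.isEmpty else {
        completion([INSendMessageRecipientResolutionResult.needsValue()])
        return
    }
 
    let results: [INSendMessageRecipientResolutionResult] = recipients.map { person in
        // matchingContacts(named:) is a hypothetical app-specific helper
        let matches = matchingContacts(named: person.displayName)
        switch matches.count {
        case 0:
            return INSendMessageRecipientResolutionResult.unsupported()
        case 1:
            return INSendMessageRecipientResolutionResult.success(with: matches[0])
        default:
            // Multiple contacts match, ask Siri to have the user choose one
            return INSendMessageRecipientResolutionResult.disambiguation(with: matches)
        }
    }
    completion(results)
}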

Once the parameters have either been resolved or indicated as not being required, Siri will call the confirm method of the intent handler.

The Confirm Method

The confirm method is implemented within the extension intent handler and is called by Siri when all of the intent parameters have been resolved. This method provides the intent handler with an opportunity to make sure that it is ready to handle the intent. If the confirm method reports a ready status, Siri calls the handle method.
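
As a brief illustrative sketch, a confirm method for the photo search intent might simply report readiness as follows:

func confirm(intent: INSearchForPhotosIntent,
    completion: @escaping (INSearchForPhotosIntentResponse) -> Void) {
 
    // Perform any readiness checks here, then report a ready status to Siri
    completion(INSearchForPhotosIntentResponse(code: .ready, userActivity: nil))
}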

The Handle Method

The handle method is where the activity associated with the intent is performed. Once the task is completed, a response is passed to Siri. The form of the response will depend on the type of activity performed. For example, a photo search activity will return a count of the number of matching photos, while a send message activity will indicate whether the message was sent successfully.

The handle method may also return a continueInApp response. This tells Siri that the remainder of the task is to be performed within the main app. On receiving this response, Siri will launch the app, passing in an NSUserActivity object. NSUserActivity is a class that enables the status of an app to be saved and restored. In iOS 10 and later, the NSUserActivity class has an additional property that allows an INInteraction object to be stored along with the app state. Siri uses this interaction property to store the INInteraction object for the session and passes it to the main iOS app. The interaction object, in turn, contains a copy of the intent object which the app can extract to continue processing the activity. A custom NSUserActivity object can be created by the extension and passed to the iOS app. Alternatively, if no custom object is specified, SiriKit will create one by default.
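
As a sketch of this pattern (again using the photo search intent as an example, with the search itself deferred to the main app), a handle method returning a continueInApp response might resemble the following:

func handle(intent: INSearchForPhotosIntent,
    completion: @escaping (INSearchForPhotosIntentResponse) -> Void) {
 
    // Create a user activity to carry the session to the main app. Passing
    // nil instead would cause SiriKit to create a default activity object.
    let activity = NSUserActivity(activityType:
                        NSStringFromClass(INSearchForPhotosIntent.self))
 
    completion(INSearchForPhotosIntentResponse(code: .continueInApp,
                                               userActivity: activity))
}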

A photo search intent, for example, would need to use the continueInApp response and user activity object so that photos found during the search can be presented to the user (SiriKit does not currently provide a mechanism for displaying the images from a photo search intent within the Siri user interface).
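
Within a SwiftUI-based main app, one way to receive the user activity and extract the intent is the onContinueUserActivity() modifier, sketched below (the code inside the closure is a placeholder for the app’s own search logic):

ContentView()
    .onContinueUserActivity(NSStringFromClass(INSearchForPhotosIntent.self)) { userActivity in
        // The interaction property carries the original intent object
        if let intent = userActivity.interaction?.intent as? INSearchForPhotosIntent {
            // Use intent.dateCreated and the other intent parameters to
            // perform the photo search within the main app
        }
    }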

It is important to note that an intent handler class may contain more than one handle method to handle different intent types. A messaging app, for example, would typically have different handler methods for send message and message search intents.

Custom Vocabulary

Clearly Siri has a broad knowledge of vocabulary in a wide range of languages. It is quite possible, however, that your app or app users might use certain words or terms which have no meaning or context for Siri. These terms can be added to your app so that they are recognized by Siri. These custom vocabulary terms are categorized as either user-specific or global.

User-specific terms are terms that only apply to an individual user. This might be a photo album with an unusual name or the nicknames the user has entered for contacts in a messaging app. User-specific terms are registered with Siri from within the main iOS app (not the extension) at application runtime using the setVocabularyStrings(_:of:) method of the INVocabulary class and must be provided in the form of an ordered list with the most commonly used terms listed first. User-specific custom vocabulary terms may only be specified for contact and contact group names, photo tag and album names, workout names and CarPlay car profile names. When calling the setVocabularyStrings(_:of:) method with the ordered list (as shown in the example following the list below), the category type specified must be one of the following:

  • contactName
  • contactGroupName
  • photoTag
  • photoAlbumName
  • workoutActivityName
  • carProfileName
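
For example, an app might register a set of unusually named photo albums using a sketch along the following lines (the album names are, of course, placeholders):

import Intents
 
// Register user-specific album names so that Siri can recognize them,
// listing the most commonly used terms first
let albumNames = NSOrderedSet(array: ["Copper Canyon Trek",
                                      "Lighthouse Road Trip"])
 
INVocabulary.shared().setVocabularyStrings(albumNames, of: .photoAlbumName)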

Global vocabulary terms are specific to your app but apply to all app users. These terms are supplied with the app bundle in the form of a property list file named AppInventoryVocabulary.plist. These terms are only applicable to workout and ride sharing names.

The Siri User Interface

Each SiriKit domain has a standard user interface layout that is used by default to convey information to the user during the Siri integration. The Ride Booking extension, for example, will display information such as the destination and price. These default user interfaces can be customized by adding an intent UI app extension to the project. This topic is covered in the chapter entitled “Customizing the SiriKit Intent User Interface”. In the case of a Siri Shortcut, the same technique can be used to customize the user interface that appears within Siri when the shortcut is used.

Summary

SiriKit brings some of the power of Siri to third-party apps, allowing the functionality of an app to be accessed by the user using the Siri virtual assistant interface. Siri integration was originally only available when performing tasks that fall into narrowly defined domains such as messaging, photo searching and workouts. This has now been broadened to provide support for apps of just about any type. Siri integration uses the standard iOS extensions mechanism. The Intents Extension is responsible for interacting with Siri, while the optional UI Extension provides a way to control the appearance of any results presented to the user within the Siri environment.

All of the interaction with the user is handled by Siri, with the results structured and packaged into an intent. This intent is then passed to the intent handler of the Intents Extension via a series of method calls designed to verify that all the required information has been gathered. The intent is then handled, the requested task performed and the results presented to the user either via Siri or the main iOS app.

A SwiftUI DocumentGroup Tutorial

The previous chapter provided an introduction to the DocumentGroup scene type provided with SwiftUI and explored the architecture that makes it possible to add document browsing and management to apps.

This chapter will demonstrate how to take the standard Xcode Multiplatform Document App template and modify it to work with image files instead of plain text documents. On completion of the tutorial, the app will allow image files to be opened, modified using a sepia filter and then saved back to the original file.

Creating the ImageDocDemo Project

Begin by launching Xcode and create a new project named ImageDocDemo using the Multiplatform Document App template.

Modifying the Info.plist File

Since the app will be working with image files instead of plain text, some changes need to be made to the type identifiers declared in the Info.plist file. To make these changes, select the ImageDocDemo entry at the top of the project navigator window (marked A in Figure 40-1), followed by the ImageDocDemo (iOS) target (B) before clicking on the Info tab (C).

Figure 40-1

Scroll down to the Document Types section within the Info screen and change the Types field from com.example.plain-text to com.ebookfrenzy.image:

Figure 40-2

Next, locate the Imported Type Identifiers section and make the following changes:

  • Description – Example Image
  • Identifier – com.ebookfrenzy.image
  • Conforms To – public.image
  • Extensions – png

Once these changes have been made, the settings should match those shown in Figure 40-3:

Figure 40-3

Adding an Image Asset

If the user decides to create a new document instead of opening an existing one, a sample image will be displayed from the project asset catalog. For this purpose the cascadefalls.png file located in the project_images folder of the sample code archive will be added to the asset catalog. If you do not already have the source code downloaded, it can be downloaded from the following URL: https://www.ebookfrenzy.com/retail/swiftui-ios14/

Once the image file has been located in a Finder window, select the Assets.xcassets entry in the Xcode project navigator and drag and drop the image as shown in Figure 40-4:

Figure 40-4

Modifying the ImageDocDemoDocument.swift File

Although we have changed the type identifiers to support images instead of plain text, the document declaration is still implemented for handling text-based content. Select the ImageDocDemoDocument.swift file to load it into the editor and begin by modifying the UTType extension so that it reads as follows:

extension UTType {
    static var exampleImage: UTType {
        UTType(importedAs: "com.ebookfrenzy.image")
    }
}

Next, locate the readableContentTypes variable and modify it to use the new UTType:

static var readableContentTypes: [UTType] { [.exampleImage] }

With the necessary type changes made, the next step is to modify the structure to work with images instead of string data. Remaining in the ImageDocDemoDocument.swift file, change the text variable from a string to an image and modify the first initializer to use the cascadefalls image:

.
.
struct ImageDocDemoDocument: FileDocument {
    
    var image: UIImage = UIImage()
 
    init() {
        if let image = UIImage(named: "cascadefalls") {
            self.image = image
        }
    }
.
.

Moving on to the second init() method, make the following modifications to decode image instead of string data:

init(configuration: ReadConfiguration) throws {
    guard let data = configuration.file.regularFileContents,
          let decodedImage: UIImage = UIImage(data: data)
    else {
        throw CocoaError(.fileReadCorruptFile)
    }
    image = decodedImage
}

Finally, modify the fileWrapper() method to encode the image to Data format so that it can be saved to the document:

func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
    let data = image.pngData()!
    return .init(regularFileWithContents: data)
}

Designing the Content View

Before performing some initial tests on the project so far, the content view needs to be modified to display an image instead of text content. We will also take this opportunity to add a Button view to the layout to apply the sepia filter to the image. Edit the ContentView.swift file and modify it so that it reads as follows:

import SwiftUI
 
struct ContentView: View {
    
    @Binding var document: ImageDocDemoDocument
 
    var body: some View {
        VStack {
            Image(uiImage: document.image)
                .resizable()
                .aspectRatio(contentMode: .fit)
                .padding()
            Button(action: {
                
            }, label: {
                Text("Filter Image")
            })
            .padding()
        }
    }
}

With the changes made, run the app on a device or simulator, use the browser to navigate to a suitable location and then click on the Create Document item. The app will create a new image document containing the sample image from the asset catalog and then display it in the content view:

Figure 40-5

Tap the back arrow in the top left-hand corner to return to the browser where the new document should be listed with an icon containing a thumbnail image:

Figure 40-6

Filtering the Image

The final step in this tutorial is to apply the sepia filter to the image when the Button in the content view is tapped. This will make use of the CoreImage framework and involves converting the UIImage to a CIImage, applying the sepia tone filter and then converting the result back to a UIImage. Edit the ContentView.swift file and make the following changes:

import SwiftUI
import CoreImage
import CoreImage.CIFilterBuiltins
 
struct ContentView: View {
    
    @Binding var document: ImageDocDemoDocument
    @State private var ciFilter = CIFilter.sepiaTone()
    
    let context = CIContext()
    
    var body: some View {
        VStack {
            Image(uiImage: document.image)
                .resizable()
                .aspectRatio(contentMode: .fit)
                .padding()
            Button(action: {
                filterImage()
            }, label: {
                Text("Filter Image")
            })
            .padding()
        }
    }
    
    func filterImage() {
        ciFilter.intensity = Float(1.0)
 
        let ciImage = CIImage(image: document.image)
        
        ciFilter.setValue(ciImage, forKey: kCIInputImageKey)
        
        guard let outputImage = ciFilter.outputImage else { return }
 
        if let cgImage = context.createCGImage(outputImage, 
                                       from: outputImage.extent) {
            document.image = UIImage(cgImage: cgImage)
        }
    }
}

Testing the App

Run the app once again and either create a new image document, or select the existing image to display the content view. Within the content view, tap the Filter Image button and wait while the sepia filter is applied to the image. Tap the back arrow to return to the browser where the thumbnail image will now appear in sepia tones. Select the image to load it into the content view and verify that the sepia changes were indeed saved to the document.

Summary

This chapter has demonstrated how to modify the Xcode Document App template to work with different content types. This involved changing the type identifiers, modifying the document declaration and adapting the content view to handle image content.

An Overview of SwiftUI DocumentGroup Scenes

The chapter entitled SwiftUI Architecture introduced the concept of SwiftUI scenes and explained that the SwiftUI framework, in addition to allowing you to build your own scenes, also includes two pre-built scene types in the form of WindowGroup and DocumentGroup. So far, the examples in this book have made exclusive use of the WindowGroup scene. This chapter will introduce the DocumentGroup scene and explain how it can be used to build document-based apps in SwiftUI.

Documents in Apps

If you have used iOS for an appreciable amount of time, the chances are good that you will have encountered the built-in Files app. The Files app provides a way to browse, select and manage the documents stored both on the local device file system and in iCloud storage, in addition to third-party providers such as Google Drive. Documents in this context can include just about any file type including plain text, image, data and binary files. Figure 39-1 shows a typical browsing session within the iOS Files app:

Figure 39-1

The purpose of the DocumentGroup scene is to allow the same capabilities provided by the Files app to be built into SwiftUI apps, in addition to the ability to create new files.

Document support can be built into an app with relatively little work. In fact, Xcode includes a project template specifically for this task which performs much of the setup work for you. Before attempting to work with DocumentGroups, however, there are some basic concepts which first need to be covered. A good way to traverse this learning curve is to review the Document App project template generated by Xcode.

Creating the DocDemo App

Begin by launching Xcode and creating a new project using the Multiplatform Document App template option as shown in Figure 39-2 below:

Figure 39-2

Click the Next button, name the project DocDemo and save the project to a suitable location.

The DocumentGroup Scene

The DocumentGroup scene contains most of the infrastructure necessary to provide app users with the ability to create, delete, move, rename and select files and folders from within an app. An initial document group scene is declared by Xcode within the DocDemoApp.swift file as follows:

import SwiftUI
 
@main
struct DocDemoApp: App {
    var body: some Scene {
        DocumentGroup(newDocument: DocDemoDocument()) { file in
            ContentView(document: file.$document)
        }
    }
}

As currently implemented, the first scene presented to the user when the app starts will be the DocumentGroup user interface which will resemble Figure 39-1 above. Passed through to the DocumentGroup is a DocDemoDocument instance which, along with some additional configuration settings, contains the code to create, read and write files. When a user either selects an existing file, or creates a new one, the content view is displayed and passed the DocDemoDocument instance for the selected file from which the content may be extracted and presented to the user:

ContentView(document: file.$document)

The DocDemoDocument.swift file generated by Xcode is designed to support plain text files and may be used as the basis for supporting other file types. Before exploring this file in detail, we first need to understand file types.

Declaring File Type Support

A key step in implementing document support is declaring the file types which the app supports. The DocumentGroup user interface uses this information to ensure that only files of supported types are selectable when browsing. A user browsing documents in an app which only supports image files, for example, would see documents of other types (such as plain text) grayed out and unselectable within the document list. This can be separated into the following components:

Document Content Type Identifier

Defining the types of file supported by an app begins by declaring a document content type identifier. This is declared using Uniform Type Identifier (UTI) syntax which typically takes the form of a reverse domain name combined with a common type identifier. A document identifier for an app which supports plain text files, for example, might be declared as follows:

com.ebookfrenzy.plain-text

Handler Rank

The document content type may also declare a handler rank value. This value declares to the system how the app relates to the file type. If the app uses its own custom file type, this should be set to Owner. If the app is to be opened as the default app for files of this type, the value should be set to Default. If, on the other hand, the app can handle files of this type but is not intended to be the default handler a value of Alternate should be used. Finally, None should be used if the app is not to be associated with the file type.

Type Identifiers

Having declared a document content type identifier, this identifier must have associated with it a list of specific data types to which it conforms. This is achieved using type identifiers. These type identifiers can be chosen from an extensive list of built-in types provided by Apple and are generally prefixed with “public.”. For example the UTI for a plain text document is public.plain-text, while that for any type of image file is public.image. Similarly, if an app only supports JPEG image files, the public.jpeg UTI would be used.

Each of the built-in UTI types has associated with it a UTType equivalent which can be used when working with types programmatically. The public.plain-text UTI, for example, has a UTType instance named plainText, while the UTType instance for an MPEG-4 movie is named mpeg4Movie. A full list of supported UTType declarations can be found at the following URL:

https://developer.apple.com/documentation/uniformtypeidentifiers/uttype/system_declared_types
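
As a brief sketch (not part of the project code), these UTType equivalents can be used programmatically as follows:

import UniformTypeIdentifiers
 
let textType = UTType.plainText          // public.plain-text
let imageType = UTType.image             // public.image
 
print(imageType.identifier)              // "public.image"
print(UTType.png.conforms(to: .image))   // true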

Filename Extensions

In addition to declaring the type identifiers, filename extensions for which support is provided may also be specified (for example .txt, .png, .doc, .mydata etc.). Note that many of the built-in type identifiers are already configured to support associated file types. The public.png type, for example, is pre-configured to recognize .png filename extensions.

The extension declared here will also be appended to the filename of any new documents created by the app.

Custom Type Document Content Identifiers

When working with proprietary data formats (perhaps your app has its own database format), it is also possible to declare your own document content identifier without using one of the common identifiers. A document type identifier for a custom type might, therefore, be declared as follows:

com.ebookfrenzy.mydata

Exported vs. Imported Type Identifiers

When a built-in type is used (such as public.image), it is said to be an imported type identifier (since it is imported into the app from the range of identifiers already known to the system). A custom type identifier, on the other hand, is described as an exported type identifier because it originates from within the app and is exported to the system so that the browser can recognize files of that type as being associated with the app.
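
This distinction carries through to the UTType initializers used in code. As a brief sketch, an imported identifier is referenced using UTType(importedAs:) while a custom, exported identifier is referenced using UTType(exportedAs:):

import UniformTypeIdentifiers
 
extension UTType {
    // Built-in type declared under Imported Type Identifiers
    static var exampleText: UTType {
        UTType(importedAs: "com.example.plain-text")
    }
 
    // Custom type declared under Exported Type Identifiers
    static var myData: UTType {
        UTType(exportedAs: "com.ebookfrenzy.mydata")
    }
}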

Configuring File Type Support in Xcode

All of the above settings are configured within the project’s Info.plist file. Although these changes can be made with the Xcode property list editor, a better option is to access the settings via the Xcode Info screen of the app target. To review the settings for the example project using this approach, select the DocDemo entry at the top of the project navigator window (marked A in Figure 39-3), followed by the DocDemo (iOS) target (B) before clicking on the Info tab (C).

Figure 39-3

Scroll down to the Document Types section within the Info screen and note that Xcode has created a single document content type identifier set to com.example.plain-text with the handler rank set to Default:

Figure 39-4

Next, scroll down to the Imported Type Identifiers section where we can see that our document content type identifier (com.example.plain-text) has been declared as conforming to the public.plain-text type with a single filename extension of exampletext:

Figure 39-5

Type identifiers for custom types are declared in the Exported Type Identifiers section of the Info screen. For example a binary custom file might be declared as conforming to public.data while the file names for this type might have a mydata filename extension:

Figure 39-6

Note that in both cases, icons may be added to represent the files within the document browser user interface.

The Document Structure

When the example project was created, Xcode generated a file named DocDemoDocument.swift, an instance of which is passed to ContentView within the App declaration. As generated, this file reads as follows:

import SwiftUI
import UniformTypeIdentifiers
 
extension UTType {
    static var exampleText: UTType {
        UTType(importedAs: "com.example.plain-text")
    }
}
 
struct DocDemoDocument: FileDocument {
    var text: String
 
    init(text: String = "Hello, world!") {
        self.text = text
    }
 
    static var readableContentTypes: [UTType] { [.exampleText] }
 
    init(configuration: ReadConfiguration) throws {
        guard let data = configuration.file.regularFileContents,
              let string = String(data: data, encoding: .utf8)
        else {
            throw CocoaError(.fileReadCorruptFile)
        }
        text = string
    }
    
    func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
        let data = text.data(using: .utf8)!
        return .init(regularFileWithContents: data)
    }
}

The structure conforms to the FileDocument protocol and begins by declaring a new UTType named exampleText which imports our com.example.plain-text identifier. This is then referenced in the readableContentTypes array to indicate which types of file can be opened by the app:

extension UTType {
    static var exampleText: UTType {
        UTType(importedAs: "com.example.plain-text")
    }
}
.
.
    static var readableContentTypes: [UTType] { [.exampleText] }
.
.

The structure also includes two initializers, the first of which will be called when the creation of a new document is requested by the user and simply configures a sample text string as the initial data:

init(text: String = "Hello, world!") {
    self.text = text
}

The second initializer, on the other hand, is called when the user opens an existing document and is passed a ReadConfiguration instance:

init(configuration: ReadConfiguration) throws {
    guard let data = configuration.file.regularFileContents,
          let string = String(data: data, encoding: .utf8)
    else {
        throw CocoaError(.fileReadCorruptFile)
    }
    text = string
}

The ReadConfiguration instance holds the content of the file in Data format which may be accessed via the regularFileContents property. Steps are then taken to decode this data and convert it to a String so that it can be displayed to the user. The exact steps to decode the data will depend on how the data was originally encoded within the fileWrapper() method. In this case, the method is designed to work with String data:

func fileWrapper(configuration: WriteConfiguration) throws -> FileWrapper {
    let data = text.data(using: .utf8)!
    return .init(regularFileWithContents: data)
}

The fileWrapper() method is passed a WriteConfiguration instance for the selected file and is expected to return a FileWrapper instance initialized with the data to be written. In order for the content to be written to the file it must first be converted to data and stored in a Data object. In this case the text String value is simply encoded to data. The steps involved to achieve this in your own apps will depend on the type of content being stored in the document.

The Content View

As we have seen earlier in the chapter, the ContentView is passed an instance of the DocDemoDocument structure from within the App declaration:

ContentView(document: file.$document)

In the case of the DocDemo example, the ContentView binds to this property and references it as the content for a TextEditor view:

.
.
struct ContentView: View {
    @Binding var document: DocDemoDocument
 
    var body: some View {
        TextEditor(text: $document.text)
    }
}
.
.

When the view appears it will display the current string assigned to the text property of the document instance and, as the user edits the text, the changes will be stored. When the user navigates back to the document browser, a call to the fileWrapper() method will be triggered automatically and the changes saved to the document.

Running the Example App

Having explored the internals of the example DocDemo app, the final step is to experience the app in action. With this in mind, compile and run the app on a device or simulator and, once running, select the Browse tab located at the bottom of the screen:

Figure 39-7

Navigate to a suitable location either on the device or within your iCloud storage and click on the Create Document entry as shown in Figure 39-8:

Figure 39-8

The new file will be created and the content loaded into the ContentView. Edit the sample text and return to the document browser where the document (named untitled) will now be listed. Open the document once again so that it loads into the ContentView and verify that the changes were saved.

Summary

The SwiftUI DocumentGroup scene allows the document browsing and management capabilities available within the built-in Files app to be integrated into apps with relatively little effort. The core element of DocumentGroup implementation is the document declaration which acts as the interface between the document browser and views that make up the app and is responsible for encoding and decoding document content. In addition, the Info.plist file for the app must include information about the types of files the app is able to support.

Creating a Customized SwiftUI ProgressView

The SwiftUI ProgressView, as the name suggests, provides a way to visually indicate the progress of a task within an app. An app might, for example, need to display a progress bar while downloading a large file. This chapter will work through an example project demonstrating how to implement a ProgressView-based interface in a SwiftUI app including linear, circular and indeterminate styles in addition to creating your own custom progress views.

ProgressView Styles

The ProgressView can be displayed in three different styles. The linear style displays progress in the form of a horizontal line as shown in Figure 38-1 below:

Figure 38-1

Alternatively, progress may be displayed using the circular style as shown in Figure 38-2:

Figure 38-2

Finally, for indeterminate progress, the spinning animation shown in Figure 38-3 below is used. This style is useful for indicating to the user that progress is being made on a task when the percentage of work completed is unknown.

Figure 38-3

As we will see later in the chapter, it is also possible to design a custom style by creating declarations conforming to the ProgressViewStyle protocol.

Creating the ProgressViewDemo Project

Launch Xcode and create a new project named ProgressViewDemo using the Multiplatform App template.

Adding a ProgressView

The content view for this example app will consist of a ProgressView and a Slider. The Slider view will serve as a way to change the value of a State property variable, such that changes to the slider position will be reflected by the ProgressView.

Edit the ContentView.swift file and modify the view as follows:

struct ContentView: View {
    
    @State private var progress: Double = 1.0
    
    var body: some View {
 
        VStack {
            ProgressView("Task Progress", value: progress, total: 100)
                .progressViewStyle(LinearProgressViewStyle())                
            Slider(value: $progress, in: 1...100, step: 0.1)
        }
        .padding()
    }
}

Note that the ProgressView is passed a string to display as the title, a value indicating the current progress and a total used to define when the task is complete. Similarly, the Slider is configured to adjust the progress state property between 1 and 100 in increments of 0.1.

Use Live Preview to test the view and verify that the progress bar moves in unison with the slider:

Figure 38-4

The color of the progress line may be changed using the tint argument as follows:

ProgressView("Task Progress", value: progress, total: 100)
    .progressViewStyle(LinearProgressViewStyle(tint: Color.red))

Using the Circular ProgressView Style

To display a circular ProgressView, the progressViewStyle() modifier needs to be called and passed an instance of CircularProgressViewStyle as follows:

struct ContentView: View {
    
    @State private var progress: Double = 1.0
    
    var body: some View {
 
        VStack {
            ProgressView("Task Progress", value: progress, total: 100)
                .progressViewStyle(CircularProgressViewStyle())
            Slider(value: $progress, in: 1...100, step: 0.1)
        }
        .padding()
    }
}

When the app is now previewed, the progress will be shown using the circular style. Note that a bug in all versions of iOS 14 up to and including iOS 14.2 causes the circular style to appear as the indeterminate spinner instead. This bug has been reported to Apple and will hopefully be resolved in a future release. In the meantime, the behavior can be tested by targeting macOS instead of iOS when running the app.

Although the progressViewStyle() modifier was applied directly to the ProgressView in the above example, it may also be applied to a container view such as VStack. When used in this way, the style will be applied to all child ProgressView instances. In the following example, therefore, all three ProgressView instances will be displayed using the circular style:

VStack {
    ProgressView("Task 1 Progress", value: progress, total: 100)
    ProgressView("Task 2 Progress", value: progress, total: 100)
    ProgressView("Task 3 Progress", value: progress, total: 100)
}
.progressViewStyle(CircularProgressViewStyle())

Declaring an Indeterminate ProgressView

The indeterminate ProgressView displays the spinning indicator shown previously in Figure 38-3 and is declared using the ProgressView without including a value binding to indicate progress:

ProgressView()

If required, text may be assigned to appear alongside the view:

ProgressView("Working...")

ProgressView Customization

The appearance of a ProgressView may be changed by declaring a structure conforming to the ProgressViewStyle protocol and passing an instance through to the progressViewStyle() modifier.

To conform with the ProgressViewStyle protocol, the style declaration must be structured as follows:

struct MyCustomProgressViewStyle: ProgressViewStyle {
    func makeBody(configuration: Configuration) -> some View {
        ProgressView(configuration)
            // Modifiers here to customize view
    }
}

The structure contains a makeBody() method which is passed the configuration information for the ProgressView on which the custom style is being applied. One option is to simply return a modified ProgressView instance. The following style, for example, applies accent color and shadow effects to the ProgressView:

import SwiftUI
 
struct ContentView: View {
    
    @State private var progress: Double = 1.0
    
    var body: some View {
 
        VStack {
            ProgressView("Task Progress", value: progress, total: 100)
                 .progressViewStyle(ShadowProgressViewStyle())
            
            Slider(value: $progress, in: 1...100, step: 0.1)
        }
        .padding()  
    }
}
 
struct ShadowProgressViewStyle: ProgressViewStyle {
    func makeBody(configuration: Configuration) -> some View {
        ProgressView(configuration)
            .accentColor(.red)
            .shadow(color: Color(red: 0, green: 0.7, blue: 0),
                    radius: 5.0, x: 2.0, y: 2.0)
    }
}
.
.

The ProgressView will now appear with a green shadow with the progress line appearing in red. A closer inspection of the makeBody() method will reveal that it can return a View instance of any type, meaning that the method is not limited to returning a ProgressView instance. We could, for example, return a Text view as shown below. The Configuration instance passed to the makeBody() method contains a property named fractionCompleted which we can use to display the progress percentage in the Text view:

.
.
        VStack {
            ProgressView("Task Progress", value: progress, total: 100)
                 .progressViewStyle(MyCustomProgressViewStyle())
.
.
    }
}
 
struct MyCustomProgressViewStyle: ProgressViewStyle {
    func makeBody(configuration: Configuration) -> some View {        
        let percent = Int(configuration.fractionCompleted! * 100)
        return  Text("Task \(percent)% Complete")
    }
}

When previewed, the custom style will appear as shown in Figure 38-5:

Figure 38-5

In fact, custom progress views of any level of complexity may be designed using this technique. Consider, for example, the following custom progress view implementation:

Figure 38-6

The above example was created using a Shape declaration to draw a dashed circular path based on the fractionCompleted property:

struct MyCustomProgressViewStyle: ProgressViewStyle {
    func makeBody(configuration: Configuration) -> some View {
        
        let degrees = configuration.fractionCompleted! * 360
        let percent = Int(configuration.fractionCompleted! * 100)
        
        return VStack {
            
            MyCircle(startAngle: .degrees(1), endAngle: .degrees(degrees))
                .frame(width: 200, height: 200)
                .padding(50)
            Text("Task \(percent)% Complete")
        }
    }
}
 
struct MyCircle: Shape {
    var startAngle: Angle
    var endAngle: Angle
 
    func path(in rect: CGRect) -> Path {
        var path = Path()
        path.addArc(center: CGPoint(x: rect.midX, y: rect.midY), 
                 radius: rect.width / 2, startAngle: startAngle, 
                              endAngle: endAngle, clockwise: true)
 
        return path.strokedPath(.init(lineWidth: 100, dash: [5, 3], 
                 dashPhase: 10))
    }
}

Summary

The SwiftUI ProgressView provides a way for apps to visually convey to the user the progress of a long-running task such as a large download transaction. ProgressView instances may be configured to display progress either as a straight bar or using a circular style, while the indeterminate style displays a spinning icon which indicates the task is running but without providing progress information. The prevailing style is assigned using the progressViewStyle() modifier which may be applied either to individual ProgressView instances, or to all of the instances within a container view such as a VStack.

By adopting the ProgressViewStyle protocol, custom progress view designs of almost any level of complexity can be created.

Working with Gesture Recognizers in SwiftUI

The term gesture is used to describe an interaction between the touch screen and the user which can be detected and used to trigger an event in the app. Drags, taps, double taps, pinching, rotation motions and long presses are all considered to be gestures in SwiftUI. The goal of this chapter is to explore the use of SwiftUI gesture recognizers within a SwiftUI based app.

Creating the GestureDemo Example Project

To try out the examples in this chapter, create a new Multiplatform App Xcode project named GestureDemo.

Basic Gestures

Gestures performed within the bounds of a view can be detected by adding a gesture recognizer to that view. SwiftUI provides recognizers for tap, long press, rotation, magnification (pinch) and drag gestures.

A gesture recognizer is added to a view using the gesture() modifier, passing through the gesture recognizer to be added.

In the simplest form, a recognizer will include one or more action callbacks containing the code to be executed when a matching gesture is detected on the view. The following example adds a tap gesture detector to an Image view and implements the onEnded callback containing the code to be performed when the gesture is completed successfully:

struct ContentView: View {
    var body: some View {
        Image(systemName: "hand.point.right.fill")
            .gesture(
                TapGesture()
                    .onEnded { _ in
                        print("Tapped")
                    }
            )
    }
}

Using Live Preview in debug mode, test the above view declaration, noting the appearance of the “Tapped” message in the debug console panel when the image is clicked (if the message does not appear, try running the app in a simulator session instead of using the Live Preview).

When working with gesture recognizers, it is usually preferable to assign the recognizer to a variable and then reference that variable in the modifier. This makes for tidier view body declarations and encourages reuse:

var body: some View {
 
    let tap = TapGesture()
                .onEnded { _ in
                print("Tapped")
              }
 

    return Image(systemName: "hand.point.right.fill")
        .gesture(tap)
}

When using the tap gesture recognizer, the number of taps required to complete the gesture may also be specified. The following, for example, will only detect double taps:

let tap = TapGesture(count: 2)
                .onEnded { _ in
                print("Tapped")
              }

The long press gesture recognizer is used in a similar way and is designed to detect when a view is touched for an extended length of time. The following declaration detects when a long press is performed on an Image view using the default time duration:

var body: some View {
 
    let longPress = LongPressGesture()
        .onEnded { _ in
            print("Long Press")
        }
 

    return Image(systemName: "hand.point.right.fill")
        .gesture(longPress)
}

To adjust the duration necessary to qualify as a long press, simply pass through a minimum duration value (in seconds) to the LongPressGesture() call. It is also possible to specify a maximum distance that the point of contact with the screen is allowed to move outside of the view during the long press. If the touch moves beyond the specified distance, the gesture will cancel and the onEnded action will not be called:

let longPress = LongPressGesture(minimumDuration: 10, 
                                    maximumDistance: 25)
    .onEnded { _ in
        print("Long Press")
    }

A gesture recognizer can be removed from a view by passing a nil value to the gesture() modifier:

.gesture(nil)

The onChanged Action Callback

In the previous examples, the onEnded action closure was used to detect when a gesture completes. Many of the gesture recognizers (except for TapGesture) also allow the addition of an onChanged action callback. The onChanged callback will be called when the gesture is first recognized, and each time the underlying values of the gesture change, up until the point that the gesture ends.

The onChange action callback is particularly useful when used with gestures involving motion across the device display (as opposed to taps and long presses). The magnification gesture, for example, can be used to detect the movement of touches on the screen.

struct ContentView: View {
 
    var body: some View {
     
        let magnificationGesture = 
                  MagnificationGesture(minimumScaleDelta: 0)
           .onEnded { _ in
               print("Gesture Ended")
           }
 
        return Image(systemName: "hand.point.right.fill")
            .resizable()
            .font(.largeTitle)
            .gesture(magnificationGesture)
            .frame(width: 100, height: 90)
    }
}

The above implementation will detect a pinching motion performed over the Image view but will only report the detection after the gesture ends. Within the preview canvas, pinch gestures can be simulated by holding down the keyboard Option key while clicking in the Image view and dragging.

To receive notifications for the duration of the gesture, the onChanged callback action can be added:

let magnificationGesture = 
                  MagnificationGesture(minimumScaleDelta: 0)
    .onChanged( { _ in
        print("Magnifying")
    })
    .onEnded { _ in
        print("Gesture Ended")
    }

Now when the gesture is detected, the onChanged action will be called each time the values associated with the pinch operation change. Each time the onChanged action is called, it will be passed a MagnificationGesture.Value instance which contains a CGFloat value representing the current scale of the magnification.

With access to this information about the magnification gesture scale, interesting effects can be implemented such as configuring the Image view to resize in response to the gesture:

struct ContentView: View {
 
    @State private var magnification: CGFloat = 1.0
 
    var body: some View {
     
        let magnificationGesture = 
                MagnificationGesture(minimumScaleDelta: 0)
            .onChanged({ value in
                self.magnification = value
            })
            .onEnded({ _ in
                print("Gesture Ended")
            })
 
        return Image(systemName: "hand.point.right.fill")
            .resizable()
            .font(.largeTitle)
            .scaleEffect(magnification)
            .gesture(magnificationGesture)
            .frame(width: 100, height: 90)
    }
}

The updating Callback Action

The updating callback action is like onChanged with the exception that it works with a special property wrapper named @GestureState. GestureState is like the standard @State property wrapper but is designed exclusively for use with gestures. The key difference, however, is that @GestureState properties automatically reset to the original state when the gesture ends. As such, the updating callback is ideal for storing transient state that is only needed while a gesture is being performed.

Each time an updating action is called, it is passed the following three arguments:

  • A DragGesture.Value instance containing information about the gesture.
  • A reference to the @GestureState property to which the gesture has been bound.
  • A Transaction object containing the current state of the animation corresponding to the gesture.

The DragGesture.Value instance is particularly useful and contains the following properties:

  • location (CGPoint) – The current location of the drag gesture.
  • predictedEndLocation (CGPoint) – Predicted final location, based on the velocity of the drag if dragging stops.
  • predictedEndTranslation (CGSize) – A prediction of what the final translation would be if dragging stopped now based on the current drag velocity.
  • startLocation (CGPoint) – The location at which the drag gesture started.
  • time (Date) – The time stamp of the current drag event.
  • translation (CGSize) – The total translation from the start of the drag gesture to the current event (essentially the offset from the start position to the current drag location).

A drag gesture updating callback will typically extract the translation value from the DragGesture.Value object and assign it to a @GestureState property, resembling the following:

let drag = DragGesture()
    .updating($offset) { dragValue, state, transaction in
        state = dragValue.translation
    }

The following example adds a drag gesture to an Image view and then uses the updating callback to keep a @GestureState property updated with the current translation value. An offset() modifier is applied to the Image view using the @GestureState offset property. This has the effect of making the Image view follow the drag gesture as it moves across the screen.

struct ContentView: View {
 
    @GestureState private var offset: CGSize = .zero
 
    var body: some View {
        
        let drag = DragGesture()
            .updating($offset) { dragValue, state, transaction in
                state = dragValue.translation
            }
        
        return Image(systemName: "hand.point.right.fill")
            .font(.largeTitle)
            .offset(offset)
            .gesture(drag)
    }
}

If it is not possible to drag the image, this may be because of a problem with the Live Preview in the current Xcode 12 release. The example should work if tested on a simulator or physical device. Note that once the drag gesture ends, the Image view returns to the original location. This is because the offset gesture property was automatically reverted to its original state when the drag ended.

Composing Gestures

So far in this chapter we have looked at adding a single gesture recognizer to a view in SwiftUI. Though a less common requirement, it is also possible to combine multiple gestures and apply them to a view. Gestures can be combined so that they are detected simultaneously, in sequence or exclusively. When gestures are composed simultaneously, both gestures must be detected at the same time for the corresponding action to be performed. In the case of sequential gestures, the first gesture must be completed before the second gesture will be detected. For exclusive gestures, the detection of any one of the gestures will be treated as all of the gestures having been detected.

Gestures are composed using the simultaneously(), sequenced() and exclusively() modifiers. The following view declaration, for example, composes a simultaneous gesture consisting of a long press and a drag:

struct ContentView: View {
 
    @GestureState private var offset: CGSize = .zero
    @GestureState private var longPress: Bool = false
    
    var body: some View {
        
        let longPressAndDrag = LongPressGesture(minimumDuration: 1.0)
            .updating($longPress) { value, state, transition in
                state = value
            }
            .simultaneously(with: DragGesture())
            .updating($offset) { value, state, transaction in
                state = value.second?.translation ?? .zero
             }
 
            return Image(systemName: "hand.point.right.fill")
                .foregroundColor(longPress ? Color.red : Color.blue)
                .font(.largeTitle)
                .offset(offset)
                .gesture(longPressAndDrag)
    }
}

In the case of the following view declaration, a sequential gesture is configured which requires the long press gesture to be completed before the drag operation can begin. When executed, the user will perform a long press on the image until it turns green, at which point the drag gesture can be used to move the image around the screen.

struct ContentView: View {
    
    @GestureState private var offset: CGSize = .zero
    @State private var dragEnabled: Bool = false
 
    var body: some View {
    
        let longPressBeforeDrag = LongPressGesture(minimumDuration: 2.0)
            .onEnded( { _ in
                self.dragEnabled = true
            })
            .sequenced(before: DragGesture())
            .updating($offset) { value, state, transaction in
               
                switch value {
                
                    case .first(true):
                        print("Long press in progress")
                    
                    case .second(true, let drag):
                        state = drag?.translation ?? .zero
        
                    default: break
                }
            }
            .onEnded { value in
                self.dragEnabled = false
            }
        
            return Image(systemName: "hand.point.right.fill")
                .foregroundColor(dragEnabled ? Color.green : Color.blue)
                .font(.largeTitle)
                .offset(offset)
                .gesture(longPressBeforeDrag)
    }
}

Summary

Gesture detection can be added to SwiftUI views using gesture recognizers. SwiftUI includes recognizers for drag, pinch, rotate, long press and tap gestures. Gesture detection notifications can be received from the recognizers by implementing the onEnded, updating and onChanged callback actions. The updating callback works with a special property wrapper named @GestureState. A @GestureState property is like the standard @State property wrapper but is designed exclusively for use with gestures and automatically resets to its original state when the gesture ends.

Gesture recognizers may be combined so that they are recognized simultaneously, sequentially or exclusively.

SwiftUI Animation and Transitions

This chapter is intended to provide an overview and examples of animating views and implementing transitions within a SwiftUI app. Animation can take a variety of forms including the rotation, scaling and motion of a view on the screen.

Transitions, on the other hand, define how a view will appear as it is added to or removed from a layout, for example whether a view slides into place when it is added, or shrinks from view when it is removed.

Creating the AnimationDemo Example Project

To try out the examples in this chapter, create a new Multiplatform App Xcode project named AnimationDemo.

Implicit Animation

Many of the built-in view types included with SwiftUI contain properties that control the appearance of the view such as scale, opacity, color and rotation angle. Properties of this type are animatable, in that the change from one property state to another can be animated instead of occurring instantly. One way to animate these changes to a view is to use the animation() modifier (a concept referred to as implicit animation because the animation is implied for any modifiers applied to the view that precede the animation modifier).

To experience basic animation using this technique, modify the ContentView.swift file in the AnimationDemo project so that it contains a Button view configured to rotate in 60 degree increments each time it is tapped:

struct ContentView : View {
    
    @State private var rotation: Double = 0
 
    var body: some View {
       Button(action: {
           self.rotation = 
                  (self.rotation < 360 ? self.rotation + 60 : 0)
           }) {
           Text("Click to animate")
               .rotationEffect(.degrees(rotation))
       }
    }
}

When tested using Live Preview, each click causes the Button view to rotate as expected, but the rotation is immediate. Similarly, when the rotation reaches a full 360 degrees, the view actually rotates counter-clockwise 360 degrees, but so quickly the effect is not visible. These effects can be slowed down and smoothed out by adding the animation() modifier with an optional animation curve to control the timing of the animation:

var body: some View {
   Button(action: {
       self.rotation = 
              (self.rotation < 360 ? self.rotation + 60 : 0)
   }) {    
       Text("Click to Animate")
           .rotationEffect(.degrees(rotation))
           .animation(.linear)
   }
}

The optional animation curve defines the linearity of the animation timeline. This setting controls whether the animation is performed at a constant speed or whether it starts out slow and speeds up. SwiftUI provides the following basic animation curves:

  • linear – The animation is performed at constant speed for the specified duration and is the option declared in the above code example.
  • easeOut – The animation starts out fast and slows as the end of the sequence approaches.
  • easeIn – The animation sequence starts out slow and speeds up as the end approaches.
  • easeInOut – The animation starts slow, speeds up and then slows down again.

Preview the animation once again and note that the rotation now animates smoothly. When defining an animation, the duration may also be specified. Change the animation modifier so that it reads as follows:

.animation(.linear(duration: 1))

Now the animation will be performed more slowly each time the Button is clicked.
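
Any of the other animation curves listed above may be substituted in the same way. For example, the following variation (shown for illustration only and not a required project change) uses the easeInOut curve with the same one second duration:

.animation(.easeInOut(duration: 1))

With this change, the button starts its rotation slowly, speeds up and then slows down again as it comes to rest.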

As previously mentioned, an animation can apply to more than one modifier. The following changes, for example, animate both rotation and scaling effects:

.
.
@State private var scale: CGFloat = 1
 
var body: some View {
   Button(action: {
        self.rotation = 
               (self.rotation < 360 ? self.rotation + 60 : 0)
        self.scale = (self.scale < 2.8 ? self.scale + 0.3 : 1)
   }) {
       Text("Click to Animate")
        .scaleEffect(scale)
        .rotationEffect(.degrees(rotation))
        .animation(.linear(duration: 1))
   }
}

These changes will cause the button to increase in size with each rotation, then scale back to its original size during the return rotation.

Figure 36-1

A variety of spring effects may also be added to the animation using the spring() modifier, for example:

Text("Click to Animate")
    .scaleEffect(scale)
    .rotationEffect(.degrees(rotation))
    .animation(.spring(response: 1, dampingFraction: 0.2, blendDuration: 0))

This will cause the rotation and scale effects to go slightly beyond the designated setting, then bounce back and forth before coming to rest at the target angle and scale.

When working with the animation() modifier, it is important to be aware that the animation is only implicit for modifiers that are applied before the animation modifier itself. In the following implementation, for example, only the rotation effect is animated since the scale effect is applied after the animation modifier:

Text("Click to Animate")
    .rotationEffect(.degrees(rotation))
    .animation(.spring(response: 1, dampingFraction: 0.2, blendDuration: 0))
    .scaleEffect(scale)

Repeating an Animation

By default, an animation will be performed once each time it is initiated. An animation may, however, be configured to repeat one or more times. In the following example, the animation is configured to repeat a specific number of times:

.animation(Animation.linear(duration: 1).repeatCount(10))

Each time an animation repeats, it will perform the animation in reverse as the view returns to its original state. If the view is required to instantly revert to its original appearance before repeating the animation, the autoreverses parameter must be set to false:

.animation(Animation.linear(duration: 1).repeatCount(10, autoreverses: false))

An animation may also be configured to repeat indefinitely using the repeatForever() modifier as follows:

.animation(Animation.linear(duration: 1).repeatForever(autoreverses: true))
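
Pulling these fragments together, the repeat behavior is applied to the same animation() modifier used earlier in the chapter. The following is a sketch of how the button's Text view might be configured (any of the repeat variations shown above could be substituted):

Text("Click to Animate")
    .scaleEffect(scale)
    .rotationEffect(.degrees(rotation))
    // Repeat the one second animation ten times, snapping back to the
    // start position before each repetition instead of reversing.
    .animation(Animation.linear(duration: 1)
                    .repeatCount(10, autoreverses: false))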

Explicit Animation

As previously discussed, implicit animation using the animation() modifier implements animation on any of the animatable properties on a view that appear before the animation modifier. SwiftUI provides an alternative approach referred to as explicit animation which is implemented using the withAnimation() closure. When using explicit animation, only the property changes that take place within the withAnimation() closure will be animated. To experience this in action, modify the example so that the rotation effect is performed within a withAnimation() closure and remove the animation() modifier:

var body: some View {
    Button(action: {
        withAnimation(.linear(duration: 2)) {
            self.rotation =
                (self.rotation < 360 ? self.rotation + 60 : 0)
        }
        self.scale = (self.scale < 2.8 ? self.scale + 0.3 : 1)
    }) {
        Text("Click to Animate")
            .rotationEffect(.degrees(rotation))
            .scaleEffect(scale)
    }
}

With the changes made, preview the layout and note that only the rotation is now animated. By using explicit animation, animation can be limited to specific properties of a view without having to worry about the ordering of modifiers.

Animation and State Bindings

Animations may also be applied to state property bindings such that any view changes that occur as a result of that state value changing will be animated. If the state of a Toggle view causes one or more other views to become visible to the user, for example, applying an animation to the binding will cause the appearance and disappearance of all of those views to be animated.

Within the ContentView.swift file, implement the following layout which consists of a VStack, a Toggle view and two Text views. The Toggle view is bound to a state property named visibility, the value of which is used to control which of the two Text views is visible at any one time:

.
.
@State private var visibility = false
 
var body: some View {
    VStack {
        Toggle(isOn: $visibility) {
            Text("Toggle Text Views")
        }
        .padding()
 
        if visibility {
            Text("Hello World")
                .font(.largeTitle)
        }
 
        if !visibility {
            Text("Goodbye World")
                .font(.largeTitle)
        }
    }
}
.
.

When previewed, switching the toggle on and off will cause one or other of the Text views to appear instantly. To add an animation to this change, simply apply a modifier to the state binding as follows:

.
.
var body: some View {
   VStack {
       Toggle(isOn: $visibility.animation(.linear(duration: 5))) {
           Text("Toggle Text Views")
       }
       .padding()
.
.

Now when the toggle is switched, one Text view will gradually fade from view as the other gradually fades in (unfortunately, at the time of writing this and other transition effects were only working when running on a simulator or physical device). The same animation will also be applied to any other views in the layout where the appearance changes as a result of the current state of the visibility property.

Automatically Starting an Animation

So far in this chapter, all the animations have been triggered by an event such as a button click. Often an animation will need to start without user interaction, for example when a view is first displayed to the user. Since an animation is triggered each time an animatable property of a view changes, this can be used to automatically start an animation when a view appears.

To see this technique in action, modify the example ContentView.swift file as follows:

struct ContentView : View {
    
    var body: some View {
        
        ZStack {
            Circle()
                .stroke(lineWidth: 2)
                .foregroundColor(Color.blue)
                .frame(width: 360, height: 360)
               
            Image(systemName: "forward.fill")
               .font(.largeTitle)
               .offset(y: -180)           
        } 
    }
}

The content view uses a ZStack to overlay an Image view over a circle drawing where the offset of the Image view has been adjusted to position the image on the circumference of the circle. When previewed, the view should match that shown in Figure 36-2:

Figure 36-2

Adding a rotation effect to the Image view will give the appearance that the arrows are following the circle. Add this effect and an animation to the Image view as follows:

Image(systemName: "forward.fill")
   .font(.largeTitle)
   .offset(y: -180)
   .rotationEffect(.degrees(360))
   .animation(Animation.linear(duration: 5)
                           .repeatForever(autoreverses: false))

As currently implemented the animation will not trigger when the view is tested in a Live Preview. This is because no action is taking place to change an animatable property, thereby initiating the animation.

This can be solved by making the angle of rotation subject to a Boolean state property, and then toggling that property when the ZStack first appears via the onAppear() modifier. In terms of implementing this behavior for our circle example, the content view declarations need to read as follows:

import SwiftUI
 
struct ContentView : View {
    
    @State private var isSpinning: Bool = true
    
    var body: some View {
        
       ZStack {
            Circle()
                .stroke(lineWidth: 2)
                .foregroundColor(Color.blue)
                .frame(width: 360, height: 360)
               
            Image(systemName: "forward.fill")
               .font(.largeTitle)
               .offset(y: -180)
               .rotationEffect(.degrees(isSpinning ? 0 : 360))
               .animation(Animation.linear(duration: 5)
                           .repeatForever(autoreverses: false))
       }
       .onAppear() {
          self.isSpinning.toggle()
       }
    }
}

When SwiftUI initializes the content view, but before it appears on the screen, the isSpinning state property will be set to true and, based on the ternary operator, the rotation angle will be set to zero. After the view has appeared, however, the onAppear() modifier will toggle the isSpinning state property to false which will, in turn, cause the ternary operator to change the rotation angle to 360 degrees. As this is an animatable property, the animation modifier will activate and animate the rotation of the Image view through 360 degrees. Since this animation has been configured to repeat indefinitely, the image will continue to move around the circle.

Figure 36-3

SwiftUI Transitions

A transition occurs in SwiftUI whenever a view is made visible or invisible to the user. To make this process more visually appealing than having the view instantly appear and disappear, SwiftUI allows these transitions to be animated in several ways using either individual effects or by combining multiple effects.

Begin by implementing a simple layout consisting of a Toggle view and a Button. The toggle is bound to a state property which is then used to control whether the button is visible. To make the transition more noticeable, animation has been applied to the state property binding:

struct ContentView : View {
 
    @State private var isButtonVisible: Bool = true
 
    var body: some View {
       VStack {
            Toggle(isOn:$isButtonVisible.animation(
                                   .linear(duration: 2))) {
                Text("Show/Hide Button")
            }
            .padding()
 
            if isButtonVisible {
                Button(action: {}) {
                    Text("Example Button")
                }
                .font(.largeTitle)
            }
        }
    }
}

After making the changes, use the Live Preview or a device or simulator to switch the toggle button state and note that the Button view fades in and out of view as the state changes (keeping in mind that some effects may not work in the Live Preview). This fading effect is the default transition used by SwiftUI. This default can be changed by passing a different transition to the transition() modifier, for which the following options are available:

  • scale – The view increases in size as it is made visible and shrinks as it disappears.
  • slide – The view slides in and out of view.
  • move(edge: edge) – As the view is added or removed it does so by moving either from or toward the specified edge.
  • opacity – The view retains its size and position while fading from view (the default transition behavior).

To configure the Button view to slide into view, change the example as follows:

if isButtonVisible {
    Button(action: {}) {
        Text("Example Button")
    }
    .font(.largeTitle)
    .transition(.slide)
}

Alternatively, the view can be made to grow in size when it is inserted and shrink from view when it is removed:

.transition(.scale)

The move() transition can be used as follows to move the view toward a specified edge of the containing view. In the following example, the view moves from bottom to top when disappearing and from top to bottom when appearing:

.transition(.move(edge: .top))

When previewing the above move transition, you may have noticed that after completing the move, the Button disappears instantly. This somewhat jarring effect can be improved by combining the move with another transition.

Combining Transitions

SwiftUI transitions are combined using an instance of AnyTransition together with the combined(with:) method. To combine, for example, movement with opacity, a transition could be configured as follows:

.transition(AnyTransition.opacity.combined(with: .move(edge: .top)))

When the above example is implemented, the Text view will include a fading effect while moving.

To remove clutter from layout bodies and to promote re-usability, transitions can be implemented as extensions to the AnyTransition class. The above combined transition, for example, can be implemented as follows:

extension AnyTransition {
    static var fadeAndMove: AnyTransition {
        AnyTransition.opacity.combined(with: .move(edge: .top))
    }
}

When implemented as an extension, the transition can simply be passed as an argument to the transition() modifier, for example:

.transition(.fadeAndMove)

Asymmetrical Transitions

By default, SwiftUI will simply reverse the specified insertion transition when removing a view. To specify a different transition for adding and removing views, the transition can be declared as being asymmetric. The following transition, for example, uses the scale transition for view insertion and sliding for removal:

.transition(.asymmetric(insertion: .scale, removal: .slide))
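
As with the combined transition shown earlier, an asymmetric transition can also be packaged as an AnyTransition extension. The following sketch (the scaleInFadeOut name is illustrative only) scales the view into place on insertion, but fades it while moving it toward the top edge on removal:

extension AnyTransition {
    static var scaleInFadeOut: AnyTransition {
        // Scale when the view is inserted, fade while moving toward the
        // top edge when it is removed.
        AnyTransition.asymmetric(
            insertion: .scale,
            removal: AnyTransition.opacity.combined(with: .move(edge: .top)))
    }
}

Once declared, the transition is applied in the usual way:

.transition(.scaleInFadeOut)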

Summary

This chapter has explored the implementation of animation when changes are made to the appearance of a view. In the case of implicit animation, changes to a view caused by modifiers can be animated through the application of the animation() modifier. Explicit animation allows only specified properties of a view to be animated in response to appearance changes. Animation may also be applied to state property bindings such that any view changes that occur as a result of that state value changing will be animated.

A transition occurs when a view is inserted into, or removed from, a layout. SwiftUI provides several options for animating these transitions including fading, scaling and sliding. SwiftUI also provides the ability to both combine transitions and define asymmetric transitions where different animation effects are used for insertion and removal of a view.

Basic SwiftUI Graphics Drawing

The goal of this chapter is to introduce SwiftUI 2D drawing techniques. In addition to a group of built-in shape and gradient drawing options, SwiftUI also allows custom drawing to be performed by creating entirely new views that conform to the Shape and Path protocols.

Creating the DrawDemo Project

Launch Xcode and select the option to create a new Multiplatform App named DrawDemo.

SwiftUI Shapes

SwiftUI includes a set of five pre-defined shapes that conform to the Shape protocol and can be used to draw circles, rectangles, rounded rectangles, ellipses and capsules. Within the DrawDemo project, open the ContentView.swift file and add a single rectangle:

struct ContentView: View {
    var body: some View {
        Rectangle()
    }
}

By default, a shape will occupy all the space available to it within the containing view and will be filled with the foreground color of the parent view (by default this will be black). Within the preview canvas, a black rectangle will fill the entire safe area of the screen.

The color and size of the shape may be adjusted using the fill() modifier and by wrapping it in a frame. Delete the Rectangle view and replace it with the following declaration, which draws a red filled 200×200 circle:

Circle()
    .fill(Color.red)
    .frame(width: 200, height: 200)

When previewed, the above circle will appear as illustrated in Figure 35-1:

Figure 35-1

To draw an unfilled shape with a stroked outline, the stroke() modifier can be applied, passing through an optional line width value. By default, a stroked shape will be drawn using the default foreground color which may be altered using the foregroundColor() modifier. Remaining in the ContentView.swift file, replace the circle with the following:

Capsule()
    .stroke(lineWidth: 10)
    .foregroundColor(.blue)
    .frame(width: 200, height: 100)

Note that the frame for the above Capsule shape is rectangular. A Capsule contained in a square frame simply draws a circle. The above capsule declaration appears as follows when rendered:

Figure 35-2

The stroke modifier also supports different style types using a StrokeStyle instance. The following declaration, for example, draws a rounded rectangle using a dashed line:

RoundedRectangle(cornerRadius: CGFloat(20))
    .stroke(style: StrokeStyle(lineWidth: 8, dash: [CGFloat(10)]))
    .foregroundColor(.blue)
    .frame(width: 200, height: 100)

The above shape will be rendered as follows:

Figure 35-3

By providing additional dash values to a StrokeStyle() instance and adding a dash phase value, a range of different dash effects can be achieved, for example:

Ellipse()
    .stroke(style: StrokeStyle(lineWidth: 20, 
             dash: [CGFloat(10), CGFloat(5), CGFloat(2)], 
             dashPhase: CGFloat(10)))
    .foregroundColor(.blue)
    .frame(width: 250, height: 150)

When run or previewed, the above declaration will draw the following ellipse:

Figure 35-4

Using Overlays

When drawing a shape, it is not possible to combine the fill and stroke modifiers to render a filled shape with a stroked outline. This effect can, however, be achieved by overlaying a stroked view on top of the filled shape, for example:

Ellipse()
    .fill(Color.red)
    .overlay(Ellipse()
        .stroke(Color.blue, lineWidth: 10))
    .frame(width: 250, height: 150)

The above example draws a red filled ellipse with a blue stroked outline as illustrated in Figure 35-5:

Figure 35-5

Drawing Custom Paths and Shapes

The shapes used so far in this chapter are essentially structure objects that conform to the Shape protocol. To conform to the Shape protocol, a structure must implement a function named path() which accepts a rectangle in the form of a CGRect value and returns a Path object defining what is to be drawn in that rectangle.

A Path instance provides the outline of a 2D shape by specifying coordinate points and defining the lines drawn between those points. Lines between points in a path can be drawn using straight lines, cubic and quadratic Bézier curves, arcs, ellipses and rectangles.

In addition to being used in a custom shape implementation, paths may also be drawn directly within a view. Try modifying the ContentView.swift file so that it reads as follows:

struct ContentView: View {
    var body: some View {
        Path { path in
            path.move(to: CGPoint(x: 10, y: 0))
            path.addLine(to: CGPoint(x: 10, y: 350))
            path.addLine(to: CGPoint(x: 300, y: 300))
            path.closeSubpath()
        }
    }
}

A path begins with the coordinates of the start point using the move() method. Methods are then called to add additional lines between coordinates. In this case, the addLine() method is used to add straight lines. Lines may be drawn in a path using the following methods. In each case, the drawing starts at the current point in the path and ends at the specified end point:

  • addArc – Adds an arc based on radius and angle values.
  • addCurve – Adds a cubic Bézier curve using the provided end and control points.
  • addLine – Adds a straight line ending at the specified point.
  • addLines – Adds straight lines between the provided array of end points.
  • addQuadCurve – Adds a quadratic Bézier curve using the specified control and end points.
  • closeSubpath – Closes the path by connecting the end point to the start point.

A full listing of the line drawing methods and supported arguments can be found online at:

https://developer.apple.com/documentation/swiftui/path

When rendered in the preview canvas, the above path will appear as shown in Figure 35-6:

Figure 35-6

The custom drawing may also be adapted by applying modifiers, for example with a green fill color:

Path { path in
    path.move(to: CGPoint(x: 10, y: 0))
    path.addLine(to: CGPoint(x: 10, y: 350))
    path.addLine(to: CGPoint(x: 300, y: 300))
    path.closeSubpath()
}
.fill(Color.green)
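
The curve and arc methods listed above follow the same pattern as addLine(). As a simple illustration (a sketch for experimentation rather than a required step in the DrawDemo project), the following path joins a quadratic Bézier curve and a 180 degree arc, then strokes the result:

Path { path in
    path.move(to: CGPoint(x: 50, y: 200))
    // Quadratic Bézier curve from the current point to (300, 200),
    // pulled toward the control point above the line.
    path.addQuadCurve(to: CGPoint(x: 300, y: 200),
                 control: CGPoint(x: 175, y: 50))
    // 180 degree arc centered at (175, 200), sweeping from the end of
    // the curve back to the start point.
    path.addArc(center: CGPoint(x: 175, y: 200),
                radius: 125,
            startAngle: .degrees(0),
              endAngle: .degrees(180),
             clockwise: false)
    path.closeSubpath()
}
.stroke(Color.blue, lineWidth: 4)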

Although it is possible to draw directly within a view, it generally makes more sense to implement custom shapes as reusable components. Within the ContentView.swift file, implement a custom shape as follows:

struct MyShape: Shape {
    func path(in rect: CGRect) -> Path {
        var path = Path()
        
        path.move(to: CGPoint(x: rect.minX, y: rect.minY))
        path.addQuadCurve(to: CGPoint(x: rect.minX, y: rect.maxY), 
            control: CGPoint(x: rect.midX, y: rect.midY))
        path.addLine(to: CGPoint(x: rect.minX, y: rect.maxY))
        path.addLine(to: CGPoint(x: rect.maxX, y: rect.maxY))
        path.closeSubpath()
        return path
    }
}

The custom shape structure conforms to the Shape protocol by implementing the required path() function. The CGRect value passed to the function is used to define the boundaries into which a triangle shape is drawn, with one of the sides drawn using a quadratic curve.

Now that the custom shape has been declared, it can be used in the same way as the built-in SwiftUI shapes, including the use of modifiers. To see this in action, change the body of the main view to read as follows:

struct ContentView: View {
    var body: some View {      
        MyShape()
            .fill(Color.red)
            .frame(width: 360, height: 350)
    }
}

When rendered, the custom shape will appear in the designated frame as illustrated in Figure 35-7 below:

Figure 35-7

Drawing Gradients

SwiftUI provides support for drawing gradients including linear, angular (conic) and radial gradients. In each case, the gradient is provided with a Gradient object initialized with an array of colors to be included in the gradient and values that control the way in which the gradient is rendered.

The following declaration, for example, generates a radial gradient consisting of five colors applied as the fill pattern for a Circle:

struct ContentView: View {
    
    let colors = Gradient(colors: [Color.red, Color.yellow, 
                   Color.green, Color.blue, Color.purple])
    
    var body: some View {
            Circle()
                .fill(RadialGradient(gradient: colors, 
                      center: .center,
                      startRadius: CGFloat(0), 
                      endRadius: CGFloat(300)))
    }
}

When rendered the above gradient will appear as follows:

Figure 35-8

The following declaration, on the other hand, generates an angular gradient with the same color range:

Circle()
    .fill(AngularGradient(gradient: colors, center: .center))

The angular gradient will appear as illustrated in the following figure:

Figure 35-9

Similarly, a LinearGradient running diagonally would be implemented as follows:

Rectangle()
    .fill(LinearGradient(gradient: colors, 
                       startPoint: .topLeading,
                         endPoint: .bottomTrailing))
    .frame(width: 360, height: 350)

The above linear gradient will be rendered as follows:

Figure 35-10

The final step in the DrawDemo project is to apply gradients to the fill and background modifiers for our MyShape instance as follows:

MyShape()
    .fill(RadialGradient(gradient: colors,
                           center: .center,
                      startRadius: CGFloat(0),
                        endRadius: CGFloat(300)))
    .background(LinearGradient(gradient: Gradient(colors:
                               [Color.black, Color.white]),
                       startPoint: .topLeading,
                         endPoint: .bottomTrailing))
    .frame(width: 360, height: 350)

With the gradients added, the MyShape rendering should match the figure below:

Figure 35-11

Summary

SwiftUI includes a built-in set of views that conform to the Shape protocol for drawing standard shapes such as rectangles, circles and ellipses. Modifiers can be applied to these views to control stroke, fill and color properties.

Custom shapes are created by specifying paths which consist of sets of points joined by straight or curved lines.

SwiftUI also includes support for drawing radial, linear and angular gradient patterns.