An iOS 16 CloudKit Sharing Tutorial

The chapter entitled An Introduction to iOS 16 CloudKit Sharing provided an overview of how CloudKit sharing works and the steps involved in integrating sharing into an iOS app. The preceding chapters have focused on creating a project that demonstrates the integration of CloudKit data storage into iOS apps. This chapter will extend that project to add CloudKit sharing to the CloudKitDemo app.

Preparing the Project for CloudKit Sharing

Launch Xcode and open the CloudKitDemo project created in the preceding chapters. If you have not completed the tasks in those chapters and are only interested in learning about CloudKit sharing, a snapshot of the project is included as part of the sample code archive for this book on the following web page:

https://www.ebookfrenzy.com/web/ios16/

Once the project has been loaded into Xcode, the CKSharingSupported key needs to be added to the project Info.plist file with a Boolean value of true. Select the CloudKitDemo target at the top of the Project Navigator panel, followed by the Info tab in the main panel. Next, locate the bottom entry in the Custom iOS Target Properties list, and hover the mouse pointer over the item. When the plus button appears, click it to add a new entry to the list. Complete the new property with the key field set to CKSharingSupported, the type to Boolean, and the value to YES, as illustrated in Figure 53-1:

Figure 53-1
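
If your project contains a standalone Info.plist file, the same entry can also be added by editing the property list source directly (right-click the file in the Project Navigator and select Open As -> Source Code), in which case it appears as follows:

<key>CKSharingSupported</key>
<true/>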

Adding the Share Button

The user interface for the app now needs to be modified to add a share button to the toolbar. First, select the Main.storyboard file, locate the Bar Button Item in the Library panel, and drag and drop an instance onto the toolbar to position it to the right of the existing delete button.

Once added, select the button item, display the Attributes inspector, and select the square and arrow image:

Figure 53-2

Once the new button has been added, the toolbar should match Figure 53-3:

Figure 53-3

With the new share button item still selected, display the Assistant Editor panel and establish an Action connection to a method named shareRecord.

Creating the CloudKit Share

The next step is to add some code to the shareRecord action method to initialize and display the UICloudSharingController and to create and save the CKShare object. Select the ViewController.swift file, locate the stub shareRecord method, and modify it so that it reads as follows:

@IBAction func shareRecord(_ sender: Any) {

    let controller = UICloudSharingController { controller,
        prepareCompletionHandler in
        
        if let thisRecord = self.currentRecord {
            let share = CKShare(rootRecord: thisRecord)
            
            share[CKShare.SystemFieldKey.title] = 
                             "An Amazing House" as CKRecordValue
            share.publicPermission = .readOnly
            
            let modifyRecordsOperation = CKModifyRecordsOperation(
                recordsToSave: [thisRecord, share],
                recordIDsToDelete: nil)
            
            let configuration = CKOperation.Configuration()
            
            configuration.timeoutIntervalForResource = 10
            configuration.timeoutIntervalForRequest = 10
            modifyRecordsOperation.configuration = configuration
            
            modifyRecordsOperation.modifyRecordsResultBlock = {
                result in
                switch result {
                case .success:
                    prepareCompletionHandler(share, CKContainer.default(), nil)
                case .failure(let error):
                    print(error.localizedDescription)
                }
            }     
            self.privateDatabase?.add(modifyRecordsOperation)
        } else {
            print("User error: No record selected")
        }
    }
    
    controller.availablePermissions = [.allowPublic, .allowReadOnly,
            .allowReadWrite, .allowPrivate]
    controller.popoverPresentationController?.barButtonItem =
        sender as? UIBarButtonItem
    
    present(controller, animated: true)
}

The code added to this method follows the steps outlined in the chapter entitled An Introduction to iOS 16 CloudKit Sharing to display the CloudKit sharing view controller, create a share object initialized with the currently selected record, and save both the record and the share to the user’s private database.

Accepting a CloudKit Share

Now that the user can create a CloudKit share, the app needs to be modified to accept a share and display it to the user. The first step in this process is implementing the userDidAcceptCloudKitShareWith method within the project’s scene delegate class. Edit the SceneDelegate.swift file and implement this method as follows:

.
.
import CloudKit
.
.
func windowScene(_ windowScene: UIWindowScene,
    userDidAcceptCloudKitShareWith cloudKitShareMetadata: CKShare.Metadata) {
   
    acceptCloudKitShare(metadata: cloudKitShareMetadata) { [weak self] result in
        switch result {
        case .success:
            DispatchQueue.main.async {
                let viewController: ViewController = 
                     self?.window?.rootViewController as! ViewController
                viewController.fetchShare(cloudKitShareMetadata)
            }
        case .failure(let error):
            print(error.localizedDescription )
        }
    }
}
.
.

When the user clicks on a CloudKit share link, for example in an email or text message, the operating system will call the above method to notify the app that shared CloudKit data is available. The above implementation calls a method named acceptCloudKitShare and passes it the CKShare.Metadata object it received from the operating system. If the acceptCloudKitShare method returns a successful result, the delegate method obtains a reference to the app’s root view controller and calls a method named fetchShare (which we will write in the next section) to extract the shared record from the CloudKit database and display it. Next, the acceptCloudKitShare method needs to be added to the SceneDelegate.swift file.
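
A sketch of this method, based on the CKAcceptSharesOperation pattern covered in the chapter entitled An Introduction to iOS 16 CloudKit Sharing, is shown below (the completion handler signature is an assumption chosen to match the call made in the scene delegate method above):

func acceptCloudKitShare(metadata: CKShare.Metadata,
        completion: @escaping (Result<CKRecord.ID, Error>) -> Void) {

    // Accept the share within the container identified by the metadata
    let container = CKContainer(identifier: metadata.containerIdentifier)
    let operation = CKAcceptSharesOperation(shareMetadatas: [metadata])
    var rootRecordID: CKRecord.ID!

    operation.perShareResultBlock = { metadata, result in
        switch result {
        case .success:
            // Keep a copy of the shared root record ID for the completion handler
            rootRecordID = metadata.hierarchicalRootRecordID
        case .failure(let error):
            print(error.localizedDescription)
        }
    }

    operation.acceptSharesResultBlock = { result in
        switch result {
        case .success:
            completion(.success(rootRecordID))
        case .failure(let error):
            completion(.failure(error))
        }
    }
    container.add(operation)
}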

Fetching the Shared Record

At this point, the share has been accepted and a CKShare.Metadata object provided, from which information about the shared record may be extracted. All that remains before the app can be tested is to implement the fetchShare method within the ViewController.swift file:

func fetchShare(_ metadata: CKShare.Metadata) {

    let operation = CKFetchRecordsOperation(recordIDs: 
                            [metadata.hierarchicalRootRecordID!])

    operation.perRecordResultBlock = { recordId, result in
        switch result {
        case .success(let record):
            DispatchQueue.main.async() {
                self.currentRecord = record
                self.addressField.text =
                    record.object(forKey: "address") as? String
                self.commentsField.text =
                    record.object(forKey: "comment") as? String
                let photo =
                    record.object(forKey: "photo") as! CKAsset
                let image = UIImage(contentsOfFile:
                                        photo.fileURL!.path)
                self.imageView.image = image
                self.photoURL = self.saveImageToFile(image!)
            }
        case .failure(let error):
            print(error.localizedDescription)
        }
    }
    
    operation.fetchRecordsResultBlock = { result in
        switch result {
        case .success:
            break
        case .failure(let error):
            print(error.localizedDescription)
        }
    }
    CKContainer.default().sharedCloudDatabase.add(operation)
}

The method prepares a standard CloudKit fetch operation based on the record ID contained within the share metadata object and performs the fetch using the sharedCloudDatabase instance. On a successful fetch, the per-record result block extracts the data from the shared record and displays it in the user interface.

Testing the CloudKit Share Example

To thoroughly test CloudKit sharing, two devices with different Apple IDs must be used. If you have access to two devices, create a second Apple ID for testing purposes and sign in using that ID on one of the devices. Once logged in, make sure that the devices can send and receive iMessage or email messages between each other and install and run the CloudKitDemo app on both devices. Once the testing environment is set up, launch the CloudKitDemo app on one of the devices and add a record to the private database. Once added, tap the Share button and use the share view controller interface to send a share link message to the Apple ID associated with the second device. When the message arrives on the second device, tap the share link and accept the share when prompted. Once the share has been accepted, the CloudKitDemo app should launch and display the shared record.

Summary

This chapter puts the theory of CloudKit sharing outlined in the chapter entitled An Introduction to iOS 16 CloudKit Sharing into practice by enhancing the CloudKitDemo project to include the ability to share CloudKit-based records with other app users. This involved creating and saving a CKShare object, using the UICloudSharingController class, and adding code to handle accepting and fetching a shared CloudKit database record.

An Introduction to iOS 16 CloudKit Sharing

Before the release of iOS 10, the only way to share CloudKit records between users was to store those records in a public database. With the introduction of CloudKit sharing, individual app users can now share private database records with other users.

This chapter aims to provide an overview of CloudKit sharing and the classes used to implement sharing within an iOS app. The techniques outlined in this chapter will be put to practical use in the An iOS 16 CloudKit Sharing Tutorial chapter.

Understanding CloudKit Sharing

CloudKit sharing provides a way for records within a private database to be shared with other app users, entirely at the discretion of the database owner. When a user decides to share CloudKit data, a share link in the form of a URL is sent to the person with whom the data is to be shared. This link can be sent in various ways, including text messages, email, Facebook, or Twitter. When the recipient taps on the share link, the app (if installed) will be launched and provided with the shared record information ready to be displayed.

The level of access to a shared record may also be defined to control whether a recipient can view and modify the record. It is important to be aware that when a share recipient accepts a share, they are receiving a reference to the original record in the owner’s private database. Therefore, a modification performed on a shared record will be reflected in the original private database.

Preparing for CloudKit Sharing

Before an app can take advantage of CloudKit sharing, the CKSharingSupported key needs to be added to the project Info.plist file with a Boolean true value. Also, a CloudKit record may only be shared if it is stored in a private database and is a member of a record zone other than the default zone.
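
To illustrate the second requirement, the following sketch (the zone name, record type, and field name here are purely illustrative) creates a custom record zone within the private database and saves a record into that zone so that it is eligible for sharing:

let privateDatabase = CKContainer.default().privateCloudDatabase
let customZone = CKRecordZone(zoneName: "SharedZone")

privateDatabase.save(customZone) { zone, error in
    guard let zone = zone, error == nil else {
        print(error?.localizedDescription ?? "Failed to save record zone")
        return
    }

    // Creating the record with a zone-specific ID places it in the custom zone
    let recordID = CKRecord.ID(recordName: UUID().uuidString,
                               zoneID: zone.zoneID)
    let record = CKRecord(recordType: "Houses", recordID: recordID)
    record["address"] = "123 Any Street" as CKRecordValue

    privateDatabase.save(record) { _, error in
        if let error = error {
            print(error.localizedDescription)
        }
    }
}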

The CKShare Class

CloudKit sharing is made possible primarily by the CKShare class. This class is initialized with the root CKRecord instance that is to be shared with other users and is then configured with a permission setting. The CKShare object may also be configured with title and icon information to be included in the share link message. The CKShare and associated CKRecord objects are then saved to the private database. The following code, for example, creates a CKShare object containing the record to be shared and configured for read-only access:

let share = CKShare(rootRecord: myRecord)
share[CKShare.SystemFieldKey.title] = "My First Share" as CKRecordValue
share.publicPermission = .readOnly

Once the share has been created, it is saved to the private database using a CKModifyRecordsOperation object. Note the recordsToSave: argument is declared as an array containing both the share and record objects:

let modifyRecordsOperation = CKModifyRecordsOperation(
    recordsToSave: [myRecord, share], recordIDsToDelete: nil)

Next, a CKOperation.Configuration instance needs to be created, configured with optional settings, and assigned to the operation:

let configuration = CKOperation.Configuration()
        
configuration.timeoutIntervalForResource = 10
configuration.timeoutIntervalForRequest = 10

modifyRecordsOperation.configuration = configuration

Next, a closure must be assigned to the modifyRecordsResultBlock property of the modifyRecordsOperation object. The code in this closure is called when the operation completes to let your app know whether the share was successfully saved:

modifyRecordsOperation.modifyRecordsResultBlock = { result in
    switch result {
    case .success:
        break // Handle successful completion here
    case .failure(let error):
        print(error.localizedDescription)
    }
}

Finally, the operation is added to the database to begin execution:

self.privateDatabase?.add(modifyRecordsOperation)

The UICloudSharingController Class

To send a share link to another user, CloudKit needs to know both the identity of the recipient and the method by which the share link is to be transmitted. One option is to manually create CKShareParticipant objects for each participant and add them to the CKShare object. Alternatively, the CloudKit framework includes a view controller specifically for this purpose. When presented to the user (Figure 51-1), the UICloudSharingController class provides the user with a variety of options for sending the share link to another user:

Figure 51-1
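
As an aside, the manual participant approach mentioned above would look something like the following sketch (the email address is hypothetical, and a share object created as shown in the previous section is assumed):

// Look up a share participant by email address and add it to the share
CKContainer.default().fetchShareParticipant(
        withEmailAddress: "user@example.com") { participant, error in

    guard let participant = participant, error == nil else {
        print(error?.localizedDescription ?? "Participant lookup failed")
        return
    }

    participant.permission = .readWrite
    share.addParticipant(participant)

    // The share and root record would then be saved using a
    // CKModifyRecordsOperation as outlined in the previous section.
}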

The app is responsible for creating and presenting the controller to the user, the template code for which is outlined below:

let controller = UICloudSharingController { 
	controller, prepareCompletionHandler in

	// Code here to create the CKShare and save it to the database
}

controller.availablePermissions = 
        [.allowPublic, .allowReadOnly, .allowReadWrite, .allowPrivate]

controller.popoverPresentationController?.barButtonItem =
    sender as? UIBarButtonItem

present(controller, animated: true)

Note that the above code fragment also specifies the permissions to be provided as options within the controller user interface. These options are accessed and modified by tapping the link in the Collaboration section of the sharing controller (in Figure 51-1 above, the link reads “Only invited people can edit”). Figure 51-2 shows an example share options settings screen:

Figure 51-2

Once the user selects a method of communication from the cloud-sharing controller, the completion handler assigned to the controller will be called. As outlined in the previous section, the CKShare object must be created and saved within this handler. After the share has been saved to the database, the cloud-sharing controller must be notified that the share is ready to be sent. This is achieved by calling the prepareCompletionHandler closure that was passed into the completion handler in the above code. When prepareCompletionHandler is called, it must be passed the share object and a reference to the app’s CloudKit container. Bringing these requirements together gives us the following code:

let controller = UICloudSharingController { controller,
    prepareCompletionHandler in

    let share = CKShare(rootRecord: thisRecord)

    share[CKShare.SystemFieldKey.title]
             = "An Amazing House" as CKRecordValue
    share.publicPermission = .readOnly

    // Create a CKModifyRecordsOperation object and configure it
    // to save the CKShare instance and the record to be shared.
    let modifyRecordsOperation = CKModifyRecordsOperation(
        recordsToSave: [thisRecord, share],
        recordIDsToDelete: nil)

    // Create a CKOperation.Configuration instance
    let configuration = CKOperation.Configuration()

    // Set configuration properties to provide timeout limits
    configuration.timeoutIntervalForResource = 10
    configuration.timeoutIntervalForRequest = 10

    // Apply the configuration options to the operation
    modifyRecordsOperation.configuration = configuration

    // Assign a result block to the CKModifyRecordsOperation. This will
    // be called when the modify records operation completes or fails.
    modifyRecordsOperation.modifyRecordsResultBlock = { result in
        switch result {
        case .success:
            // The share operation was successful. Call the completion
            // handler
            prepareCompletionHandler(share, CKContainer.default(), nil)
        case .failure(let error):
            print(error.localizedDescription)
        }
    }

    // Start the operation by adding it to the database
    self.privateDatabase?.add(modifyRecordsOperation)
}

Once prepareCompletionHandler has been called, the app for the chosen form of communication (Messages, Mail, etc.) will launch preloaded with the share link. All the user needs to do at this point is enter the contact details for the intended share recipient and send the message. Figure 51-3, for example, shows a share link loaded into the Mail app ready to be sent:

Figure 51-3

Accepting a CloudKit Share

When the recipient user receives a share link and selects it, a dialog will appear, providing the option to accept the share and open it in the corresponding app. When the app opens, the userDidAcceptCloudKitShareWith method is called on the scene delegate class located in the project’s SceneDelegate.swift file:

func windowScene(_ windowScene: UIWindowScene,
    userDidAcceptCloudKitShareWith cloudKitShareMetadata: CKShare.Metadata) {
}

When this method is called, it is passed a CKShare.Metadata object containing information about the share. Although the user has accepted the share, the app must also accept the share using a CKAcceptSharesOperation object. As the acceptance operation is performed, it will report the results of the process via two result blocks assigned to it. The following example shows how to create and configure a CKAcceptSharesOperation instance to accept a share:

let container = CKContainer(identifier: metadata.containerIdentifier)
let operation = CKAcceptSharesOperation(shareMetadatas: [metadata])     
var rootRecordID: CKRecord.ID!

operation.acceptSharesResultBlock = { result in
    switch result {
    case .success:
        // The share was accepted successfully. Call the completion handler.
        completion(.success(rootRecordID))
    case .failure(let error):
        completion(.failure(error))
    }
}

operation.perShareResultBlock = { metadata, result in
    switch result {
    case .success:
        // The shared record ID was successfully obtained from the metadata.
        // Save a local copy for later. 
        rootRecordID = metadata.hierarchicalRootRecordID

        // Display the appropriate view controller and use it to fetch, and 
        // display the shared record.
        DispatchQueue.main.async {
            let viewController: ViewController = 
                    self.window?.rootViewController as! ViewController
            viewController.fetchShare(metadata)
        }        
    case .failure(let error):
        print(error.localizedDescription)
    }
}

The final step in accepting the share is to add the configured CKAcceptSharesOperation object to the CKContainer instance so that the share is accepted:

container.add(operation) 

Fetching a Shared Record

Once a share has been accepted by both the user and the app, the shared record needs to be fetched and presented to the user. This involves the creation of a CKFetchRecordsOperation object, initialized with the root record ID contained within the CKShare.Metadata instance and configured with result blocks to be called with the results of the fetch operation. It is essential to be aware that this fetch operation must be executed on the shared cloud database instance of the app instead of the recipient’s private database. The following code, for example, fetches the record associated with a CloudKit share:

let operation = CKFetchRecordsOperation(
                     recordIDs: [metadata.hierarchicalRootRecordID!])

operation.perRecordResultBlock = { recordId, result in
    switch result {
    case .success(let record):
        DispatchQueue.main.async() {
             // Shared record successfully fetched. Update user 
             // interface here to present to the user. 
        }
    case .failure(let error):
        print(error.localizedDescription)
    }
}

operation.fetchRecordsResultBlock = { result in
    switch result {
    case .success:
        break
    case .failure(let error):
        print(error.localizedDescription)
    }
}

CKContainer.default().sharedCloudDatabase.add(operation)

Once the record has been fetched, it can be presented to the user within the perRecordResultBlock code, taking care, as shown above, to perform user interface updates asynchronously on the main thread.

Summary

CloudKit sharing allows records stored within a private CloudKit database to be shared with other app users at the discretion of the record owner. An app user could, for example, make one or more records accessible to other users so that they can view and, optionally, modify the record. When a record is shared, a share link is sent to the recipient user in the form of a URL. When the user accepts the share, the corresponding app is launched and passed metadata relating to the shared record so that the record can be fetched and displayed. CloudKit sharing involves the creation of CKShare objects initialized with the record to be shared. The UICloudSharingController class provides a pre-built view controller which handles much of the work involved in gathering the necessary information to send a share link to another user. In addition to sending a share link, the app must also be adapted to accept a share and fetch the record from the shared cloud database. This chapter has covered the basics of CloudKit sharing, a topic that will be covered further in a later chapter entitled An iOS 16 CloudKit Sharing Tutorial.

An iOS 16 Sprite Kit Particle Emitter Tutorial

In this, the last chapter dedicated to the Sprite Kit framework, the use of the Particle Emitter class and editor to add special effects to Sprite Kit-based games will be covered. Having provided an overview of the various elements that make up particle emitter special effects, the SpriteKitDemo app will be extended using particle emitter features to make the balls burst when an arrow hits. This will also involve the addition of an audio action.

What is the Particle Emitter?

The Sprite Kit particle emitter is designed to add special effects to games. It comprises the SKEmitterNode class and the Particle Emitter Editor bundled with Xcode. A particle emitter special effect begins with an image file representing the particle. The emitter generates multiple instances of the particle on the scene and animates each particle subject to a set of properties. These properties control aspects of the special effect, such as the rate of particle generation, the angle and speed of motion of particles, whether or not particles rotate, and how the particles blend in with the background.

With some time and experimentation, a wide range of special effects, from smoke to explosions, can be created using particle emitters.

The Particle Emitter Editor

The Particle Emitter Editor is built into Xcode and provides a visual environment to design particle emitter effects. In addition to providing a platform for developing custom effects, the editor also offers a collection of pre-built particle-based effects, including rain, fire, magic, snow, and sparks. These template effects also provide an excellent starting point on which to base other special effects.

Within the editor environment, a canvas displays the current particle emitter configuration. A settings panel allows the various properties of the emitter node to be changed, with each modification reflected in the canvas in real time, thereby making creating and refining special effects much easier. Once the design of the special effect is complete, the effect is saved in a Sprite Kit particle file. This file actually contains an archived SKEmitterNode object configured to run the particle effects designed in the editor.

The SKEmitterNode Class

The SKEmitterNode class displays and runs the particle emitter effect within a Sprite Kit game. As with other Sprite Kit node classes, SKEmitterNode shares many of the properties and behaviors of the other classes in the Sprite Kit family. Generally, an SKEmitterNode instance is created and initialized with a Sprite Kit particle file created using the Particle Emitter Editor. The following code fragment, for example, initializes an SKEmitterNode instance with a particle file, configures it to appear at a specific position within the current scene, and adds it to the scene so that it appears within the game:

if let burstNode = SKEmitterNode(fileNamed: "BurstParticle.sks") {
    burstNode.position = CGPoint(x: target_x, y: target_y)
    self.addChild(burstNode)
}

Once created, all of the emitter properties available within the Particle Emitter Editor are also controllable from within the code, allowing the effect’s behavior to be changed in real time. The following code, for example, adjusts the number of particles the emitter is to emit before ending:

burstNode.numParticlesToEmit = 400

In addition, actions may be assigned to particles from within the app code to add additional behavior to a special effect. The particles can, for example, be made to display an animation sequence.
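
For example, the following sketch (which assumes the burstNode instance created above) assigns a simple scaling action that every emitted particle will run:

// Each emitted particle runs this action, producing a brief pulse effect
let pulse = SKAction.sequence([
    SKAction.scale(to: 1.5, duration: 0.2),
    SKAction.scale(to: 1.0, duration: 0.2)
])
burstNode.particleAction = pulse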

Using the Particle Emitter Editor

By far, the easiest and most productive approach to designing particle emitter-based special effects is to use the Particle Emitter Editor tool bundled with Xcode. To experience the editor in action, launch Xcode and create a new iOS Game-based project named ParticleDemo with the Language menu set to Swift.

Once the new project has been created, select the File -> New -> File… menu option. Then, in the resulting panel, choose the SpriteKit Particle File template option as outlined in Figure 95-1:

Figure 95-1

Click Next and choose a Particle template on which to base the special effect. For this example, we will use the Fire template. Click Next and name the file RocketFlame before clicking on Create.

At this point, Xcode will have added two files to the project. One is an image file named spark.png representing the particle, and the other is the RocketFlame.sks file containing the particle emitter configuration. In addition, Xcode should also have pre-loaded the Particle Emitter Editor panel with the fire effect playing in the canvas, as shown in Figure 95-2 (the editor can be accessed at any time by selecting the corresponding sks file in the project navigator panel).

Figure 95-2

The right-hand panel of the editor provides access to and control of all of the properties associated with the emitter node. To access these property settings, click the right-hand toolbar button in the right-hand panel.

Much about particle emitter special effects can be learned through experimentation with the particle editor. However, before modifying the fire effects in this example, it will be helpful to provide an overview of what these properties do.

Particle Emitter Node Properties

A range of property settings controls the behavior of a particle emitter and its associated particles. Most of these settings correspond to SKEmitterNode properties that may also be adjusted in code (a sketch of these mappings appears at the end of this section). The properties can be summarized as follows:

Background

Though presented as an option within the editor, this is not actually a property of the emitter node. This option is provided so that the appearance of the effect can be tested against different backgrounds. This is particularly important when the particles are configured to blend with the background. Use this setting to test the particle effects against any background colors over which the effect is likely to appear in the game.

Particle Texture

The image file containing the texture that will be used to represent the particles within the emitter.

Particle Birthrate

The birthrate defines the rate at which the node emits new particles. The greater the value, the faster new particles are generated, though it is recommended that the minimum number of particles needed to achieve the desired effect be used to avoid performance degradation. The total number of particles to be emitted may also be specified. A value of zero causes particles to be emitted indefinitely. If a limit is specified, the node will stop emitting particles when that value is reached.

Particle Life Cycle

The lifetime property controls the time in seconds a particle lives (and is therefore visible) before disappearing from view. The range property may be used to introduce variance in the lifetime from one particle to the next based on a random time value between 0 and the specified range value.

Particle Position Range

The position properties define the location from which particles are created. For example, the X and Y values can be used to declare an area around the center of the node location from which particles will be created randomly.

Angle

The angle at which a newly emitted particle will travel away from the creation point in counter-clockwise degrees, where a value of 0 degrees equates to rightward movement. Random variance in direction can be introduced via the range property.

Particle Speed

The speed property specifies the particles’ initial speed at the creation time. The speed can be randomized by specifying a range value.

Particle Acceleration

The acceleration properties control the degree to which a particle accelerates or decelerates after emission in terms of both X and Y directions.

Particle Scale

The size of the particles can be configured to change using the scale properties. These settings cause the particles to grow or shrink throughout their lifetimes. Random resizing behavior can be implemented by specifying a range value. The speed setting controls the speed with which the size changes take place.

Particle Rotation

The rotation properties control the speed and amount of rotation applied to the particles after creation. Values are specified in degrees, with positive and negative values correlating to clockwise and counter-clockwise rotation. In addition, the speed of rotation may be specified in degrees per second.

Particle Color

The particles created by an emitter can be configured to transition through a range of colors during a lifetime. To add a new color in the lifecycle timeline, click on the color ramp at the location where the color is to change and select a new color. Change an existing color by double-clicking the marker to display the color selection dialog.

Figure 95-3, for example, shows a color ramp with three color transitions specified:

Figure 95-3

To remove a color from the color ramp, click and drag it downward out of the editor panel.

The color blend settings control the amount by which the colors in the particle’s texture blend with the prevailing color in the color ramp at any given time during the particle’s life. The greater the Factor property, the more the colors blend, with 0 indicating no blending. The blend factor can be randomized by specifying a range, and the speed at which the blend is performed can be controlled via the speed property.

Particle Blend Mode

The Blend Mode property governs how particles blend with other images, colors, and graphics in Sprite Kit game scenes. Options available are as follows:

  • Alpha – Blends transparent pixels in the particle with the background.
  • Add – Adds the particle pixels to the corresponding background image pixels.
  • Subtract – Subtracts the particle pixels from the corresponding background image pixels.
  • Multiply – Multiplies the particle pixels by the corresponding background image pixels, resulting in a darker particle effect.
  • MultiplyX2 – Creates a darker particle effect than the standard Multiply mode.
  • Screen – Inverts pixels, multiplies, and inverts a second time, resulting in lighter particle effects.
  • Replace – No blending with the background. Only the particle’s colors are used.
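
As noted above, most of these settings map onto SKEmitterNode properties that can also be adjusted from within code. The following sketch demonstrates some of these mappings (the values are arbitrary and purely illustrative; note that angles are specified in radians in code rather than degrees):

if let node = SKEmitterNode(fileNamed: "RocketFlame.sks") {
    node.particleBirthRate = 300                         // Particle Birthrate
    node.numParticlesToEmit = 0                          // 0 = emit indefinitely
    node.particleLifetime = 2.0                          // Lifetime
    node.particleLifetimeRange = 0.5                     // Lifetime range
    node.particlePositionRange = CGVector(dx: 5, dy: 0)  // Position range
    node.emissionAngle = .pi * 1.5                       // Angle (radians)
    node.particleSpeed = 450                             // Speed
    node.yAcceleration = -50                             // Acceleration
    node.particleScale = 0.5                             // Scale
    node.particleRotationSpeed = .pi                     // Rotation speed
    node.particleColorBlendFactor = 1.0                  // Color blend factor
    node.particleBlendMode = .add                        // Blend mode
}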

Experimenting with the Particle Emitter Editor

Creating compelling special effects with the particle emitter is largely a case of experimentation. As an example of adapting a template effect for another purpose, we will now modify the fire effect in the RocketFlame.sks file so that instead of resembling a campfire, it could be attached to the back of a sprite to represent the flame of a rocket launching into space.

Within Xcode, select the previously created RocketFlame.sks file so that it loads into the Particle Emitter Editor. The animation should appear and resemble a campfire, as illustrated in Figure 95-2.

  1. The first step in modifying the effect is to change the angle of the flames so that they burn downwards. To achieve this, change the Start property of the Angle setting to 270 degrees. The fire should now be inverted.
  2. Change the X value of the Position Range property to 5 so that the flames become narrower and more intense.
  3. Increase the Start value of the Speed property to 450.
  4. Change the Lifetime start property to 7.

The effect now resembles the flames a user might expect to see shooting out of the back of a rocket against a nighttime sky (Figure 95-4). Note also that the effects of the motion of the emitter node may be simulated by clicking and dragging the node around the canvas.

Figure 95-4

Bursting a Ball using Particle Emitter Effects

The final task is to update the SpriteKitDemo game so that the balls burst when they are hit by an arrow shot by the archer sprite.

The particles for the bursting ball will be represented by the BallFragment.png file located in the sample code download archive in the sprite_images folder. Open the SpriteKitDemo project within Xcode, locate the BallFragment.png file in a Finder window, and drag and drop it onto the list of image sets in the Assets file.

Select the File -> New -> File… menu option and, in the resulting panel, select the SpriteKit Particle File template option. Click Next, and on the template screen, select the Spark template. Click Next, name the file BurstParticle, and click Create.

The Particle Emitter Editor will appear with the spark effect running. Since the scene on which the effect will run has a white background, click on the black swatch next to Background in the Attributes Inspector panel and change the color to white.

Click on the Particles Texture drop-down menu, select the BallFragment image, and change the Blend Mode menu to Alpha.

Many ball fragments should now be visible, blended with the yellow color specified in the ramp. Set the Emitter Birthrate property to 15 to reduce the number of particles emitted. Click on the yellow marker at the start of the color ramp and change the color to White in the resulting color dialog. The particles should now look like fragments of the ball used in the game.

The fragments of a bursting ball would be expected to originate from any part of the ball. As such, the Position Range X and Y values need to match the dimensions of the ball. Set both of these values to 86 accordingly.

Finally, limit the number of particles by changing the Emitter Maximum property in the Particles section to 8. The burst particle effect is now ready to be incorporated into the game logic.

Adding the Burst Particle Emitter Effect

When an arrow scores a hit on a ball node, the ball node will be removed from the scene and replaced with a BurstParticle SKEmitterNode instance. To implement this behavior, edit the ArcheryScene.swift file and modify the didBegin(contact:) method to extract the SKEmitterNode from the archive in the BurstParticle file, remove the ball node from the scene, and replace it at the same position with the emitter node:

func didBegin(_ contact: SKPhysicsContact) {
    let secondNode = contact.bodyB.node as! SKSpriteNode

    if (contact.bodyA.categoryBitMask == arrowCategory) &&
        (contact.bodyB.categoryBitMask == ballCategory) {

        let contactPoint = contact.contactPoint
        let contact_y = contactPoint.y
        let target_x = secondNode.position.x
        let target_y = secondNode.position.y
        let margin = secondNode.frame.size.height/2 - 25

        if (contact_y > (target_y - margin)) &&
            (contact_y < (target_y + margin)) {

            if let burstNode = SKEmitterNode(fileNamed: "BurstParticle.sks")
            {
                burstNode.position = CGPoint(x: target_x, y: target_y)
                secondNode.removeFromParent()
                self.addChild(burstNode)
            }
            score += 1
        }
    }
}

Compile and run the app. When an arrow hits a ball, the ball should now be replaced by the particle emitter effect:

Figure 95-5

Adding an Audio Action

The final effect to add to the game is a bursting sound when an arrow hits the ball. We will again use the Xcode Action Editor to add this effect.

Begin by adding the sound file to the project. This file is named burstsound.mp3 and is located in the audiofiles folder of the book code samples download. Locate this file in a Finder window and drag it onto the Project Navigator panel. In the resulting panel, enable the Copy items if needed option and click on Finish.

Within the Project Navigator panel, select the ArcheryScene.sks file. Then, from the Library panel, locate the PlaySoundFileNamed Action object and drag and drop it onto the timeline so that it is added to the archerNode object:

Figure 95-6

Select the new action object in the timeline and use the Attributes Inspector panel to set the Filename property to the burstsound file.

Right-click on the sound action and select the Convert to Reference menu option. Name the reference audioAction and click on the Create button. The action has now been saved to the ArcherActions.sks file. Next, select the object in the timeline, right-click, and select the Delete option to remove it from the scene file. Finally, modify the didBegin(contact:) method to play the sound action when a ball bursts:

func didBegin(_ contact: SKPhysicsContact) {
    let secondNode = contact.bodyB.node as! SKSpriteNode
    
    if (contact.bodyA.categoryBitMask == arrowCategory) &&
        (contact.bodyB.categoryBitMask == ballCategory) {
        
        let contactPoint = contact.contactPoint
        let contact_y = contactPoint.y
        let target_x = secondNode.position.x
        let target_y = secondNode.position.y
        let margin = secondNode.frame.size.height/2 - 25
        
        if (contact_y > (target_y - margin)) &&
            (contact_y < (target_y + margin)) {
            print("Hit")
            
            if let burstNode = SKEmitterNode(fileNamed: "BurstParticle.sks") 
            {
                burstNode.position = CGPoint(x: target_x, y: target_y)
                secondNode.removeFromParent()
                self.addChild(burstNode)
                if let audioAction = SKAction(named: "audioAction") {
                    burstNode.run(audioAction)
                }
            }
            score += 1
        }
    }
}

Run the app and verify that the sound file plays when a hit is registered on a ball.

Summary

The particle emitter allows special effects to be added to Sprite Kit games. All that is required is an image file to represent the particles and some configuration of the particle emitter properties. This work can be simplified using the Particle Emitter Editor included with Xcode. The editor is supplied with a set of pre-configured special effects, such as smoke, fire, and rain, which can be used as supplied or modified to meet many special effects needs.

An iOS 16 Sprite Kit Collision Handling Tutorial

In this chapter, the game created in the previous chapter, entitled An iOS 16 Sprite Kit Level Editor Game Tutorial, will be extended to implement collision detection. The objective is to detect when an arrow node collides with a ball node and increase a score count in the event of such a collision. In the next chapter, this collision detection behavior will be further extended to add audio and visual effects so that the balls appear to burst when an arrow hits.

Defining the Category Bit Masks

Start Xcode and open the SpriteKitDemo project created in the previous chapter if not already loaded.

When detecting collisions within a Sprite Kit scene, a delegate method is called each time a collision is detected. However, this method will only be called if the colliding nodes are configured appropriately using category bit masks.

Only collisions between the arrow and ball sprite nodes are of interest for this demonstration game. The first step, therefore, is to declare category bit masks for these two node categories. Begin by editing the ArcheryScene.swift file and adding these declarations at the top of the class implementation:

import UIKit
import SpriteKit

class ArcheryScene: SKScene {

    let arrowCategory: UInt32 = 0x1 << 0
    let ballCategory: UInt32 = 0x1 << 1
.
.

Assigning the Category Masks to the Sprite Nodes

Having declared the masks, these need to be assigned to the respective node objects when they are created within the game. This is achieved by assigning the mask to the categoryBitMask property of the physics body assigned to the node. In the case of the ball node, this code can be added in the createBallNode method as follows:

func createBallNode() {
    let ball = SKSpriteNode(imageNamed: "BallTexture.png")
    let screenWidth = self.size.width

    ball.position = CGPoint(x: randomBetween(-screenWidth/2, max:
        screenWidth/2-200), y: self.size.height-50)

    ball.name = "ballNode"
    ball.physicsBody = SKPhysicsBody(circleOfRadius:
                        (ball.size.width/2))

    ball.physicsBody?.usesPreciseCollisionDetection = true
    ball.physicsBody?.categoryBitMask = ballCategory
    self.addChild(ball)
}

Repeat this step to assign the appropriate category mask to the arrow node in the createArrowNode method:

func createArrowNode() -> SKSpriteNode {
    
    let arrow = SKSpriteNode(imageNamed: "ArrowTexture.png")
    
    if let archerNode = self.childNode(withName: "archerNode"),
        let archerPosition = archerNode.position as CGPoint?,
        let archerWidth = archerNode.frame.size.width as CGFloat? {
    
        arrow.position = CGPoint(x: archerPosition.x + archerWidth,
                             y: archerPosition.y)
    
        arrow.name = "arrowNode"
        arrow.physicsBody = SKPhysicsBody(rectangleOf:
                            arrow.frame.size)
        arrow.physicsBody?.usesPreciseCollisionDetection = true
        arrow.physicsBody?.categoryBitMask = arrowCategory
    }
    return arrow
}

Configuring the Collision and Contact Masks

Having assigned category masks to the arrow and ball nodes, these nodes are ready to be included in collision detection handling. However, before this can be implemented, code needs to be added to indicate whether the app needs to be notified of collisions, contacts, or both. A contact occurs when two nodes touch or occupy the same space in a scene. It might be valid, for example, for one sprite node to pass over another node, and the game logic simply needs to be notified when this happens. A collision, on the other hand, involves contact between two nodes that cannot occupy the same space in the scene. In such a situation, the two nodes will typically bounce away from each other (subject to the prevailing physics body properties).

The type of contact for which notification is required is specified by assigning contact and collision bit masks to the physics body of one of the node categories involved in the contact. For this example, we will specify that notification is required for both contact and collision between the arrow and ball categories:

func createArrowNode() -> SKSpriteNode {
    
    let arrow = SKSpriteNode(imageNamed: "ArrowTexture.png")
    
    if let archerNode = self.childNode(withName: "archerNode"),
        let archerPosition = archerNode.position as CGPoint?,
        let archerWidth = archerNode.frame.size.width as CGFloat? {
    
        arrow.position = CGPoint(x: archerPosition.x + archerWidth,
                             y: archerPosition.y)
    
        arrow.name = "arrowNode"
        arrow.physicsBody = SKPhysicsBody(rectangleOf:
                            arrow.frame.size)
        arrow.physicsBody?.usesPreciseCollisionDetection = true
        arrow.physicsBody?.categoryBitMask = arrowCategory
        arrow.physicsBody?.collisionBitMask = arrowCategory | ballCategory
        arrow.physicsBody?.contactTestBitMask =
            arrowCategory | ballCategory
    }
    return arrow
}

Implementing the Contact Delegate

When the Sprite Kit physics system detects a collision or contact for which appropriate masks have been configured, it needs a way to notify the app code that such an event has occurred.

It does this by calling methods on the class instance registered as the contact delegate for the physics world object associated with the scene where the contact occurred. The system can notify the delegate at both the beginning and end of the contact if both the didBegin(contact:) and didEnd(contact:) methods are implemented. Passed as an argument to these methods is an SKPhysicsContact object containing information about the location of the contact and references to the physical bodies of the two nodes involved in the contact.
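
For reference, the two delegate methods take the following form (shown here as empty sketches; only the first will be implemented in this tutorial):

func didBegin(_ contact: SKPhysicsContact) {
    // Called when two suitably configured physics bodies first make contact
}

func didEnd(_ contact: SKPhysicsContact) {
    // Called when the two physics bodies stop being in contact
}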

For this tutorial, we will use the ArcheryScene instance as the contact delegate and implement only the didBegin(contact:) method. Begin, therefore, by modifying the didMove(to view:) method in the ArcheryScene.swift file to declare the class as the contact delegate:

override func didMove(to view: SKView) {
    let archerNode = self.childNode(withName: "archerNode")
    archerNode?.position.y = 0
    archerNode?.position.x = -self.size.width/2 + 40
    self.physicsWorld.gravity = CGVector(dx: 0, dy: -1.0)       
    self.physicsWorld.contactDelegate = self
    self.initArcheryScene() 
}

Having made the ArcheryScene class the contact delegate, the ArcheryScene.swift file needs to be modified to indicate that the class now implements the SKPhysicsContactDelegate protocol:

import UIKit
import SpriteKit

class ArcheryScene: SKScene, SKPhysicsContactDelegate {
.
.
.

Remaining within the ArcheryScene.swift file, implement the didBegin(contact:) method as follows:

func didBegin(_ contact: SKPhysicsContact) {
    let secondNode = contact.bodyB.node as! SKSpriteNode

    if (contact.bodyA.categoryBitMask == arrowCategory) &&
        (contact.bodyB.categoryBitMask == ballCategory) {

        let contactPoint = contact.contactPoint
        let contact_y = contactPoint.y
        let target_y = secondNode.position.y
        let margin = secondNode.frame.size.height/2 - 25

        if (contact_y > (target_y - margin)) &&
            (contact_y < (target_y + margin)) {
            print("Hit")
            score += 1
        }
    }
}

The code starts by extracting references to the two nodes that have collided. It then checks that the first node is an arrow and the second a ball (no points are scored if a ball falls onto an arrow). Next, the point of contact is identified, and some rudimentary mathematics is used to check that the arrow struck the side of the ball (for a game of app store quality, more rigorous checking might be required to catch all cases). Finally, assuming the hit was within the defined parameters, a message is output to the console, and the game score variable is incremented.

Run the game and test the collision handling by ensuring that the “Hit” message appears in the Xcode console when an arrow hits the side of a ball.

Game Over

All that now remains is to display the score to the user when all the balls have been released. This will require a new label node and a small change to an action sequence followed by a transition to the welcome scene so the user can start a new game. Begin by adding the method to create the label node in the ArcheryScene.swift file:

func createScoreNode() -> SKLabelNode {
    let scoreNode = SKLabelNode(fontNamed: "Bradley Hand")
    scoreNode.name = "scoreNode"

    let newScore = "Score \(score)"

    scoreNode.text = newScore
    scoreNode.fontSize = 60
    scoreNode.fontColor = SKColor.red
    scoreNode.position = CGPoint(x: self.frame.midX,
                                 y: self.frame.midY)
    return scoreNode
}

Next, implement the gameOver method, which will display the score label node and then transition back to the welcome scene:

func gameOver() {
    let scoreNode = self.createScoreNode()
    self.addChild(scoreNode)
    let fadeOut = SKAction.sequence([SKAction.wait(forDuration: 3.0),
                                     SKAction.fadeOut(withDuration: 3.0)])
    let welcomeReturn =  SKAction.run({
        let transition = SKTransition.reveal(
            with: SKTransitionDirection.down, duration: 1.0)
        if let welcomeScene = GameScene(fileNamed: "GameScene") {
            self.scene?.view?.presentScene(welcomeScene,
                                       transition: transition)
        }
    })
    
    let sequence = SKAction.sequence([fadeOut, welcomeReturn])
    self.run(sequence)
}

Finally, add a completion handler that calls the gameOver method after the ball release action in the initArcheryScene method:

func initArcheryScene() {
    let releaseBalls = SKAction.sequence([SKAction.run({
        self.createBallNode() }),
        SKAction.wait(forDuration: 1)])

    self.run(SKAction.repeat(releaseBalls,
                        count: ballCount), completion: {
        let sequence =
                   SKAction.sequence([SKAction.wait(forDuration: 5.0),
                        SKAction.run({ self.gameOver() })])
        self.run(sequence)
    })
}

Compile, run, and test. Also, feel free to experiment by adding other features to the game to gain familiarity with the capabilities of Sprite Kit. The next chapter, entitled An iOS 16 Sprite Kit Particle Emitter Tutorial, will cover using the Particle Emitter to add special effects to Sprite Kit games.

Summary

The Sprite Kit physics engine detects when two nodes within a scene come into contact with each other. Collision and contact detection is configured through the use of category masks together with contact and collision masks. When appropriately configured, the didBegin(contact:) and didEnd(contact:) methods of a designated delegate class are called at the start and end of contact between two nodes for which detection is configured. These methods are passed references to the nodes involved in the contact so that appropriate action can be taken within the game.

An iOS 16 Sprite Kit Level Editor Game Tutorial

In this chapter of iOS 16 App Development Essentials, many of the Sprite Kit Framework features outlined in the previous chapter will be used to create a game-based app. In particular, this tutorial will demonstrate the practical use of scenes, textures, sprites, labels, and actions. In addition, the app created in this chapter will also use physics bodies to demonstrate the use of collisions and simulated gravity.

This tutorial will also demonstrate using the Xcode Sprite Kit Level, Live, and Action editors combined with Swift code to create a Sprite Kit-based game.

About the Sprite Kit Demo Game

The game created in this chapter consists of a single animated character that shoots arrows across the scene when the screen is tapped. For the game’s duration, balls fall from the top of the screen, with the objective being to hit as many balls as possible with the arrows.

The completed game will comprise the following two scenes:

  • GameScene – The scene which appears when the game is first launched. The scene will announce the game’s name and invite the user to touch the screen to begin the game. The game will then transition to the second scene.
  • ArcheryScene – The scene where the gameplay takes place. Within this scene, the archer and ball sprites are animated, and the physics behavior and collision detection are implemented to make the game work.

In terms of sprite nodes, the game will include the following:

  • Welcome Node – An SKLabelNode instance that displays a message to the user on the Welcome Scene.
  • Archer Node – An SKSpriteNode instance to represent the archer game character. The animation frames that cause the archer to load and launch an arrow are provided via a sequence of image files contained within a texture atlas.
  • Arrow Node – An SKSpriteNode instance used to represent the arrows as the archer character shoots them. This node has a physics body associated with it so that collisions can be detected and so that it responds to gravity.
  • Ball Node – An SKSpriteNode instance used to represent the balls that fall from the sky. The ball node has a physics body associated with it for gravity and collision detection purposes.
  • Game Over Node – An SKLabelNode instance that displays the score to the user at the end of the game.

The overall architecture of the game can be represented hierarchically, as outlined in Figure 93-1:

Figure 93-1

In addition to the nodes outlined above, the Xcode Live and Action editors will be used to implement animation and audio actions, which will be triggered from within the app’s code.

Creating the SpriteKitDemo Project

To create the project, launch Xcode and select the Create a new Xcode project option from the welcome screen (or use the File -> New -> Project…) menu option. Next, on the template selection panel, choose the iOS Game template option. Click on the Next button to proceed and on the resulting options screen, name the product SpriteKitDemo and choose Swift as the language in which the app will be developed. Finally, set the Game Technology menu to SpriteKit. Click Next and choose a suitable location for the project files. Once selected, click Create to create the project.

Reviewing the SpriteKit Game Template Project

The selection of the SpriteKit Game template has caused Xcode to create a template project with a demonstration incorporating some pre-built Sprite Kit behavior. This template consists of a View Controller class (GameViewController.swift), an Xcode Sprite Kit scene file (GameScene.sks), and a corresponding GameScene class file (GameScene.swift). The code within the GameViewController.swift file loads the scene design contained within the GameScene.sks file and presents it on the view to be visible to the user. This, in turn, triggers a call to the didMove(to view:) method of the GameScene class as implemented in the GameScene.swift file. This method creates an SKLabelNode displaying text that reads “Hello, World!”.
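
The scene-loading code within the GameViewController.swift file’s viewDidLoad method will resemble the following sketch (the exact code generated varies between Xcode versions, so treat this as illustrative rather than a literal copy of the template):

override func viewDidLoad() {
    super.viewDidLoad()

    if let view = self.view as! SKView? {
        // Load the scene design from the GameScene.sks file
        if let scene = SKScene(fileNamed: "GameScene") {
            // Scale the scene to fit the window and present it
            scene.scaleMode = .aspectFill
            view.presentScene(scene)
        }

        view.ignoresSiblingOrder = true
        view.showsFPS = true
        view.showsNodeCount = true
    }
}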

The GameScene class also includes a variety of touch method implementations that create SKShapeNode instances into which graphics are drawn when triggered. These nodes, in turn, are displayed in response to touches and movements on the device screen. To see the template project in action, run it on a physical device or the iOS simulator and perform tapping and swiping motions on the display.

As impressive as this may be, given how little code is involved, this bears no resemblance to the game that will be created in this chapter, so some of this functionality needs to be removed to provide a clean foundation on which to build. Begin the tidying process by selecting and editing the GameScene.swift file to remove the code to create and present nodes in the scene. Once modified, the file should read as follows:

import SpriteKit

class GameScene: SKScene {

    override func didMove(to view: SKView) {

    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {

    }

    override func update(_ currentTime: TimeInterval) {
        // Called before each frame is rendered
    }
}

With these changes, it is time to start creating the SpriteKitDemo game.

Restricting Interface Orientation

The game created in this tutorial assumes that the device on which it is running will be in landscape orientation. Therefore, to prevent the user from attempting to play the game with a device in portrait orientation, the Device Orientation properties for the project need to be restricted. To achieve this, select the SpriteKitDemo entry located at the top of the Project Navigator and, in the resulting General settings panel, change the device orientation settings so that only the Landscape options are selected both for iPad and iPhone devices:

Figure 93-2

Modifying the GameScene SpriteKit Scene File

As previously outlined, Xcode has provided a SpriteKit scene file (GameScene.sks) for a scene named GameScene together with a corresponding class declaration contained within the GameScene.swift file. The next task is to repurpose this scene to act as the welcome screen for the game. Begin by selecting the GameScene.sks file so that it loads into the SpriteKit Level Editor, as shown in Figure 93-3:

Figure 93-3

When working with the Level Editor to design SpriteKit scenes, there are several key areas of importance, each of which has been labeled in the above figure:

  • A – Scene Canvas – This is the canvas onto which nodes may be placed, positioned, and configured.
  • B – Attribute Inspector Panel – This panel provides a range of configuration options for the currently selected item in the editor panel. This allows SKNode and SKAction objects to be customized within the editor environment.
  • C – Library Button – This button displays the Library panel containing a range of node and effect types that can be dragged and dropped onto the scene.
  • D – Animate/Layout Button – Toggles between the editor’s simulation and layout editing modes. Simulate mode provides a useful mechanism for previewing the scene behavior without compiling and running the app.
  • E – Zoom Buttons – Buttons to zoom in and out of the scene canvas.
  • F – Live Editor – The live editor allows actions and animations to be placed within a timeline and simulated within the editor environment. It is possible, for example, to add animation and movement actions within the live editor and play them back live within the scene canvas.
  • G – Timeline View Slider – Pans back and forth through the view of the live editor timeline.
  • H – Playback Speed – When in Animation mode, this control adjusts the playback speed of the animations and actions contained within the live editor panel.
  • I – Scene Graph View – This panel provides an overview of the scene’s hierarchy and can be used to select, delete, duplicate and reposition scene elements within the hierarchy.

Within the scene editor, click on the “Hello, World!” Label node and press the keyboard delete key to remove it from the scene. With the scene selected in the scene canvas, click on the Color swatch in the Attribute Inspector panel and use the color selection dialog to change the scene color to a shade of green. Remaining within the Attributes Inspector panel, change the Size setting from Custom to iPad 9.7” in Landscape mode:

Figure 93-4

Click on the button (marked C in Figure 93-3 above) to display the Library panel, locate the Label node object, and drag and drop an instance onto the center of the scene canvas. With the label still selected, change the Text property in the inspector panel to read “SpriteKitDemo – Tap Screen to Play”. Remaining within the inspector panel, click on the T next to the font name and use the font selector to assign a 56-point Marker Felt Wide font to the label from the Fun font category. Finally, set the Name property for the label node to “welcomeNode”. Save the scene file before proceeding.

With these changes complete, the scene should resemble that of Figure 93-5:

Figure 93-5

Creating the Archery Scene

As previously outlined, the game’s first scene is a welcome screen on which the user will tap to begin playing within a second scene. Add a new class to the project to represent this second scene by selecting the File -> New -> File… menu option. In the file template panel, make sure that the Cocoa Touch Class template is selected in the main panel. Click on the Next button and configure the new class to be a subclass of SKScene named ArcheryScene. Click on the Next button and create the new class file within the project folder.

The new scene class will also require a corresponding SpriteKit scene file. Select File -> New -> File… once again, this time selecting SpriteKit Scene from the Resource section of the main panel (Figure 93-6). Click Next, name the scene ArcheryScene and click the Create button to add the scene file to the project.

Figure 93-6

Edit the newly added ArcheryScene.swift file and modify it to import the SpriteKit Framework as follows:

import UIKit
import SpriteKit

class ArcheryScene: SKScene {

}

Transitioning to the Archery Scene

Clearly, having instructed the user to tap the screen to play the game, some code needs to be written to make this happen. This behavior will be added by implementing the touchesBegan method in the GameScene class. Rather than switching abruptly to the ArcheryScene, the change will be animated using an action and a transition.

When implemented, the SKAction will cause the node to fade from view, while an SKTransition instance will be used to animate the transition from the current scene to the archery scene using a “doorway” style of animation. Implement these requirements by adding the following code to the touchesBegan method in the GameScene.swift file:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let welcomeNode = childNode(withName: "welcomeNode") {
        let fadeAway = SKAction.fadeOut(withDuration: 1.0)
        
        welcomeNode.run(fadeAway, completion: {
            let doors = SKTransition.doorway(withDuration: 1.0)
            if let archeryScene = ArcheryScene(fileNamed: "ArcheryScene") {
                self.view?.presentScene(archeryScene, transition: doors)
            }
        })
    }
}

Before moving on to the next steps, we will take some time to provide more detail on the above code.

From within the context of the touchesBegan method, we have no direct reference to the welcomeNode instance. However, we know that when it was added to the scene in the SpriteKit Level Editor, it was assigned the name “welcomeNode”. Using the childNode(withName:) method of the scene instance, therefore, a reference to the node is being obtained within the touchesBegan method as follows:

if let welcomeNode = childNode(withName: "welcomeNode") {

The code then checks that the node was found before creating a new SKAction instance configured to cause the node to fade from view over a one-second duration:

let fadeAway = SKAction.fadeOut(withDuration: 1.0)

The action is then executed on the welcomeNode. A completion block is also specified to be executed when the action completes. This block creates an instance of the ArcheryScene class preloaded with the scene contained within the ArcheryScene.sks file and an appropriately configured SKTransition object. The transition to the new scene is then initiated:

let fadeAway = SKAction.fadeOut(withDuration: 1.0)

welcomeNode.run(fadeAway, completion: {
    let doors = SKTransition.doorway(withDuration: 1.0)
    if let archeryScene = ArcheryScene(fileNamed: "ArcheryScene") {
        self.view?.presentScene(archeryScene, transition: doors)
    }
})

Compile and run the app on an iPad device or simulator in landscape orientation. Once running, tap the screen and note that the label node fades away and that after the transition to the ArcheryScene takes effect, we are presented with a gray scene that now needs to be implemented.

Adding the Texture Atlas

Before textures can be used on a sprite node, the texture images must first be added to the project. Textures take the form of image files and may be added individually to the project’s asset catalog. However, for larger numbers of texture files, it is more efficient (both for the developer and the app) to create a texture atlas. In the case of the archer sprite, this will require twelve image files to animate an arrow’s loading and subsequent shooting. A texture atlas will be used to store these animation frame images. The images for this project can be found in the sample code download, which can be obtained from the following web page:

https://www.ebookfrenzy.com/web/ios16/

Within the code sample archive, locate the folder named sprite_images. Located within this folder is the archer.atlas sub-folder, which contains the animation images for the archer sprite node.

To add the atlas to the project, select the Assets catalog file in the Project Navigator to display the image assets panel. Locate the archer.atlas folder in a Finder window and drag and drop it onto the asset catalog panel so that it appears beneath the existing AppIcon entry, as shown in the following figure:

Figure 93-7

Designing the Archery Scene

The layout for the archery scene is contained within the ArcheryScene.sks file. Select this file so that it loads into the Level Editor environment. With the scene selected in the canvas, use the Attributes Inspector panel to change the color property to white and the Size property to landscape iPad 9.7”.

From within the SpriteKit Level Editor, the next task is to add the sprite node representing the archer to the scene. Display the Library panel, select the Media Library tab as highlighted in Figure 93-8 below, and locate the archer001.png texture image file:

Figure 93-8

Once located, change the Size property in the Attributes Inspector to iPad 9.7”, then drag and drop the texture onto the canvas and position it so that it is located in the vertical center of the scene at the left-hand edge, as shown in the following figure:

Figure 93-9

With the archer node selected, use the Attributes Inspector panel to assign the name “archerNode” to the sprite. The next task is to define the physical outline of the archer sprite. The SpriteKit system will use this outline when deciding whether the sprite has been involved in a collision with another node within the scene. By default, the physical shape is assumed to be a rectangle surrounding the sprite texture (represented by the blue boundary around the node in the scene editor). Another option is to define a circle around the sprite to represent the physical shape. A much more accurate approach is to have SpriteKit define the physical shape of the node based on the outline of the sprite texture image. With the archer node selected in the scene, scroll down within the Attribute Inspector panel until the Physics Definition section appears. Then, using the Body Type menu, change the setting to Alpha mask:

Figure 93-10

Before proceeding with the next phase of the development process, test that the scene behaves as required by clicking on the Animate button located along the bottom edge of the editor panel. Note that the archer slides down and disappears off the bottom edge of the scene. This is because the sprite is configured to be affected by gravity. For the game’s purposes, the archer must be pinned to the same location and not subject to the laws of gravity. Click on the Layout button to leave simulation mode, select the archer sprite and, within the Physics Definition section, turn the Pinned option on and the Dynamic, Allows Rotation, and Affected by Gravity options off. Re-run the animation to verify that the archer sprite now remains in place.

Preparing the Archery Scene

Select the ArcheryScene.swift file and modify it as follows to add some private variables and implement the didMove(to:) method:

import UIKit
import SpriteKit

class ArcheryScene: SKScene {

    var score = 0
    var ballCount = 20

    override func didMove(to view: SKView) {
        let archerNode = self.childNode(withName: "archerNode")
        archerNode?.position.y = 0
        archerNode?.position.x = -self.size.width/2 + 40
        self.initArcheryScene()
    }
.
.
}

When the archer node was added to the ArcheryScene, it was positioned using absolute X and Y coordinates. This means the node will be positioned correctly on an iPad with a 9.7” screen but not on other screen sizes. Therefore, the first task performed by the didMove(to:) method is to position the archer node relative to the screen size. Within the scene’s coordinate system, position 0, 0 corresponds to the center of the scene. To place the archer node in the vertical center of the screen, the y-coordinate is therefore set to zero. The code then obtains the scene’s width, performs a simple calculation to identify a position 40 points in from the left-hand edge, and assigns it to the x-coordinate of the node.

The above code also calls a method named initArcheryScene, which now needs to be implemented within the ArcheryScene.swift file as a placeholder for code that will be added later in the chapter:

func initArcheryScene() {
}

Preparing the Animation Texture Atlas

When the user touches the screen, the archer sprite node will launch an arrow across the scene. For this example, we want the sprite character’s loading and shooting of the arrow to be animated. The texture atlas already contains the animation frames needed to implement this (named sequentially from archer001.png through to archer012.png), so the next step is to create an action to animate this sequence of frames. One option would be to write some code to perform this task. A much easier option, however, is to create an animation action using the SpriteKit Live Editor.
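
For reference, the code-based alternative (which is not used in this chapter) might look something like the following sketch. It assumes the archer texture atlas added earlier and the archer001 through archer012 frame names; the 0.05-second frame duration is an arbitrary example value:

import SpriteKit
import Foundation

// A sketch only: build the equivalent animation action in code from the atlas frames.
let archerAtlas = SKTextureAtlas(named: "archer")
let archerFrames = (1...12).map { index -> SKTexture in
    let name = String(format: "archer%03d", index)   // archer001 ... archer012
    return archerAtlas.textureNamed(name)
}
let animateAction = SKAction.animate(with: archerFrames, timePerFrame: 0.05)

// The action could then be run on the archer node, for example:
// archerNode.run(animateAction)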

Begin by selecting the ArcheryScene.sks file so that it loads into the editor. Once loaded, the first step is to add an AnimateWithTextures action within the timeline of the live editor panel. Next, within the Library panel, scroll down the list of objects until the AnimateWithTextures Action object appears. Once located, drag and drop an instance of the object onto the live editor timeline for the archerNode as indicated in Figure 93-11:

Figure 93-11

With the animation action added to the timeline, the action needs to be configured with the texture sequence to be animated. With the newly added action selected in the timeline, display the Media Library panel so that the archer texture images are listed. Next, use the Command-A keyboard sequence to select all of the images in the library and then drag and drop those images onto the Textures box in the Animate with Textures attributes panel, as shown in Figure 93-12:

Figure 93-12

Test the animation by clicking on the Animate button. The archer sprite should animate through the sequence of texture images to load and shoot the arrow.

Compile and run the app and tap on the screen to enter the archery scene. On appearing, the animation sequence will execute once. The animation sequence should only run when the user taps the screen to launch an arrow. Having this action within the timeline, therefore, does not provide the required behavior for the game. Instead, the animation action needs to be converted to a named action reference, placed in an action file, and triggered from within the touchesBegan method of the archer scene class.

Creating the Named Action Reference

With the ArcheryScene.sks file loaded into the level editor, right-click on the Animate with Textures action in the timeline and select the Convert to Reference option from the popup menu:

Figure 93-13

In the Create Action panel, name the action animateArcher and change the File menu to Create New File. Next, click on the Create button and, in the Save As panel, navigate to the SpriteKitDemo subfolder of the main project folder and enter ArcherActions into the Save As: field before clicking on Create.

Since the animation action is no longer required in the timeline of the archery scene, select the ArcheryScene.sks file, right-click on the Animate with Textures action in the timeline, and select Delete from the menu.

Triggering the Named Action from the Code

With the previous steps completed, the project now has a named action (named animateArcher) which can be triggered each time the screen is tapped by adding some code to the touchesBegan method of the ArcheryScene.swift file. With this file selected in the Project Navigator panel, implement this method as follows:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    
    if let archerNode = self.childNode(withName: "archerNode"),
        let animate = SKAction(named: "animateArcher") {
        archerNode.run(animate)
    }
}

Run the app and touch the screen within the Archery Scene. Each time a touch is detected, the archer sprite will run through the animation sequence of shooting an arrow.

Creating the Arrow Sprite Node

At this point in the tutorial, the archer sprite node goes through an animation sequence of loading and shooting an arrow, but no actual arrow is being launched across the scene. To implement this, a new sprite node must be added to the ArcheryScene. This node will be textured with an arrow image and placed to the right of the archer sprite at the end of the animation sequence. Then, a physics body will be associated with the arrow, and an impulse force will be applied to it to propel it across the scene as though shot by the archer’s bow. This task will be performed entirely in code to demonstrate the alternative to using the action and live editors.

Begin by locating the ArrowTexture.png file in the sprite_images folder of the sample code archive and drag and drop it onto the left-hand panel of the Assets catalog screen beneath the archer texture atlas entry. Next, add a new method named createArrowNode within the ArcheryScene.swift file so that it reads as follows:

func createArrowNode() -> SKSpriteNode {
    
    let arrow = SKSpriteNode(imageNamed: "ArrowTexture.png")
    
    if let archerNode = self.childNode(withName: "archerNode") {
        let archerPosition = archerNode.position
        let archerWidth = archerNode.frame.size.width
    
        arrow.position = CGPoint(x: archerPosition.x + archerWidth,
                             y: archerPosition.y)
    
        arrow.name = "arrowNode"
        arrow.physicsBody = SKPhysicsBody(rectangleOf:
                            arrow.frame.size)
        arrow.physicsBody?.usesPreciseCollisionDetection = true
    }
    return arrow
}

The code creates a new SKSpriteNode object, positions it to the right of the archer sprite node, and assigns the name arrowNode. A physics body is then assigned to the node, using the node’s size as the boundary of the body and enabling precision collision detection. Finally, the node is returned.

Shooting the Arrow

A physical force needs to be applied to propel the arrow across the scene. The arrow sprite’s creation and propulsion must be timed to occur at the end of the archer animation sequence. This timing can be achieved via some minor modifications to the touchesBegan method:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    
    if let archerNode = self.childNode(withName: "archerNode"),
        let animate = SKAction(named: "animateArcher") {
        let shootArrow = SKAction.run({
            let arrowNode = self.createArrowNode()
            self.addChild(arrowNode)
            arrowNode.physicsBody?.applyImpulse(CGVector(dx: 60, dy: 0))
        })
        
        let sequence = SKAction.sequence([animate, shootArrow])

        archerNode.run(sequence)
    }
}

A new SKAction object is created, specifying a block of code to be executed. This run block calls the createArrowNode method, adds the new node to the scene, and then applies an impulse force of 60.0 on the X-axis of the scene. An SKAction sequence is then created comprising the previously created animation action and the new run block action. This sequence is then run on the archer node.

When executed with these changes, touching the screen should now cause an arrow to be launched after the archer animation completes. Then, as the arrow flies across the scene, it gradually falls toward the bottom of the display. This behavior is due to gravity’s effect on the physics body assigned to the node.

Adding the Ball Sprite Node

The game’s objective is to score points by hitting balls with arrows. So, the next logical step is adding the ball sprite node to the scene. Begin by locating the BallTexture.png file in the sprite_images folder of the sample code package and drag and drop it onto the Assets.xcassets catalog.

Next, add the corresponding createBallNode method to the ArcheryScene.swift file as outlined in the following code fragment:

func createBallNode() {
    let ball = SKSpriteNode(imageNamed: "BallTexture.png")

    let screenWidth = self.size.width

    ball.position = CGPoint(x: CGFloat.random(
                     in: -screenWidth/2 ..< screenWidth/2-100), 
                         y: self.size.height-50)

    ball.name = "ballNode"
    ball.physicsBody = SKPhysicsBody(circleOfRadius:
                        (ball.size.width/2))

    ball.physicsBody?.usesPreciseCollisionDetection = true
    self.addChild(ball)
}

This code creates a sprite node using the ball texture and sets its initial position at the top of the scene and at a random position along the X-axis. Since position 0 on the X-axis corresponds to the horizontal center of the screen (as opposed to the far left side), the random X values are calculated relative to half the screen width so that the balls can fall from most of the screen’s width.

The node is assigned a name and a circular physics body with a radius of half the width of the ball texture image. Finally, precision collision detection is enabled, and the ball node is added to the scene.

Next, modify the initArcheryScene method to create an action to release a total of 20 balls at one-second intervals:

func initArcheryScene() {

    let releaseBalls = SKAction.sequence([
        SKAction.run({ self.createBallNode() }),
        SKAction.wait(forDuration: 1)
    ])

    self.run(SKAction.repeat(releaseBalls,
                             count: ballCount))
}

Run the app and verify that the balls now fall from the top of the scene. Then, attempt to hit the balls as they fall by tapping the background to launch arrows. Note, however, that when an arrow hits a ball, it simply bounces off:

Figure 93-14

The goal for the completed game is to have the balls burst with a sound effect when hit by the arrow and for a score to be presented at the end of the game. The steps to implement this behavior will be covered in the next chapters.

The balls fall from the top of the screen because they have been assigned a physics body and are subject to the simulated forces of gravity within the Sprite Kit physical world. To reduce the effects of gravity on both the arrows and balls, modify the didMove(to view:) method to change the current gravity setting on the scene’s physicsWorld object:

override func didMove(to view: SKView) {
    let archerNode = self.childNode(withName: "archerNode")
    archerNode?.position.y = 0
    archerNode?.position.x = -self.size.width/2 + 40
    self.physicsWorld.gravity = CGVector(dx: 0, dy: -1.0)
    self.initArcheryScene()
}

Summary

The goal of this chapter has been to create a simple game for iOS using the Sprite Kit framework. In creating this game, features such as sprite nodes, actions, textures, sprite animations, and physical forces have been employed, demonstrating both the Xcode Sprite Kit editors and Swift code.

In the next chapter, this game example will be further extended to demonstrate the detection of collisions.

An Introduction to iOS 16 Sprite Kit Programming

If you have ever had an idea for a game but didn’t create it because you lacked the skills or time to write complex game code and logic, look no further than Sprite Kit. Introduced as part of the iOS 7 SDK, Sprite Kit allows 2D games to be developed easily.

Sprite Kit provides almost everything needed to create 2D games for iOS, watchOS, tvOS, and macOS with minimum coding. Sprite Kit’s features include animation, physics simulation, collision detection, and special effects. These features can be harnessed within a game with just a few method calls.

In this and the next three chapters, the topic of games development with Sprite Kit will be covered to bring the reader up to a level of competence to begin creating games while also providing a knowledge base on which to develop further Sprite Kit development skills.

What is Sprite Kit?

Sprite Kit is a programming framework that makes it easy for developers to implement 2D-based games that run on iOS, macOS, tvOS, and watchOS. It provides a range of classes that support the rendering and animation of graphical objects (otherwise known as sprites) that can be configured to behave in specific programmer-defined ways within a game. Through actions, various activities can be run on sprites, such as animating a character so that it appears to be walking, making a sprite follow a specific path within a game scene, or changing the color and texture of a sprite in real-time.

Sprite Kit also includes a physics engine allowing physics-related behavior to be imposed on sprites. For example, a sprite can, amongst other things, be made to move by subjecting it to a pushing force, configured to behave as though affected by gravity, or to bounce back from another sprite as the result of a collision.

In addition, the Sprite Kit particle emitter class provides a useful mechanism for creating special effects within a game, such as smoke, rain, fire, and explosions. A range of templates for existing special effects is provided with Sprite Kit and an editor built into Xcode for creating custom particle emitter-based special effects.

The Key Components of a Sprite Kit Game

A Sprite Kit game will typically consist of several different elements.

Sprite Kit View

Every Sprite Kit game will have at least one SKView instance. An SKView instance sits at the top of the component hierarchy of a game and is responsible for displaying the game content to the user. It is a subclass of the UIView class and, as such, has many of the traits of that class, including an associated view controller.
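
As an illustration, a view controller might present a scene on an SKView in a manner similar to the following sketch. This is a simplified outline only, assuming a scene file named GameScene.sks and that the view controller’s root view is an SKView, as configured by the Xcode Game template:

import UIKit
import SpriteKit

class ExampleViewController: UIViewController {

    override func viewDidLoad() {
        super.viewDidLoad()

        // The root view is assumed to be an SKView instance.
        if let skView = self.view as? SKView,
           let scene = SKScene(fileNamed: "GameScene") {
            // Scale the scene to fill the view, then display it.
            scene.scaleMode = .aspectFill
            skView.presentScene(scene)
        }
    }
}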

Scenes

A game will also contain one or more scenes. One scene might, for example, display a menu when the game starts, while additional scenes may represent multiple levels within the game. Scenes are represented in a game by the SKScene class, a subclass of the SKNode class.

Nodes

Each scene within a Sprite Kit game will have several Sprite Kit node children. These nodes fall into several different categories, each of which has a dedicated Sprite Kit node class associated with it. These node classes are all subclasses of the SKNode class and can be summarized as follows:

  • SKSpriteNode – Draws a sprite with a texture. These textures will typically be used to create image-based characters or objects in a game, such as a spaceship, animal, or monster.
  • SKLabelNode – Used to display text within a game, such as menu options, the prevailing score, or a “game over” message.
  • SKShapeNode – Allows nodes to be created containing shapes defined using Core Graphics paths. If a sprite is required to display a circle, for example, the SKShapeNode class could be used to draw the circle as an alternative to texturing an SKSpriteNode with an image of a circle.
  • SKEmitterNode – The node responsible for managing and displaying particle emitter-based special effects.
  • SKVideoNode – Allows video playback to be performed within a game node.
  • SKEffectNode – Allows Core Image filter effects to be applied to child nodes. A sepia filter effect, for example, could be applied to all child nodes of an SKEffectNode.
  • SKCropNode – Allows the pixels in a node to be cropped subject to a specified mask.
  • SKLightNode – The lighting node is provided to add light sources to a SpriteKit scene, including casting shadows when the light falls on other nodes in the same scene.
  • SK3DNode – The SK3DNode allows 3D assets created using the Scene Kit Framework to be embedded into 2D Sprite Kit games.
  • SKFieldNode – Applies physics effects to other nodes within a specified area of a scene.
  • SKAudioNode – Allows an audio source using 3D spatial audio effects to be included in a Sprite Kit scene.
  • SKCameraNode – Provides the ability to control the position from which the scene is viewed. The camera node may also be adjusted dynamically to create panning, rotation, and scaling effects.
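
As a simple illustration of working with nodes in code, the following sketch adds a sprite node and a label node to a scene. The texture name, font, and positions are arbitrary example values rather than assets from this book’s projects:

import SpriteKit

class ExampleScene: SKScene {

    override func didMove(to view: SKView) {
        // A textured sprite placed at the center of the scene.
        let spaceship = SKSpriteNode(imageNamed: "SpaceshipTexture")
        spaceship.position = CGPoint(x: 0, y: 0)
        addChild(spaceship)

        // A label displaying the current score above the sprite.
        let scoreLabel = SKLabelNode(fontNamed: "MarkerFelt-Wide")
        scoreLabel.text = "Score: 0"
        scoreLabel.position = CGPoint(x: 0, y: 200)
        addChild(scoreLabel)
    }
}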

Physics Bodies

Each node within a scene can have associated with it a physics body. Physics bodies are represented by the SKPhysicsBody class. Assignment of a physics body to a node brings a wide range of possibilities in terms of the behavior associated with a node. When a node is assigned a physics body, it will, by default, behave as though subject to the prevailing forces of gravity within the scene. In addition, the node can be configured to behave as though having a physical boundary. This boundary can be defined as a circle, a rectangle, or a polygon of any shape.

Once a node has a boundary, collisions between other nodes can be detected, and the physics engine is used to apply real-world physics to the node, such as causing it to bounce when hitting other nodes. The use of contact bit masks can be employed to specify the types of nodes for which contact notification is required.

The physics body also allows forces to be applied to nodes, such as propelling a node in a particular direction across a scene using either a constant or one-time impulse force. Physics bodies can also be joined together using various joint types (sliding, fixed, hinged, and spring-based attachments).

The properties of a physics body (and, therefore, the associated node) may also be changed. Mass, density, velocity, and friction are just a few of the properties of a physics body available for modification by the game developer.
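
The following sketch illustrates these concepts in code. The texture name, bit mask values, property settings, and impulse vector are all arbitrary example values:

import SpriteKit

// A sketch only: assign a circular physics body to a sprite node and configure it.
let rock = SKSpriteNode(imageNamed: "RockTexture")
rock.physicsBody = SKPhysicsBody(circleOfRadius: rock.size.width / 2)

// Example category and contact bit masks used for collision and contact filtering.
rock.physicsBody?.categoryBitMask = 0x1 << 0
rock.physicsBody?.contactTestBitMask = 0x1 << 1

// Adjust some of the physical properties of the body.
rock.physicsBody?.mass = 0.5
rock.physicsBody?.friction = 0.2

// Apply a one-time impulse to propel the node to the right.
rock.physicsBody?.applyImpulse(CGVector(dx: 10, dy: 0))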

Physics World

Each scene in a game has its own physics world object in the form of an instance of the SKPhysicsWorld class. A reference to this object, which is created automatically when the scene is initialized, may be obtained by accessing the physicsWorld property of the scene. The physics world object is responsible for managing and imposing the rules of physics on any nodes in the scene with which a physics body has been associated. Properties are available on the physics world instance to change the default gravity settings for the scene and also to adjust the speed at which the physics simulation runs.
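
For example, the gravity and simulation speed might be adjusted from within an SKScene subclass as shown in the following sketch. The values used are arbitrary examples:

import SpriteKit

class ExamplePhysicsScene: SKScene {

    override func didMove(to view: SKView) {
        // Reduce the default downward gravity (the default vector is 0.0, -9.8).
        physicsWorld.gravity = CGVector(dx: 0, dy: -4.9)

        // Run the physics simulation at double speed.
        physicsWorld.speed = 2.0
    }
}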

Actions

An action is an activity performed by a node in a scene. Actions are the responsibility of SKAction class instances which are created and configured with the action to be performed. That action is then run on one or more nodes. An action might, for example, be configured to perform a rotation of 90 degrees. That action would then be run on a node to make it rotate within the scene. The SKAction class includes various action types, including fade in, fade out, rotation, movement, and scaling. Perhaps the most interesting action involves animating a sprite node through a series of texture frames.

Actions can be categorized as sequence, group, or repeating actions. An action sequence specifies a series of actions to be performed consecutively, while group actions specify a set of actions to be performed in parallel. Repeating actions are configured to restart after completion. An action may be configured to repeat several times or indefinitely.
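
The following sketch shows some of these action types being combined. The angle, durations, and the commented-out node are arbitrary example values:

import SpriteKit

// A sketch only: rotate a node by 90 degrees, fade it out and back in,
// then repeat the whole sequence indefinitely.
let rotate = SKAction.rotate(byAngle: .pi / 2, duration: 1.0)
let fadeOut = SKAction.fadeOut(withDuration: 1.0)
let fadeIn = SKAction.fadeIn(withDuration: 1.0)

// A sequence performs its actions one after another; a group would run them in parallel.
let sequence = SKAction.sequence([rotate, fadeOut, fadeIn])
let repeated = SKAction.repeatForever(sequence)

// The action would then be run on a node, for example:
// spriteNode.run(repeated)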

Transitions

Transitions occur when a game changes from one scene to another. While it is possible to switch immediately from one scene to another, a more visually pleasing result might be achieved by animating the transition in some way. This can be implemented using the SKTransition class, which provides several different pre-defined transition animations, such as sliding the new scene down over the top of the old scene or presenting the effect of doors opening to reveal the new scene.
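
In code, a transition takes the form of an SKTransition instance passed to the presentScene method of the SKView. The following sketch assumes a hypothetical scene file named MenuScene.sks:

import SpriteKit

func presentMenuScene(on view: SKView) {
    if let menuScene = SKScene(fileNamed: "MenuScene") {
        // Animate the change using a doors-opening effect lasting one second.
        let transition = SKTransition.doorsOpeningVertical(withDuration: 1.0)
        view.presentScene(menuScene, transition: transition)
    }
}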

Texture Atlas

A large part of developing games involves handling images. Many of these images serve as textures for sprites. Although adding images to a project individually is possible, Sprite Kit also allows images to be grouped into a texture atlas. Not only does this make it easier to manage the images, but it also brings efficiencies in terms of image storage and handling. For example, the texture images for a particular sprite animation sequence would typically be stored in a single texture atlas. In contrast, another atlas might store the images for the background of a particular scene.
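
Textures within an atlas are accessed by name using the SKTextureAtlas class, as in the following sketch. The atlas and image names are hypothetical example values:

import SpriteKit

// A sketch only: load a texture from a hypothetical background.atlas folder
// containing an image named forest.png.
let backgroundAtlas = SKTextureAtlas(named: "background")
let forestTexture = backgroundAtlas.textureNamed("forest")

let backgroundNode = SKSpriteNode(texture: forestTexture)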

Constraints

Constraints allow restrictions to be imposed on nodes within a scene in terms of distance and orientation in relation to a point or another node. A constraint can, for example, be applied to a node such that its movement is restricted to within a certain distance of another node. Similarly, a node can be configured so that it is oriented to point toward either another node or a specified point within the scene. Constraints are represented by instances of the SKConstraint class and are grouped into an array and assigned to the constraints property of the node to which they are to be applied.
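
The following sketch shows a distance constraint and an orientation constraint being applied to a node. The node textures and range values are arbitrary example values:

import SpriteKit

// A sketch only: keep followerNode within 200 points of targetNode and
// keep it oriented to point toward that node.
let targetNode = SKSpriteNode(imageNamed: "TargetTexture")
let followerNode = SKSpriteNode(imageNamed: "FollowerTexture")

let stayClose = SKConstraint.distance(SKRange(upperLimit: 200), to: targetNode)
let faceTarget = SKConstraint.orient(to: targetNode,
                                     offset: SKRange(constantValue: 0))

followerNode.constraints = [stayClose, faceTarget]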

An Example Sprite Kit Game Hierarchy

To aid in visualizing how the various Sprite Kit components fit together, Figure 92-1 outlines the hierarchy for a simple game:

Figure 92-1

In this hypothetical game, a single SKView instance has two SKScene children, each with its own SKPhysicsWorld object. Each scene, in turn, has two node children. In the case of both scenes, the SKSpriteNode instances have been assigned SKPhysicsBody instances.

The Sprite Kit Game Rendering Loop

When working with Sprite Kit, it helps to understand how the animation and physics simulation process works. This process can best be described by looking at the Sprite Kit frame rendering loop.

Sprite Kit performs the work of rendering a game using a game rendering loop. Within this loop, Sprite Kit performs various tasks to render the visual and behavioral elements of the currently active scene, with an iteration of the loop performed for each successive frame displayed to the user.

Figure 92-2 provides a visual representation of the frame rendering sequence performed in the loop:

Figure 92-2

When a scene is displayed within a game, Sprite Kit enters the rendering loop and repeatedly performs the same sequence of steps as shown above. At several points in this sequence, the loop will make calls to your game, allowing the game logic to respond when necessary.

Before performing any other tasks, the loop begins by calling the update method of the corresponding SKScene instance. Within this method, the game should perform any tasks before the frame is updated, such as adding additional sprites or updating the current score.

The loop then evaluates and implements any pending actions on the scene, after which the game can perform more tasks via a call to the didEvaluateActions method.

Next, physics simulations are performed on the scene, followed by a call to the scene’s didSimulatePhysics method, where the game logic may react where necessary to any changes resulting from the physics simulation.

The scene then applies any constraints configured on the nodes in the scene. Once this task has been completed, a call is made to the scene’s didApplyConstraints method if it has been implemented. Finally, the SKView instance renders the new scene frame before the loop sequence repeats.
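
Within an SKScene subclass, these callback points correspond to methods that may be overridden as needed, as outlined in the following sketch:

import SpriteKit

class ExampleGameScene: SKScene {

    override func update(_ currentTime: TimeInterval) {
        // Called first in each frame, before pending actions are evaluated.
    }

    override func didEvaluateActions() {
        // Called after all pending actions for the frame have been processed.
    }

    override func didSimulatePhysics() {
        // Called after the physics simulation step for the frame.
    }

    override func didApplyConstraints() {
        // Called after constraints have been applied, just before the frame is rendered.
    }
}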

The Sprite Kit Level Editor

Integrated into Xcode, the Sprite Kit Level Editor allows scenes to be designed by dragging and dropping nodes onto a scene canvas and setting properties on those nodes using the SKNode Inspector. Though code writing is still required for anything but the most basic scene requirements, the Level Editor provides a useful alternative to writing code for some of the less complex aspects of SpriteKit game development. The editor environment also includes both live and action editors, allowing for designing and testing animation and action sequences within a Sprite Kit game.

Summary

Sprite Kit provides a platform for creating 2D games on iOS, tvOS, watchOS, and macOS. Games comprise an SKView instance with an SKScene object for each game scene. Scenes contain nodes representing the game’s characters, objects, and items. Various node types are available, all of which are subclassed from the SKNode class. In addition, each node can have associated with it a physics body in the form of an SKPhysicsBody instance. A node with a physics body will be subject to physical forces such as gravity, and when given a physical boundary, collisions with other nodes may also be detected. Finally, actions are configured using the SKAction class, instances of which are then run by the nodes on which the action is to be performed.

The orientation and movement of a node can be restricted by implementing constraints using the SKConstraint class.

The rendering of a Sprite Kit game takes place within the game loop, with one loop performed for each game frame. At various points in this loop, the app can perform tasks to implement and manage the underlying game logic.

Having provided a high-level overview in this chapter, the next three chapters will take a more practical approach to exploring the capabilities of Sprite Kit by creating a simple game.

An iOS 16 Real-Time Speech Recognition Tutorial

The previous chapter, entitled An iOS 16 Speech Recognition Tutorial, introduced the Speech framework and the speech recognition capabilities available to app developers since the introduction of the iOS 10 SDK. The chapter also provided a tutorial demonstrating using the Speech framework to transcribe a pre-recorded audio file into text.

This chapter will build on this knowledge to create an example project that uses the Speech framework’s speech recognition capabilities to transcribe speech in near real-time.

Creating the Project

Begin by launching Xcode and creating a new single view-based app named LiveSpeech using the Swift programming language.

Designing the User Interface

Select the Main.storyboard file, add two Buttons and a Text View component to the scene, and configure and position these views so that the layout appears as illustrated in Figure 91-1 below:

Figure 91-1

Display the Resolve Auto Layout Issues menu, select the Reset to Suggested Constraints option listed under All Views in View Controller, select the Text View object, display the Attributes Inspector panel, and remove the sample Latin text.

Display the Assistant Editor panel and establish outlet connections for the Buttons named transcribeButton and stopButton, respectively. Next, repeat this process to connect an outlet for the Text View named myTextView. Then, with the Assistant Editor panel still visible, establish action connections from the Buttons to methods named startTranscribing and stopTranscribing.

Adding the Speech Recognition Permission

Select the LiveSpeech entry at the top of the Project navigator panel and select the Info tab in the main panel. Next, click on the + button contained within the last line of properties in the Custom iOS Target Properties section. Then, select the Privacy – Speech Recognition Usage Description item from the resulting menu. Once the key has been added, double-click in the corresponding value column and enter the following text:

Speech recognition services are used by this app to convert speech to text.

Repeat this step to add a Privacy – Microphone Usage Description entry.

Requesting Speech Recognition Authorization

The code to request speech recognition authorization is the same as that for the previous chapter. For this example, the code to perform this task will, once again, be added as a method named authorizeSR within the ViewController.swift file as follows, remembering to import the Speech framework:

.
.
import Speech
.
.
func authorizeSR() {
    SFSpeechRecognizer.requestAuthorization { authStatus in

        OperationQueue.main.addOperation {
            switch authStatus {
            case .authorized:
                self.transcribeButton.isEnabled = true

            case .denied:
                self.transcribeButton.isEnabled = false
                self.transcribeButton.setTitle("Speech recognition access denied by user", for: .disabled)

            case .restricted:
                self.transcribeButton.isEnabled = false
                self.transcribeButton.setTitle(
                  "Speech recognition restricted on device", for: .disabled)

            case .notDetermined:
                self.transcribeButton.isEnabled = false
                self.transcribeButton.setTitle(
                  "Speech recognition not authorized", for: .disabled)
            @unknown default:
                print("Unknown state")
            }
        }
    }
}

Remaining in the ViewController.swift file, locate and modify the viewDidLoad method to call the authorizeSR method:

override func viewDidLoad() {
    super.viewDidLoad()
    authorizeSR()
}

Declaring and Initializing the Speech and Audio Objects

To transcribe speech in real-time, the app will require instances of the SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest, and SFSpeechRecognitionTask classes. In addition to these speech recognition objects, the code will also need an AVAudioEngine instance to stream the audio into an audio buffer for transcription. Edit the ViewController.swift file and declare constants and variables to store these instances as follows:

import UIKit
import Speech

class ViewController: UIViewController {

    @IBOutlet weak var transcribeButton: UIButton!
    @IBOutlet weak var stopButton: UIButton!
    @IBOutlet weak var myTextView: UITextView!

    private let speechRecognizer = SFSpeechRecognizer(locale: 
			Locale(identifier: "en-US"))!

    private var speechRecognitionRequest: 
		SFSpeechAudioBufferRecognitionRequest?
    private var speechRecognitionTask: SFSpeechRecognitionTask?
    private let audioEngine = AVAudioEngine()
.
.

Starting the Transcription

The first task in initiating speech recognition is to add some code to the startTranscribing action method. Since several method calls that will be made to perform speech recognition have the potential to throw exceptions, a second method declared with the throws keyword needs to be called by the action method to perform the actual work (adding the throws keyword to the startTranscribing method will cause a crash at runtime because action method signatures are not recognized as throwing exceptions). Therefore, within the ViewController.swift file, modify the startTranscribing action method and add a new method named startSession:

.
.
.
@IBAction func startTranscribing(_ sender: Any) {
    transcribeButton.isEnabled = false
    stopButton.isEnabled = true
    
    do {
        try startSession()
    } catch {
        // Handle Error
    }
}

func startSession() throws {

    if let recognitionTask = speechRecognitionTask {
        recognitionTask.cancel()
        self.speechRecognitionTask = nil
    }

    let audioSession = AVAudioSession.sharedInstance()
    try audioSession.setCategory(AVAudioSession.Category.record, 
						mode: .default)

    speechRecognitionRequest = SFSpeechAudioBufferRecognitionRequest()

    guard let recognitionRequest = speechRecognitionRequest else { 
      fatalError(
	"SFSpeechAudioBufferRecognitionRequest object creation failed") }

    let inputNode = audioEngine.inputNode

    recognitionRequest.shouldReportPartialResults = true

    speechRecognitionTask = speechRecognizer.recognitionTask(
		with: recognitionRequest) { result, error in

        var finished = false

        if let result = result {
            self.myTextView.text = 
			result.bestTranscription.formattedString
            finished = result.isFinal
        }

        if error != nil || finished {
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)

            self.speechRecognitionRequest = nil
            self.speechRecognitionTask = nil

            self.transcribeButton.isEnabled = true
        }
    }

    let recordingFormat = inputNode.outputFormat(forBus: 0)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { 
     (buffer: AVAudioPCMBuffer, when: AVAudioTime) in

        self.speechRecognitionRequest?.append(buffer)
    }

    audioEngine.prepare()
    try audioEngine.start()
}
.
.
.

The startSession method performs various tasks, each of which needs to be broken down and explained for this to begin to make sense.

The first tasks to be performed within the startSession method are to check whether a previous recognition task is running and, if so, cancel it. The method then configures an audio recording session and assigns an SFSpeechAudioBufferRecognitionRequest object to the speechRecognitionRequest variable declared previously. A test is then performed to ensure that the SFSpeechAudioBufferRecognitionRequest object was successfully created. If the creation fails, a fatal error is triggered:

if let recognitionTask = speechRecognitionTask {
    recognitionTask.cancel()
    self.speechRecognitionTask = nil
}

let audioSession = AVAudioSession.sharedInstance()
try audioSession.setCategory(AVAudioSession.Category.record, mode: .default)

speechRecognitionRequest = SFSpeechAudioBufferRecognitionRequest()

guard let recognitionRequest = speechRecognitionRequest else { fatalError("SFSpeechAudioBufferRecognitionRequest object creation failed") }

Next, the code obtains a reference to the inputNode of the audio engine and assigns it to a constant. The recognitionRequest instance is then configured to return partial results, enabling transcription to occur continuously as speech audio arrives in the buffer. If this property is not set, the app will wait until the end of the audio session before starting the transcription process.

let inputNode = audioEngine.inputNode

recognitionRequest.shouldReportPartialResults = true

Next, the recognition task is initialized:

speechRecognitionTask = speechRecognizer.recognitionTask(
    with: recognitionRequest) { result, error in

    var finished = false

    if let result = result {
        self.myTextView.text = result.bestTranscription.formattedString
        finished = result.isFinal
    }

    if error != nil || finished {
        self.audioEngine.stop()
        inputNode.removeTap(onBus: 0)

        self.speechRecognitionRequest = nil
        self.speechRecognitionTask = nil

        self.transcribeButton.isEnabled = true
    }
}

The above code creates the recognition task initialized with the recognition request object. A closure is then specified as the completion handler, which will be called repeatedly as each block of transcribed text is completed. Each time the handler is called, it is passed a result object containing the latest version of the transcribed text and an error object. As long as the isFinal property of the result object is false (indicating that live audio is still streaming into the buffer) and no errors occur, the text is displayed on the Text View. Otherwise, the audio engine is stopped, the tap is removed from the audio node, and the recognition request and recognition task objects are set to nil. The transcribe button is also enabled in preparation for the next session.

Having configured the recognition task, all that remains in this phase of the process is to install a tap on the input node of the audio engine, then start the engine running:

let recordingFormat = inputNode.outputFormat(forBus: 0)
inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer: AVAudioPCMBuffer, when: AVAudioTime) in

    self.speechRecognitionRequest?.append(buffer)
}

audioEngine.prepare()
try audioEngine.start()

Note that the installTap method of the inputNode object also takes a closure, which is called each time a new block of audio arrives from the microphone. Each time it is called, the code for this handler appends the latest audio buffer to the speechRecognitionRequest object, where it will be transcribed and passed to the result handler for the speech recognition task to be displayed on the Text View.

Implementing the stopTranscribing Method

All that now remains before the app can be tested is to implement the stopTranscribing method. Within the ViewController.swift file, locate and modify this method to stop the audio engine and configure the status of the buttons ready for the next session:

@IBAction func stopTranscribing(_ sender: Any) {
    if audioEngine.isRunning {
        audioEngine.stop()
        speechRecognitionRequest?.endAudio()
        transcribeButton.isEnabled = true
        stopButton.isEnabled = false
    }
}

Testing the App

Compile and run the app on a physical iOS device, grant access to the microphone and permission to use speech recognition, and tap the Start Transcribing button. Next, speak into the device and watch as the audio is transcribed into the Text View. Finally, tap the Stop Transcribing button to end the session.

Summary

Live speech recognition is provided by the iOS Speech framework and allows speech to be transcribed into text as it is being recorded. This process taps into an AVAudioEngine input node to stream the audio into a buffer and uses appropriately configured SFSpeechRecognizer, SFSpeechAudioBufferRecognitionRequest, and SFSpeechRecognitionTask objects to perform the recognition. This chapter worked through creating an example app designed to demonstrate how these various components work together to implement near-real-time speech recognition.

An iOS 16 Speech Recognition Tutorial

When Apple introduced speech recognition for iOS devices, it was always assumed that this capability would one day be available to iOS app developers. That day finally arrived with the introduction of iOS 10.

The iOS SDK now includes the Speech framework, which can implement speech-to-text transcription within any iOS app. Speech recognition can be implemented with relative ease using the Speech framework and, as demonstrated in this chapter, may be used to transcribe both real-time and previously recorded audio.

An Overview of Speech Recognition in iOS

The speech recognition feature of iOS allows speech to be converted to text and supports a wide range of spoken languages. Most iOS users will no doubt be familiar with the microphone button that appears within the keyboard when entering text into an app. This dictation button is perhaps most commonly used to enter text into the Messages app.

Before the introduction of the Speech framework in iOS 10, app developers could still take advantage of the keyboard dictation button. Tapping a Text View object within any app displays the keyboard containing the button. Once tapped, any speech picked up by the microphone is transcribed into text and placed within the Text View. For basic requirements, this option is still available within iOS, though there are several advantages to performing a deeper integration using the Speech framework.

One of the key advantages of the Speech framework is the ability to trigger voice recognition without needing to display the keyboard and wait for the user to tap the dictation button. In addition, while the dictation button can only transcribe live speech, the Speech framework allows speech recognition to be performed on pre-recorded audio files.

Another advantage over the built-in dictation button is that the app can define the spoken language that is to be transcribed, whereas the dictation button is locked into the prevailing device-wide language setting.
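
For example, a recognizer can be created for a specific locale, as in the following sketch. The French locale identifier used here is purely an illustrative value:

import Speech

// A sketch only: create a recognizer locked to French rather than the
// device-wide language setting.
if let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "fr-FR")),
   recognizer.isAvailable {
    // The recognizer is ready to accept recognition requests for French speech.
    print("French speech recognition is available")
}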

Behind the scenes, the service uses the same speech recognition technology as Siri. However, it is also important to know that the audio is typically transferred from the local device to Apple’s remote servers, where the speech recognition process is performed. The service is, therefore, only likely to be available when the device on which the app is running has an active internet connection.

When working with speech recognition, it is important to note that the length of audio that can be transcribed in a single session is restricted to one minute at the time of writing. In addition, Apple also imposes undeclared limits on the total amount of time an app can make free use of the speech recognition service, the implication being that Apple will begin charging heavy users of the service at some point in the future.

Speech Recognition Authorization

As outlined in the previous chapter, an app must seek permission from the user before being authorized to record audio using the microphone. This is also the case when implementing speech recognition, though the app must also specifically request permission to perform speech recognition. This is particularly important given that the audio will be transmitted to Apple for processing. Therefore, in addition to an NSMicrophoneUsageDescription entry in the Info.plist file, the app must include the NSSpeechRecognitionUsageDescription entry if speech recognition is to be performed.

The app must also specifically request speech recognition authorization via a call to the requestAuthorization method of the SFSpeechRecognizer class. This results in a completion handler call which is, in turn, passed a status value indicating whether authorization has been granted. Note that this step also includes a test to verify that the device has an internet connection.

Transcribing Recorded Audio

Once the appropriate permissions and authorizations have been obtained, speech recognition can be performed on an existing audio file with just a few lines of code. All that is required is an instance of the SFSpeechRecognizer class together with a request object in the form of an SFSpeechURLRecognitionRequest instance initialized with the URL of the audio file. Next, a recognizer task is created using the request object, and a completion handler is called when the audio has been transcribed. For example, the following code fragment demonstrates these steps:

let recognizer = SFSpeechRecognizer()
let request = SFSpeechURLRecognitionRequest(url: fileUrl)

recognizer?.recognitionTask(with: request, resultHandler: {
    (result, error) in
    print(result?.bestTranscription.formattedString ?? "")
})

Transcribing Live Audio

Live audio speech recognition makes use of the AVAudioEngine class. The AVAudioEngine class manages audio nodes that tap into different input and output buses on the device. In the case of speech recognition, the engine’s input audio node is accessed and used to install a tap on the audio input bus. The audio input from the tap is then streamed to a buffer which is repeatedly appended to the speech recognizer object for conversion. The next chapter, entitled An iOS 16 Real-Time Speech Recognition Tutorial, will cover these steps in greater detail.

An Audio File Speech Recognition Tutorial

The remainder of this chapter will modify the Record app created in the previous chapter to provide the option to transcribe the speech recorded to the audio file. In the first instance, load Xcode, open the Record project, and select the Main.storyboard file so that it loads into the Interface Builder tool.

Modifying the User Interface

The modified Record app will require the addition of a Transcribe button and a Text View object into which the transcribed text will be placed as it is generated. Add these elements to the storyboard scene so that the layout matches that shown in Figure 90-1 below.

Select the Transcribe button view, display the Auto Layout Align menu, and apply a constraint to center the button in the horizontal center of the containing view. Next, display the Add New Constraints menu and establish a spacing to nearest neighbor constraint on the view’s top edge using the current value and with the Constrain to margins option disabled.

With the newly added Text View object selected, display the Attributes Inspector panel and delete the sample Latin text. Then, using the Add New Constraints menu, add spacing to nearest neighbor constraints on all four sides of the view with the Constrain to margins option enabled.

Figure 90-1

Display the Assistant Editor panel and establish outlet connections for the new Button and Text View named transcribeButton and textView, respectively.

Complete this tutorial section by establishing an action connection from the Transcribe button to a method named transcribeAudio.

Adding the Speech Recognition Permission

Select the Record entry at the top of the Project navigator panel and select the Info tab in the main panel. Next, click on the + button contained within the last line of properties in the Custom iOS Target Properties section. Then, select the Privacy – Speech Recognition Usage Description item from the resulting menu. Once the key has been added, double-click in the corresponding value column and enter the following text:

Speech recognition services are used by this app to convert speech to text.

Seeking Speech Recognition Authorization

In addition to adding the usage description key to the Info.plist file, the app must include code to seek authorization to perform speech recognition. This will also ensure that the device is suitably configured to perform the task and that the user has given permission for speech recognition to be performed. Before adding code to the project, the first step is to import the Speech framework within the ViewController.swift file:

import UIKit
import AVFoundation
import Speech

class ViewController: UIViewController, AVAudioPlayerDelegate, AVAudioRecorderDelegate {
.
.
.

For this example, the code to perform this task will be added as a method named authorizeSR within the ViewController.swift file as follows:

func authorizeSR() {
    SFSpeechRecognizer.requestAuthorization { authStatus in

        OperationQueue.main.addOperation {
            switch authStatus {
            case .authorized:
                self.transcribeButton.isEnabled = true

            case .denied:
                self.transcribeButton.isEnabled = false
                self.transcribeButton.setTitle("Speech recognition access denied by user", for: .disabled)

            case .restricted:
                self.transcribeButton.isEnabled = false
                self.transcribeButton.setTitle("Speech recognition restricted on device", for: .disabled)

            case .notDetermined:
                self.transcribeButton.isEnabled = false
                self.transcribeButton.setTitle("Speech recognition not authorized", for: .disabled)
            @unknown default:
                print("Unknown Status")
            }
        }
    }
}

The above code calls the requestAuthorization method of the SFSpeechRecognizer class with a closure specified as the completion handler. This handler is passed a status value which can be one of four values (authorized, denied, restricted, or not determined). A switch statement is then used to evaluate the status and enable the transcribe button or to display the reason for the failure on that button.

Note that the switch statement code is specifically performed on the main queue. This is because the completion handler can be called at any time and not necessarily within the main thread queue. Since the completion handler code in the statement changes the user interface, these changes must be made on the main queue to avoid unpredictable results.

With the authorizeSR method implemented, modify the end of the viewDidLoad method to call this method:

override func viewDidLoad() {
    super.viewDidLoad()
    audioInit()
    authorizeSR()
}

Performing the Transcription

All that remains before testing the app is to implement the code within the transcribeAudio action method. Locate the template method in the ViewController.swift file and modify it to read as follows:

@IBAction func transcribeAudio(_ sender: Any) {
    let recognizer = SFSpeechRecognizer()
    let request = SFSpeechURLRecognitionRequest(
                        url: (audioRecorder?.url)!)

    recognizer?.recognitionTask(with: request, resultHandler: {
        (result, error) in
        // The result handler may be called on a background queue, so
        // dispatch the user interface update to the main queue
        DispatchQueue.main.async {
            self.textView.text = result?.bestTranscription.formattedString
        }
    })
}

The code creates an SFSpeechRecognizer instance together with a request containing the URL of the recorded audio and then initiates a task to perform the recognition. Finally, the result handler displays the transcribed text within the Text View object, dispatching the update onto the main queue since it modifies the user interface.

Testing the App

Compile and run the app on a physical device, accept the request for speech recognition access, tap the Record button, and record some speech. Next, tap the Stop button, followed by Transcribe, and watch as the recorded speech is transcribed into text within the Text View object.

Summary

The Speech framework provides apps with access to Siri’s speech recognition technology. This access allows speech to be transcribed to text, either in real-time or by passing pre-recorded audio to the recognition system. This chapter has provided an overview of speech recognition within iOS and adapted the Record app created in the previous chapter to transcribe recorded speech to text. The next chapter, entitled An iOS 16 Real-Time Speech Recognition Tutorial, will provide a guide to performing speech recognition in real time.

Recording Audio on iOS 16 with AVAudioRecorder

In addition to audio playback, the iOS AV Foundation Framework provides the ability to record sound on iOS using the AVAudioRecorder class. This chapter will work step-by-step through a tutorial demonstrating the use of the AVAudioRecorder class to record audio.

An Overview of the AVAudioRecorder Tutorial

This chapter aims to create an iOS app to record and play audio. It will do so by creating an instance of the AVAudioRecorder class and configuring it with a file to contain the audio and a range of settings dictating the quality and format of the audio. Finally, playback of the recorded audio file will be performed using the AVAudioPlayer class, which was covered in detail in the chapter entitled Playing Audio on iOS 16 using AVAudioPlayer.

Audio recording and playback will be controlled by buttons in the user interface that are connected to action methods which, in turn, will make appropriate calls to the instance methods of the AVAudioRecorder and AVAudioPlayer objects, respectively.

The view controller of the example app will also implement the AVAudioRecorderDelegate and AVAudioPlayerDelegate protocols and several corresponding delegate methods to receive notification of events relating to playback and recording.

Creating the Recorder Project

Begin by launching Xcode and creating a new project using the iOS App template with the Swift and Storyboard options selected, entering Record as the product name.

Configuring the Microphone Usage Description

Access to the microphone from within an iOS app is considered a potential risk to the user’s privacy. Therefore, when an app attempts to access the microphone, the operating system will display a warning dialog to the user seeking authorization for the app to proceed. Included within the content of this dialog is a message from the app justifying the use of the microphone. This text message must be specified within the Info.plist file using the NSMicrophoneUsageDescription key. The absence of this key will result in the app crashing at runtime.

To add this setting:

  1. Select the Record entry at the top of the Project navigator panel and select the Info tab in the main panel.
  2. Click on the + button contained within the last line of properties in the Custom iOS Target Properties section.
  3. Select the Privacy – Microphone Usage Description item from the resulting menu.

Once the key has been added, double-click in the corresponding value column and enter the following text:

The audio recorded by this app is stored securely and is not shared.

Once the rest of the code has been added and the app is launched for the first time, a dialog will appear, including the usage message. If the user taps the OK button, microphone access will be granted to the app.
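
Although this tutorial relies on the automatic prompt, the authorization request can also be triggered explicitly in code if more control is needed over when the dialog appears. The following fragment is provided only as a brief sketch of this approach using the requestRecordPermission method of the AVAudioSession class:

import AVFoundation

// Explicitly request microphone access (the system only prompts the
// user the first time this is called)
AVAudioSession.sharedInstance().requestRecordPermission { granted in
    DispatchQueue.main.async {
        print(granted ? "Microphone access granted"
                      : "Microphone access denied")
    }
}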

Designing the User Interface

Select the Main.storyboard file and, once loaded, drag Button objects from the Library (View -> Utilities -> Show Library) and position them on the View window. Once placed in the view, modify the text on each button so that the user interface appears as illustrated in Figure 89-1:

Figure 89-1

With the scene view selected within the storyboard canvas, display the Auto Layout Resolve Auto Layout Issues menu and select the Reset to Suggested Constraints menu option listed in the All Views in View Controller section of the menu.

Select the “Record” button object in the view canvas, display the Assistant Editor panel and verify that the editor is displaying the contents of the ViewController.swift file. Next, right-click on the Record button object and drag to a position just below the class declaration line in the Assistant Editor. Release the line and establish an outlet connection named recordButton. Repeat these steps to establish outlet connections for the “Play” and “Stop” buttons named playButton and stopButton, respectively.

Continuing to use the Assistant Editor, establish Action connections from the three buttons to methods named recordAudio, playAudio, and stopAudio.

Close the Assistant Editor panel, select the ViewController.swift file and modify it to import the AVFoundation framework, declare adherence to some delegate protocols, and add properties to store references to AVAudioRecorder and AVAudioPlayer instances:

import UIKit
import AVFoundation

class ViewController: UIViewController, AVAudioPlayerDelegate, AVAudioRecorderDelegate {

    var audioPlayer: AVAudioPlayer?
    var audioRecorder: AVAudioRecorder?
.
.

Creating the AVAudioRecorder Instance

When the app is first launched, an instance of the AVAudioRecorder class needs to be created. This will be initialized with the URL of a file into which the recorded audio will be saved. Also passed as an argument to the initialization method is a Dictionary object indicating the settings for the recording, such as bit rate, sample rate, and audio quality. A full description of the settings available may be found in the appropriate Apple iOS reference materials.
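
For illustration purposes only, the following fragment shows a hypothetical settings dictionary that would configure AAC encoding at a 44.1kHz sample rate. It is not the dictionary used by this tutorial, which appears in the audioInit method below:

import AVFoundation

// Illustrative settings only - this dictionary is not used by the
// tutorial and simply demonstrates some of the available keys
let aacSettings: [String: Any] = [
    AVFormatIDKey: Int(kAudioFormatMPEG4AAC),
    AVSampleRateKey: 44100.0,
    AVNumberOfChannelsKey: 2,
    AVEncoderAudioQualityKey: AVAudioQuality.high.rawValue
]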

As is often the case, a good location to initialize the AVAudioRecorder instance is within a method to be called from the viewDidLoad method of the view controller located in the ViewController.swift file. Select the file in the project navigator and modify it so that it reads as follows:

.
.
override func viewDidLoad() {
    super.viewDidLoad()
    audioInit()
}

func audioInit() {
    playButton.isEnabled = false
    stopButton.isEnabled = false

    // Build a URL referencing a file named sound.caf in the app's
    // Documents directory
    let fileMgr = FileManager.default

    let dirPaths = fileMgr.urls(for: .documentDirectory,
                                in: .userDomainMask)

    let soundFileURL = dirPaths[0].appendingPathComponent("sound.caf")

    // Settings controlling the quality, bit rate, channel count and
    // sample rate of the recording
    let recordSettings =
        [AVEncoderAudioQualityKey: AVAudioQuality.min.rawValue,
         AVEncoderBitRateKey: 16,
         AVNumberOfChannelsKey: 2,
         AVSampleRateKey: 44100.0] as [String : Any]

    let audioSession = AVAudioSession.sharedInstance()

    do {
        try audioSession.setCategory(
            AVAudioSession.Category.playAndRecord, mode: .default)
    } catch let error as NSError {
        print("audioSession error: \(error.localizedDescription)")
    }

    do {
        try audioRecorder = AVAudioRecorder(url: soundFileURL,
                                            settings: recordSettings)
        audioRecorder?.prepareToRecord()
    } catch let error as NSError {
        print("audioRecorder error: \(error.localizedDescription)")
    }
}
.
.

Since no audio has yet been recorded, the above method begins by disabling the play and stop buttons. It then identifies the app’s Documents directory and constructs a URL to a file in that location named sound.caf. Next, a Dictionary object containing the recording quality settings is created, the audio session is configured, and an instance of the AVAudioRecorder class is initialized. Finally, assuming no errors are encountered, the audioRecorder instance is prepared to begin recording when requested to do so by the user.

Implementing the Action Methods

The next step is implementing the action methods connected to the three button objects. Select the ViewController.swift file and modify it as outlined in the following code excerpt:

@IBAction func recordAudio(_ sender: Any) {
    if audioRecorder?.isRecording == false {
        playButton.isEnabled = false
        stopButton.isEnabled = true
        audioRecorder?.record()
    }
}

@IBAction func stopAudio(_ sender: Any) {
    stopButton.isEnabled = false
    playButton.isEnabled = true
    recordButton.isEnabled = true

    if audioRecorder?.isRecording == true {
        audioRecorder?.stop()
    } else {
        audioPlayer?.stop()
    }
}

@IBAction func playAudio(_ sender: Any) {
    if audioRecorder?.isRecording == false {
        stopButton.isEnabled = true
        recordButton.isEnabled = false

        do {
            try audioPlayer = AVAudioPlayer(contentsOf:
                                    (audioRecorder?.url)!)
            audioPlayer!.delegate = self
            audioPlayer!.prepareToPlay()
            audioPlayer!.play()
        } catch let error as NSError {
            print("audioPlayer error: \(error.localizedDescription)")
        }
    }
}

Each of the above methods performs the steps necessary to enable and disable appropriate buttons in the user interface and to interact with the AVAudioRecorder and AVAudioPlayer object instances to record or playback audio.

Implementing the Delegate Methods

To receive notifications about the success or failure of recording and playback operations, it is necessary to implement some delegate methods. For this tutorial, we will need to implement the methods that are called when an error occurs and when playback finishes. Once again, edit the ViewController.swift file and add these methods as follows:

func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, 
    successfully flag: Bool) {
    recordButton.isEnabled = true
    stopButton.isEnabled = false
}

func audioPlayerDecodeErrorDidOccur(_ player: AVAudioPlayer, error: Error?) {
    print("Audio Play Decode Error")
}

func audioRecorderDidFinishRecording(_ recorder: AVAudioRecorder, successfully flag: Bool) {
}

func audioRecorderEncodeErrorDidOccur(_ recorder: AVAudioRecorder, error: Error?) {
    print("Audio Record Encode Error")
}

Testing the App

Configure Xcode to install the app on a device or simulator session and build and run the app by clicking on the run button in the main toolbar. Once the app has loaded, the operating system will seek permission for it to access the microphone. Allow access and touch the Record button to record some sound. Touch the Stop button when the recording is completed and use the Play button to play back the audio.

Summary

This chapter has provided an overview and example of using the AVAudioRecorder and AVAudioPlayer classes of the AVFoundation framework to record and playback audio from within an iOS app. The chapter also outlined the necessity of configuring the microphone usage privacy key-value pair within the Info.plist file to obtain microphone access permission from the user.

Playing Audio on iOS 16 using AVAudioPlayer

The iOS SDK provides several mechanisms for implementing audio playback from within an iOS app. The easiest technique from the app developer’s perspective is to use the AVAudioPlayer class, which is part of the AV Foundation Framework.

This chapter will provide an overview of audio playback using the AVAudioPlayer class. Once the basics have been covered, a tutorial is worked through step by step. The topic of recording audio from within an iOS app is covered in the next chapter entitled Recording Audio on iOS 16 with AVAudioRecorder.

Supported Audio Formats

The AV Foundation Framework supports the playback of various audio formats and codecs, including software and hardware-based decoding. Codecs and formats currently supported are as follows:

  • AAC (MPEG-4 Advanced Audio Coding)
  • ALAC (Apple Lossless)
  • AMR (Adaptive Multi-rate)
  • HE-AAC (MPEG-4 High-Efficiency AAC)
  • iLBC (internet Low Bit Rate Codec)
  • Linear PCM (uncompressed, linear pulse code modulation)
  • MP3 (MPEG-1 audio layer 3)
  • µ-law and a-law

If an audio file is to be included as part of the resource bundle for an app, it may be converted to a supported audio format before inclusion in the app project using the macOS afconvert command-line tool. For details on how to use this tool, run the following command in a Terminal window:

afconvert -h

Receiving Playback Notifications

An app receives notifications from an AVAudioPlayer instance by declaring itself as the object’s delegate and implementing some or all of the following AVAudioPlayerDelegate protocol methods:

  • audioPlayerDidFinishPlaying – Called when the audio playback finishes. An argument passed through to the method indicates whether the playback was completed successfully or failed due to an error.
  • audioPlayerDecodeErrorDidOccur – Called when the AVAudioPlayer object encounters a decoding error during audio playback. An error object containing information about the nature of the problem is passed through to this method as an argument.
  • audioPlayerBeginInterruption – Called when audio playback has been interrupted by a system event, such as an incoming phone call. Playback is automatically paused, and the current audio session is deactivated.
  • audioPlayerEndInterruption – Called after an interruption ends. The current audio session is automatically activated, and playback may be resumed by calling the play method of the corresponding AVAudioPlayer instance.

Controlling and Monitoring Playback

Once an AVAudioPlayer instance has been created, audio playback may be controlled and monitored programmatically via the methods and properties of that instance. For example, the self-explanatory play, pause, and stop methods may be used to control playback, while the volume property may be used to adjust the playback volume level. In addition, the isPlaying property may be accessed to identify whether or not the AVAudioPlayer object is currently playing audio.

It is also possible to schedule playback to begin at a later point using the play(atTime:) instance method. This method takes as an argument a TimeInterval value specifying the point in the audio output device’s timeline at which playback should start, typically calculated by adding the required delay in seconds to the player’s deviceCurrentTime property.

The length of the current audio playback may be obtained via the duration property while the current point in the playback is stored in the currentTime property.

Playback may also be programmed to loop back and repeatedly play a specified number of times using the numberOfLoops property.
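
The following fragment is a brief sketch illustrating these methods and properties. It assumes an AVAudioPlayer instance named player that has already been initialized with an audio file (the function name used here is purely illustrative):

import AVFoundation

// Illustrative only - 'player' is assumed to have been initialized
// elsewhere with an audio file
func demonstratePlaybackControl(using player: AVAudioPlayer) {
    player.volume = 0.5          // set the playback volume to 50%
    player.numberOfLoops = 2     // play the audio two additional times
    player.play()                // begin playback immediately

    if player.isPlaying {
        print("Played \(player.currentTime) of \(player.duration) seconds")
    }

    // Alternatively, schedule playback to begin five seconds from now,
    // relative to the audio output device timeline:
    // player.play(atTime: player.deviceCurrentTime + 5.0)

    player.stop()                // stop playback
}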

Creating the Audio Example App

The remainder of this chapter will work through creating a simple iOS app that plays an audio file. The app’s user interface will consist of play and stop buttons to control playback and a slider to adjust the playback volume level.

Begin by launching Xcode and creating a new project using the iOS App template with the Swift and Storyboard options selected, entering AudioDemo as the product name.

Adding an Audio File to the Project Resources

To experience audio playback, adding an audio file to the project resources will be necessary. For this purpose, any supported audio format file will be suitable. Having identified a suitable audio file, drag and drop it into the Project Navigator panel of the main Xcode window. For this tutorial, we will be using an MP3 file named Moderato.mp3 which can be found in the audiofiles folder of the sample code archive, downloadable from the following URL:

https://www.ebookfrenzy.com/web/ios16/

Locate and unzip the file in a Finder window and drag and drop it onto the Project Navigator panel.

Designing the User Interface

The app user interface will comprise two buttons labeled “Play” and “Stop” and a slider to adjust the playback volume. Next, select the Main.storyboard file, display the Library, drag and drop components from the Library onto the View window and modify properties so that the interface appears as illustrated in Figure 88-1:

Figure 88-1

With the scene view selected within the storyboard canvas, display the Auto Layout Resolve Auto Layout Issues menu and select the Reset to Suggested Constraints menu option listed in the All Views in View Controller section of the menu.

Select the slider object in the view canvas, display the Assistant Editor panel, and verify that the editor is displaying the contents of the ViewController.swift file. Right-click on the slider object and drag the resulting line to a position just below the class declaration line in the Assistant Editor. Release the line and, in the resulting connection dialog, establish an outlet connection named volumeControl.

Right-click on the “Play” button object and drag the line to the area immediately beneath the viewDidLoad method in the Assistant Editor panel. Release the line and, within the resulting connection dialog, establish an Action method on the Touch Up Inside event configured to call a method named playAudio. Repeat these steps to establish an action connection on the “Stop” button to a method named stopAudio.

Right-click on the slider object and drag the line to the area immediately beneath the newly created actions in the Assistant Editor panel. Release the line and, within the resulting connection dialog, establish an Action method on the Value Changed event configured to call a method named adjustVolume.

Close the Assistant Editor panel, select the ViewController.swift file in the project navigator panel, and add an import directive and delegate declaration, together with a property to store a reference to the AVAudioPlayer instance as follows:

import UIKit
import AVFoundation

class ViewController: UIViewController, AVAudioPlayerDelegate {

    @IBOutlet weak var volumeControl: UISlider!
    var audioPlayer: AVAudioPlayer?
.
.

Implementing the Action Methods

The next step in our iOS audio player tutorial is implementing the action methods for the two buttons and the slider. Remaining in the ViewController.swift file, locate and implement these methods as outlined in the following code fragment:

@IBAction func playAudio(_ sender: Any) {
    audioPlayer?.play()
}

@IBAction func stopAudio(_ sender: Any) {
    audioPlayer?.stop()
}

@IBAction func adjustVolume(_ sender: Any) {
    audioPlayer?.volume = volumeControl.value
}

Creating and Initializing the AVAudioPlayer Object

Now that we have an audio file to play and appropriate action methods written, the next step is to create an AVAudioPlayer instance and initialize it with a reference to the audio file. Since we only need to initialize the object once when the app launches, a good place to write this code is in the viewDidLoad method of the ViewController.swift file:

override func viewDidLoad() {
    super.viewDidLoad()
    
    if let bundlePath = Bundle.main.path(forResource: "Moderato",
                                              ofType: "mp3") {

        let url = URL.init(fileURLWithPath: bundlePath)
        
        do {
            try audioPlayer = AVAudioPlayer(contentsOf: url)
            audioPlayer?.delegate = self
            audioPlayer?.prepareToPlay()
        } catch let error as NSError {
            print("audioPlayer error \(error.localizedDescription)")
        }
    }
}

In the above code, we create a URL reference using the filename and type of the audio file added to the project resources. Remember that this will need to be modified to reflect the audio file used in your projects.

Next, an AVAudioPlayer instance is created using the URL of the audio file. Assuming no errors were detected, the current class is designated as the delegate for the audio player object. Finally, a call is made to the audioPlayer object’s prepareToPlay method. This performs initial buffering tasks, so there is no buffering delay when the user selects the play button.

Implementing the AVAudioPlayerDelegate Protocol Methods

As previously discussed, by declaring our view controller as the delegate for our AVAudioPlayer instance, our app will be able to receive notifications relating to the playback. Templates of these methods are as follows and may be placed in the ViewController.swift file:

func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully
                flag: Bool) {
}

func audioPlayerDecodeErrorDidOccur(_ player: AVAudioPlayer,
                error: Error?) {
}

func audioPlayerBeginInterruption(_ player: AVAudioPlayer) {
}

func audioPlayerEndInterruption(_ player: AVAudioPlayer) {
}

For this tutorial, it is not necessary to implement any code for these methods, and they are provided solely for completeness.

Building and Running the App

Once all the requisite changes have been made and saved, test the app in the iOS simulator or a physical device by clicking on the run button in the Xcode toolbar. Once the app appears, click on the Play button to begin playback. Next, adjust the volume using the slider and stop playback using the Stop button. If the playback is not audible on the device, ensure that the switch on the side of the device is not set to silent mode.

Summary

The AVAudioPlayer class, part of the AVFoundation framework, provides a simple way to play audio from within iOS apps. In addition to playing back audio, the class also provides several methods that can be used to control the playback in terms of starting, stopping, and changing the playback volume. By implementing the methods defined by the AVAudioPlayerDelegate protocol, the app may also be configured to receive notifications of events related to the playback, such as playback ending or an error occurring during the audio decoding process.