An iOS 16 CloudKit Sharing Tutorial

The chapter entitled An Introduction to iOS 16 CloudKit Sharing provided an overview of how CloudKit sharing works and the steps involved in integrating sharing into an iOS app. The intervening chapters have focused on creating a project that demonstrates the integration of CloudKit data storage into iOS apps. This chapter will extend the project started in the previous chapter to add CloudKit sharing to the CloudKitDemo app.

Preparing the Project for CloudKit Sharing

Launch Xcode and open the CloudKitDemo project created in the previous chapters. If you have not completed the tasks in those chapters and are only interested in learning about CloudKit sharing, a snapshot of the project is included as part of the sample code archive for this book on the following web page:

https://www.ebookfrenzy.com/web/ios16/

Once the project has been loaded into Xcode, the CKSharingSupported key needs to be added to the project Info.plist file with a Boolean value of true. Select the CloudKitDemo target at the top of the Project Navigator panel, followed by the Info tab in the main panel. Next, locate the bottom entry in the Custom iOS Target Properties list, and hover the mouse pointer over the item. When the plus button appears, click it to add a new entry to the list. Complete the new property with the key field set to CKSharingSupported, the type to Boolean, and the value to YES, as illustrated in Figure 53-1:

Figure 53-1
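If you prefer to edit the property list directly, the entry added above corresponds to the following XML within the Info.plist source (the surrounding top-level dict element is omitted here):

```xml
<key>CKSharingSupported</key>
<true/>
```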

Adding the Share Button

The user interface for the app now needs to be modified to add a share button to the toolbar. First, select the Main.storyboard file, locate the Bar Button Item in the Library panel, and drag and drop an instance onto the toolbar to position it to the right of the existing delete button.

Once added, select the button item, display the Attributes inspector, and select the square and arrow image:

Figure 53-2

Once the new button has been added, the toolbar should match Figure 53-3:

Figure 53-3

With the new share button item still selected, display the Assistant Editor panel and establish an Action connection to a method named shareRecord.

Creating the CloudKit Share

The next step is to add some code to the shareRecord action method to initialize and display the UICloudSharingController and to create and save the CKShare object. Next, select the ViewController.swift file, locate the stub shareRecord method, and modify it so that it reads as follows:

@IBAction func shareRecord(_ sender: Any) {

    let controller = UICloudSharingController { controller,
        prepareCompletionHandler in
        
        if let thisRecord = self.currentRecord {
            let share = CKShare(rootRecord: thisRecord)
            
            share[CKShare.SystemFieldKey.title] = 
                             "An Amazing House" as CKRecordValue
            share.publicPermission = .readOnly
            
            let modifyRecordsOperation = CKModifyRecordsOperation(
                recordsToSave: [thisRecord, share],
                recordIDsToDelete: nil)
            
            let configuration = CKOperation.Configuration()
            
            configuration.timeoutIntervalForResource = 10
            configuration.timeoutIntervalForRequest = 10
            modifyRecordsOperation.configuration = configuration

            modifyRecordsOperation.modifyRecordsResultBlock = {
                result in
                switch result {
                case .success:
                    prepareCompletionHandler(share, CKContainer.default(), nil)
                case .failure(let error):
                    print(error.localizedDescription)
                }
            }     
            self.privateDatabase?.add(modifyRecordsOperation)
        } else {
            print("User error: No record selected")
        }
    }
    
    controller.availablePermissions = [.allowPublic, .allowReadOnly,
            .allowReadWrite, .allowPrivate]
    controller.popoverPresentationController?.barButtonItem =
        sender as? UIBarButtonItem
    
    present(controller, animated: true)
}

The code added to this method follows the steps outlined in the chapter entitled An Introduction to iOS 16 CloudKit Sharing to display the CloudKit sharing view controller, create a share object initialized with the currently selected record, and save it to the user’s private database.

Accepting a CloudKit Share

Now that the user can create a CloudKit share, the app needs to be modified to accept a share and display it to the user. The first step in this process is implementing the userDidAcceptCloudKitShareWith method within the project’s scene delegate class. Edit the SceneDelegate.swift file and implement this method as follows:

.
.
import CloudKit
.
.
func windowScene(_ windowScene: UIWindowScene,
    userDidAcceptCloudKitShareWith cloudKitShareMetadata: CKShare.Metadata) {
   
    acceptCloudKitShare(metadata: cloudKitShareMetadata) { [weak self] result in
        switch result {
        case .success:
            DispatchQueue.main.async {
                let viewController: ViewController = 
                     self?.window?.rootViewController as! ViewController
                viewController.fetchShare(cloudKitShareMetadata)
            }
        case .failure(let error):
            print(error.localizedDescription )
        }
    }
}
.
.

When the user taps a CloudKit share link, for example, in an email or text message, the operating system will call the above method to notify the app that shared CloudKit data is available. The above implementation of this method calls a method named acceptCloudKitShare and passes it the CKShare.Metadata object it received from the operating system. If the acceptCloudKitShare method returns a successful result, the delegate method obtains a reference to the app’s root view controller and calls a method named fetchShare (which we will write in the next section) to extract the shared record from the CloudKit database and display it. Next, we need to add the acceptCloudKitShare method as follows:
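The following implementation is a minimal sketch based on the CKAcceptSharesOperation code covered in the chapter entitled An Introduction to iOS 16 CloudKit Sharing; the signature (a metadata parameter and a Result-based completion handler) is inferred from the call made in the delegate method above:

```swift
func acceptCloudKitShare(metadata: CKShare.Metadata,
        completion: @escaping (Result<Void, Error>) -> Void) {

    // The share must be accepted within the container identified
    // by the share metadata, not necessarily the default container.
    let container = CKContainer(identifier: metadata.containerIdentifier)
    let operation = CKAcceptSharesOperation(shareMetadatas: [metadata])

    operation.acceptSharesResultBlock = { result in
        switch result {
        case .success:
            completion(.success(()))
        case .failure(let error):
            completion(.failure(error))
        }
    }
    container.add(operation)
}
```

This method should be added to the SceneDelegate.swift file alongside the windowScene delegate method that calls it.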

Fetching the Shared Record

At this point, the share has been accepted and a CKShare.Metadata object provided, from which information about the shared record may be extracted. All that remains before the app can be tested is to implement the fetchShare method within the ViewController.swift file:

func fetchShare(_ metadata: CKShare.Metadata) {

    let operation = CKFetchRecordsOperation(recordIDs: 
                            [metadata.hierarchicalRootRecordID!])

    operation.perRecordResultBlock = { recordId, result in
        switch result {
        case .success(let record):
            DispatchQueue.main.async() {
                self.currentRecord = record
                self.addressField.text =
                    record.object(forKey: "address") as? String
                self.commentsField.text =
                    record.object(forKey: "comment") as? String
                let photo =
                    record.object(forKey: "photo") as! CKAsset
                let image = UIImage(contentsOfFile:
                                        photo.fileURL!.path)
                self.imageView.image = image
                self.photoURL = self.saveImageToFile(image!)
            }
        case .failure(let error):
            print(error.localizedDescription)
        }
    }
    
    operation.fetchRecordsResultBlock = { result in
        switch result {
        case .success:
            break
        case .failure(let error):
            print(error.localizedDescription)
        }
    }
    CKContainer.default().sharedCloudDatabase.add(operation)
}

The method prepares a standard CloudKit fetch operation based on the record ID contained within the share metadata object and performs the fetch using the sharedCloudDatabase instance. On a successful fetch, the completion handler extracts the data from the shared record and displays it in the user interface.

Testing the CloudKit Share Example

To thoroughly test CloudKit sharing, two devices with different Apple IDs must be used. If you have access to two devices, create a second Apple ID for testing purposes and sign in using that ID on one of the devices. Once logged in, make sure that the devices can send and receive iMessage or email messages between each other and install and run the CloudKitDemo app on both devices. Once the testing environment is set up, launch the CloudKitDemo app on one of the devices and add a record to the private database. Once added, tap the Share button and use the share view controller interface to send a share link message to the Apple ID associated with the second device. When the message arrives on the second device, tap the share link and accept the share when prompted. Once the share has been accepted, the CloudKitDemo app should launch and display the shared record.

Summary

This chapter puts the theory of CloudKit sharing outlined in the chapter entitled An Introduction to iOS 16 CloudKit Sharing into practice by enhancing the CloudKitDemo project to include the ability to share CloudKit-based records with other app users. This involved creating and saving a CKShare object, using the UICloudSharingController class, and adding code to handle accepting and fetching a shared CloudKit database record.

An Introduction to iOS 16 CloudKit Sharing

Before the release of iOS 10, the only way to share CloudKit records between users was to store those records in a public database. With the introduction of CloudKit sharing, individual app users can now share private database records with other users.

This chapter aims to provide an overview of CloudKit sharing and the classes used to implement sharing within an iOS app. The techniques outlined in this chapter will be put to practical use in the An iOS CloudKit Sharing Tutorial chapter.

Understanding CloudKit Sharing

CloudKit sharing provides a way for records within a private database to be shared with other app users, entirely at the discretion of the database owner. When a user decides to share CloudKit data, a share link in the form of a URL is sent to the person with whom the data is to be shared. This link can be sent in various ways, including text messages, email, Facebook, or Twitter. When the recipient taps on the share link, the app (if installed) will be launched and provided with the shared record information ready to be displayed.

The level of access to a shared record may also be defined to control whether a recipient can view and modify the record. It is important to be aware that when a share recipient accepts a share, they are receiving a reference to the original record in the owner’s private database. Therefore, a modification performed on a share will be reflected in the original private database.

Preparing for CloudKit Sharing

Before an app can take advantage of CloudKit sharing, the CKSharingSupported key needs to be added to the project Info.plist file with a Boolean true value. Also, a CloudKit record may only be shared if it is stored in a private database and is a member of a record zone other than the default zone.
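As an illustration of the zone requirement, the following sketch (the zone name, record name, and field values here are hypothetical examples) saves a record into a custom zone of the private database, from which it could later be shared:

```swift
import CloudKit

let privateDatabase = CKContainer.default().privateCloudDatabase

// Records can only be shared from a custom zone, so create one first.
let zone = CKRecordZone(zoneName: "SharedZone")

privateDatabase.save(zone) { savedZone, error in
    guard let savedZone = savedZone, error == nil else { return }

    // Create the record within the custom zone rather than the default zone.
    let recordID = CKRecord.ID(recordName: "House1", zoneID: savedZone.zoneID)
    let myRecord = CKRecord(recordType: "Houses", recordID: recordID)
    myRecord["address"] = "123 Main Street" as CKRecordValue

    privateDatabase.save(myRecord) { _, error in
        if let error = error {
            print(error.localizedDescription)
        }
    }
}
```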

The CKShare Class

CloudKit sharing is made possible primarily by the CKShare class. This class is initialized with the root CKRecord instance that is to be shared with other users together with the permission setting. The CKShare object may also be configured with title and icon information to be included in the share link message. The CKShare and associated CKRecord objects are then saved to the private database. The following code, for example, creates a CKShare object containing the record to be shared and configured for read-only access:

let share = CKShare(rootRecord: myRecord)
share[CKShare.SystemFieldKey.title] = "My First Share" as CKRecordValue
share.publicPermission = .readOnly

Once the share has been created, it is saved to the private database using a CKModifyRecordsOperation object. Note the recordsToSave: argument is declared as an array containing both the share and record objects:

let modifyRecordsOperation = CKModifyRecordsOperation(
    recordsToSave: [myRecord, share], recordIDsToDelete: nil)

Next, a CKOperation.Configuration instance needs to be created, configured with optional settings, and assigned to the operation:

let configuration = CKOperation.Configuration()
        
configuration.timeoutIntervalForResource = 10
configuration.timeoutIntervalForRequest = 10

Next, a closure must be assigned to the modifyRecordsResultBlock property of the modifyRecordsOperation object. The code in this closure is called when the operation completes to let your app know whether the share was successfully saved:

modifyRecordsOperation.modifyRecordsResultBlock = { result in
    switch result {
    case .success:
        // Handle completion
    case .failure(let error):
        print(error.localizedDescription)
    }
}

Finally, the operation is added to the database to begin execution:

self.privateDatabase?.add(modifyRecordsOperation)

The UICloudSharingController Class

To send a share link to another user, CloudKit needs to know both the identity of the recipient and the method by which the share link is to be transmitted. One option is to manually create CKShareParticipant objects for each participant and add them to the CKShare object. Alternatively, the CloudKit framework includes a view controller specifically for this purpose. When presented to the user (Figure 51-1), the UICloudSharingController class provides the user with a variety of options for sending the share link to another user:

Figure 51-1

The app is responsible for creating and presenting the controller to the user, the template code for which is outlined below:

let controller = UICloudSharingController { 
	controller, prepareCompletionHandler in

	// Code here to create the CKShare and save it to the database
}

controller.availablePermissions = 
        [.allowPublic, .allowReadOnly, .allowReadWrite, .allowPrivate]

controller.popoverPresentationController?.barButtonItem =
    sender as? UIBarButtonItem

present(controller, animated: true)

Note that the above code fragment also specifies the permissions to be provided as options within the controller user interface. These options are accessed and modified by tapping the link in the Collaboration section of the sharing controller (in Figure 51-1 above, the link reads “Only invited people can edit”). Figure 51-2 shows an example share options settings screen:

Figure 51-2

Once the user selects a method of communication from the cloud-sharing controller, the completion handler assigned to the controller will be called. As outlined in the previous section, the CKShare object must be created and saved within this handler. After the share has been saved to the database, the cloud-sharing controller must be notified that the share is ready to be sent. This is achieved by calling the prepareCompletionHandler closure that was passed into the completion handler in the above code. When prepareCompletionHandler is called, it must be passed the share object and a reference to the app’s CloudKit container. Bringing these requirements together gives us the following code:

let controller = UICloudSharingController { controller,
    prepareCompletionHandler in

    let share = CKShare(rootRecord: myRecord)

    share[CKShare.SystemFieldKey.title]
             = "An Amazing House" as CKRecordValue
    share.publicPermission = .readOnly

    // Create a CKModifyRecordsOperation object and configure it
    // to save the CKShare instance and the record to be shared.
    let modifyRecordsOperation = CKModifyRecordsOperation(
        recordsToSave: [myRecord, share],
        recordIDsToDelete: nil)

    // Create a CKOperation.Configuration instance
    let configuration = CKOperation.Configuration()

    // Set configuration properties to provide timeout limits
    configuration.timeoutIntervalForResource = 10
    configuration.timeoutIntervalForRequest = 10

    // Apply the configuration options to the operation
    modifyRecordsOperation.configuration = configuration

    // Assign a completion block to the CKModifyRecordsOperation. This will
    // be called when the modify records operation completes or fails.
    modifyRecordsOperation.modifyRecordsResultBlock = { result in
        switch result {
        case .success:
            // The share operation was successful. Call the completion
            // handler
            prepareCompletionHandler(share, CKContainer.default(), nil)
        case .failure(let error):
            print(error.localizedDescription)
        }
    }

    // Start the operation by adding it to the database
    self.privateDatabase?.add(modifyRecordsOperation)
}

Once the prepareCompletionHandler method has been called, the app for the chosen form of communication (Messages, Mail, etc.) will launch preloaded with the share link. All the user needs to do at this point is enter the contact details for the intended share recipient and send the message. Figure 51-3, for example, shows a share link loaded into the Mail app ready to be sent:

Figure 51-3

Accepting a CloudKit Share

When the recipient user receives a share link and selects it, a dialog will appear, providing the option to accept the share and open it in the corresponding app. When the app opens, the userDidAcceptCloudKitShareWith method is called on the scene delegate class located in the project’s SceneDelegate.swift file:

func windowScene(_ windowScene: UIWindowScene,
    userDidAcceptCloudKitShareWith cloudKitShareMetadata: CKShare.Metadata) {
}

When this method is called, it is passed a CKShare.Metadata object containing information about the share. Although the user has accepted the share, the app must also accept the share using a CKAcceptSharesOperation object. As the acceptance operation is performed, it will report the results of the process via two result blocks assigned to it. The following example shows how to create and configure a CKAcceptSharesOperation instance to accept a share:

let container = CKContainer(identifier: metadata.containerIdentifier)
let operation = CKAcceptSharesOperation(shareMetadatas: [metadata])     
var rootRecordID: CKRecord.ID!

operation.acceptSharesResultBlock = { result in
    switch result {
    case .success:
        // The share was accepted successfully. Call the completion handler.
        completion(.success(rootRecordID))
    case .failure(let error):
        completion(.failure(error))
    }
}

operation.perShareResultBlock = { metadata, result in
    switch result {
    case .success:
        // The shared record ID was successfully obtained from the metadata.
        // Save a local copy for later. 
        rootRecordID = metadata.hierarchicalRootRecordID

        // Display the appropriate view controller and use it to fetch, and 
        // display the shared record.
        DispatchQueue.main.async {
            let viewController: ViewController = 
                    self.window?.rootViewController as! ViewController
            viewController.fetchShare(metadata)
        }        
    case .failure(let error):
        print(error.localizedDescription)
    }
}

The final step in accepting the share is to add the configured CKAcceptSharesOperation object to the CKContainer instance to accept the share:

container.add(operation)

Fetching a Shared Record

Once a share has been accepted by both the user and the app, the shared record needs to be fetched and presented to the user. This involves the creation of a CKFetchRecordsOperation object, initialized with the root record ID contained within the CKShare.Metadata instance and configured with result blocks to be called with the results of the fetch operation. It is essential to be aware that this fetch operation must be executed on the app’s shared cloud database instance instead of the recipient’s private database. The following code, for example, fetches the record associated with a CloudKit share:

let operation = CKFetchRecordsOperation(
                     recordIDs: [metadata.hierarchicalRootRecordID!])

operation.perRecordResultBlock = { recordId, result in
    switch result {
    case .success(let record):
        DispatchQueue.main.async() {
             // Shared record successfully fetched. Update user 
             // interface here to present to the user. 
        }
    case .failure(let error):
        print(error.localizedDescription)
    }
}

operation.fetchRecordsResultBlock = { result in
    switch result {
    case .success:
        break
    case .failure(let error):
        print(error.localizedDescription)
    }
}

CKContainer.default().sharedCloudDatabase.add(operation)

Once the record has been fetched, it can be presented to the user within the perRecordResultBlock code, taking care, as shown above, to perform user interface updates asynchronously on the main thread.

Summary

CloudKit sharing allows records stored within a private CloudKit database to be shared with other app users at the discretion of the record owner. An app user could, for example, make one or more records accessible to other users so that they can view and, optionally, modify the record. When a record is shared, a share link is sent to the recipient user in the form of a URL. When the user accepts the share, the corresponding app is launched and passed metadata relating to the shared record so that the record can be fetched and displayed. CloudKit sharing involves the creation of CKShare objects initialized with the record to be shared. The UICloudSharingController class provides a pre-built view controller which handles much of the work involved in gathering the necessary information to send a share link to another user. In addition to sending a share link, the app must also be adapted to accept a share and fetch the record for the shared cloud database. This chapter has covered the basics of CloudKit sharing, a topic that will be covered further in a later chapter entitled An iOS CloudKit Sharing Tutorial.

Getting Location Information using the iOS 16 Core Location Framework

iOS devices can employ several techniques for obtaining information about the current geographical location of the device. These mechanisms include GPS, cell tower triangulation, and finally (and least accurately), the known locations of nearby Wi-Fi networks. The mechanism used by iOS to detect location information is largely transparent to the app developer. The system will automatically use the most accurate solution available at any given time. All that is needed to integrate location-based information into an iOS app is understanding how to use the Core Location Framework, which is the subject of this chapter.

Once the basics of location tracking with Core Location have been covered in this chapter, the next chapter will provide detailed steps on how to create An Example iOS 16 Location App.

The Core Location Manager

The key classes contained within the Core Location Framework are CLLocationManager and CLLocation. An instance of the CLLocationManager class can be created using the following Swift code:

var locationManager: CLLocationManager = CLLocationManager()

Once a location manager instance has been created, it must seek permission from the user to collect location information before it can begin to track data.

Requesting Location Access Authorization

Before any app can begin tracking location data, it must first seek permission from the user. This can be achieved by calling one of two methods on the CLLocationManager instance, depending on the specific requirement. For example, suppose the app only needs to track location information when the app is in the foreground. In that case, a call should be made to the requestWhenInUseAuthorization method of the location manager instance. For example:

locationManager.requestWhenInUseAuthorization()

If tracking is also required when the app is running in the background, the requestAlwaysAuthorization method should be called:

locationManager.requestAlwaysAuthorization()

If an app requires always authorization, the recommended path to requesting this permission is first to seek when in use permission and then offer the user the opportunity to elevate this permission to always mode at the point that the app needs it. The reasoning behind this recommendation is that when seeking always permission, the request dialog displayed by iOS will provide the user the option of using either when in use or always location tracking. Given these choices, most users will typically select the when in use option. Therefore, a better approach is to begin by requesting when in use tracking and then explain the benefits of elevating to always mode in a later request.

Both location authorization request method calls require that specific key-value pairs be added to the Information Property List dictionary contained within the app’s Info.plist file. The values take the form of strings and must describe the reason why the app needs access to the user’s current location. The keys associated with these values are as follows:

  • NSLocationWhenInUseUsageDescription – A string value describing to the user why the app needs access to the current location when running in the foreground. This string is displayed when a call is made to the requestWhenInUseAuthorization method of the locationManager instance. The dialog displayed to the user containing this message will only provide the option to permit when in use location tracking. All apps built using the iOS 11 SDK or later must include this key regardless of the usage permission level being requested to access the device location.
  • NSLocationAlwaysAndWhenInUseUsageDescription – The string displayed when permission is requested for always authorization using the requestAlwaysAuthorization method. The request dialog containing this message will allow the user to select either always or when in use authorization. All apps built using the iOS 11 SDK or later must include this key when accessing device location information.
  • NSLocationAlwaysUsageDescription – A string describing to the user why the app needs always access to the current location. This description is not used on devices running iOS 11 or later, though it should still be declared for compatibility with legacy devices.
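In Info.plist source form, the first two of these entries might appear as follows (the description strings here are placeholders to be replaced with app-specific explanations):

```xml
<key>NSLocationWhenInUseUsageDescription</key>
<string>Your location is used to show nearby results.</string>
<key>NSLocationAlwaysAndWhenInUseUsageDescription</key>
<string>Background location access is used to provide navigation updates.</string>
```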

Configuring the Desired Location Accuracy

The level of accuracy to which location information is to be tracked is specified via the desiredAccuracy property of the CLLocationManager object. It is important to keep in mind when configuring this property that the greater the level of accuracy selected, the greater the drain on the device battery. An app should, therefore, never request a greater accuracy than is needed.

Several predefined constant values are available for use when configuring this property:

  • kCLLocationAccuracyBestForNavigation – Uses the highest possible level of accuracy augmented by additional sensor data. This accuracy level is intended solely for when the device is connected to an external power supply.
  • kCLLocationAccuracyBest – The highest recommended level of accuracy for devices running on battery power.
  • kCLLocationAccuracyNearestTenMeters – Accurate to within 10 meters.
  • kCLLocationAccuracyHundredMeters – Accurate to within 100 meters.
  • kCLLocationAccuracyKilometer – Accurate to within one kilometer.
  • kCLLocationAccuracyThreeKilometers – Accurate to within three kilometers.

The following code, for example, sets the level of accuracy for a location manager instance to “best accuracy”:

locationManager.desiredAccuracy = kCLLocationAccuracyBest

Configuring the Distance Filter

The default configuration for the location manager is to report updates whenever any changes are detected in the device’s location. The distanceFilter property of the location manager allows apps to specify the amount of distance the device location must change before an update is triggered. If, for example, the distance filter is set to 1000 meters, the app will only receive a location update when the device travels 1000 meters or more from the location of the last update. For example, to specify a distance filter of 1500 meters:

locationManager.distanceFilter = 1500.0

The distance filter may be canceled, thereby returning to the default setting, using the kCLDistanceFilterNone constant:

locationManager.distanceFilter = kCLDistanceFilterNone

Continuous Background Location Updates

The location tracking options covered so far in this chapter only receive updates when the app is either in the foreground or background. The updates will stop as soon as the app enters the suspended state (in other words, the app is still resident in memory but is no longer executing code). However, if location updates are required even when the app is suspended (a key requirement for navigation-based apps), continuous background location updates must be enabled for the app. When enabled, the app will be woken from suspension each time a location update is triggered and provided the latest location data.

Enabling continuous location updates is a two-step process beginning with the addition of an entry to the project Info.plist file. This is most easily achieved by enabling the location updates background mode in the Xcode Signing & Capabilities panel, as shown in Figure 66-1:

Figure 66-1

Within the app code, continuous updates are enabled by setting the allowsBackgroundLocationUpdates property of the location manager to true:

locationManager.allowsBackgroundLocationUpdates = true

To allow the location manager to suspend updates temporarily, set the pausesLocationUpdatesAutomatically property of the location manager to true:

locationManager.pausesLocationUpdatesAutomatically = true

This setting allows the location manager to extend battery life by pausing updates when it is appropriate to do so (for example, when the user’s location remains unchanged for a significant amount of time). When the user starts moving again, the location manager will automatically resume updates.

Continuous background location updates are available to apps with either always or when in use authorization.

The Location Manager Delegate

Location manager updates and errors result in calls to two delegate methods defined within the CLLocationManagerDelegate protocol. Templates for the two delegate methods that must be implemented to comply with this protocol are as follows:

func locationManager(_ manager: CLLocationManager,
                didUpdateLocations locations: [CLLocation])
{
   // Handle location updates here
}

func locationManager(_ manager: CLLocationManager,
         didFailWithError error: Error)
{
   // Handle errors here 
}

Each time the location changes, the didUpdateLocations delegate method is called and passed as an argument an array of CLLocation objects with the last object in the array containing the most recent location data.
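For example, a minimal implementation of this method might extract the most recent location from the end of the array (the print statements are placeholders for app-specific handling):

```swift
func locationManager(_ manager: CLLocationManager,
                didUpdateLocations locations: [CLLocation]) {

    // The last array element contains the most recent location data.
    if let location = locations.last {
        print("Latitude: \(location.coordinate.latitude)")
        print("Longitude: \(location.coordinate.longitude)")
        print("Accuracy: \(location.horizontalAccuracy) meters")
    }
}
```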

Changes to the location tracking authorization status of an app are reported via a call to the optional didChangeAuthorization delegate method:

func locationManager(_ manager: CLLocationManager, 
      didChangeAuthorization status: CLAuthorizationStatus) {

    // App may no longer be authorized to obtain location
    //information. Check the status here and respond accordingly.
}

Once a class has been configured to act as the delegate for the location manager, that object must be assigned to the location manager instance. In most cases, the delegate will be the same view controller class in which the location manager resides, for example:

locationManager.delegate = self

Starting and Stopping Location Updates

Once suitably configured and authorized, the location manager can then be instructed to start tracking location information:

locationManager.startUpdatingLocation()

With each location update, the didUpdateLocations delegate method is called by the location manager and passed information about the current location.

To stop location updates, call the stopUpdatingLocation method of the location manager as follows:

locationManager.stopUpdatingLocation()

Obtaining Location Information from CLLocation Objects

Location information is passed through to the didUpdateLocations delegate method in the form of CLLocation objects. A CLLocation object encapsulates the following data:

  • Latitude
  • Longitude
  • Horizontal Accuracy
  • Altitude
  • Altitude Accuracy

Longitude and Latitude

Longitude and latitude values are stored as type CLLocationDegrees and may be obtained from a CLLocation object as follows:

let currentLatitude: CLLocationDegrees = 
        location.coordinate.latitude

let currentLongitude: CLLocationDegrees = 
        location.coordinate.longitude

Accuracy

Horizontal and vertical accuracy are stored in meters as CLLocationAccuracy values and may be accessed as follows:

let verticalAccuracy: CLLocationAccuracy = 
        location.verticalAccuracy

let horizontalAccuracy: CLLocationAccuracy = 
        location.horizontalAccuracy

Altitude

The altitude value is stored in meters as a type CLLocationDistance value and may be accessed from a CLLocation object as follows:

let altitude: CLLocationDistance = location.altitude

Getting the Current Location

If all that is required from the location manager is the user’s current location without the need for continuous updates, this can be achieved via a call to the requestLocation method of the location manager instance. This method will identify the current location and call the didUpdateLocations delegate method once, passing through the current location information, after which location updates are automatically turned off. Note that when using this method, the delegate must implement both the didUpdateLocations and didFailWithError methods:

locationManager.requestLocation()

Calculating Distances

The distance between two CLLocation points may be calculated by calling the distance(from:) method of the end location and passing through the start location as an argument. For example, the following code calculates the distance between the points specified by startLocation and endLocation:

let distance: CLLocationDistance = 
        endLocation.distance(from: startLocation)
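For example, the following sketch (using approximate coordinates for two New York landmarks, included here purely for illustration) calculates the distance between them:

```swift
import CoreLocation

// Approximate coordinates, for illustration only
let startLocation = CLLocation(latitude: 40.7484, longitude: -73.9857)
let endLocation = CLLocation(latitude: 40.6892, longitude: -74.0445)

// distance(from:) returns the distance in meters
let distance: CLLocationDistance = endLocation.distance(from: startLocation)
print(String(format: "Distance: %.1f km", distance / 1000))
```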

Summary

This chapter has provided an overview of the use of the iOS Core Location Framework to obtain location information within an iOS app. This theory will be put into practice in the next chapter entitled An Example iOS 16 Location App.

An Example iOS 16 MKMapItem App

This chapter aims to work through creating an example iOS app that uses reverse geocoding together with the MKPlacemark and MKMapItem classes. The app will consist of a screen into which the user will be required to enter destination address information. Then, when the user selects a button, a map will be launched containing turn-by-turn directions from the user’s current location to the specified destination.

Creating the MapItem Project

Launch Xcode and create a new project using the iOS App template with the Swift and Storyboard options selected, entering MapItem as the product name.

Designing the User Interface

The user interface will consist of four Text Field objects into which the destination address will be entered, together with a Button to launch the map. Select the Main.storyboard file in the project navigator panel and, using the Library palette, design the user interface layout to resemble that of Figure 65-1. Take steps to widen the Text Fields and configure Placeholder text attributes on each one.

If you reside in a country not divided into States and Zip code regions, feel free to adjust the user interface accordingly.

Display the Resolve Auto Layout Issues menu and select the Reset to Suggested Constraints option under All Views in View Controller.

The next step is to connect the outlets for the text fields and declare an action for the button. Select the Street address Text Field object, display the Assistant Editor, and ensure that the editor displays the ViewController.swift file.

Figure 65-1

Ctrl-click on the Street address Text Field object and drag the resulting line to the area immediately beneath the class declaration directive in the Assistant Editor panel. Upon releasing the line, the configuration panel will appear. Configure the connection as an Outlet named address and click on the Connect button. Repeat these steps for the City, State, and Zip text fields, connecting them to outlets named city, state, and zip.

Ctrl-click on the Get Directions button and drag the resulting line to a position beneath the new outlets declared in the Assistant Editor. In the resulting configuration panel, change the Connection type to Action and name the method getDirections. On completion, the beginning of the ViewController.swift file should read as follows:

import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var address: UITextField!
    @IBOutlet weak var city: UITextField!
    @IBOutlet weak var state: UITextField!
    @IBOutlet weak var zip: UITextField!
.
.
    @IBAction func getDirections(_ sender: Any) {
    }
.
.
}

Converting the Destination using Forward Geocoding

When the user touches the button in the user interface, the getDirections method can extract the address information from the text fields. The objective will be to create an MKPlacemark object to contain this location. As outlined in Integrating Maps into iOS 16 Apps using MKMapItem, an MKPlacemark instance requires the longitude and latitude of an address before it can be instantiated. Therefore, the first step in the getDirections method is to perform a forward geocode translation of the address. Before doing so, however, it is necessary to declare a property in the ViewController.swift file in which to store these coordinates once they have been calculated. This will, in turn, require that the CoreLocation framework be imported. Now is also an opportune time to import the MapKit and Contacts frameworks, both of which will be required later in the chapter:

import UIKit
import CoreLocation
import MapKit
import Contacts

class ViewController: UIViewController {

    var coords: CLLocationCoordinate2D?
.
.
}

Next, select the ViewController.swift file, locate the getDirections method stub and modify it to convert the address string to geographical coordinates:

@IBAction func getDirections(_ sender: Any) {
    
    if let addressString = address.text,
        let cityString = city.text,
        let stateString = state.text,
        let zipString = zip.text {
    
        // A new constant name is used here because the addressString
        // constant bound by the if statement above cannot be redeclared
        // within the same scope:
        let fullAddress =
            "\(addressString) \(cityString) \(stateString) \(zipString)"
        
        CLGeocoder().geocodeAddressString(fullAddress,
                   completionHandler: {(placemarks, error) in
                
            if error != nil {
                print("Geocode failed: \(error!.localizedDescription)")
            } else if let marks = placemarks, marks.count > 0 {
                let placemark = marks[0]
                if let location = placemark.location {
                    self.coords = location.coordinate
                    self.showMap()
                }
            }
        })
    }
}

The steps used to perform the geocoding translation mirror those outlined in Integrating Maps into iOS 16 Apps using MKMapItem with one difference: a method named showMap is called if a successful translation occurs. All that remains, therefore, is to implement this method.

Launching the Map

With the address string and coordinates obtained, the final task is implementing the showMap method. This method will create a new MKPlacemark instance for the destination address, configure options for the map to request driving directions, and launch the map. Since the map will be launched with a single map item, it will default to providing directions from the current location. With the ViewController.swift file still selected, add the code for the showMap method so that it reads as follows:

func showMap() {
    
    if let addressString = address.text,
        let cityString = city.text,
        let stateString = state.text,
        let zipString = zip.text,
        let coordinates = coords {
        
        let addressDict =
            [CNPostalAddressStreetKey: addressString,
             CNPostalAddressCityKey: cityString,
             CNPostalAddressStateKey: stateString,
             CNPostalAddressPostalCodeKey: zipString]
        
        let place = MKPlacemark(coordinate: coordinates,
                                addressDictionary: addressDict)
        
        let mapItem = MKMapItem(placemark: place)
        
        let options = [MKLaunchOptionsDirectionsModeKey:
            MKLaunchOptionsDirectionsModeDriving]
        
        mapItem.openInMaps(launchOptions: options)
    }
}

The method creates a dictionary containing the contact keys and values for the destination address. It then creates an MKPlacemark instance using the address dictionary and the coordinates obtained from the forward geocoding operation. Next, a new MKMapItem object is created using the placemark object before another dictionary is created and configured to request driving directions. Finally, the map is launched.

Building and Running the App

Within the Xcode toolbar, click on the Run button to compile and run the app, either on a physical iOS device or the iOS Simulator. Once loaded, enter an address into the text fields before touching the Get Directions button. The map should subsequently appear with the route between your current location and the destination address. Note that if the app is running in the simulator, the current location will likely default to Apple’s headquarters in California:

Figure 65-2

Summary

This chapter’s goal has been to create a simple app that uses geocoding and the MKPlacemark and MKMapItem classes. The example app created in this chapter has demonstrated the ease with which maps and directions can be integrated into iOS apps.

Integrating Maps into iOS 16 Apps using MKMapItem

If there is one fact about Apple that we can state with any degree of certainty, it is that the company is passionate about retaining control of its destiny. Unfortunately, one glaring omission in this overriding corporate strategy has been the reliance on a competitor (in the form of Google) for mapping data in iOS. This dependency officially ended with iOS 6 through the introduction of Apple Maps.

With iOS 6, Apple Maps replaced the Google-based map data with data provided primarily by TomTom (but also technology from other companies, including some acquired by Apple for this purpose). Headquartered in the Netherlands, TomTom specializes in mapping and GPS systems. Of particular significance, however, is that TomTom (unlike Google) does not make smartphones, nor does it develop an operating system that competes with iOS, making it a more acceptable partner for Apple.

As part of the iOS 6 revamp of mapping, the SDK also introduced a class called MKMapItem, designed solely to ease the integration of maps and turn-by-turn directions into iOS apps. This was further enhanced in iOS 9 with the introduction of support for transit times, directions, and city flyover support.

For more advanced mapping requirements, the iOS SDK also includes the original classes of the MapKit framework, details of which will be covered in later chapters.

MKMapItem and MKPlacemark Classes

The MKMapItem class aims to make it easy for apps to launch maps without writing significant amounts of code. MKMapItem works in conjunction with the MKPlacemark class, instances of which are passed to MKMapItem to define the locations that are to be displayed in the resulting map. A range of options is also provided with MKMapItem to configure both the appearance of maps and the nature of directions to be displayed (i.e., whether directions are for driving, walking, or public transit).

An Introduction to Forward and Reverse Geocoding

It is difficult to talk about mapping, particularly when dealing with the MKPlacemark class, without first venturing into geocoding. Geocoding can best be described as converting a textual-based geographical location (such as a street address) into geographical coordinates expressed in longitude and latitude.

In iOS development, geocoding may be performed using the CLGeocoder class to convert a text-based address string into a CLLocation object containing the coordinates corresponding to the address. The following code, for example, converts the street address of the Empire State Building in New York to longitude and latitude coordinates:

let addressString = "350 5th Avenue New York, NY"

CLGeocoder().geocodeAddressString(addressString, 
        completionHandler: {(placemarks, error) in
    
    if error != nil {
        print("Geocode failed with error: \(error!.localizedDescription)")
    } else if let marks = placemarks, marks.count > 0 {
        let placemark = marks[0]
        if let location = placemark.location {
            let coords = location.coordinate
        
            print(coords.latitude)
            print(coords.longitude)
        }
    }
})

The code calls the geocodeAddressString method of a CLGeocoder instance, passing through a string object containing the street address and a completion handler to be called when the translation is complete. Passed as arguments to the handler are an array of CLPlacemark objects (one for each match for the address) together with an Error object which may be used to identify the reason for any failures.

For this example, the assumption is made that only one location matched the address string provided. The location information is then extracted from the CLPlacemark object at location 0 in the array, and the coordinates are displayed on the console.

The above code is an example of forward geocoding in that coordinates are calculated based on a text address description. Reverse geocoding, as the name suggests, involves the translation of geographical coordinates into a human-readable address string. Consider, for example, the following code:

let newLocation = CLLocation(latitude: 40.74835, longitude: -73.984911)

CLGeocoder().reverseGeocodeLocation(newLocation, completionHandler: {(placemarks, error) in
    if error != nil {
        print("Geocode failed with error: \(error!.localizedDescription)")
    }
    
    if let marks = placemarks, marks.count > 0 {
        let placemark = marks[0]
        let postalAddress = placemark.postalAddress
        
        if let address = postalAddress?.street,
            let city = postalAddress?.city,
            let state = postalAddress?.state,
            let zip = postalAddress?.postalCode {
        
                print("\(address) \(city) \(state) \(zip)")
        }
    }
})

In this case, a CLLocation object is initialized with longitude and latitude coordinates and then passed through to the reverseGeocodeLocation method of a CLGeocoder object. Next, the method passes through an array of matching addresses to the completion handler in the form of CLPlacemark objects. Each placemark contains the address information for the matching location in the form of a CNPostalAddress object. Once again, the code assumes a single match is contained in the array and accesses and displays the address, city, state, and zip properties of the postal address object on the console.

When executed, the above code results in output that reads:

338 5th Ave New York New York 10001

It should be noted that the geocoding is not performed on the iOS device but rather on a server to which the device connects when a translation is required, and the results are subsequently returned when the translation is complete. As such, geocoding can only occur when the device has an active internet connection.

Creating MKPlacemark Instances

Each location to be represented when a map is displayed using the MKMapItem class must be represented by an MKPlacemark object. When MKPlacemark objects are created, they must be initialized with the geographical coordinates of the location together with an NSDictionary object containing the address property information. Continuing the example of the Empire State Building in New York, an MKPlacemark object would be created as follows:

import Contacts
import MapKit
.
.
let coords = CLLocationCoordinate2DMake(40.7483, -73.984911)

let address = [CNPostalAddressStreetKey: "350 5th Avenue",
               CNPostalAddressCityKey: "New York",
               CNPostalAddressStateKey: "NY",
               CNPostalAddressPostalCodeKey: "10118",
               CNPostalAddressISOCountryCodeKey: "US"]

let place = MKPlacemark(coordinate: coords, addressDictionary: address)

While it is possible to initialize an MKPlacemark object passing through a nil value for the address dictionary, this will result in the map appearing, albeit with the correct location marked, but it will be tagged as “Unknown” instead of listing the address. The coordinates are, however, mandatory when creating an MKPlacemark object. If the app knows the text address but not the location coordinates, geocoding will need to be used to obtain the coordinates before creating the MKPlacemark instance.

Working with MKMapItem

Given the tasks it can perform, the MKMapItem class is extremely simple to use. In its simplest form, it can be initialized by passing through a single MKPlacemark object as an argument, for example:

let mapItem = MKMapItem(placemark: place)

Once initialized, the openInMaps(launchOptions:) method will open the map positioned at the designated location with an appropriate marker, as illustrated in Figure 64-1:

mapItem.openInMaps(launchOptions: nil)
Figure 64-1

Similarly, the map may be initialized to display the current location of the user’s device via a call to the MKMapItem forCurrentLocation method:

let mapItem = MKMapItem.forCurrentLocation()

Multiple locations may be tagged on the map by placing two or more MKMapItem objects in an array and then passing that array through to the openMaps(with:) class method of the MKMapItem class. For example:

let mapItems = [mapItem1, mapItem2, mapItem3]

MKMapItem.openMaps(with: mapItems, launchOptions: nil)

MKMapItem Options and Configuring Directions

In the example code fragments presented in the preceding sections, a nil value was passed through as the options argument to the MKMapItem methods. In fact, several configuration options are available for use when opening a map. These values need to be set up within an NSDictionary object using a set of pre-defined keys and values:

  • MKLaunchOptionsDirectionsModeKey – Controls whether directions are to be provided with the map. If only one placemarker is present, directions from the current location to the placemarker will be provided. The mode for the directions should be either MKLaunchOptionsDirectionsModeDriving, MKLaunchOptionsDirectionsModeWalking, or MKLaunchOptionsDirectionsModeTransit.
  • MKLaunchOptionsMapTypeKey – Indicates whether the map should display standard, satellite, hybrid, flyover, or hybrid flyover map images.
  • MKLaunchOptionsMapCenterKey – Corresponds to a CLLocationCoordinate2D structure value containing the coordinates of the location on which the map is to be centered.
  • MKLaunchOptionsMapSpanKey – An MKCoordinateSpan structure value designating the region the map should display when launched.
  • MKLaunchOptionsShowsTrafficKey – A Boolean value indicating whether traffic information should be superimposed over the map when it is launched.
  • MKLaunchOptionsCameraKey – When displaying a map in 3D flyover mode, the value assigned to this key takes the form of an MKMapCamera object configured to view the map from a specified perspective.
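As an illustration of the map type, center, and span keys, the following sketch (using arbitrary coordinate and span values, and assuming that mapItem is an existing MKMapItem instance) opens a satellite map centered on a specified location. Note that the coordinate and span structures must be wrapped in NSValue objects before being placed in the dictionary:

```swift
let center = CLLocationCoordinate2DMake(40.7484, -73.9857)
let span = MKCoordinateSpan(latitudeDelta: 0.05, longitudeDelta: 0.05)

// Structure values are wrapped in NSValue instances for the dictionary
let options: [String: Any] = [
    MKLaunchOptionsMapTypeKey: MKMapType.satellite.rawValue,
    MKLaunchOptionsMapCenterKey: NSValue(mkCoordinate: center),
    MKLaunchOptionsMapSpanKey: NSValue(mkCoordinateSpan: span)
]

mapItem.openInMaps(launchOptions: options)
```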

The following code, for example, opens a map with traffic data displayed and includes turn-by-turn driving directions between two map items:

let mapItems = [mapItem1, mapItem2]
let options = [MKLaunchOptionsDirectionsModeKey:
                        MKLaunchOptionsDirectionsModeDriving,
                MKLaunchOptionsShowsTrafficKey: true] as [String : Any]

MKMapItem.openMaps(with: mapItems, launchOptions: options)

Adding Item Details to an MKMapItem

When a location is marked on a map, the address is displayed together with a blue arrow, which displays an information card for that location when selected.

The MKMapItem class allows additional information to be added to a location through the name, phoneNumber, and url properties. The following code, for example, adds these properties to the map item for the Empire State Building:

mapItem.name = "Empire State Building"
mapItem.phoneNumber = "+12127363100"
mapItem.url = URL(string: "https://esbnyc.com")

mapItem.openInMaps(launchOptions: nil)

When the code is executed, the map place marker displays the location name instead of the address, together with the additional information:

Figure 64-2

A force touch performed on the marker displays a popover panel containing options to call the provided number or visit the website:

Figure 64-3

Summary

iOS 6 replaced Google Maps with maps provided by TomTom. Unlike Google Maps, which was assembled from static images, the new Apple Maps are dynamically rendered, resulting in clear and smooth zooming and more precise region selections. iOS 6 also introduced the MKMapItem class, which aims to make it easy for iOS app developers to launch maps and provide turn-by-turn directions with the minimum amount of code.

Within this chapter, the basics of geocoding and the MKPlacemark and MKMapItem classes have been covered. The next chapter, entitled An Example iOS 16 MKMapItem App, will work through creating an example app that utilizes the knowledge covered in this chapter.

An iOS 16 UIKit Dynamics Tutorial

With the basics of UIKit Dynamics covered in the previous chapter, this chapter will apply this knowledge to create an example app designed to show UIKit Dynamics in action. The example app created in this chapter will use the gravity, collision, elasticity, and attachment features in conjunction with touch handling to demonstrate how these key features are implemented.

Creating the UIKit Dynamics Example Project

Launch Xcode and create a new project using the iOS App template with the Swift and Storyboard options selected, entering UIKitDynamics as the product name.

Adding the Dynamic Items

The app’s user interface will consist of two view objects drawn as squares colored blue and red, respectively. Therefore, the first step in the tutorial is to implement the code to create and draw these views. Within the project navigator panel, locate and select the ViewController.swift file and add variables for these two views so that the file reads as follows:

import UIKit

class ViewController: UIViewController {

    var blueBoxView: UIView?
    var redBoxView: UIView?

With the references declared, select the ViewController.swift file, add a new method (and call it from the viewDidLoad method) to draw the views, color them appropriately and then add them to the parent view so that they appear within the user interface:

override func viewDidLoad() {
    super.viewDidLoad()
    initViews()
}

func initViews() {

    var frameRect = CGRect(x: 10, y: 50, width: 80, height: 80)
    blueBoxView = UIView(frame: frameRect)
    blueBoxView?.backgroundColor = UIColor.blue
    
    frameRect = CGRect(x: 150, y: 50, width: 60, height: 60)
    redBoxView = UIView(frame: frameRect)
    redBoxView?.backgroundColor = UIColor.red
    
    if let blueBox = blueBoxView, let redBox = redBoxView {
        self.view.addSubview(blueBox)
        self.view.addSubview(redBox)
    }
}

Perform a test run of the app on either a simulator or physical iOS device and verify that the new views appear as expected within the user interface (Figure 63-1):

Figure 63-1

Creating the Dynamic Animator Instance

As outlined in the previous chapter, a key element in implementing UIKit Dynamics is an instance of the UIDynamicAnimator class. Select the ViewController.swift file and add an instance variable for a UIDynamicAnimator object within the app code:

import UIKit

class ViewController: UIViewController {

    var blueBoxView: UIView?
    var redBoxView: UIView?
    var animator: UIDynamicAnimator?

Next, modify the initViews method within the ViewController.swift file once again to add code to create and initialize the instance, noting that the top-level view of the view controller is passed through as the reference view:

func initViews() {
.
.
    if let blueBox = blueBoxView, let redBox = redBoxView {
        self.view.addSubview(blueBox)
        self.view.addSubview(redBox)
        
        animator = UIDynamicAnimator(referenceView: self.view)
    }
}

With the dynamic items added to the user interface and an instance of the dynamic animator created and initialized, it is time to begin creating dynamic behavior instances.

Adding Gravity to the Views

The first behavior to be added to the example app will be gravity. For this tutorial, gravity will be added to both views such that a force of gravity of 1.0 UIKit Newton is applied directly downwards along the y-axis of the parent view. To achieve this, the initViews method needs to be further modified to create a suitably configured instance of the UIGravityBehavior class and to add that instance to the dynamic animator:

func initViews() {
    
    var frameRect = CGRect(x: 10, y: 20, width: 80, height: 80)
    blueBoxView = UIView(frame: frameRect)
    blueBoxView?.backgroundColor = UIColor.blue
    
    frameRect = CGRect(x: 150, y: 20, width: 60, height: 60)
    redBoxView = UIView(frame: frameRect)
    redBoxView?.backgroundColor = UIColor.red
    
    if let blueBox = blueBoxView, let redBox = redBoxView {
        self.view.addSubview(blueBox)
        self.view.addSubview(redBox)
        
        animator = UIDynamicAnimator(referenceView: self.view)
        
        let gravity = UIGravityBehavior(items: [blueBox,
                                                redBox])
        let vector = CGVector(dx: 0.0, dy: 1.0)
        gravity.gravityDirection = vector
        
        animator?.addBehavior(gravity)
    }
}

Compile and run the app once again. Note that after launching, the gravity behavior causes the views to fall from the top of the reference view and out of view at the bottom of the device display. To keep the views within the bounds of the reference view, we need to set up a collision behavior.

Implementing Collision Behavior

In terms of collision behavior, the example requires that collisions occur both when the views impact each other and when making contact with the boundaries of the reference view. With these requirements in mind, the collision behavior needs to be implemented as follows:

func initViews() {
    
    var frameRect = CGRect(x: 10, y: 20, width: 80, height: 80)
    blueBoxView = UIView(frame: frameRect)
    blueBoxView?.backgroundColor = UIColor.blue
    
    frameRect = CGRect(x: 150, y: 20, width: 60, height: 60)
    redBoxView = UIView(frame: frameRect)
    redBoxView?.backgroundColor = UIColor.red
    
    if let blueBox = blueBoxView, let redBox = redBoxView {
        self.view.addSubview(blueBox)
        self.view.addSubview(redBox)
        
        animator = UIDynamicAnimator(referenceView: self.view)
        
        let gravity = UIGravityBehavior(items: [blueBox,
                                                redBox])
        let vector = CGVector(dx: 0.0, dy: 1.0)
        gravity.gravityDirection = vector
        
        let collision = UICollisionBehavior(items: [blueBox,
                                                    redBox])
        
        collision.translatesReferenceBoundsIntoBoundary = true
        
        animator?.addBehavior(collision)
        animator?.addBehavior(gravity)
    }
}

Running the app should now cause the views to stop at the bottom edge of the reference view and bounce slightly after impact. The amount by which the views bounce in the event of a collision can be changed by creating a UIDynamicItemBehavior class instance and changing the elasticity property. The following code, for example, changes the elasticity of the blue box view so that it bounces to a higher degree than the red box:

func initViews() {
.
.
.        
        collision.translatesReferenceBoundsIntoBoundary = true

        let behavior = UIDynamicItemBehavior(items: [blueBox])
        behavior.elasticity = 0.5
        
        animator?.addBehavior(behavior)
        animator?.addBehavior(collision)
        animator?.addBehavior(gravity)
    }
}

Attaching a View to an Anchor Point

So far in this tutorial, we have added some behavior to the app but have not yet implemented any functionality that connects UIKit Dynamics to user interaction. In this section, however, the example will be modified to create an attachment between the blue box view and the point of contact of a touch on the screen. This anchor point will be continually updated as the user’s touch moves across the screen, thereby causing the blue box to follow the anchor point. The first step in this process is to declare within the ViewController.swift file some instance variables within which to store both the current location of the anchor point and a reference to a UIAttachmentBehavior instance:

import UIKit

class ViewController: UIViewController {

    var blueBoxView: UIView?
    var redBoxView: UIView?
    var animator: UIDynamicAnimator?
    var currentLocation: CGPoint?
    var attachment: UIAttachmentBehavior?

As outlined in the chapter entitled An Overview of iOS 16 Multitouch, Taps, and Gestures, touches can be detected by overriding the touchesBegan, touchesMoved, and touchesEnded methods. The touchesBegan method in the ViewController.swift file now needs to be implemented to obtain the coordinates of the touch and to add an attachment behavior between that location and the blue box view to the animator instance:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    
    if let theTouch = touches.first, let blueBox = blueBoxView {
        
        currentLocation = theTouch.location(in: self.view)
        
        if let location = currentLocation {
            attachment = UIAttachmentBehavior(item: blueBox,
                                  attachedToAnchor: location)
        }
        
        if let attach = attachment {
            animator?.addBehavior(attach)
        }
    }
}

As the touch moves around within the reference view, the anchorPoint property of the attachment behavior needs to be modified to track the motion. This involves overriding the touchesMoved method as follows:

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    if let theTouch = touches.first {
        
        currentLocation = theTouch.location(in: self.view)
        
        if let location = currentLocation {
            attachment?.anchorPoint = location
        }
    }
}

Finally, when the touch ends, the attachment needs to be removed so that the view will be pulled down to the bottom of the reference view by the previously defined gravity behavior. Remaining within the ViewController.swift file, implement the touchesEnded method as follows:

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
    
    if let attach = attachment {
        animator?.removeBehavior(attach)
    }
}

Compile and run the app and touch the display. As the touch moves, note that the blue box view moves as though tethered to the touch point. Move the touch such that the blue and red boxes collide and observe that the red box will move in response to the collision while the blue box will rotate on the attachment point as illustrated in Figure 63-2:

Figure 63-2

Release the touch and note that gravity causes the blue box to fall once again and settle at the bottom edge of the reference view.

The code that creates the attachment currently attaches to the center point of the blue box view. Modify the touchesBegan method to adjust the attachment point so that it is off-center:

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    
    if let theTouch = touches.first, let blueBox = blueBoxView {
        
        currentLocation = theTouch.location(in: self.view)
        
        if let location = currentLocation {
            let offset = UIOffset(horizontal: 20, vertical: 20)
            attachment = UIAttachmentBehavior(item: blueBox,
                                              offsetFromCenter: offset,
                                              attachedToAnchor: location)
        }
        
        if let attach = attachment {
            animator?.addBehavior(attach)
        }
    }
}

When the blue box view is now suspended by the anchor point attachment, it will tilt in accordance with the offset attachment point.

Implementing a Spring Attachment Between Two Views

The final step in this tutorial is to attach the two views using a spring-style attachment. All that this involves is a few lines of code within the initViews method to create the attachment behavior, set the frequency and damping values to create the springing effect, and then add the behavior to the animator instance:

func initViews() {
.
.
        let behavior = UIDynamicItemBehavior(items: [blueBox])
        behavior.elasticity = 0.5
        
        let boxAttachment = UIAttachmentBehavior(item: blueBox,
                                                 attachedTo: redBox)
        boxAttachment.frequency = 4.0
        boxAttachment.damping = 0.0
        
        animator?.addBehavior(boxAttachment)

        animator?.addBehavior(behavior)
        animator?.addBehavior(collision)
        animator?.addBehavior(gravity)
    }
}

When the app is now run, the red box will move in relation to the blue box as though connected by a spring (Figure 63-3). The views will even spring apart when pushed together before the touch is released.

Figure 63-3

Summary

The example created in this chapter has demonstrated the steps involved in implementing UIKit Dynamics within an iOS app in the form of gravity, collision, and attachment behaviors. Perhaps the most remarkable fact about the animation functionality implemented in this tutorial is that it was achieved in approximately 40 lines of UIKit Dynamics code, a fraction of the amount of code that would have been required to implement such behavior in the absence of UIKit Dynamics.

iOS 16 UIKit Dynamics – An Overview

UIKit Dynamics provides a powerful and flexible mechanism for combining user interaction and animation into iOS user interfaces. What distinguishes UIKit Dynamics from other approaches to animation is the ability to declare animation behavior in terms of real-world physics.

Before moving on to a detailed tutorial in the next chapter, this chapter will provide an overview of the concepts and methodology behind UIKit Dynamics in iOS.

Understanding UIKit Dynamics

UIKit Dynamics allows for the animation of user interface elements (typically view items) to be implemented within a user interface, often in response to user interaction. To fully understand the concepts behind UIKit Dynamics, it helps to visualize how real-world objects behave.

Holding an object in the air and then releasing it, for example, will cause it to fall to the ground. This behavior is, of course, the result of gravity. However, whether or not, and by how much, an object bounces upon impact with a solid surface is dependent upon that object’s elasticity and its velocity at the point of impact.

Similarly, pushing an object positioned on a flat surface will cause that object to travel a certain distance depending on the magnitude and angle of the pushing force combined with the level of friction at the point of contact between the two surfaces.

An object tethered to a moving point will react in various ways, such as following the anchor point, swinging in a pendulum motion, or even bouncing and spinning on the tether in response to more aggressive motions. However, an object similarly attached using a spring will behave entirely differently in response to the movement of the point of attachment.

Considering how objects behave in the real world, imagine the ability to selectively apply these same physics-related behaviors to view objects in a user interface, and you will begin understanding the basic concepts behind UIKit Dynamics. Not only does UIKit Dynamics allow user interface interaction and animation to be declared using concepts we are already familiar with, but in most cases, it allows this to be achieved with just a few simple lines of code.

The UIKit Dynamics Architecture

Before looking at how UIKit Dynamics are implemented in app code, it helps to understand the different elements that comprise the dynamics architecture.

The UIKit Dynamics implementation comprises four key elements: a dynamic animator, a set of one or more dynamic behaviors, one or more dynamic items, and a reference view.

Dynamic Items

The dynamic items are the view elements within the user interface to be animated in response to specified dynamic behaviors. A dynamic item is any view object that implements the UIDynamicItem protocol, which includes the UIView and UICollectionView classes and any subclasses thereof (such as UIButton and UILabel).

Any custom view item can work with UIKit Dynamics by conforming to the UIDynamicItem protocol.

Dynamic Behaviors

Dynamic behaviors are used to configure the behavior to be applied to one or more dynamic items. A range of predefined dynamic behavior classes is available, including UIAttachmentBehavior, UICollisionBehavior, UIGravityBehavior, UIDynamicItemBehavior, UIPushBehavior, and UISnapBehavior. Each is a subclass of the UIDynamicBehavior class, which will be covered in detail later in this chapter.

In general, an instance of the class corresponding to the desired behavior (UIGravityBehavior for gravity, for example) will be created, and the dynamic items for which the behavior is to be applied will be added to that instance. Dynamic items can be assigned to multiple dynamic behavior instances simultaneously and may be added to or removed from a dynamic behavior instance during runtime.

Once created and configured, behavior objects are added to the dynamic animator instance. Once added to a dynamic animator, the behavior may be removed at any time.

The Reference View

The reference view dictates the area of the screen within which the UIKit Dynamics animation and interaction are to take place. This is typically the parent view or collection view of which the dynamic item views are children.

The Dynamic Animator

The dynamic animator coordinates the dynamic behaviors and items and works with the underlying physics engine to perform the animation. The dynamic animator is represented by an instance of the UIDynamicAnimator class and is initialized with the corresponding reference view at creation time. Once created, suitably configured dynamic behavior instances can be added and removed as required to implement the desired user interface behavior.

The overall architecture for a UIKit Dynamics implementation can be represented visually using the diagram outlined in Figure 62-1:

Figure 62-1

The above example has added three dynamic behaviors to the dynamic animator instance. The reference view contains five dynamic items, all but one of which have been added to at least one dynamic behavior instance.

Implementing UIKit Dynamics in an iOS App

The implementation of UIKit Dynamics in an app requires three very simple steps:

  1. Create an instance of the UIDynamicAnimator class to act as the dynamic animator and initialize it with reference to the reference view.
  2. Create and configure a dynamic behavior instance and assign to it the dynamic items on which the specified behavior is to be imposed.
  3. Add the dynamic behavior instance to the dynamic animator.
  4. Repeat from step 2 to create and add additional behaviors.
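The steps above can be sketched as follows (a minimal sketch, assuming this code runs within a view controller and that view1 and view2 are subviews of the view controller's view):

```swift
// Step 1: Create the dynamic animator, using the view controller's
// view as the reference view
let animator = UIDynamicAnimator(referenceView: self.view)

// Step 2: Create and configure a behavior, assigning the dynamic
// items on which it is to be imposed
let gravity = UIGravityBehavior(items: [view1, view2])

// Step 3: Add the behavior to the animator
let animatorReady = animator
animatorReady.addBehavior(gravity)

// Steps 2 and 3 may be repeated for any additional behaviors
```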

Dynamic Animator Initialization

The first step in implementing UIKit Dynamics is to create and initialize an instance of the UIDynamicAnimator class, beginning with the declaration of an instance variable for the animator:

var animator: UIDynamicAnimator?

Next, the dynamic animator instance can be created. The following code, for example, creates and initializes the animator instance within the viewDidLoad method of a view controller, using the view controller’s parent view as the reference view:

override func viewDidLoad() {
    super.viewDidLoad()
    animator = UIDynamicAnimator(referenceView: self.view)
}

With the dynamic animator created and initialized, the next step is to configure behaviors, the details for which differ slightly depending on the nature of the behavior.

Configuring Gravity Behavior

Gravity behavior is implemented using the UIGravityBehavior class, the purpose of which is to cause view items to “fall” within the reference view as though influenced by gravity. UIKit Dynamics gravity is slightly different from real-world gravity in that it is possible to define a vector for the direction of the gravitational force using x and y components (x, y) contained within a CGVector instance. The default vector for this class is (0.0, 1.0), corresponding to a downward acceleration of 1000 points per second². A negative x or y value will reverse the direction of gravity.

A UIGravityBehavior instance can be initialized as follows, passing through an array of dynamic items on which the behavior is to be imposed (in this case, two views named view1 and view2):

let gravity = UIGravityBehavior(items: [view1, view2])

Once created, the default vector can be changed if required at any time:

let vector = CGVector(dx: 0.0, dy: 0.5)
gravity.gravityDirection = vector

Finally, the behavior needs to be added to the dynamic animator instance:

animator?.addBehavior(gravity)

At any point during the app lifecycle, dynamic items may be added to, or removed from, the behavior:

gravity.addItem(view3)
gravity.removeItem(view3)

Similarly, the entire behavior may be removed from the dynamic animator:

animator?.removeBehavior(gravity)

When gravity behavior is applied to a view, and in the absence of opposing behaviors, the view will immediately move in the direction of the specified gravity vector. In fact, as currently defined, the view will fall out of the bounds of the reference view and disappear. This can be prevented by setting up a collision behavior.

Configuring Collision Behavior

UIKit Dynamics is all about making items move on the device display. When an item moves, there is a high chance it will collide either with another item or the boundaries of the encapsulating reference view. As previously discussed, in the absence of any form of collision behavior, a moving item can move out of the visible area of the reference view. Such a configuration will also cause a moving item to simply pass over the top of any other items that happen to be in its path. Collision behavior (defined using the UICollisionBehavior class) allows such collisions to behave in ways more representative of the real world.

Collision behavior can be implemented between dynamic items (such that certain items can collide with others) or within boundaries (allowing collisions to occur when an item reaches a designated boundary). Boundaries can be defined such that they correspond to the boundaries of the reference view, or entirely new boundaries can be defined using lines and Bezier paths.
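The following sketch (the boundary identifier names are illustrative) adds both a straight-line boundary and a Bezier path boundary to a collision behavior named collision:

```swift
// Boundary along a straight line across the reference view
collision.addBoundary(withIdentifier: "floor" as NSString,
                      from: CGPoint(x: 0, y: 400),
                      to: CGPoint(x: 320, y: 400))

// Boundary defined by a Bezier path (an oval in this case)
let path = UIBezierPath(ovalIn: CGRect(x: 50, y: 50,
                                       width: 200, height: 200))
collision.addBoundary(withIdentifier: "oval" as NSString, for: path)
```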

As with gravity behavior, a collision is generally created and initialized with an array object containing the items to which the behavior is to be applied. For example:

let collision = UICollisionBehavior(items: [view1, view2])
animator?.addBehavior(collision)

As configured, view1 and view2 will now collide when coming into contact. The physics engine will decide what happens depending on the items’ elasticity and the collision’s angle and speed. In other words, the engine will animate the items to behave as if they were physical objects subject to the laws of physics.

By default, an item under the influence of a collision behavior will collide with other items in the same collision behavior set and any boundaries set up. To declare the reference view as a boundary, set the translatesReferenceBoundsIntoBoundary property of the behavior instance to true:

collision.translatesReferenceBoundsIntoBoundary = true

A boundary inset from the edges of the reference view may be defined using the setTranslatesReferenceBoundsIntoBoundary(with:) method, passing through the required insets as an argument in the form of a UIEdgeInsets object.

The collisionMode property may be used to change default collision behavior by assigning one of the following constants:

  • UICollisionBehaviorMode.items – Specifies that collisions only occur between items added to the collision behavior instance. Boundary collisions are ignored.
  • UICollisionBehaviorMode.boundaries – Configures the behavior to ignore item collisions, recognizing only collisions with boundaries.
  • UICollisionBehaviorMode.everything – Specifies that collisions occur between items added to the behavior and all boundaries. This is the default behavior.

The following code, for example, enables collisions only for items:

collision.collisionMode = UICollisionBehaviorMode.items

If an app needs to react to a collision, declare a class that conforms to the UICollisionBehaviorDelegate protocol by implementing the following methods and assign an instance of it as the delegate for the UICollisionBehavior object instance.

  • collisionBehavior(_:beganContactForItem:withBoundaryIdentifier:atPoint:)
  • collisionBehavior(_:beganContactForItem:withItem:atPoint:)
  • collisionBehavior(_:endedContactForItem:withBoundaryIdentifier:)
  • collisionBehavior(_:endedContactForItem:withItem:)

When implemented, the app will be notified when collisions begin and end. In most cases, the delegate methods will be passed information about the collision, such as the location and the items or boundaries involved.
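A minimal sketch of such a delegate implementation, assuming a view controller that owns the collision behavior, might take the following form:

```swift
extension ViewController: UICollisionBehaviorDelegate {

    func collisionBehavior(_ behavior: UICollisionBehavior,
                           beganContactFor item: UIDynamicItem,
                           with otherItem: UIDynamicItem,
                           at p: CGPoint) {
        // Called when two dynamic items begin colliding
        print("Item collision began at \(p)")
    }

    func collisionBehavior(_ behavior: UICollisionBehavior,
                           beganContactFor item: UIDynamicItem,
                           withBoundaryIdentifier identifier: NSCopying?,
                           at p: CGPoint) {
        // Called when an item contacts a boundary
        print("Boundary collision began at \(p)")
    }
}
```

The instance would then be assigned via the behavior's collisionDelegate property (for example, collision.collisionDelegate = self).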

In addition, aspects of the collision behavior, such as friction and the elasticity of the colliding items (such that they bounce on contact), may be configured using the UIDynamicItemBehavior class. This class will be covered in detail later in this chapter.

Configuring Attachment Behavior

As the name suggests, the UIAttachmentBehavior class allows dynamic items to be configured to behave as if attached. These attachments can take the form of two items attached or an item attached to an anchor point at specific coordinates within the reference view. In addition, the attachment can take the form of an imaginary piece of cord that does not stretch or a spring attachment with configurable damping and frequency properties that control how “bouncy” the attached item is in response to motion.

By default, the attachment point within the item itself is positioned at the center of the view. This can, however, be changed to a different position causing the real-world behavior outlined in Figure 62-2 to occur:

Figure 62-2

The physics engine will generally simulate animation to match what would typically happen in the real world. As illustrated above, the item will tilt when not attached in the center. If the anchor point moves, the attached view will also move. Depending on the motion, the item will swing in a pendulum motion and, assuming appropriate collision behavior configuration, bounce off any boundaries it collides with as it swings.

As with all UIKit Dynamics behavior, the physics engine performs all the work to achieve this. The only effort required by the developer is to write a few lines of code to set up the behavior before adding it to the dynamic animator instance. The following code, for example, sets up an attachment between two dynamic items:

let attachment = UIAttachmentBehavior(item: view1,
                                      attachedTo: view2)
animator?.addBehavior(attachment)

The following code, on the other hand, specifies an attachment between view1 and an anchor point with the frequency and damping values set to configure a spring effect:

let anchorPoint = CGPoint(x: 100, y: 100)
let attachment = UIAttachmentBehavior(item: view1,
                                      attachedToAnchor: anchorPoint)
attachment.frequency = 4.0
attachment.damping = 0.0

The above examples attach to the center point of the view. The following code fragment sets the same attachment as above, but with an attachment point offset 20, 20 points relative to the center of the view:

let anchorPoint = CGPoint(x: 100, y: 100)
let offset = UIOffset(horizontal: 20, vertical: 20)

let attachment = UIAttachmentBehavior(item: view1, 
				offsetFromCenter: offset, 
				attachedToAnchor: anchorPoint)

Configuring Snap Behavior

The UISnapBehavior class allows a dynamic item to be “snapped” to a specific location within the reference view. When implemented, the item will move toward the snap location as though pulled by a spring and, depending on the damping property specified, oscillate several times before finally snapping into place. Until the behavior is removed from the dynamic animator, the item will continue to snap to the location when subsequently moved to another position.

The damping property can be set to any value between 0.0 and 1.0, with 1.0 specifying maximum damping (and therefore the least oscillation). The default value for damping is 0.5.

The following code configures snap behavior for dynamic item view1 with damping set to 1.0:

let point = CGPoint(x: 100, y: 100)
let snap = UISnapBehavior(item: view1, snapTo: point)
snap.damping = 1.0

animator?.addBehavior(snap)

Configuring Push Behavior

Push behavior, defined using the UIPushBehavior class, simulates the effect of pushing one or more dynamic items in a specific direction with a specified force. The force can be specified as continuous or instantaneous. In the case of a continuous push, the force is continually applied, causing the item to accelerate over time. The instantaneous push is more like a “shove” than a push in that the force is applied for a short pulse causing the item to gain velocity quickly but gradually lose momentum and eventually stop. Once an instantaneous push event has been completed, the behavior is disabled (though it can be re-enabled).

The direction of the push can be defined in radians or using x and y components. By default, the pushing force is applied to the center of the dynamic item, though, as with attachments, this can be changed to an offset relative to the center of the view.

A force of magnitude 1.0 is defined as being a force of one UIKit Newton, which equates to a view sized at 100 x 100 points with a density of 1.0 accelerating at a rate of 100 points per second². As explained in the next section, the density of a view can be configured using the UIDynamicItemBehavior class.

The following code pushes an item with instantaneous force at a magnitude of 0.2 applied on both the x and y axes, causing the view to move diagonally down and to the right:

let push = UIPushBehavior(items: [view1],
                          mode: UIPushBehaviorMode.instantaneous)
let vector = CGVector(dx: 0.2, dy: 0.2)
push.pushDirection = vector
animator?.addBehavior(push)

Continuous push behavior can be achieved by changing the mode property in the above code to UIPushBehaviorMode.continuous.

To change the point where force is applied, configure the behavior using the setTargetOffsetFromCenter(_:for:) method of the behavior object, specifying an offset relative to the center of the view. For example:

let offset = UIOffset(horizontal: 20, vertical: 20)
push.setTargetOffsetFromCenter(offset, for: view1)

In most cases, an off-center target for the pushing force will cause the item to rotate as it moves, as indicated in Figure 62-3:

Figure 62-3

The UIDynamicItemBehavior Class

The UIDynamicItemBehavior class allows additional behavior characteristics to be defined that complement a number of the above primitive behaviors. This class can, for example, be used to define the density, resistance, and elasticity of dynamic items so that they do not move as far when subjected to an instantaneous push, or bounce to a greater extent when involved in a collision. Dynamic items can also rotate by default; if rotation is not required for an item, this behavior can be turned off using a UIDynamicItemBehavior instance.

The behavioral properties of dynamic items that the UIDynamicItemBehavior class can govern are as follows:

  • allowsRotation – Controls whether or not the item is permitted to rotate during animation.
  • angularResistance – The amount by which the item resists rotation. The higher the value, the faster the item will stop rotating.
  • density – The mass of the item.
  • elasticity – The amount of elasticity an item will exhibit when involved in a collision. The greater the elasticity, the more the item will bounce.
  • friction – The resistance exhibited by an item when it slides against another item.
  • resistance – The overall resistance that the item exhibits in response to behavioral influences. The greater the value, the sooner the item will come to a complete stop during animation.

In addition, the class includes the following methods that may be used to increase or decrease the angular or linear velocity of a specified dynamic item:

  • addAngularVelocity(_:for:) – Increases or decreases the angular velocity of the specified item. Velocity is specified in radians per second, where a negative value reduces the angular velocity.
  • addLinearVelocity(_:for:) – Increases or decreases the linear velocity of the specified item. Velocity is specified in points per second, where a negative value reduces the velocity.
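For example, assuming a UIDynamicItemBehavior instance named behavior to which a dynamic item named view1 has already been added, velocity adjustments may be made as follows:

```swift
// Add 100 points per second of downward linear velocity to view1
behavior.addLinearVelocity(CGPoint(x: 0, y: 100), for: view1)

// Add an angular velocity of pi radians per second to view1
behavior.addAngularVelocity(.pi, for: view1)
```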

The following code example creates a new UIDynamicItemBehavior instance and uses it to set resistance and elasticity for two views before adding the behavior to the dynamic animator instance:

let behavior = UIDynamicItemBehavior(items: [view1, view2])
behavior.elasticity = 0.2
behavior.resistance = 0.5
animator?.addBehavior(behavior)

Combining Behaviors to Create a Custom Behavior

Multiple behaviors may be combined to create a single custom behavior using an instance of the UIDynamicBehavior class. The first step is to create and initialize each of the behavior objects. An instance of the UIDynamicBehavior class is then created, and each behavior is added to it via calls to the addChildBehavior method. Once created, only the UIDynamicBehavior instance needs to be added to the dynamic animator. For example:

// Create multiple behavior objects here

let customBehavior = UIDynamicBehavior()

customBehavior.addChildBehavior(behavior)
customBehavior.addChildBehavior(attachment)
customBehavior.addChildBehavior(gravity)
customBehavior.addChildBehavior(push)

animator?.addBehavior(customBehavior)

Summary

UIKit Dynamics provides a new way to bridge the gap between user interaction with an iOS device and corresponding animation within an app user interface. UIKit Dynamics takes a novel approach to animation by allowing view items to be configured such that they behave in much the same way as physical objects in the real world. This chapter has covered an overview of the basic concepts behind UIKit Dynamics and provided some details on how such behavior is implemented in terms of coding. The next chapter will work through a tutorial demonstrating many of these concepts.

iOS 16 Animation using UIViewPropertyAnimator

Most visual effects used throughout the iOS user interface are performed using UIKit animation. UIKit provides a simple mechanism for implementing basic animation within an iOS app. For example, if you need a user interface element to fade in or out of view gently, slide smoothly across the screen, or gracefully resize or rotate before the user’s eyes, these effects can be achieved using UIKit animation in just a few lines of code.

This chapter will introduce the basics of UIKit animation and work through a simple example. While much can be achieved with UIKit animation, if you plan to develop a graphics-intensive 3D style app, it is more likely that Metal or SceneKit will need to be used, a subject area to which numerous books are dedicated.

The Basics of UIKit Animation

The cornerstone of animation in UIKit is the UIViewPropertyAnimator class. This class allows the changes made to the properties of a view object to be animated using a range of options.

For example, consider a UIView object containing a UIButton connected to an outlet named theButton. The app requires that the button gradually fades from view over 3 seconds. This can be achieved by making the button transparent through the use of the alpha property:

theButton.alpha = 0

However, setting the alpha property to 0 causes the button to become transparent immediately. To make it fade out of sight gradually, we need to create a UIViewPropertyAnimator instance configured with the duration of the animation. This class also needs to know the animation curve, which controls the speed of the animation as it is running. For example, an animation might start slow, speed up, and then slow down again before completion. The timing curve of an animation is controlled by the UICubicTimingParameters and UISpringTimingParameters classes. For example, the following code configures a UIViewPropertyAnimator instance using the standard “ease in” animation curve applied over a 2-second duration:

let timing = UICubicTimingParameters(animationCurve: .easeIn)
let animator = UIViewPropertyAnimator(duration: 2.0,
                                      timingParameters: timing)

Once the UIViewPropertyAnimator class has been initialized, the animation sequence to be performed needs to be added, followed by a call to the object’s startAnimation method:

animator.addAnimations {
    self.theButton.alpha = 0
}
animator.startAnimation()

A range of other options is available when working with a UIViewPropertyAnimator instance. Animation may be paused or stopped anytime via calls to the pauseAnimation and stopAnimation methods. To configure the animator to call a completion handler when the animation finishes, pass the handler to the object’s addCompletion(_:) method. The animation may be reversed by assigning a true value to the isReversed property. Finally, the start of the animation may be delayed by passing through a delay duration to the startAnimation(afterDelay:) method as follows:

animator.startAnimation(afterDelay: 4.0)
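A completion handler receives the position at which the animation ended and might, as a purely illustrative sketch, be added as follows:

```swift
animator.addCompletion { position in
    // position indicates where the animation ended
    // (.end, .start or .current)
    if position == .end {
        print("Animation completed")
    }
}
```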

Understanding Animation Curves

As previously mentioned, in addition to specifying the duration of an animation sequence, the linearity of the animation timeline may also be defined by specifying an animation curve. This setting controls whether the animation is performed at a constant speed, whether it starts out slow and speeds up, and provides options for adding spring-like behavior to an animation.

The UICubicTimingParameters class is used to configure time-based animation curves. As demonstrated in the previous section, one option when using this class is to use one of the following four standard animation curves provided by UIKit:

  • .linear – The animation is performed at a constant speed for the specified duration.
  • .easeOut – The animation starts fast and slows as the end of the sequence approaches.
  • .easeIn – The animation sequence starts slow and speeds up as the end approaches. This is the option used in the earlier code example.
  • .easeInOut – The animation starts slow, speeds up, and slows down again.

If the standard options do not meet your animation needs, a custom cubic curve may be created and used as the animation curve simply by specifying control points:

let timing = UICubicTimingParameters(
    controlPoint1: CGPoint(x: 0.0, y: 1.0),
    controlPoint2: CGPoint(x: 1.0, y: 0.0))

Alternatively, property changes to a view may be animated using a spring effect via the UISpringTimingParameters class. Instances of this class can be configured using mass, spring “stiffness,” damping, and velocity values as follows:

let timing = UISpringTimingParameters(mass: 0.5, stiffness: 0.5,
    damping: 0.3, initialVelocity: CGVector(dx: 1.0, dy: 0.0))

Alternatively, the spring effect may be configured using just the damping ratio and velocity:

let timing = UISpringTimingParameters(dampingRatio: 0.4,
    initialVelocity: CGVector(dx: 1.0, dy: 0.0))

Performing Affine Transformations

Transformations allow changes to be made to the coordinate system of a screen area. This essentially allows the programmer to rotate, resize and translate a UIView object. A call is made to one of several transformation functions, and the result is assigned to the transform property of the UIView object.

For example, to change the scale of a UIView object named myView by a factor of 2 in both height and width:

myView.transform = CGAffineTransform(scaleX: 2, y: 2)

Similarly, the UIView object may be rotated using the CGAffineTransform(rotationAngle:) function, which takes as an argument the angle (in radians) by which the view is to be rotated. The following code, for example, rotates a view by 90 degrees:

let angle = CGFloat(90 * .pi / 180)
myView.transform = CGAffineTransform(rotationAngle: angle)

The key point to remember with transformations is that they become animated effects when performed within an animation sequence. The transformations evolve over the duration of the animation and follow the specified animation curve in terms of timing.
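For example, when combined with the UIViewPropertyAnimator class introduced earlier, a rotation assigned inside the animation block is animated over the animation's duration (myView and the duration value here are illustrative):

```swift
// The rotation evolves gradually over the 2-second duration
// instead of taking effect immediately
let animator = UIViewPropertyAnimator(duration: 2.0, curve: .easeInOut) {
    myView.transform = CGAffineTransform(rotationAngle: .pi)
}
animator.startAnimation()
```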

Combining Transformations

Two transformations may be combined to create a single transformation effect via a call to the concatenating method of the first transformation instance, passing through the second transformation object as an argument. The result may then be assigned to the transform property of the UIView object to be transformed. The following code fragment, for example, creates a transformation combining both scale and rotation:

let scaleTrans = CGAffineTransform(scaleX: 2, y: 2)

let angle = CGFloat(90 * .pi / 180)
let rotateTrans = CGAffineTransform(rotationAngle: angle)

myView.transform = scaleTrans.concatenating(rotateTrans)

Affine transformations offer an extremely powerful and flexible mechanism for creating animations, and it is impossible to do justice to these capabilities in a single chapter. However, a good starting place to learn about affine transformations is the Transforms chapter of Apple’s Quartz 2D Programming Guide.

Creating the Animation Example App

The remainder of this chapter is dedicated to creating an iOS app that demonstrates the use of UIKit animation. The result is a simple app on which a blue square appears. When the user touches a location on the screen, the box moves to that location using a spring-based animation curve. Through the use of affine transformations, the box will rotate 180 degrees as it moves to the new location while also changing in size and color. Finally, a completion handler will change the color a second time once the animation has finished.

Launch Xcode and create a new project using the iOS App template with the Swift and Storyboard options selected, entering Animate as the product name.

Implementing the Variables

For this app, we will need a UIView to represent the blue square and variables to contain the rotation angle and scale factor by which the square will be transformed. These need to be declared in the ViewController.swift file as follows:

import UIKit

class ViewController: UIViewController {

    var scaleFactor: CGFloat = 2
    var angle: Double = 180
    var boxView: UIView?
.
.

Drawing in the UIView

Having declared the UIView reference, we need to initialize an instance object and draw a blue square at a specific location on the screen. We also need to add boxView as a subview of the app’s main view object. These tasks only need to be performed once when the app first starts up, so a good option is within a new method to be called from the viewDidLoad method of the ViewController.swift file:

override func viewDidLoad() {
    super.viewDidLoad()
    
    initView()
}

func initView() {
    let frameRect = CGRect(x: 20, y: 20, width: 45, height: 45)
    boxView = UIView(frame: frameRect)
    
    if let view = boxView {
        view.backgroundColor = UIColor.blue
        self.view.addSubview(view)
    }
}

Detecting Screen Touches and Performing the Animation

When the user touches the screen, the blue box needs to move from its current location to the location of the touch. During this motion, the box will rotate 180 degrees and change in size. The detection of screen touches was covered in detail in An Overview of iOS 16 Multitouch, Taps, and Gestures. For this example, we want to initiate the animation at the point that the user’s finger is lifted from the screen, so we need to implement the touchesEnded method in the ViewController.swift file:

override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {

    if let touch = touches.first {
        let location = touch.location(in: self.view)
        let timing = UICubicTimingParameters(
				animationCurve: .easeInOut)
        let animator = UIViewPropertyAnimator(duration: 2.0, 
				timingParameters:timing)

        animator.addAnimations {
            let scaleTrans =
                CGAffineTransform(scaleX: self.scaleFactor,
                                  y: self.scaleFactor)
            let rotateTrans = CGAffineTransform(
                rotationAngle: CGFloat(self.angle * .pi / 180))

            self.boxView?.transform =
                scaleTrans.concatenating(rotateTrans)

            self.angle = (self.angle == 180 ? 360 : 180)
            self.scaleFactor = (self.scaleFactor == 2 ? 1 : 2)
            self.boxView?.backgroundColor = UIColor.purple
            self.boxView?.center = location
        }

        animator.addCompletion {_ in
            self.boxView?.backgroundColor = UIColor.green
        }
        animator.startAnimation()
    }
}

Before compiling and running the app, we need to take some time to describe the actions performed in the above method. First, the method gets the UITouch object from the touches argument, and the location(in:) method of this object is called to identify the location on the screen where the touch took place:

if let touch = touches.first {
    let location = touch.location(in: self.view)

An instance of the UICubicTimingParameters class is then created and configured with the standard ease-in, ease-out animation curve:

let timing = UICubicTimingParameters(animationCurve: .easeInOut)

The animation object is then created and initialized with the timing object and a duration value of 2 seconds:

let animator = UIViewPropertyAnimator(duration: 2.0, 
				timingParameters: timing)

The animation closure is then added to the animation object. The closure begins by creating two transformations for the view, one to scale the size of the view and one to rotate it 180 degrees. These transformations are then combined into a single transformation and applied to the UIView object:

let scaleTrans =
            CGAffineTransform(scaleX: self.scaleFactor,
                                   y: self.scaleFactor)
let rotateTrans = CGAffineTransform(
                     rotationAngle: CGFloat(self.angle * .pi / 180))

self.boxView?.transform = scaleTrans.concatenating(rotateTrans)

Ternary operators are then used to switch the scale and rotation angle variables ready for the next touch. In other words, after rotating 180 degrees on the first touch, the view will need to be rotated to 360 degrees on the next animation. Similarly, once the box has been scaled by a factor of 2, it needs to scale back to its original size on the next animation:

self.angle = (self.angle == 180 ? 360 : 180)
self.scaleFactor = (self.scaleFactor == 2 ? 1 : 2)

Finally, the location of the view is moved to the point on the screen where the touch occurred, and the color of the box is changed to purple:

self.boxView?.backgroundColor = UIColor.purple
self.boxView?.center = location

Next, a completion handler is assigned to the animation and implemented such that it changes the color of the box view to green:

animator.addCompletion {_ in
    self.boxView?.backgroundColor = UIColor.green
}

After the animations have been added to the animation object, the animation sequence is started:

animator.startAnimation()

Once the touchesEnded method has been implemented, it is time to try out the app.

Building and Running the Animation App

Once all the code changes have been made and saved, click on the run button in the Xcode toolbar. Once the app has compiled, it will load into the iOS Simulator or connected iOS device.

When the app loads, the blue square should appear near the top left-hand corner of the screen. Tap the screen and watch the box glide and rotate to the new location, the size and color of the box changing as it moves:

Figure 61-1

Implementing Spring Timing

The final task in this tutorial is to try out the UISpringTimingParameters class to implement a spring effect at the end of the animation. Edit the ViewController.swift file and change the timing constant so that it reads as follows:

.
.
// let timing = UICubicTimingParameters(animationCurve: .easeInOut)

let timing = UISpringTimingParameters(mass: 0.5, stiffness: 0.5, 
              damping: 0.3, initialVelocity: CGVector(dx:1.0, dy: 0.0))
.
.

Run the app once more, tap the screen, and note the spring effect on the box when it reaches the end location in the animation sequence.

Summary

UIKit animation provides an easy-to-implement interface to animation within iOS apps. From the simplest of tasks, such as gracefully fading out a user interface element, to basic animation and transformations, UIKit animation provides a variety of techniques for enhancing user interfaces. This chapter covered the basics of UIKit animation, including the UIViewPropertyAnimator, UISpringTimingParameters, and UICubicTimingParameters classes, before working step-by-step through an example to demonstrate the implementation of motion, rotation, and scaling animation.

An iOS Graphics Tutorial using Core Graphics and Core Image

As previously discussed in Drawing iOS 2D Graphics with Core Graphics, the Quartz 2D API is the primary mechanism by which 2D drawing operations are performed within iOS apps. Having provided an overview of Quartz 2D as it pertains to iOS development in that chapter, this chapter works through a series of examples demonstrating how 2D drawing is performed. If you are new to Quartz 2D and have not yet read Drawing iOS 2D Graphics with Core Graphics, it is recommended to do so before embarking on this tutorial.

The iOS Drawing Example App

If you are reading this book sequentially and have created the LiveViewDemo project as outlined in the chapter entitled Interface Builder Live Views and iOS 16 Embedded Frameworks, then the code in this chapter may be placed in the draw method contained within the MyDrawView.swift file and the results viewed dynamically within the live view in the Main.storyboard file. On the other hand, if you have not yet completed the Interface Builder Live Views and iOS 16 Embedded Frameworks chapter, follow the steps in the next three sections to create a new project, add a UIView subclass, and locate the draw method.

Creating the New Project

The app created in this tutorial will contain a subclassed UIView component within which the draw method will be overridden and used to perform various 2D drawing operations. Launch Xcode and create a new project using the iOS App template with the Swift and Storyboard options selected, entering Draw2D as the product name.

Creating the UIView Subclass

To draw graphics on the view, it is necessary to create a subclass of the UIView object and override the draw method. In the project navigator panel on the left-hand side of the main Xcode window, right-click on the Draw2D folder entry and select New File… from the resulting menu. In the New File window, select the iOS source Cocoa Touch Class icon and click Next. On the subsequent options screen, change the Subclass of menu to UIView and the class name to Draw2D. Click Next, and on the final screen, click on the Create button.

Select the Main.storyboard file followed by the UIView component in either the view controller canvas or the document outline panel. Display the Identity Inspector and change the Class setting from UIView to our new class named Draw2D:

Figure 60-1

Locating the draw Method in the UIView Subclass

Now that we have subclassed our app’s UIView, the next step is implementing the draw method in this subclass. Fortunately, Xcode has already created a template for this method for us. Select the Draw2D.swift file in the project navigator panel to locate this method. Having located the method in the file, remove the comment markers (/* and */) within which it is currently encapsulated:

import UIKit

class Draw2D: UIView {

    override func draw(_ rect: CGRect) {
        // Drawing code
    }
}

In the remainder of this tutorial, we will modify the code in the draw method to perform various drawing operations.

Drawing a Line

To draw a line on a device screen using Quartz 2D, we first need to obtain the graphics context for the view:

let context = UIGraphicsGetCurrentContext()

Once the context has been obtained, the width of the line we plan to draw needs to be specified:

context?.setLineWidth(3.0)

Next, we need to create a color reference. We can do this by specifying the RGBA components of the required color (in this case, opaque blue):

let colorSpace = CGColorSpaceCreateDeviceRGB()
let components: [CGFloat] = [0.0, 0.0, 1.0, 1.0]
let color = CGColor(colorSpace: colorSpace, components: components)

Using the color reference and the context, we can now specify that the color is to be used when drawing the line:

context?.setStrokeColor(color!)

The next step is to move to the start point of the line that is going to be drawn:

context?.move(to: CGPoint(x: 50, y: 50))

The above line of code positions the start point of the line 50 points in from the left-hand edge and 50 points down from the top of the view, near the top left-hand corner of the device display. We now need to specify the endpoint of the line, in this case, 300, 400:

context?.addLine(to: CGPoint(x: 300, y: 400))

Having defined the line width, color, and path, we are ready to draw the line:

context?.strokePath()

Bringing this all together gives us a draw method that reads as follows:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(3.0)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let components: [CGFloat] = [0.0, 0.0, 1.0, 1.0]
    let color = CGColor(colorSpace: colorSpace, components: components)
    context?.setStrokeColor(color!)
    context?.move(to: CGPoint(x: 50, y: 50))
    context?.addLine(to: CGPoint(x: 300, y: 400))
    context?.strokePath()
}

When compiled and run, the app should display as illustrated in Figure 60-2:

Figure 60-2

Note that we manually created the colorspace and color reference in the above example. As described in Drawing iOS 2D Graphics with Core Graphics, colors can also be created using the UIColor class. For example, the same result as outlined above can be achieved with fewer lines of code as follows:

override func draw(_ rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(3.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    context?.move(to: CGPoint(x: 50, y: 50))
    context?.addLine(to: CGPoint(x: 300, y: 400))
    context?.strokePath()
}

Drawing Paths

As you may have noticed, we draw a single line in the above example by defining the path between two points. Defining a path comprising multiple points allows us to draw using a sequence of straight lines connected using repeated calls to the addLine(to:) context method. Non-straight lines may also be added to a shape using calls to, for example, the addArc method.

The following code, for example, draws a diamond shape:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(3.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    context?.move(to: CGPoint(x:100, y: 100))
    context?.addLine(to: CGPoint(x: 150, y: 150))
    context?.addLine(to: CGPoint(x: 100, y: 200))
    context?.addLine(to: CGPoint(x: 50, y: 150))
    context?.addLine(to: CGPoint(x: 100, y: 100))
    context?.strokePath()
}

When executed, the above code should produce output that appears as shown in Figure 60-3:

Figure 60-3

Drawing a Rectangle

Rectangles are drawn in much the same way as any other path is drawn, with the exception that the path is defined by specifying the x and y coordinates of the top left-hand corner of the rectangle together with the rectangle’s height and width. These dimensions are stored in a CGRect structure and passed through as an argument to the addRect method:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(4.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    let rectangle = CGRect(x: 90,y: 100,width: 200,height: 80)
    context?.addRect(rectangle)
    context?.strokePath()
}

The above code will result in the following display when compiled and executed:

Figure 60-4

Drawing an Ellipse or Circle

Circles and ellipses are drawn by defining the rectangular area into which the shape must fit and then calling the addEllipse(in:) context method:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(4.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    let rectangle = CGRect(x: 85,y: 100,width: 200,height: 80)
    context?.addEllipse(in: rectangle)
    context?.strokePath()
}

When compiled, the above code will produce the following graphics:

Figure 60-5

To draw a circle, simply define a rectangle with equal-length sides (a square, in other words).
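To illustrate, the following variation on the above example strokes a circle by passing a square bounding rectangle to addEllipse(in:). The 150-point dimensions are arbitrary values chosen for this sketch:

```swift
override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(4.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    // A square bounding rectangle causes addEllipse(in:) to produce a circle
    let square = CGRect(x: 85, y: 100, width: 150, height: 150)
    context?.addEllipse(in: square)
    context?.strokePath()
}
```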

Filling a Path with a Color

A path may be filled with color using a variety of Quartz 2D API functions. For example, rectangular and elliptical paths may be filled using the fill(_:) and fillEllipse(in:) context methods, respectively. Similarly, an arbitrary path may be filled using the fillPath method. Before executing a fill operation, the fill color must be specified using the setFillColor method.
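As a brief sketch of the ellipse variant, which is not otherwise demonstrated in this chapter, a filled ellipse might be drawn as follows (the dimensions are arbitrary values chosen for illustration):

```swift
override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setFillColor(UIColor.red.cgColor)
    let rectangle = CGRect(x: 85, y: 100, width: 200, height: 80)
    // Fill the elliptical area directly without stroking a path
    context?.fillEllipse(in: rectangle)
}
```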

The following example defines a path and then fills it with the color red:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.move(to: CGPoint(x: 100, y: 100))
    context?.addLine(to: CGPoint(x: 150, y: 150))
    context?.addLine(to: CGPoint(x: 100, y: 200))
    context?.addLine(to: CGPoint(x: 50, y: 150))
    context?.addLine(to: CGPoint(x: 100, y: 100))
    context?.setFillColor(UIColor.red.cgColor)
    context?.fillPath()
}

The above code produces the following graphics on the device or simulator display when executed:

Figure 60-6

The following code draws a rectangle with a blue border and then once again fills the rectangular space with red:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(4.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    let rectangle = CGRect(x: 85,y: 100,width: 200,height: 80)
    context?.addRect(rectangle)
    context?.strokePath()
    context?.setFillColor(UIColor.red.cgColor)
    context?.fill(rectangle)
}

When added to the example app, the resulting display should appear as follows:

Figure 60-7

Drawing an Arc

An arc may be drawn by specifying two tangent points and a radius using the addArc(tangent1End:tangent2End:radius:) context method, for example:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(4.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    context?.move(to: CGPoint(x: 100, y: 100))
    context?.addArc(tangent1End: CGPoint(x: 100, y: 200), 
		tangent2End: CGPoint(x: 300, y: 200), radius: 100)
    context?.strokePath()
}

The above code will result in the following graphics output:

Figure 60-8

Drawing a Cubic Bézier Curve

A cubic Bézier curve may be drawn by moving to a start point and then passing two control points and an end point through to the addCurve(to:control1:control2:) method:

override func draw(_ rect: CGRect) 
{
        let context = UIGraphicsGetCurrentContext()
        context?.setLineWidth(4.0)
        context?.setStrokeColor(UIColor.blue.cgColor)
        context?.move(to: CGPoint(x: 30, y: 30))
        context?.addCurve(to: CGPoint(x: 20, y: 50),
                          control1: CGPoint(x: 300, y: 250),
                          control2: CGPoint(x: 300, y: 70))
        context?.strokePath()
}

The above code will cause the curve illustrated in Figure 60-9 to be drawn when compiled and executed in our example app:

Figure 60-9

Drawing a Quadratic Bézier Curve

A quadratic Bézier curve is drawn using the addQuadCurve(to:control:) method, providing a control point and an end point as arguments, having first moved to the start point:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(4.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    context?.move(to: CGPoint(x: 10, y: 200))
    context?.addQuadCurve(to: CGPoint(x: 300, y: 200), 
		control: CGPoint(x: 150, y: 10))
    context?.strokePath()
}

The above code, when executed, will display a curve that appears as illustrated in the following figure:

Figure 60-10

Dashed Line Drawing

So far in this chapter, we have performed all our drawing with a solid line. Quartz also provides support for drawing dashed lines. This is achieved via the setLineDash(phase:lengths:) context method, which takes as its arguments the following:

  • phase – A floating point value that specifies how far into the dash pattern the line starts
  • lengths – An array containing values for the lengths of the painted and unpainted sections of the line. For example, an array containing 5 and 6 would cycle through 5 painted unit spaces followed by 6 unpainted unit spaces.

For example, a [2,6,4,2] lengths array applied to a curve drawn with a line thickness of 20.0 will appear as follows:

Figure 60-11

The corresponding draw method code that drew the above line reads as follows:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    context?.setLineWidth(20.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    let dashArray:[CGFloat] = [2,6,4,2]
    context?.setLineDash(phase: 3, lengths: dashArray)
    context?.move(to: CGPoint(x: 10, y: 200))
    context?.addQuadCurve(to: CGPoint(x: 300, y: 200), 
		control: CGPoint(x: 150, y: 10))
    context?.strokePath()
}

Drawing Shadows

In addition to drawing shapes, Core Graphics can also be used to create shadow effects. This is achieved by calling the setShadow method on the graphics context, passing through offset values for the position of the shadow relative to the shape for which the shadow is being drawn, together with a value specifying the degree of blurring required for the shadow effect.

The following code, for example, draws an ellipse with a shadow:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    let myShadowOffset = CGSize (width: -10,  height: 15)

    context?.saveGState()
    context?.setShadow(offset: myShadowOffset, blur: 5)
    context?.setLineWidth(4.0)
    context?.setStrokeColor(UIColor.blue.cgColor)
    let rectangle = CGRect(x: 60,y: 170,width: 200,height: 80)
    context?.addEllipse(in: rectangle)
    context?.strokePath()
    context?.restoreGState()
}

When executed, the above code will produce the effect illustrated in Figure 60-12:

Figure 60-12

Drawing Gradients

Gradients are implemented using the Core Graphics CGGradient class, which supports axial (linear) and radial gradients. Creating a CGGradient instance essentially involves specifying two or more colors together with a set of location values. The location values indicate the points at which the gradient should switch from one color to another as the gradient is drawn along an axis line, where 0.0 represents the start of the axis and 1.0 the endpoint. Assume, for example, that you wish to create a gradient that transitions through three different colors along the gradient axis, with each color being given an equal amount of space within the gradient. In this situation, three locations would be specified. The first would be 0.0 to represent the start of the gradient. Two more locations would then need to be specified for the transition points to the remaining colors. Finally, to divide the axis equally among the colors, these would need to be set to 0.3333 and 0.6666, respectively.
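The three-color configuration described above might, for example, be set up as follows (the color choices here are arbitrary):

```swift
// Each color occupies an equal third of the gradient axis
let locations: [CGFloat] = [0.0, 0.3333, 0.6666]

let colors = [UIColor.red.cgColor,
              UIColor.green.cgColor,
              UIColor.blue.cgColor]

let colorspace = CGColorSpaceCreateDeviceRGB()

let gradient = CGGradient(colorsSpace: colorspace,
                colors: colors as CFArray, locations: locations)
```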

Having configured a CGGradient instance, a linear gradient is drawn via a call to the drawLinearGradient method of the context object, passing through the gradient object together with start and end points and a set of options as arguments.

The following code, for example, draws a linear gradient using four colors with four equally spaced locations:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()

    let locations: [CGFloat] = [ 0.0, 0.25, 0.5, 0.75 ]

    let colors = [UIColor.red.cgColor,
                  UIColor.green.cgColor,
                  UIColor.blue.cgColor,
                  UIColor.yellow.cgColor]

    let colorspace = CGColorSpaceCreateDeviceRGB()

    let gradient = CGGradient(colorsSpace: colorspace,
                  colors: colors as CFArray, locations: locations)

    var startPoint = CGPoint()
    var endPoint =  CGPoint()

    startPoint.x = 0.0
    startPoint.y = 0.0
    endPoint.x = 600
    endPoint.y = 600

    if let gradient = gradient {
        context?.drawLinearGradient(gradient,
                                start: startPoint, end: endPoint,
                                options: .drawsBeforeStartLocation)
    }
}

When executed, the above code will generate the gradient shown in Figure 60-13:

Figure 60-13

Radial gradients involve drawing a gradient between two circles. When the circles are positioned apart from each other and given different sizes, a conical effect is achieved, as shown in Figure 60-14:

Figure 60-14

The code to draw the above radial gradient sets up the colors and locations for the gradient before declaring the center points and radius values for two circles. The gradient is then drawn via a call to the drawRadialGradient method:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()

    let locations: [CGFloat] = [0.0, 0.5, 1.0]

    let colors = [UIColor.red.cgColor,
                  UIColor.green.cgColor,
                  UIColor.cyan.cgColor]

    let colorspace = CGColorSpaceCreateDeviceRGB()

    let gradient = CGGradient(colorsSpace: colorspace,
                    colors: colors as CFArray, locations: locations)

    var startPoint =  CGPoint()
    var endPoint  = CGPoint()

    startPoint.x = 100
    startPoint.y = 100
    endPoint.x = 200
    endPoint.y = 200
    let startRadius: CGFloat = 10
    let endRadius: CGFloat = 75

    if let gradient = gradient {
        context?.drawRadialGradient(gradient, startCenter: startPoint,
                                startRadius: startRadius, 
                                endCenter: endPoint,
                                endRadius: endRadius, options: [])
    }
}

Interesting effects may also be created by assigning a radius of 0 to the starting point circle and positioning it within the circumference of the endpoint circle:

override func draw(_ rect: CGRect)
{
    let context = UIGraphicsGetCurrentContext()
    let locations: [CGFloat] = [0.0, 1.0]

    let colors = [UIColor.white.cgColor,
                      UIColor.blue.cgColor]

    let colorspace = CGColorSpaceCreateDeviceRGB()

    let gradient = CGGradient(colorsSpace: colorspace,
                    colors: colors as CFArray, locations: locations)

    var startPoint = CGPoint()
    var endPoint = CGPoint()
    startPoint.x = 180
    startPoint.y = 180
    endPoint.x = 200
    endPoint.y = 200
    let startRadius: CGFloat = 0
    let endRadius: CGFloat = 75

    if let gradient = gradient {
        context?.drawRadialGradient (gradient, startCenter: startPoint,
                                 startRadius: startRadius, 
                                 endCenter: endPoint,
                                 endRadius: endRadius,
                                 options: .drawsBeforeStartLocation)
    }
}

When executed, the above code creates the appearance of light reflecting on the surface of a shiny blue sphere:

Figure 60-15

Drawing an Image into a Graphics Context

An image may be drawn into a graphics context either by specifying the coordinates of the top left-hand corner of the image (in which case the image will appear full size) or resized so that it fits into a specified rectangular area. Before we can display an image in our example app, that image must first be added to the project resources.

Begin by locating the desired image using the Finder and then drag and drop that image onto the project navigator panel of the Xcode main project window.

The following example draw method code displays the image in a file named myImage.png full size, located at 0, 0:

override func draw(_ rect: CGRect)
{
    let myImage = UIImage(named: "myImage.png")
    let imagePoint = CGPoint(x: 0, y: 0)
    myImage?.draw(at: imagePoint)
}

As is evident when the app is run, the size of the image far exceeds the available screen size:

Figure 60-16

Using the draw method of the UIImage object, however, we can scale the image to fit better on the screen. In this instance, it is useful to identify the screen size since this changes depending on the device on which the app is running. This can be achieved using the main and bounds properties of the UIScreen class. The main property returns a UIScreen object representing the device display. Accessing the bounds property of that object returns the dimensions of the display in the form of a CGRect object:

override func draw(_ rect: CGRect)
{
    let myImage = UIImage(named: "myImage.png")
    let imageRect = UIScreen.main.bounds
    myImage?.draw(in: imageRect)
}

This time, the entire image fits comfortably on the screen:

Figure 60-17

Image Filtering with the Core Image Framework

Having covered the concept of displaying images within an iOS app, now is a good time to provide a basic overview of the Core Image Framework.

Core Image was introduced with iOS 5 and provides a mechanism for filtering and manipulating still images and videos. Included with Core Image is a wide range of filters, together with the ability to build custom filters to meet specific requirements. Examples of filters that may be applied include cropping, color effects, blurring, warping, transformations, and gradients. A full list of filters is available in Apple’s Core Image Filter Reference document, located in the Apple Developer portal.

A CIImage object is typically initialized with a reference to the image to be manipulated. A CIFilter object is then created and configured with the type of filtering to be performed, together with any input parameters required by that filter. The CIFilter object is then instructed to perform the operation, and the modified image is subsequently returned as a CIImage object. The app’s CIContext reference may then be used to render the image for display to the user.

By way of an example of Core Image in action, we will modify the draw method of our Draw2D example app to render the previously displayed image in a sepia tone using the CISepiaTone filter. The first step, however, is to add the CoreImage Framework to the project. This is achieved by selecting the Draw2D target at the top of the project navigator and then selecting the Build Phases tab in the main panel. Next, unfold the Link Binary with Libraries section of the panel, click the + button, and locate and add the CoreImage.framework library from the resulting list.

Having added the framework, select the Draw2D.swift file and modify the draw method as follows:

override func draw(_ rect: CGRect) {
    
    if let myimage = UIImage(named: "myImage.png"), 
       let sepiaFilter = CIFilter(name: "CISepiaTone") {
        
        let cimage = CIImage(image: myimage)
        
        sepiaFilter.setDefaults()
        sepiaFilter.setValue(cimage, forKey: "inputImage")
        sepiaFilter.setValue(NSNumber(value: 0.8 as Float),
                              forKey: "inputIntensity")
        
        let image = sepiaFilter.outputImage
        
        let context = CIContext(options: nil)
        
        let cgImage = context.createCGImage(image!,
                                            from: image!.extent)
        
        let resultImage = UIImage(cgImage: cgImage!)
        let imageRect = UIScreen.main.bounds
        resultImage.draw(in: imageRect)
    }
}

The method begins by loading the image file used in the previous section of this chapter. Since Core Image works on CIImage objects, it is necessary to convert the UIImage to a CIImage. Next, a new CIFilter object is created and initialized with the CISepiaTone filter. The filter is then set to the default settings before being configured with the input image (in this case, our cimage object) and the filter’s intensity value (0.8).

With the filter object configured, its outputImage property is accessed to perform the manipulation, and the resulting modified image is assigned to a new CIImage object. A CIContext instance is then created and used to convert the CIImage object to a CGImage object. This, in turn, is converted to a UIImage object which is then displayed to the user using the object’s draw method. When compiled and run, the image will appear in a sepia tone.

Summary

By subclassing the UIView class and overriding the draw method, various 2D graphics drawing operations may be performed on the view canvas. In this chapter, we have explored some of the graphics drawing capabilities of Quartz 2D to draw various line types and paths and present images on the iOS device screen.

Introduced in iOS 5, the Core Image Framework is designed to filter and manipulate images and video. In this chapter, we have provided a brief overview of Core Image and worked through a simple example that applied a sepia tone filter to an image.

Interface Builder Live Views and iOS 16 Embedded Frameworks

Two related areas of iOS development will be covered in this chapter in the form of Live Views in Interface Builder and Embedded Frameworks, both designed to make the tasks of sharing common code between projects and designing dynamic user interfaces easier.

Embedded Frameworks

Apple defines a framework as “a collection of code and resources to encapsulate functionality that is valuable across multiple projects.” A typical iOS app project will use many Frameworks from the iOS SDK. For example, all apps use the Foundation Framework, while a game might also use the SpriteKit Framework.

Embedded Frameworks allow developers to create their own frameworks. Embedded frameworks are easy to create and provide several advantages, the most obvious of which is the ability to share common code between multiple app projects.

Embedded Frameworks are particularly useful when working with extensions. By nature, an extension will inevitably need to share code that already exists within the containing app. Rather than duplicate code between the app and the extension, a better solution is to place common code into an embedded framework.

Another benefit of embedded frameworks is the ability to publish code in the form of 3rd party frameworks that can be downloaded for use by other developers in their own projects.

However, one of the more intriguing features of embedded frameworks is that they facilitate a powerful feature of Interface Builder known as Live Views.

Interface Builder Live Views

Traditionally, designing a user interface layout using Interface Builder has involved placing static representations of view components onto a canvas. The app logic behind these views to implement dynamic behavior is then implemented within the view controller, and the app is compiled and run on a device or simulator to see the live user interface in action.

Live views allow the dynamic code behind the views to be executed from within the Interface Builder storyboard file as the user interface is being designed without compiling and running the app.

Live views also allow variables within the code behind a view to be exposed so that they can be accessed and modified in the Interface Builder Attributes Inspector panel, with the changes reflected in real-time within the live view.

The reason embedded frameworks and live views are covered in this chapter is that a prerequisite for live views is for the underlying code for a live view to be contained within an embedded framework.

The best way to better understand both embedded frameworks and live views is to see them in action in an example project.

Creating the Example Project

Launch Xcode and create a new project using the iOS App template with the Swift and Storyboard options selected, entering LiveViewDemo as the product name.

When the project has been created, select the Main.storyboard file and drag and drop a View object from the Library panel onto the view controller canvas. Resize the view, stretching it in each dimension until the blue dotted line indicates the recommended margin. Display the Auto Layout Add New Constraints menu and enable “spacing to nearest neighbor” constraints on all four sides of the view with the Constrain to margins option enabled as shown in Figure 59-1 before clicking on the Add 4 Constraints button:

Figure 59-1

Once the above steps are complete, the layout should resemble that illustrated in Figure 59-2 below:

Figure 59-2

With the user interface designed, the next step is to add a framework to the project.

Adding an Embedded Framework

The framework to be added to the project will contain a UIView subclass containing some graphics drawing code.

Within Xcode, select the File -> New -> Target… menu option and, in the template selection panel, scroll to and select the Framework template (Figure 59-3):

Figure 59-3

Click on the Next button and, on the subsequent screen, enter MyDrawKit into the product name field before clicking on the Finish button.

Within the project navigator panel, a new folder named MyDrawKit will have been added, into which the files that make up the new framework will be stored. Ctrl-click on this entry and select the New File… menu option. In the template chooser panel, select Cocoa Touch Class before clicking on Next.

On the next screen, name the class MyDrawView and configure it as a subclass of UIView. Then, click the Next button and save the new class file into the MyDrawKit subfolder of the project directory.

Select the Main.storyboard file in the project navigator panel and click on the View object added in the previous section. Display the Identity Inspector in the Utilities panel and change the Class setting from UIView to MyDrawView:

Figure 59-4

Implementing the Drawing Code in the Framework

The code to perform the graphics drawing on the View will reside in the MyDrawView.swift file in the MyDrawKit folder. Locate this file in the project navigator panel and double-click on it to load it into a separate editing window (thereby allowing the Main.storyboard file to remain visible in Interface Builder).

Remove the comment markers (/* and */) from around the template draw method and implement the code for this method so that it reads as follows:

import UIKit
import QuartzCore

class MyDrawView: UIView {

    var startColor: UIColor = UIColor.white
    var endColor: UIColor = UIColor.blue
    var endRadius: CGFloat = 100

    override func draw(_ rect: CGRect) {
        let context = UIGraphicsGetCurrentContext()
        
        let colorspace = CGColorSpaceCreateDeviceRGB()
        let locations: [CGFloat] = [ 0.0, 1.0]
        
        if let gradient = CGGradient(colorsSpace: colorspace,
                colors: [startColor.cgColor, endColor.cgColor] as CFArray,
                locations: locations) {
        
            var startPoint = CGPoint()
            var endPoint = CGPoint()
            
            let startRadius: CGFloat = 0
            
            startPoint.x = 130
            startPoint.y = 100
            endPoint.x = 130
            endPoint.y = 120
            
            context?.drawRadialGradient(gradient,
                   startCenter: startPoint, startRadius: startRadius,
                   endCenter: endPoint, endRadius: endRadius,
                   options: .drawsBeforeStartLocation)
        }
    }
}

Making the View Designable

At this point, the code has been added, and running the app on a device or simulator will show the view with the graphics drawn on it. The object of this chapter, however, is to avoid the need to compile and run the app to see the results of the code. To make the view “live” within Interface Builder, the class must be declared as being “Interface Builder designable.” This is achieved by adding an @IBDesignable directive immediately before the class declaration in the MyDrawView.swift file:

import UIKit
import QuartzCore

@IBDesignable
class MyDrawView: UIView {

    var startColor: UIColor = UIColor.white
    var endColor: UIColor = UIColor.blue
    var endRadius: CGFloat = 100
.
.
}

As soon as the directive is added to the file, Xcode will compile the class and render it within the Interface Builder storyboard canvas (Figure 59-5):

Figure 59-5

Changes to the MyDrawView code will now be reflected in the Interface Builder live view. To see this in action, right-click on the MyDrawView.swift file and select the Open in New Window entry in the resulting menu. Then, with the Main storyboard scene visible, change the endColor variable declaration in the MyDrawView.swift file so that it is assigned a different color and observe the color change take effect in the Interface Builder live view:

var endColor: UIColor = UIColor.red

Making Variables Inspectable

Although it is possible to modify variables by editing the code in the framework class, it would be easier if they could be changed just like any other property using the Attributes Inspector panel. This can be achieved simply by prefixing the variable declarations with the @IBInspectable directive as follows:

@IBDesignable
class MyDrawView: UIView {

    @IBInspectable var startColor: UIColor = UIColor.white
    @IBInspectable var endColor: UIColor = UIColor.red
    @IBInspectable var endRadius: CGFloat = 100
.
.
}

With these changes made to the code, select the View in the storyboard file and display the Attributes Inspector panel. The properties should now be listed for the view (Figure 59-6) and can be modified. Any changes to these variables made through the Attributes Inspector will take effect in real time without requiring Xcode to recompile the framework code. These settings will also take effect when the app is compiled and run on a device or simulator.

Figure 59-6

Summary

This chapter has introduced two concepts: embedded frameworks and Interface Builder live views. Embedded frameworks allow developers to place source code into frameworks that can be shared between multiple app projects. Embedded frameworks also provide the basis for the live views feature of Interface Builder. Before the introduction of live views, it was necessary to compile and run an app to see dynamic user interface behavior in action. With live views, the dynamic behavior of a view can now be seen within Interface Builder, with code changes reflected in real-time.