RHEL 9 System and Process Monitoring

An essential part of running and administering an RHEL 9 system involves monitoring the overall system health regarding memory, swap, storage, and processor usage. This includes knowing how to inspect and manage the system and user processes running in the background. This chapter will outline some tools and utilities that can be used to monitor system resources and processes on an RHEL 9 system.

Managing Processes

Even when an RHEL 9 system appears idle, many system processes will run silently in the background to keep the operating system functioning. For example, when you execute a command or launch an app, user processes are started, running until the associated task is completed.

To obtain a list of active user processes you are currently running within the context of a single terminal or command-prompt session, use the ps command as follows:

$ ps
  PID TTY          TIME CMD
10395 pts/1    00:00:00 bash
13218 pts/1    00:00:00 ps

The output from the ps command shows that two user processes are running within the context of the current terminal window or command prompt session: the bash shell into which the command was entered and the ps command itself.

To list active processes beyond those in the current session, use the ps command with the -a flag. This command will list all running processes that are associated with a terminal, regardless of where they are running (for example, processes started in other terminal windows):

$ ps -a
    PID TTY          TIME CMD
   5442 tty2     00:00:00 gnome-session-b
   6350 pts/0    00:00:00 sudo
   6354 pts/0    00:00:00 su
   6355 pts/0    00:00:00 bash
   9849 pts/2    00:00:00 nano
   9850 pts/1    00:00:00 ps

As shown in the above output, the user is running processes related to the GNOME desktop, the shell session, the nano text editor, and the ps command.

To list the processes for a specific user, run ps with the -u flag followed by the user name:

# ps -u john
  PID TTY          TIME CMD
  914 ?        00:00:00 systemd
  915 ?        00:00:00 (sd-pam)
  970 ?        00:00:00 gnome-keyring-d
  974 tty1     00:00:00 gdm-x-session
.
.

Note that each process is assigned a unique process ID which can be used to stop the process by sending it a termination (TERM) signal via the kill command. For example:

$ kill 13217

The advantage of ending a process with the TERM signal is that it allows the process to exit gracefully, potentially saving any data that might otherwise be lost.
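
If the process ID is not known, the pgrep and pkill utilities (both part of the standard procps-ng package) may be used to look up and signal processes by name instead. A brief sketch, assuming the nano editor from the earlier example is still running:

$ pgrep nano
9849
$ pkill nano

By default, pkill sends the same TERM signal as kill.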

If the standard termination signal does not terminate the process, repeat the kill command with the -9 option. This command sends a KILL signal which should cause even frozen processes to exit but does not give the process a chance to exit gracefully, possibly resulting in data loss:

$ kill -9 13217

To list all of the processes running on a system (including all user and system processes), execute the following command:

$ ps -ax
    PID TTY      STAT   TIME COMMAND
      1 ?        Ss     0:22 /usr/lib/systemd/systemd rhgb --switched-root
      2 ?        S      0:00 [kthreadd]
      3 ?        I<     0:00 [rcu_gp]
      4 ?        I<     0:00 [rcu_par_gp]
      5 ?        I<     0:00 [netns]

To list all processes and include information about process ownership, CPU, and memory use, execute the ps command with the -aux option (the values shown below are illustrative):

$ ps -aux
USER         PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root           1  0.0  0.4 107032 16512 ?        Ss   09:05   0:22 /usr/lib/systemd/systemd rhgb --switched-root
root           2  0.0  0.0      0     0 ?        S    09:05   0:00 [kthreadd]
root           3  0.0  0.0      0     0 ?        I<   09:05   0:00 [rcu_gp]
root           4  0.0  0.0      0     0 ?        I<   09:05   0:00 [rcu_par_gp]
root           5  0.0  0.0      0     0 ?        I<   09:05   0:00 [netns]
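
The ps output may also be sorted by resource usage. For example, the following command (using the --sort option supported by the ps version shipped with RHEL 9) lists the processes consuming the most memory first:

$ ps -aux --sort=-%mem | head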

A Linux process can start its own sub-processes (referred to as spawning), resulting in a hierarchical parent-child relationship between processes. To view the process tree, use the ps command and include the -H option. Below is part of the tree output for a ps -aH command execution:

$ ps -aH
    PID TTY          TIME CMD
  10036 pts/3    00:00:00 ps
   6350 pts/0    00:00:00 sudo
   6354 pts/0    00:00:00   su
   6355 pts/0    00:00:00     bash
   5442 tty2     00:00:00 gnome-session-b

Process information may also be viewed via the System Monitor tool from the GNOME desktop. This tool can either be launched by searching for “System Monitor” within the desktop environment or from the command line as follows:

$ gnome-system-monitor

Once the System Monitor has launched, select the Processes button located in the toolbar to list the processes running on the system, as shown in Figure 34-1 below:

Figure 34-1

To change the processes listed (for example, to list all processes or just your own processes), use the menu as illustrated in Figure 34-2:

Figure 34-2

To filter the list of processes, click on the search button in the title bar and enter the process name into the search field:

Figure 34-3

To display additional information about a specific process, select it from the list and click on the button located in the bottom right-hand corner (marked A in Figure 34-4) of the dialog:

Figure 34-4

A dialog similar to that marked B in the above figure will appear when the button is clicked. Select a process from the list and click the End Process button (C) to terminate it.

To monitor CPU, memory, swap, and network usage, click on the Resources button in the title bar to display the screen shown in Figure 34-5:

Figure 34-5

Similarly, a summary of storage space used on the system can be viewed by selecting the File Systems toolbar button:

Figure 34-6

Real-time System Monitoring with top

As the chapter An Overview of the RHEL 9 Cockpit Web Interface outlined, the Cockpit web interface can perform basic system monitoring, and the previous section explained how the GNOME System Monitor tool can be used to monitor processes and system resources. Earlier in this chapter, we also saw how the ps command provides a snapshot of the processes running on an RHEL 9 system. However, ps does not provide a real-time view of the processes and resource usage on the system. The top command is an ideal tool for real-time monitoring of system resources and processes from the command prompt.

When running, top will list the processes running on the system ranked by system resource usage (with the most demanding process in the top position). The upper section of the screen displays memory and swap usage information together with CPU data for all CPU cores. All of this output is constantly updated, allowing the system to be monitored in real-time:

Figure 34-7

To limit the information displayed to the processes belonging to a specific user, start top with the -u option followed by the user name:

$ top -u john

For a complete listing of the features available in top, press the keyboard ‘h’ key or refer to the man page:

$ man top
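
top can also be run non-interactively in batch mode, which is helpful when capturing a snapshot of system activity from a script or saving it to a file. For example, the following command performs a single iteration and writes the output to a file (the file name is arbitrary):

$ top -b -n 1 > /tmp/top-snapshot.txt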

Command-Line Disk and Swap Space Monitoring

Disk space can be monitored from within Cockpit and using the GNOME System Monitor. To identify disk usage from the command line, however, the df command provides a helpful and quick overview:

# df -h
Filesystem                            Size  Used Avail Use% Mounted on
devtmpfs                              4.0M     0  4.0M   0% /dev
tmpfs                                 1.8G     0  1.8G   0% /dev/shm
tmpfs                                 704M  9.6M  694M   2% /run
/dev/mapper/rhel-root00                70G  6.3G   64G   9% /
/dev/mapper/rhel-home00               224G  1.8G  222G   1% /home
/dev/sda1                            1014M  290M  725M  29% /boot
tmpfs                                 352M  144K  352M   1% /run/user/1000
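
When a filesystem reported by df is close to full, the du command can help identify which directories are consuming the space. For example, the following lists the largest top-level directories under /home in descending order (the path is only an example):

# du -sh /home/* | sort -hr | head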

To review current swap space and memory usage, run the free command:

# free
              total        used        free      shared  buff/cache   available
Mem:        3823720      879916     1561108      226220     1382696     2476300
Swap:       2097148           0     2097148

To continuously monitor memory and swap levels, use the free command with the -s option, specifying the delay in seconds between each update (keeping in mind that the top tool may provide a better way to view this data in real time):

$ free -s 1
              total        used        free      shared  buff/cache   available
Mem:        3823720      879472     1561532      226220     1382716     2476744
Swap:       2097148           0     2097148

              total        used        free      shared  buff/cache   available
Mem:        3823720      879140     1559940      228144     1384640     2475152
Swap:       2097148           0     2097148
.
.
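
The free command also supports human-readable output and a count option that stops the updates after a fixed number of iterations. For example, the following displays three readings at five-second intervals:

$ free -h -s 5 -c 3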

To monitor disk I/O from the command line, consider using the iotop command, which can be installed as follows:

# dnf install iotop

Once installed and executed (iotop must be run with system administrator privileges), the tool will display a real-time list of disk I/O on a per-process basis:

Figure 34-8
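
By default, iotop lists all processes regardless of whether they are currently performing any I/O. To restrict the display to processes that are actively reading or writing, add the -o (only) option:

# iotop -o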

Summary

Even a system that appears to be doing nothing will have many system processes running in the background. Activities performed by users on the system will result in additional processes being started. Processes can also spawn their own child processes. Each process will use some system resources, including memory, swap space, processor cycles, disk storage, and network bandwidth. This chapter has explored a set of tools that can be used to monitor both process and system resources on a running system and, when necessary, kill errant processes that may be impacting the performance of a system.

Adding and Managing RHEL 9 Swap Space

An essential part of maintaining the performance of a RHEL 9 system involves ensuring that adequate swap space is available to meet the memory demands placed on the system.

Therefore, this chapter provides an overview of swap management on RHEL 9.

What is Swap Space?

Computer systems have a finite amount of physical memory available to the operating system. When the operating system approaches the available memory limit, it frees up space by writing memory pages to disk. When the operating system requires any of those pages, they are read back into memory. The disk area allocated for this task is referred to as swap space.

Recommended Swap Space for RHEL 9

The amount of swap recommended for RHEL 9 depends on several factors, including the amount of memory in the system, the workload imposed on that memory, and whether the system is required to support hibernation. The current guidelines for RHEL 9 swap space are as follows:

Amount of installed RAM    Recommended swap space    Recommended swap space if hibernation enabled

2GB or less                Installed RAM x 2         Installed RAM x 3
2GB – 8GB                  Installed RAM x 1         Installed RAM x 2
8GB – 64GB                 At least 4GB              Installed RAM x 1.5
64GB or more               At least 4GB              Hibernation not recommended

Table 33-1

When a system enters hibernation, the current system state is written to the hard disk, and the host machine is powered off. When the machine is subsequently powered on, the system’s state is restored from the hard disk drive. This differs from suspension, where the system state is stored in RAM. The machine then enters a sleep state whereby power is maintained to the system RAM while other devices are shut down.

Identifying Current Swap Space Usage

The current amount of swap used by a RHEL 9 system may be identified in several ways. One option is to output the /proc/swaps file:

# cat /proc/swaps
Filename          Type            Size        Used    Priority
/dev/dm-1         partition       3932156     0       -2

Alternatively, the swapon command may be used:

# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition 3.7G   0B   -2

To view the amount of swap space relative to the overall available RAM, the free command may be used:

# free
               total        used        free      shared  buff/cache   available
Mem:         3601420     1577696     1396172      404412     1273236     2023724
Swap:        3932156           0     3932156

Adding a Swap File to a RHEL 9 System

Additional swap space may be added to the system by creating a file and assigning it as swap. Begin by creating the swap file using the dd command. The size of the file can be changed by adjusting the count variable. The following command line, for example, creates a 2.0 GB file:

# dd if=/dev/zero of=/newswap bs=1024 count=2000000
2000000+0 records in
2000000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 29.3601 s, 69.8 MB/s

Before converting the file to a swap file, it is essential to make sure the file has secure permissions set:

# chmod 0600 /newswap

Once a suitable file has been created, it needs to be converted into a swap file using the mkswap command:

# mkswap /newswap
Setting up swapspace version 1, size = 1.9 GiB (2047995904 bytes)
no label, UUID=28d314e9-492f-46f8-bdcf-3a734c4426db

With the swap file created and configured, it can be added to the system in real-time using the swapon utility:

# swapon /newswap

Re-running swapon should report that the new file is now being used as swap:

# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition 3.7G   1M   -2
/newswap  file       1.9G   0B   -3

The swap file may be deactivated dynamically by using the swapoff utility as follows:

# swapoff /newswap

Finally, modify the /etc/fstab file to automatically add the new swap at system boot time by adding the following line:

/newswap swap swap defaults 0 0
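
A quick way to confirm the new entry without rebooting is to reactivate all swap areas listed in /etc/fstab and then review the active swap configuration:

# swapon -a
# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition 3.7G   1M   -2
/newswap  file      1.9G   0B   -3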

Adding Swap as a Partition

As an alternative to designating a file as swap space, entire disk partitions may also be designated as swap. The steps to achieve this are the same as those for adding a swap file. Before allocating a partition to swap, ensure that any existing data on the corresponding filesystem is either backed up or no longer needed and that the filesystem has been unmounted.

Assuming that a partition exists on a disk drive represented by /dev/sdb1, for example, the first step would be to convert this into a swap partition, once again using the mkswap utility:

# mkswap /dev/sdb1
Setting up swapspace version 1, size = 14.5 GiB (15524163584 bytes)
no label, UUID=306b7fee-eb20-4679-9f14-f94548683557

Next, add the new partition to the system swap and verify that it has indeed been added:

# swapon /dev/sdb1
# swapon
NAME      TYPE       SIZE USED PRIO
/dev/dm-1 partition  3.7G 996K   -2
/dev/sdb1 partition 14.5G   0B   -3

Once again, the /etc/fstab file may be modified to automatically add the swap partition at boot time as follows:

/dev/sdb1 swap swap defaults 0 0

Adding Space to a RHEL 9 LVM Swap Volume

On systems using Logical Volume Management, an alternative to adding swap via file or disk partition is to extend the logical volume used for the swap space.

The first step is to identify the current amount of swap available and the volume group and logical volume used for the swap space using the lvdisplay utility (for more information on LVM, refer to the chapter entitled Adding a New Disk to a RHEL 9 Volume Group and Logical Volume):

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/swap00
  LV Name                swap00
  VG Name                rhel
  LV UUID                EbOScj-1qXw-bB9d-LU61-ZjdC-L7u5-Uhj8De
  LV Write Access        read/write
  LV Creation host, time demoserver, 2023-04-12 09:34:29 -0400
  LV Status              available
  # open                 2
  LV Size                3.75 GiB
  Current LE             960
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1
.
.

Clearly, the swap resides on a logical volume named swap00, which is part of the volume group named rhel. The next step is to verify whether there is any space available on the volume group that can be allocated to the swap volume:

# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  rhel   2   3   0 wz--n- 197.66g <22.00g

If the amount of space available is sufficient to meet additional swap requirements, turn off the swap and extend the swap logical volume to use as much of the available space as needed to meet the system’s swap requirements:

# swapoff /dev/rhel/swap00
# lvextend -L+8GB /dev/rhel/swap00
    Logical volume rhel/swap00 successfully resized.
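
Alternatively, if the intention is to allocate all of the remaining free space in the volume group to the swap volume, the extent-based -l option may be used in place of an absolute size:

# lvextend -l +100%FREE /dev/rhel/swap00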

Next, reformat the swap volume and turn the swap back on:

# mkswap /dev/rhel/swap00
mkswap: /dev/rhel/swap00: warning: wiping old swap signature.
Setting up swapspace version 1, size = 12 GiB (12754874368 bytes)
no label, UUID=241a4818-e51c-4b8c-9bc9-1697fc2ce26e
 
# swapon /dev/rhel/swap00

Having made the changes, check that the swap space has increased:

# swapon
NAME      TYPE       SIZE USED PRIO
/dev/dm-1 partition  12G   0B   -2

Adding Swap Space to the Volume Group

In the above section, we extended the swap logical volume to use space already available in the volume group. If no space is available in the volume group, it must be added before extending the swap.

Begin by checking the status of the volume group:

# vgs
  VG               #PV #LV #SN Attr   VSize    VFree
  rhel             1   3   0 wz--n- <297.09g    0

The above output indicates that no space is available within the volume group. However, suppose we have a requirement to add 14 GB to the swap on the system. This will require the addition of more space to the volume group. For this example, it will be assumed that a disk that is 16 GB in size and represented by /dev/sdb is available for addition to the volume group. Therefore, the first step is to turn this disk into a physical volume using pvcreate:

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

If the creation fails with a message similar to “Device /dev/sdb excluded by a filter”, it may be necessary to wipe the disk before creating the physical volume:

# wipefs -a /dev/sdb
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x1fffffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioctl to re-read partition table: Success

Next, the volume group needs to be extended to use this additional physical volume:

# vgextend rhel /dev/sdb
  Volume group "rhel" successfully extended

At this point, the vgs command should report the addition of space from /dev/sdb to the volume group:

# vgs
  VG               #PV #LV #SN Attr   VSize   VFree
  rhel               2   3   0 wz--n- 311.54g <14.46g

Now that the additional space is available in the volume group, the swap logical volume may be extended to utilize the space. But first, turn off the swap using the swapoff utility:

# swapoff /dev/rhel/swap00

Next, extend the logical volume to use the new space:

# lvextend -L+14GB /dev/rhel/swap00
  Size of logical volume rhel/swap00 changed from 3.75 GiB (960 extents) to 17.75 GiB (4544 extents).
  Logical volume rhel/swap00 successfully resized.

Re-create the swap on the logical volume:

# mkswap /dev/rhel/swap00
mkswap: /dev/rhel/swap00: warning: wiping old swap signature.
Setting up swapspace version 1, size = 17.7 GiB (19058913280 bytes)
no label, UUID=241a4818-e51c-4b8c-9bc9-1697fc2ce26e

Next, turn swap back on:

# swapon /dev/rhel/swap00

Finally, use the swapon command to verify the addition of the swap space to the system:

# swapon
NAME      TYPE       SIZE USED PRIO
/dev/dm-1 partition 17.7G   0B   -2

Summary

Swap space is vital to any operating system when memory resources become constrained. By swapping out memory areas to disk, the system can continue to function and meet the needs of the processes and applications running on it.

RHEL 9 has a set of guidelines recommending the amount of disk-based swap space that should be allocated depending on the amount of RAM installed in the system. When these recommendations prove insufficient, additional swap space can be added to the system, typically without rebooting. This chapter outlines that swap space can be added as a file, disk, or disk partition or by extending existing logical volumes configured as swap space.

Adding a New Disk to a RHEL 9 Volume Group and Logical Volume

In the previous chapter, we looked at adding a new disk drive to a RHEL 9 system, creating a partition and file system, and then mounting that file system to access the disk. An alternative to creating fixed partitions and file systems is to use Logical Volume Management (LVM) to create logical disks comprising space from one or more physical or virtual disks or partitions. The advantage of using LVM is that space can be added to or removed from logical volumes without spreading data over multiple file systems.

Let us take, for example, the /home file system of a RHEL 9-based server. Without LVM, this file system would be created with a specific size when the operating system is installed. If a new disk drive is installed, there is no way to allocate any of that space to the /home file system. The only option would be to create new file systems on the new disk and mount them at particular mount points. In this scenario, you would have plenty of space on the new file system, but the /home file system would still be nearly full. The only recourse would be to move files onto the new file system. With LVM, the new disk (or part thereof) can be assigned to the logical volume containing the /home file system, thereby dynamically extending the space available.

In this chapter, we will look at the steps necessary to add new disk space to both a volume group and a logical volume to add additional space to the /home file system of a RHEL 9 system.

An Overview of Logical Volume Management (LVM)

LVM provides a flexible and high-level approach to managing disk space. Instead of each disk drive being split into partitions of fixed sizes onto which fixed-size file systems are created, LVM provides a way to group disk space into logical volumes that can be easily resized and moved. In addition, LVM allows administrators to carefully control disk space assigned to different groups of users by allocating distinct volume groups or logical volumes to those users. When the space initially allocated to the volume is exhausted, the administrator can add more space without moving the user files to a different file system. LVM consists of the following components:

Volume Group (VG)

The Volume Group is the high-level container that holds one or more logical volumes and physical volumes.

Physical Volume (PV)

A physical volume represents a storage device such as a disk drive or other storage media.

Logical Volume (LV)

A logical volume is equivalent to a disk partition and, as with a disk partition, can contain a file system.

Physical Extent (PE)

Each physical volume (PV) is divided into equal size blocks known as physical extents.

Logical Extent (LE)

Each logical volume (LV) is divided into equal size blocks called logical extents.

Suppose we are creating a new volume group called VolGroup001. This volume group needs physical disk space to function, so we allocate three disk partitions /dev/sda1, /dev/sdb1, and /dev/sdb2. These become physical volumes in VolGroup001. We would then create a logical volume called LogVol001 within the volume group comprising the three physical volumes.

If we run out of space in LogVol001, we add more disk partitions as physical volumes and assign them to the volume group and logical volume.
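
Expressed as commands, such a configuration might be created as follows (a sketch only, assuming the /dev/sda1, /dev/sdb1, and /dev/sdb2 partitions referenced above exist and contain no data that needs to be preserved):

# pvcreate /dev/sda1 /dev/sdb1 /dev/sdb2
# vgcreate VolGroup001 /dev/sda1 /dev/sdb1 /dev/sdb2
# lvcreate -n LogVol001 -l 100%FREE VolGroup001
# mkfs.xfs /dev/VolGroup001/LogVol001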

Getting Information about Logical Volumes

As an example of using LVM with RHEL 9, we will work through adding space to the /home file system of a standard RHEL 9 installation. Anticipating the need for flexibility in the sizing of its file systems, RHEL 9 sets up the / and /home file systems as logical volumes (named root and home, respectively) within a volume group called rhel. Before making any changes to the LVM setup, however, it is essential first to gather information.

Running the mount command will output information about a range of mount points, including the following entry for the root filesystem:

/dev/mapper/rhel-root on / type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Information about the volume group can be obtained using the vgdisplay command:

# vgdisplay
  --- Volume group ---
  VG Name               rhel
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  4
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                3
  Open LV               3
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <297.09 GiB
  PE Size               4.00 MiB
  Total PE              76054
  Alloc PE / Size       76054 / <297.09 GiB
  Free  PE / Size       0 / 0
  VG UUID               8vZKNE-v6nY-uII2-NKk1-StmF-EkNp-NNKa9b

As we can see in the above example, the rhel volume group has a physical extent size of 4.00 MiB and a total of just under 297.09 GiB available for allocation to logical volumes. Currently, all 76054 physical extents are allocated, equaling the total capacity. Therefore, we must add one or more physical volumes to increase the space available to the logical volumes in the rhel volume group. The vgs tool is also helpful for displaying a quick overview of the space available in the volume groups on a system:

# vgs
  VG               #PV #LV #SN Attr   VSize    VFree
  rhel    1   3   0 wz--n- <297.09g    0

Information about logical volumes in a volume group may similarly be obtained using the lvdisplay command:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/rhel/swap
  LV Name                swap
  VG Name                rhel
  LV UUID                RckIC8-T5Or-vZf9-Er1e-IqW7-Q7Uc-f9cpvj
  LV Write Access        read/write
  LV Creation host, time demoserver, 2023-04-10 14:11:15 -0400
  LV Status              available
  # open                 2
  LV Size                3.75 GiB
  Current LE             960
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

  --- Logical volume ---
  LV Path                /dev/rhel/home
  LV Name                home
  VG Name                rhel
  LV UUID                MMpQ05-Gry0-9qGg-zvGQ-Dszn-kIkd-ZpaqK6
  LV Write Access        read/write
  LV Creation host, time demoserver, 2023-04-10 14:11:15 -0400
  LV Status              available
  # open                 1
  LV Size                <223.34 GiB
  Current LE             57174
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:2

  --- Logical volume ---
  LV Path                /dev/rhel/root
  LV Name                root
  VG Name                rhel
  LV UUID                OFqAB0-azjy-47DO-bzRo-ha2O-sHz5-jjMPcF
  LV Write Access        read/write
  LV Creation host, time demoserver, 2023-04-10 14:11:17 -0400
  LV Status              available
  # open                 1
  LV Size                70.00 GiB
  Current LE             17920
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0

As shown in the above example, 70 GiB of the space in volume group rhel is allocated to the logical volume root (for the / file system), approximately 223.34 GiB to the home logical volume (for /home), and 3.75 GiB to swap (for swap space).

Now that we know what space is being used, it is often helpful to understand which devices are providing the space (in other words, which devices are being used as physical volumes). To obtain this information, we need to run the pvdisplay command:

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda2
  VG Name               rhel
  PV Size               <297.09 GiB / not usable 4.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              76054
  Free PE               0
  Allocated PE          76054
  PV UUID               siKTC2-fq47-LXTG-VWtc-Ma33-XRCm-5uF63v

Clearly, the space managed by the rhel volume group is provided via a physical volume located on /dev/sda2.
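
For a more condensed summary, the pvs and lvs commands display the physical and logical volumes in the same tabular form used by vgs:

# pvs
# lvs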

Now that we know more about our LVM configuration, we can add space to the volume group and the logical volume contained within.

Adding Additional Space to a Volume Group from the Command-Line

Just as with the previous steps to gather information about the current Logical Volume Management configuration of a RHEL 9 system, changes to this configuration can be made from the command line.

In the remainder of this chapter, we will assume that a new disk has been added to the system and that the operating system sees it as /dev/sdb. We shall also assume this is a new disk with no existing partitions. If existing partitions are present, they should be backed up, and then the partitions should be deleted from the disk using the fdisk utility. For example, assume a device represented by /dev/sdb containing an existing partition as follows:

# fdisk -l /dev/sdb
Disk /dev/sdb: 14.46 GiB, 15525216256 bytes, 30322688 sectors
Disk model: USB 2.0 FD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4c33060b

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdb1        2048 30322687 30320640 14.5G 83 Linux

Once any filesystems on these partitions have been unmounted, they can be deleted as follows:

# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.37.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): d
Selected partition 1
Partition 1 has been deleted.

Command (m for help): w

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Before moving to the next step, remove any entries in the /etc/fstab file for these filesystems so that the system does not attempt to mount them on the next reboot.

Once the disk is ready, the next step is to convert this disk into a physical volume using the pvcreate command (also wiping the dos signature if one exists):

# pvcreate /dev/sdb
WARNING: dos signature detected on /dev/sdb at offset 510. Wipe it? [y/n]: y
  Wiping dos signature on /dev/sdb.
  Physical volume "/dev/sdb" successfully created.

If the creation fails with a message that reads “Device /dev/<device> excluded by a filter”, it may be necessary to wipe the disk using the wipefs command before creating the physical volume:

# wipefs -a /dev/sdb
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x1fffffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioctl to re-read partition table: Success

With the physical volume created, we now need to add it to the volume group (in this case, rhel) using the vgextend command:

# vgextend rhel /dev/sdb
  Volume group "rhel" successfully extended

The new physical volume has now been added to the volume group and is ready to be allocated to a logical volume. To do this, we run the lvextend tool providing the size by which we wish to extend the volume. In this case, we want to extend the size of the logical volume home by 14 GB. Note that we need to provide the path to the logical volume, which can be obtained from the lvdisplay command (in this case, /dev/rhel/home):

# lvextend -L+14G /dev/rhel/home
  Size of logical volume rhel/home changed from <223.34 GiB (57174 extents) to <237.34 GiB (60758 extents).
  Logical volume rhel/home successfully resized.
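
Note that lvextend can also grow the filesystem within the logical volume in the same step by adding the -r (or --resizefs) option, which removes the need for the separate resize step described below:

# lvextend -r -L+14G /dev/rhel/home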

The last step is to resize the file system residing on the logical volume to use the additional space. The way this is performed will depend on the filesystem type, which can be identified using the following df command and checking the Type column:

# df -T /home
Filesystem                   Type 1K-blocks    Used Available Use% Mounted on
/dev/mapper/rhel-home00      xfs  234070356 1669596 232400760   1% /home

If /home is formatted using the XFS filesystem, this can be achieved using the xfs_growfs utility:

# xfs_growfs /home
meta-data=/dev/mapper/rhel-home isize=512    agcount=4, agsize=14636544 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=58546176, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=28587, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
data blocks changed from 58546176 to 62216192

If, on the other hand, the filesystem is of type ext2, ext3, or ext4, the resize2fs utility should be used instead when performing the filesystem resize:

# resize2fs /dev/rhel/home

Once the resize completes, the file system will have been extended to use the additional space provided by the new disk drive. All this has been achieved without moving a single file or restarting the server. As far as users on the system are concerned, nothing has changed (except that there is now more disk space).

Adding Additional Space to a Volume Group Using Cockpit

In addition to the command-line utilities outlined so far in this chapter, it is also possible to access information about logical volumes and make volume group and logical volume changes from within the Cockpit web interface using the Storage page, as shown in Figure 32-1:

Figure 32-1

If the Storage option is not listed, the cockpit-storaged package will need to be installed, and the cockpit service restarted as follows:

# dnf install cockpit-storaged
# systemctl restart cockpit.socket

Once the Cockpit service has restarted, log back into the Cockpit interface, at which point the Storage option should now be visible.

To add a new disk drive to an existing volume group from within the Cockpit console, start at the above Storage page and click on a filesystem associated with the volume group to be extended from the list marked A above.

On the resulting screen, click on the + button highlighted in Figure 32-2 below to add a physical volume:

Figure 32-2

Select the new drive to be added to the volume group and click on the Add button:

Figure 32-3

On returning to the volume group screen, scroll down to the logical volume to be extended and click on it to unfold additional information. Figure 32-4, for example, shows details of the home logical volume:

Figure 32-4

To extend the logical volume using the new space, click the Grow button and use the slider in the resulting dialog to select how much space should be added to the volume. Then, click the Grow button to commit the change (the available space can be shared among different logical volumes if required):

Figure 32-5

Once these steps are complete, the volume group will have been configured to use the newly added space.

Summary

Volume groups and logical volumes provide an abstract layer on top of the physical storage devices on a RHEL 9 system to provide a flexible way to allocate the space provided by multiple disk drives. This allows disk space allocations to be made and changed dynamically without the need to repartition disk drives and move data between filesystems. This chapter has outlined the basic concepts of volume groups and logical and physical volumes while demonstrating how to manage these using command-line tools and the Cockpit web interface.

Adding a New Disk Drive to a RHEL 9 System

One of the first problems users and system administrators encounter is that systems need more disk space to store data. Fortunately, disk space is now one of the cheapest IT commodities. In this and the next chapter, we will look at the steps necessary to configure RHEL 9 to use the space provided by installing a new physical or virtual disk drive.

Mounted File Systems or Logical Volumes

There are two ways to configure a new disk drive on a RHEL 9 system. One straightforward method is to create one or more Linux partitions on the new drive, create Linux file systems on those partitions and then mount them at specific mount points to be accessed. This approach will be covered in this chapter.

Another approach is adding new space to an existing volume group or creating a new one. When RHEL 9 is installed, a volume group is created and named rhel. Within this volume group are three logical volumes named root, home, and swap, used to store the / and /home file systems and swap space, respectively. We can increase the disk space available to the existing logical volumes by configuring the new disk as part of a volume group. For example, using this approach, we can increase the size of the /home file system by allocating some or all of the space on the new disk to the home volume. This topic will be discussed in detail in Adding a New Disk to a RHEL 9 Volume Group and Logical Volume.

Finding the New Hard Drive

This tutorial assumes that a new physical or virtual hard drive has been installed and is visible to the operating system. Once added, the operating system should automatically detect the new drive. Typically, the disk drives in a system are assigned device names beginning with hd, sd, or nvme, followed by a letter to indicate the device number. The first device might be /dev/sda, the second /dev/sdb, and so on.

The following is the output from a typical system with only one disk drive connected to a SATA controller:

# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2

This shows that the disk drive represented by /dev/sda is divided into two partitions, represented by /dev/sda1 and /dev/sda2.

The following output is from the same system after a second hard disk drive has been installed:

# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb

The new hard drive has been assigned to the device file /dev/sdb. The drive has no partitions shown (because we have yet to create any).
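
The lsblk command offers another convenient way to confirm that the new drive has been detected, listing all block devices together with their sizes, types, and current mount points. The new, unpartitioned drive should appear in the output (as sdb in this example) with no partitions or mount points beneath it:

# lsblk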

At this point, we can create partitions and file systems on the new drive and mount them for access, or add the disk as a physical volume as part of a volume group. To perform the former, continue with this chapter; otherwise, read Adding a New Disk to a RHEL 9 Volume Group and Logical Volume for details on configuring Logical Volumes.

Creating Linux Partitions

The next step is to create one or more Linux partitions on the new disk drive. This is achieved using the fdisk utility, which takes as a command-line argument the device to be partitioned:

# fdisk /dev/sdb

Welcome to fdisk (util-linux 2.37.4).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0x64d68d00.

Command (m for help):

To view the current partitions on the disk, enter the p command:

Command (m for help): p
Disk /dev/sdb: 14.46 GiB, 15525216256 bytes, 30322688 sectors
Disk model: USB 2.0 FD
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x64d68d00

As we can see from the above fdisk output, the disk currently has no partitions because it was previously unused. The next step is to create a new partition on the disk, a task which is performed by entering n (for new partition) and p (for primary partition):

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):

In this example, we only plan to create one partition, which will be partition 1. Next, we need to specify where the partition will begin and end. Since this is the first partition, we need it to start at the first available sector, and since we want to use the entire disk, we specify the last sector as the end. Note that if you wish to create multiple partitions, you can specify the size of each partition by sectors, bytes, kilobytes, or megabytes:

Partition number (1-4, default 1): 1
First sector (2048-30322687, default 2048):
Last sector, +/-sectors or +/-size{K,M,G,T,P} (2048-30322687, default 30322687):

Created a new partition 1 of type 'Linux' and of size 14.5 GiB.

Command (m for help):

Now that we have specified the partition, we need to write it to the disk using the w command:

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

If we now look at the devices again, we will see that the new partition is visible as /dev/sdb1:

# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdb1

The next step is to create a file system on our new partition.

Creating a File System on a RHEL 9 Disk Partition

We now have a new disk installed, it is visible to RHEL 9, and we have configured a Linux partition on the disk. The next step is to create a Linux file system on the partition so that the operating system can use it to store files and data. The easiest way to create a file system on a partition is to use the mkfs.xfs utility:

# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1              isize=512    agcount=4, agsize=947520 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1    bigtime=1 inobtcount=1
data     =                       bsize=4096   blocks=3790080, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

In this case, we have created an XFS file system. XFS is a high-performance file system that is the default filesystem type on RHEL 9 and includes several advantages in terms of parallel I/O performance and the use of journaling.

An Overview of Journaled File Systems

A journaling filesystem keeps a journal or log of the changes being made to the filesystem during disk writing, which can be used to rapidly reconstruct the filesystem following corruption caused by events such as a system crash or power outage.

There are several advantages to using a journaling file system. First, the size and volume of data stored on disk drives have grown exponentially over the years. The problem with a non-journaled file system is that following a crash, the fsck (filesystem consistency check) utility has to be run. The fsck utility will scan the entire filesystem, validating all entries and ensuring that blocks are allocated and referenced correctly. It will attempt to fix the problem if it finds a corrupt entry. The issues here are two-fold. First, the fsck utility will not always be able to repair the damage, and you will end up with data in the lost+found directory. This is data that was in use by an application, but the system no longer knows where it was referenced from. The other problem is the issue of time. Completing the fsck process on an extensive file system can take a long time, potentially leading to unacceptable downtime.

On the other hand, a journaled file system records information in a log area on a disk (the journal and log do not need to be on the same device) during each write. This is essentially an “intent to commit” data to the filesystem. The amount of information logged is configurable and ranges from not logging anything to logging what is known as the “metadata” (i.e., ownership, date stamp information, etc.) to logging the “metadata” and the data blocks that are to be written to the file. Once the log is updated, the system writes the actual data to the appropriate filesystem areas and marks an entry to say the data is committed.

After a crash, the filesystem can quickly be brought back online using the journal log, thereby reducing what could take minutes using fsck to seconds with the added advantage that there is considerably less chance of data loss or corruption.

Mounting a File System

Now that we have created a new file system on the Linux partition of our new disk drive, we need to mount it so that it is accessible and usable. To do this, we need to create a mount point. A mount point is simply a directory or folder into which the file system will be mounted. For this example, we will create a /backup directory to act as the mount point for the new file system:

# mkdir /backup

The file system may then be manually mounted using the mount command:

# mount /dev/sdb1 /backup

Running the mount command with no arguments shows us all currently mounted file systems (including our new file system):

# mount
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
.
.
/dev/sdb1 on /backup type xfs (rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota)

Configuring RHEL 9 to Mount a File System Automatically

To set up the system so that the new file system is automatically mounted at boot time, an entry needs to be added to the /etc/fstab file. The format for an fstab entry is as follows:

<device>	<dir>	<type>	<options>	<dump>	<fsck>

These entries can be summarized as follows:

  • <device> – The device on which the filesystem will be mounted.
  • <dir> – The directory that is to act as the mount point for the filesystem.
  • <type> – The filesystem type (xfs, ext4 etc.)
  • <options> – Additional filesystem mount options, for example, making the filesystem read-only or controlling whether any user can mount the filesystem. Run man mount to review a complete list of options. Setting this value to defaults will use the default settings for the filesystem (rw, suid, dev, exec, auto, nouser, async).
  • <dump> – Dictates whether the content of the filesystem is to be included in any backups performed by the dump utility. This setting is rarely used and can be disabled with a 0 value.
  • <fsck> – Whether the filesystem is checked by fsck after a system crash and the order in which filesystems are to be checked. For journaled filesystems such as XFS this should be set to 0 to indicate that the check is not required.

The following example shows an fstab file configured to automount our /backup partition on the /dev/sdb1 partition:

/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=b4fc85a1-0b25-4d64-8100-d50ea23340f7 /boot                   xfs     defaults        0 0
/dev/mapper/rhel-home   /home                   xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/sdb1               /backup                 xfs     defaults        0 0

The /backup filesystem will now automount each time the system restarts.
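
Rather than referencing the device name directly (which may change if drives are added or removed), the filesystem can instead be identified in /etc/fstab by its UUID. The UUID may be obtained using the blkid command and substituted into the fstab entry (the placeholder below should be replaced with the value reported for your partition):

# blkid /dev/sdb1

UUID=<uuid reported by blkid>   /backup   xfs   defaults   0 0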

Adding a Disk Using Cockpit

In addition to working with storage using the command-line utilities outlined in this chapter, it is also possible to configure a new storage device using the Cockpit web console. To view the current storage configuration, log into the Cockpit console and select the Storage option as shown in Figure 31-1:

Figure 31-1

To locate the newly added storage, scroll to the bottom of the Storage page until the Drives section comes into view (note that the Drives section may also be located in the top right-hand corner of the screen):

Figure 31-2

In the case of the above figure, the new drive is the 15.5 GB drive. Select the new drive to display the Drive screen as shown in Figure 31-3:

Figure 31-3

Click on the Create partition button and use the dialog to specify how much space will be allocated to this partition, the filesystem type (XFS is recommended), and an optional label, filesystem mount point, and mount options. Note that additional partitions may be added to the drive if this new partition does not use all the available space. To change settings such as whether the filesystem is read-only or mounted at boot time, change the settings in the Mount options section:

Figure 31-4

Once the settings have been selected, click the Create partition button to commit the change. Upon completion of the creation process, the new partition will be added to the disk, the corresponding filesystem will be created and mounted at the designated mount point, and appropriate changes will be made to the /etc/fstab file.

Summary

This chapter has covered adding a physical or virtual disk drive to an existing RHEL 9 system. This is a relatively simple process of ensuring the new drive has been detected by the operating system, creating one or more partitions on the drive, and then making filesystems on those partitions. Although several filesystem types are available on RHEL 9, XFS is generally recommended. Once the filesystems are ready, they can be mounted using the mount command. So that the newly created filesystems mount automatically on system startup, additions can be made to the /etc/fstab configuration file.

Setting up a RHEL 9 Postfix Email Server

Along with acting as a web server, providing email services is one of the primary uses of a RHEL 9 system, particularly in business environments. Given the importance and popularity of email, it is surprising to some people to find out how complex the email structure is on a Linux system. This complexity can often be overwhelming to the RHEL 9 newcomer.

The good news is that much of the complexity is there to allow experienced email administrators to implement complicated configurations for large-scale enterprise installations. However, for most Linux administrators, setting up a basic email system is relatively straightforward so that users can send and receive electronic mail.

This chapter of RHEL 9 Essentials will explain the basics of Linux-based email configuration and step through configuring a basic email environment. To provide the essentials, we will leave the complexities of the email system for more advanced books on the subject.

The Structure of the Email System

Several components make up a complete email system. Below is a brief description of each one:

Mail User Agent

The typical user will likely be most familiar with this part of the system. The Mail User Agent (MUA), or mail client, is the application that is used to write, send and read email messages. Anyone who has written and sent a message on any computer has used a Mail User Agent of one type or another.

Typical Graphical MUAs on Linux are Evolution, Thunderbird, and KMail. For those who prefer a text-based mail client, there are also the more traditional pine and mail tools.

Mail Transfer Agent

The Mail Transfer Agent (MTA) is the part of the email system that transfers email messages from one computer to another (either on the same local network or over the internet to a remote system). Once configured correctly, most users will not interact directly with their chosen MTA unless they wish to re-configure it. Many MTA choices are available for Linux, including Sendmail, Postfix, Qmail, and Exim.

Mail Delivery Agent

Another part of the infrastructure typically hidden from the user, the Mail Delivery Agent (MDA) sits in the background and performs filtering of the email messages between the Mail Transfer Agent and the mail client (MUA). The most popular form of MDA is a spam filter that removes unwanted email messages before they reach the inbox of the user’s mail client (MUA). Popular MDAs are SpamAssassin and Procmail. It is important to note that some Mail User Agent applications (such as Evolution, Thunderbird, and KMail) include their own MDA filtering. Others, such as Pine and Balsa, do not. This can be a source of confusion for the Linux beginner.

SMTP

SMTP is an acronym for Simple Mail Transfer Protocol. Email systems use this protocol to transfer mail messages from one server to another. This protocol is the communication language that the MTAs use to talk to each other and transfer messages back and forth.

SMTP Relay

SMTP relay is an arrangement in which an external SMTP server is used to send emails instead of hosting a local SMTP server. This will typically involve using a service such as Mailjet, SendGrid, or Mailgun. These services avoid the need to configure and maintain your own SMTP server and often provide additional benefits such as analytics.

Configuring a RHEL 9 Email Server

Many systems use the Sendmail MTA to transfer email messages; on many Linux distributions, this is the default Mail Transfer Agent. Unfortunately, Sendmail is a complex system that can be difficult for beginners and experienced users to understand and configure. It is also falling from favor because it is considered slower at processing email messages than many of the more recent MTAs available.

Many system administrators are now using Postfix or Qmail to handle email. Both are faster and easier to configure than Sendmail.

For this chapter, therefore, we will look at Postfix as an MTA because of its simplicity and popularity. However, if you prefer to use Sendmail, many books specialize in the subject and will do the subject much more justice than we can in this chapter.

As a first step, this chapter will cover configuring a RHEL 9 system to act as a full email server. Later in the chapter, the steps to use an SMTP Relay service will also be covered.

Postfix Pre-Installation Steps

The first step before installing Postfix is to ensure that Sendmail is not already running on your system. You can check for this using the following command:

# systemctl status sendmail

If sendmail is not installed, the tool will display a message similar to the following:

Unit sendmail.service could not be found.

If sendmail is running on your system, it is necessary to stop it before installing and configuring Postfix. To stop sendmail, run the following command:

# systemctl stop sendmail

The next step is to ensure that sendmail does not get restarted automatically when the system is rebooted:

# systemctl disable sendmail

Sendmail is now switched off and configured not to auto-start when the system is booted. Optionally, to remove sendmail from the system altogether, run the following command:

# dnf remove sendmail

Firewall/Router Configuration

Since sending and receiving email messages involves network connections, the firewall must be configured to allow SMTP traffic. If firewalld is active, use the firewall-cmd tool as follows:

# firewall-cmd --permanent --add-service=smtp

It will also be essential to configure any other firewall or router between the server and the internet to allow connections on ports 25, 143, and 587 and, if necessary, to configure port forwarding for those ports to the corresponding ports on the email server. With these initial steps completed, we can now install Postfix.
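
If firewalld is being used, the additional ports mentioned above can be opened with port-based rules, followed by a reload so that the permanent rules take effect:

# firewall-cmd --permanent --add-port=143/tcp
# firewall-cmd --permanent --add-port=587/tcp
# firewall-cmd --reload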

Installing Postfix on RHEL 9

By default, the RHEL 9 installation process installs postfix for most configurations. To verify if postfix is already installed, use the following rpm command:

# rpm -q postfix

If rpm reports that postfix is not installed, it may be installed as follows:

# dnf install postfix

The dnf tool will download and install postfix and configure a special postfix user in the /etc/passwd file.

Configuring Postfix

The main configuration settings for postfix are located in the /etc/postfix/main.cf file. Many resources on the internet provide detailed information on postfix, so this section will focus on the basic options required to get email up and running. Even though the installation process sets up some basic configuration options, it tends to miss some settings and guess incorrectly for others, so carefully review the main.cf file.

The key options in the main.cf file are as follows:

myhostname = mta1.domain.com
mydomain = domain.com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
inet_interfaces = $myhostname
mynetworks_style = subnet

Other settings will have either been set up for you by the installation process or are only needed if you are feeling adventurous and want to configure a more sophisticated email system.

The format of myhostname is host.domain.extension. If, for example, your Linux system is named MyLinuxHost and your internet domain is MyDomain.com you would set the myhostname option as follows:

myhostname = mylinuxhost.mydomain.com

The mydomain setting is just the domain part of the above setting. For example:

mydomain = mydomain.com

The myorigin setting defines the name of the domain from which outgoing email appears to come when it arrives in the recipient’s inbox and should be set to your domain name:

myorigin = $mydomain

One of the most crucial parameters, mydestination, relates to incoming messages and declares the domains for which this server is the final delivery destination. Any incoming email message addressed to a domain name not on this list will be considered a relay request which, subject to the mynetworks setting (outlined below), will typically result in a delivery failure.

The inet_interfaces setting defines the network interfaces on the system via which postfix is permitted to receive email and is generally set to all:

inet_interfaces = all

The mynetworks setting defines which external systems are trusted to use the server as an SMTP relay. Possible values for this setting are as follows:

  • host – Only the local system is trusted. Attempts by all external clients to use the server as a relay will be rejected.
  • subnet – Only systems on the same network subnet can use the server as a relay. If, for example, the server has an IP address of 192.168.1.29, a client system with an IP address of 192.168.1.30 could use the server as a relay.
  • class – Any systems within the same IP address class (A, B, or C) may use the server as a relay.

Trusted IP addresses may be defined manually by specifying subnets, address ranges, or referencing pattern files. The following example declares the local host and the subnet 192.168.0.0 as trusted IP addresses:

mynetworks = 192.168.0.0/24, 127.0.0.0/8

For this example, set the property to subnet so that any other systems on the same local network as the server can use it as an SMTP relay, while external systems are prevented from doing so:

mynetworks = subnet
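
As a side note, the main.cf settings may also be reviewed and modified from the command line using the postconf utility included with Postfix. The following is a minimal example of checking and then changing a setting; the single quotes prevent the shell from expanding the $mydomain variable reference:

# postconf mynetworks
# postconf -e 'myorigin = $mydomain'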

Configuring DNS MX Records

When you register and configure your domain name with a registrar, several default values will have been configured in the DNS settings. One of these is the so-called Mail Exchanger (MX) record. This record defines where email addressed to your domain should be sent and is usually set by default to a mail server provided by your registrar. If you are hosting your own mail server, the MX record should be set to your domain or the IP address of your mail server. The steps to make this change will depend on your domain registrar but generally involve editing the DNS information for the domain and either adding or editing an existing MX record so that it points to your email server.
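
Once the DNS change has propagated, the MX record for the domain can be checked from the command line using the dig utility (provided on RHEL 9 by the bind-utils package), substituting your own domain name for mydomain.com:

# dig +short MX mydomain.com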

Starting Postfix on a RHEL 9 System

Once the /etc/postfix/main.cf file is configured with the correct settings, it is now time to start up postfix. This can be achieved from the command line as follows:

# systemctl start postfix

If postfix was already running, make sure the configuration changes are loaded using the following command:

# systemctl reload postfix

To configure postfix to start automatically at system startup, run the following command:

# systemctl enable postfix
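
Alternatively, the service can be started and enabled in a single step by including the --now flag:

# systemctl enable --now postfix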

The postfix process should now start up. The best way to verify everything works is to check your mail log. This is typically in the /var/log/maillog file and should now contain an entry resembling the following output:

Mar 25 11:21:48 demo-server postfix/postfix-script[5377]: starting the Postfix mail system
Mar 25 11:21:48 demo-server postfix/master[5379]: daemon started -- version 3.3.1, configuration /etc/postfix

As long as no error messages have been logged, you have successfully installed and started postfix and are ready to test the postfix configuration.

Testing Postfix

An easy way to test the postfix configuration is to send email messages between local users on the system. To perform a quick test, use the mail tool as follows (where name and mydomain are replaced by the name of a user on the system and your domain name, respectively):

When prompted, enter a subject for the email message and then enter the message body text. To send the email message, press Ctrl-D. For example:

# mail name@mydomain.com
Subject: Test email message
This is a test message.
EOT
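
The same test message can also be sent non-interactively by piping the message body into the mail tool and specifying the subject with the -s option, again substituting a real user and your domain name:

# echo "This is a test message." | mail -s "Test email message" name@mydomain.com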

Rerun the mail command, this time as the other user, and verify that the message was sent and received:

$ mail
Heirloom Mail version 12.5 7/5/10. Type ? for help.
"/var/spool/mail/demo": 1 message 1 new
>N  1 root        Mon Mar 25 13:36  18/625   "Test email message"
&

Check the log file (/var/log/maillog) for errors if the message does not appear. Successful mail delivery will appear in the log file as follows:

Mar 25 13:41:37 demo-server postfix/pickup[7153]: 94FAF61E8F4A: uid=0 from=<root>
Mar 25 13:41:37 demo-server postfix/cleanup[7498]: 94FAF61E8F4A: message-id=<[email protected]>
Mar 25 13:41:37 demo-server postfix/qmgr[7154]: 94FAF61E8F4A: from=<root@mydomain.com>, size=450, nrcpt=1 (queue active)
Mar 25 13:41:37 demo-server postfix/local[7500]: 94FAF61E8F4A: to=<name@mydomain.com>, relay=local, delay=0.12, delays=0.09/0.01/0/0.02, dsn=2.0.0, status=sent (delivered to mailbox)
Mar 25 13:41:37 demo-server postfix/qmgr[7154]: 94FAF61E8F4A: removed

Once the local email is working, try sending an email to an external address (such as a GMail account). Also, test that incoming mail works by sending an email from an external account to a user on your domain. In each case, check the /var/log/maillog file for explanations of any errors.

Sending Mail via an SMTP Relay Server

An SMTP Relay service is an alternative to configuring a mail server to handle outgoing email messages. As previously discussed, several services are available, most of which can be found by performing a web search for “SMTP Relay Service”. Most of these services will require you to verify your domain in some way and will provide MX records with which to update your DNS settings. You will also be provided with a username and password, which must be added to the postfix configuration. The remainder of this section assumes that postfix is installed on your system and that all of the initial steps required by your chosen SMTP Relay provider have been completed.

Begin by editing the /etc/postfix/main.cf file and configure the myhostname parameter with your domain name:

myhostname = mydomain.com

Next, create a new file in /etc/postfix named sasl_passwd and add a line containing the mail server host provided by the relay service and the user name and password. For example:

[smtp.myprovider.com]:587 user@mydomain.com:mypassword

Note that port 587 has also been specified in the above entry. Without this setting, postfix will default to using port 25, which is blocked by default by most SMTP relay service providers. With the password file created, use the postmap utility to generate the hash database containing the mail credentials:

# postmap /etc/postfix/sasl_passwd

Before proceeding, take some additional steps to secure your postfix credentials:

# chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
# chmod 0600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db

Edit the main.cf file once again and add an entry to specify the relay server:

relayhost = [smtp.myprovider.com]:587

Remaining within the main.cf file, add the following lines to configure the authentication settings for the SMTP server:

smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-bundle.crt
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous

Finally, restart the postfix service:

# systemctl restart postfix

Once the service has restarted, try sending and receiving mail using either the mail tool or your preferred mail client.
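
If outgoing messages fail to arrive, in addition to reviewing /var/log/maillog, the Postfix mail queue can be inspected to see whether messages are deferred and, if so, the reason reported by the relay service:

# postqueue -p

To attempt immediate delivery of any deferred messages, flush the queue as follows:

# postqueue -f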

Summary

A complete, end-to-end email system consists of a Mail User Agent (MUA), Mail Transfer Agent (MTA), Mail Delivery Agent (MDA), and the SMTP protocol. RHEL 9 provides several options in terms of MTA solutions, one of the more popular being Postfix. This chapter has outlined how to install, configure and test postfix on a RHEL 9 system to act as a mail server and send and receive email using a third-party SMTP relay server.

Setting Up a RHEL 9 Web Server

The Apache web server is among the many packages that make up the RHEL 9 operating system. The scalability and resilience of RHEL 9 make it an ideal platform for hosting even the most heavily trafficked websites.

This chapter will explain how to configure a RHEL 9 system using Apache to act as a web server, including both secure (HTTPS) and insecure (HTTP) configurations.

Requirements for Configuring a RHEL 9 Web Server

To set up your own website, you need a computer (or cloud server instance), an operating system, a web server, a domain name, a name server, and an IP address.

In terms of an operating system, we will assume you are using RHEL 9. As previously mentioned, RHEL 9 supports the Apache web server, which can easily be installed once the operating system is up and running. In addition, a domain name can be registered with any domain name registration service.

If you are running RHEL 9 on a cloud instance, the IP address assigned by the provider will be listed in the server overview information. However, if you are hosting your own server and your internet service provider (ISP) has assigned a static IP address, you must associate your domain with that address. This is achieved using a name server, and all domain registration services will provide this service.

If you do not have a static IP address (i.e., your ISP provides you with a dynamic address that changes frequently), you can use one of several free Dynamic DNS (DDNS or DynDNS for short) services to map your dynamic IP address to your domain name.

Once you have configured your domain name and your name server, the next step is to install and configure your web server.

Installing the Apache Web Server Packages

The current release of RHEL typically does not install the Apache web server by default. To check whether the server is already installed, run the following command:

# rpm -q httpd

If rpm generates output similar to the following, the Apache server is already installed:

httpd-2.4.53-7.el9_1.5.x86_64

Alternatively, if rpm generates a “package httpd is not installed” message, the next step is to install it. To install Apache, run the following command at the command prompt:

# dnf install httpd

Configuring the Firewall

Before starting and testing the Apache web server, the firewall must be modified to allow the webserver to communicate with the outside world. By default, the HTTP and HTTPS protocols use ports 80 and 443, respectively, so depending on which protocols are being used, either one or both of these ports will need to be opened. When opening the ports, be sure to specify the firewall zone that applies to the internet-facing network connection:

# firewall-cmd --permanent --zone=<zone> --add-port=80/tcp
# firewall-cmd --permanent --zone=<zone> --add-port=443/tcp
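
Alternatively, the predefined firewalld http and https services may be used instead of the port numbers, which has the same effect:

# firewall-cmd --permanent --zone=<zone> --add-service=http
# firewall-cmd --permanent --zone=<zone> --add-service=https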

After opening the necessary ports, be sure to reload the firewall settings:

# firewall-cmd --reload

On cloud-hosted servers, enabling the appropriate port for the server instance within the cloud console may also be necessary. Check the documentation for the cloud provider for steps to do this.

Port Forwarding

Suppose the RHEL 9 system hosting the web server sits on a network protected by a firewall (another computer running a firewall, router, or wireless base station containing built-in firewall protection). In that case, you must configure the firewall to forward ports 80 and 443 to your web server system. The mechanism for performing this differs between firewalls and devices, so check your documentation to find out how to configure port forwarding.

Starting the Apache Web Server

Once the Apache server is installed and the firewall configured, the next step is to verify that the server is running and start it if necessary.

To check the status of the Apache service from the command line, enter the following at the command prompt:

# systemctl status httpd

If the above command indicates that the httpd service is not running, it can be launched from the command line as follows:

# systemctl start httpd

If you would like the Apache httpd service to start automatically when the system boots, run the following command:

# systemctl enable httpd

Testing the Web Server

Once the installation is complete, the next step is verifying the web server is running.

If you have access (either locally or remotely) to the desktop environment of the server, start up a web browser and enter http://127.0.0.1 in the address bar (127.0.0.1 is the loop-back network address which tells the system to connect to the local machine). If everything is set up correctly, the browser should load the RHEL test page shown in Figure 29-1:

Figure 29-1

If the desktop environment is unavailable, connect either from another system on the same local network as the server, or use the external IP address assigned to the system if it is hosted remotely.
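
If only command-line access to the server is available, the response from the web server can also be checked using a tool such as curl, which will display the HTTP headers returned by the server:

# curl -I http://127.0.0.1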

Configuring the Apache Web Server for Your Domain

The next step in setting up your web server is configuring it for your domain name. To configure the web server, begin by changing directory to /etc/httpd, which, in turn, contains several files and sub-directories. Change directory into the conf sub-directory, where you will find a file named httpd.conf containing the configuration settings for the Apache server.

Edit the httpd.conf file using your preferred editor with super-user privileges to ensure you have permission to access and modify the file. Once loaded, several settings need to be changed to match your environment.

The most common way to configure Apache for a specific domain is to add virtual host entries to the httpd.conf file. This allows a single Apache server to support multiple websites simply by adding a virtual host entry for each site domain. Within the httpd.conf file, add a virtual host entry for your domain as follows:

<VirtualHost *:80>
    ServerAdmin webmaster@myexample.com
    ServerName www.myexample.com
    DocumentRoot /var/www/myexample
    ErrorLog logs/myexample_error_log
    CustomLog logs/myexample_access_log combined
</VirtualHost>

The ServerAdmin directive in the above virtual host entry defines an administrative email address for people wishing to contact the webmaster for your site. Change this to an appropriate email address where you can be contacted.

Next, the ServerName is declared so the web server knows the domain name associated with this virtual host.

Since each website supported by the server will have its own set of files, the DocumentRoot setting is used to specify the location of the files for this website domain. The tradition is to use /var/www/domain-name, for example:

DocumentRoot /var/www/myexample

Finally, entries are added for the access history and error log files.

Create the /var/www/<domain name> directory as declared in the httpd.conf file and place an index.html file in it containing some basic HTML. For example:

<html>
<title>Sample Web Page</title>
<body>
Welcome to MyExample.com
</body>
</html>
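
Before restarting the server, it is worth checking the configuration file for syntax errors. The apachectl tool will report Syntax OK if no mistakes have been made:

# apachectl configtest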

The last step is to restart the httpd service to make sure it picks up our new settings:

# systemctl restart httpd

Finally, check that the server configuration works by opening a browser window and navigating to the site using the domain name instead of the IP address. The web page that loads should be defined in the index.html file created above.

The Basics of a Secure Website

The web server and website created in this chapter use the HTTP protocol on port 80 and, as such, are considered to be insecure. The problem is that the traffic between the web server and the client (typically a user’s web browser) is transmitted in clear text. In other words, the data is unencrypted and susceptible to interception. While not a problem for general web browsing, this is a severe weakness when performing tasks such as logging into websites or transferring sensitive information such as identity or credit card details.

These days, websites are expected to use HTTPS, which uses either Secure Socket Layer (SSL) or Transport Layer Security (TLS) to establish secure, encrypted communication between a web server and a client. This security is established through the use of public key, private key, and session key encryption together with certificates.

To support HTTPS, a website must have a certificate issued by a trusted authority known as a Certificate Authority (CA). When a browser connects to a secure website, the web server sends back a copy of the website’s SSL certificate, which also contains a copy of the site’s public key. The browser then validates the authenticity of the certificate with trusted certificate authorities. If the certificate is valid, the browser uses the public key sent by the server to encrypt a session key and pass it to the server. The server decrypts the session key using its private key and uses it to send an encrypted acknowledgment to the browser. Once this process is complete, the browser and server use the session key to encrypt all subsequent data transmissions until the session ends.

Configuring Apache for HTTPS

By default, the Apache server does not include the necessary module to implement a secure HTTPS website. The first step, therefore, is to install the Apache mod_ssl module on the server system as follows:

# dnf install mod_ssl

Restart httpd after the installation completes to load the new module into the Apache server:

# systemctl restart httpd

Check that the module has loaded into the server using the following command:

# httpd -M | grep ssl_module
 ssl_module (shared)

Once the ssl module is installed, repeat the steps from the previous section of this chapter to create a configuration file for the website, this time using the /etc/httpd/conf.d/ssl.conf file installed by the mod_ssl package as a reference for the secure virtual host settings. Assuming the module is installed, the next step is to generate an SSL certificate for the website.

Obtaining an SSL Certificate

The certificate for a website must be obtained from a Certificate Authority. Several options are available at a range of prices. By far the best option, however, is to obtain a free certificate from Let’s Encrypt at the following URL:

https://letsencrypt.org/

Obtaining a certificate from Let’s Encrypt involves installing and running the Certbot tool. This tool will scan the Apache configuration files on the server and provide the option to generate certificates for any virtual hosts configured on the system. It will then generate the certificate and add virtual host entries to the Apache configuration for the corresponding websites.

Follow the steps on the Let’s Encrypt website to download and install Certbot on your RHEL 9 system, then run the certbot tool as follows to generate and install the certificate:

# certbot --apache

After requesting an email address and seeking terms of service acceptance, Certbot will list the domains found in the httpd.conf file and allow the selection of one or more sites for which a certificate will be installed. Certbot will then perform some checks before obtaining and installing the certificate on the system:

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: www.myexample.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for www.myexample.com
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/httpd/conf/httpd-le-ssl.conf
Deploying Certificate to VirtualHost /etc/httpd/conf/httpd-le-ssl.conf
Enabling sit

Certbot will also create a new file named httpd-le-ssl.conf in the /etc/httpd/conf directory containing a secure virtual host entry for each domain name for which a certificate has been generated. These entries will be similar to the following:

<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerAdmin webmaster@myexample.com
    ServerName www.myexample.com
    DocumentRoot /var/www/myexample
    ErrorLog logs/myexample_error_log
    CustomLog logs/myexample_access_log combined
 
SSLCertificateFile /etc/letsencrypt/live/www.myexample.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/www.myexample.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>

Finally, Certbot will ask whether the server should redirect future HTTP web requests to HTTPS. In other words, if a user attempts to access http://www.myexample.com, the web server will redirect the user to https://www.myexample.com:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2

If you are currently testing the HTTPS configuration and would like to keep the HTTP version live until later, select the No redirect option. Otherwise, redirecting to HTTPS is generally recommended.
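
Keep in mind that Let’s Encrypt certificates are only valid for 90 days, so they need to be renewed periodically. The Certbot packages generally install a scheduled task to handle this automatically, and the renewal process can be tested at any time using the following command:

# certbot renew --dry-run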

Once the certificate has been installed, test it in a browser at the following URL (replacing myexample.com with your own domain name):

https://www.ssllabs.com/ssltest/analyze.html?d=www.myexample.com

If the certificate configuration is successful, the SSL Labs report will provide a high rating, as shown in Figure 29-2:

Figure 29-2

As a final test, open a browser window and navigate to your domain using the https:// prefix. The page should load as before, and the browser should indicate that the connection between the browser and server is secure (usually indicated by a padlock icon in the address bar, which can be clicked for additional information):

Figure 29-3

Summary

A RHEL 9 system can host websites by installing the Apache web server. Insecure (HTTP) and secure (HTTPS) websites can be deployed on RHEL 9. Secure websites use either Secure Socket Layer (SSL) or Transport Layer Security (TLS) to establish encrypted communication between the web server and client through public key, private key, and session key encryption, together with a certificate issued by a trusted Certificate Authority.

An RHEL 9 Containers Tutorial

Now that the basics of Linux Containers have been covered in the previous chapter, this chapter will demonstrate how to create and manage containers using the Podman, Skopeo, and Buildah tools on RHEL 9. By the end of this chapter, you will have a clearer understanding of how to create and manage containers on RHEL 9. In addition, you will have gained a knowledge foundation on which to continue exploring the power of Linux Containers.

Installing the Container Tools

Before starting with containers, the first step is to install all of the container tools outlined in the previous chapter using the following command:

# dnf install container-tools

Logging in to the Red Hat Container Registry

To begin with, a container will be created using an existing image provided within the Red Hat Container Registry. Before an image can be pulled from the registry to the local system, however, you must first log into the registry using your existing Red Hat credentials using the podman tool as follows:

# podman login registry.redhat.io
Username: yourusername
Password: yourpassword
Login Succeeded!

Pulling a Container Image

The RHEL 9 Universal Base Image (UBI) will be pulled from the registry for this example. Before pulling an image, however, information about the image repository can be obtained using the skopeo tool, for example:

# skopeo inspect docker://registry.redhat.io/ubi9/ubi-init
{
    "Name": "registry.redhat.io/ubi9/ubi-init",
    "Digest": "sha256:82352ed85d5dd55efd8d32a7eae998afded2769fc1b517cc4383301da1936a58",
    "RepoTags": [
        "9.1.0-15",
        "9.1.0-12",
        "9.0.0-16",
        "9.0.0-19",
        "9.1.0",
        "9.0.0",
        "9.0.0-16.1655192132",
        "9.0.0-26.1666626006",
        "9.0.0-26.1665072052",
        "9.0.0-16.1655192132-source",
        "9.1.0-5.1669633213-source",
        "9.0.0-26.1666626006-source",
        "9.1.0-12-source",
        "9.1.0-5.1669025017-source",
        "9.1.0-5.1669633213",
        "9.0.0-26.1665072052-source",
        "9.1.0-5.1669025017",
        "9.1",
        "9.0.0-28",
        "9.0.0-29",
        "9.1.0-12.1675789285-source",
        "9.0.0-23",
        "9.0.0-26",
        "9.0.0-26-source",
        "9.0.0-28-source",
        "9.1.0-5-source",
        "9.0.0-29-source",
        "9.1.0-15-source",
        "9.1.0-5",
        "9.0.0-19-source",
        "9.0.0-16-source",
        "9.0.0-23-source",
        "9.1.0-12.1675789285",
        "latest"
    ],
    "Created": "2023-02-22T13:55:53.957676474Z",
    "DockerVersion": "",
    "Labels": {
        "architecture": "x86_64",
        "build-date": "2023-02-22T13:54:21",
        "com.redhat.component": "ubi9-init-container",
        "com.redhat.license_terms": "https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI",
        "description": "The Universal Base Image Init is designed to run an init system as PID 1 for running multi-services inside a container. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly."
.
.

This ubi-init image is the RHEL 9 base image for building minimal operating system containers and will be used as the basis for the example in this chapter.

Having verified that this is indeed the correct image, the following podman command will pull the image to the local system:

# podman pull docker://registry.redhat.io/ubi9/ubi-init
Trying to pull docker://registry.redhat.io/ubi9/ubi-init...
Getting image source signatures
Copying blob 340ff6d7f58c done
Copying blob 0e8ea260d026 done
Copying blob c3bd58a6898a done
Copying config a4933472b1 done
Writing manifest to image destination
Storing signatures
a4933472b168b6bd21bc4922dc1e72bb2805d41743799f5a823cdeca9a9a6613

Verify that the image has been stored by asking podman to list all local images:

# podman images
REPOSITORY                         TAG      IMAGE ID       CREATED       SIZE
registry.redhat.io/ubi9/ubi-init   latest   a4933472b168   6 weeks ago   254 MB

Details about a local image may be obtained by running the podman inspect command:

# podman inspect registry.redhat.io/ubi9/ubi-init

This command should output the same information as the skopeo command performed on the remote image earlier in this chapter.

Running the Image in a Container

The image pulled from the registry is a fully operational image ready to run in a container without modification. To run the image, use the podman run command. In this case, the --rm option will be specified to indicate that we want to run the image in a container, execute one command, and then have the container exit. For this example, the cat tool will be used to output the content of the /etc/passwd file located on the container root filesystem:

# podman run --rm registry.redhat.io/ubi9/ubi-init cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:65534:65534:Kernel Overflow User:/:/sbin/nologin
systemd-coredump:x:999:997:systemd Core Dumper:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
tss:x:59:59:Account used for TPM access:/dev/null:/sbin/nologin
systemd-oom:x:995:995:systemd Userspace OOM Killer:/:/usr/sbin/nologin

Compare the content of the /etc/passwd file within the container with the /etc/passwd file on the host system. Note that it lacks all of the additional users on the host confirming that the cat command was executed within the container environment. Also, note that the container started, ran the command, and exited within seconds. Compare this to the amount of time it takes to start a full operating system, perform a task, and shut down a virtual machine, and you begin to appreciate the speed and efficiency of containers.

To launch a container, keep it running, and access the shell, the following command can be used:

# podman run --name=mycontainer -it registry.redhat.io/ubi9/ubi-init /bin/bash
[root@dbed2be11730 /]#

In this case, an additional command-line option has been used to assign the name “mycontainer” to the container. Though optional, this makes the container easier to recognize and reference as an alternative to using the automatically generated container ID.

While the container is running, run podman in a different terminal window to see the status of all containers on the system:

# podman ps -a
CONTAINER ID  IMAGE                               COMMAND    CREATED        STATUS            PORTS  NAMES
dbed2be11730  registry.redhat.io/ubi9/ubi:latest  /bin/bash  5 minutes ago  Up 2 minutes ago         mycontainer

To execute a command in a running container from the host, use the podman exec command, referencing the name of the running container and the command to be executed. The following command, for example, starts up a second bash session in the container named mycontainer:

# podman exec -it mycontainer /bin/bash
[root@dbed2be11730 /]#

Note that though the above example referenced the container name, the same result can be achieved using the container ID as listed by the podman ps -a command:

# podman exec -it dbed2be11730 /bin/bash
[root@dbed2be11730 /]#

Alternatively, the podman attach command will also attach to a running container and access the shell prompt:

# podman attach mycontainer
[root@dbed2be11730 /]#

Once the container is up and running, additional configuration changes can be made and packages installed like any other RHEL 9 system.
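
If a container hosts a network service such as a web server, ports on the host can be mapped to ports within the container using the -p option of podman run, while the -d option runs the container detached in the background. The following is a hypothetical example in which requests to port 8080 on the host would be forwarded to port 80 inside a container based on an image named localhost/myweb_image (the image name is an assumption for illustration only):

# podman run -d -p 8080:80 --name=webcontainer localhost/myweb_image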

Managing a Container

Once launched, a container will continue to run until it is stopped via podman, or the command launched when the container was run exits. Running the following command on the host, for example, will cause the container to exit:

# podman stop mycontainer

Alternatively, pressing the Ctrl-D keyboard sequence within the last remaining bash shell of the container would cause both the shell and container to exit. Once it has exited, the status of the container will change accordingly:

# podman ps -a
CONTAINER ID  IMAGE                               COMMAND    CREATED        STATUS                     PORTS  NAMES
dbed2be11730  registry.redhat.io/ubi9/ubi:latest  /bin/bash  9 minutes ago  Exited (0) 14 seconds ago         mycontainer

Although the container is no longer running, it still exists and contains all the configuration and file system changes. If you installed packages, made configuration changes, or added files, these changes will persist within “mycontainer”. To verify this, restart the container as follows:

# podman start mycontainer

After starting the container, use the podman exec command again to execute commands within the container as outlined previously. For example, to once again gain access to a shell prompt:

# podman exec -it mycontainer /bin/bash

A running container may also be paused and resumed using the podman pause and unpause commands as follows:

# podman pause mycontainer
# podman unpause mycontainer
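
The output generated by the command running inside a container (for example, a service writing to standard output) can be reviewed from the host at any time using the podman logs command:

# podman logs mycontainer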

Saving a Container to an Image

Once the container guest system is configured to your requirements, there is a good chance that you will want to create and run more than one container of this particular type. To do this, the container needs to be saved as an image to local storage to be used as the basis for additional container instances. This is achieved using the podman commit command combined with the name or ID of the container and the name by which the image will be stored, for example:

# podman commit mycontainer myrhel_image

Once the image has been saved, check that it now appears in the list of images in the local repository:

# podman images
REPOSITORY                         TAG      IMAGE ID       CREATED         SIZE
localhost/myrhel_image             latest   4d207635db6c   9 seconds ago   239 MB
registry.redhat.io/ubi9/ubi-init   latest   a4933472b168   6 weeks ago     254 MB

The saved image can now be used to create additional containers identical to the original:

# podman run --name=mycontainer2 -it localhost/myrhel_image /bin/bash
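
If the image needs to be made available to other systems, it may also be pushed to a remote registry for which you have an account, for example (the quay.io account name below is a placeholder and assumes you have already logged in to that registry with podman login):

# podman push localhost/myrhel_image quay.io/myaccount/myrhel_image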

Removing an Image from Local Storage

To remove an image from local storage once it is no longer needed, run the podman rmi command, referencing either the image name or ID as output by the podman images command. For example, to remove the image named myrhel_image created in the previous section, run podman as follows:

# podman rmi localhost/myrhel_image

Before an image can be removed, any containers based on that image must first be removed.

Removing Containers

Even when a container has exited or been stopped, it still exists and can be restarted anytime. If a container is no longer needed, it can be deleted using the podman rm command as follows after the container has been stopped:

# podman rm mycontainer2

Building a Container with Buildah

Buildah allows new containers to be built from existing containers, an image, or entirely from scratch. Buildah also includes the ability to mount the file system of a container so that it can be accessed and modified from the host.

The following buildah command, for example, will build a container from the RHEL 9 UBI init image used earlier in this chapter (if the image has not already been pulled from the registry, buildah will download it before creating the container):

# buildah from registry.redhat.io/ubi9/ubi-init

The result of running this command will be a container named ubi-init-working-container that is ready to run:

# buildah run ubi-init-working-container cat /etc/passwd

Building a Container from Scratch

Building a container from scratch creates an empty container. Once created, packages may be installed to meet the requirements of the container. This approach is useful when creating a container that only needs the minimum of packages installed.

The first step in building from scratch is to run the following command to build the empty container:

# buildah from scratch
working-container

After the build is complete, a new container will be created named working-container:

# buildah containers
CONTAINER ID  BUILDER  IMAGE ID     IMAGE NAME                       CONTAINER NAME
dbed2be11730     *     cb642e6a9917 registry.redhat.io/ubi9/ubi:latest dbed2be1173000c099ff29c96eae59aed297b82412b240a8ed29ecec4d39a8ba
17df816ea0bb     *     a4933472b168 registry.redhat.io/ubi9/ubi-init:latest ubi-init-working-container
65b424a31039     *                  scratch                          working-container

The empty container is now ready to have some packages installed. Unfortunately, this cannot be performed within the container because not even the bash or dnf tools exist. So instead, the container filesystem needs to be mounted on the host system, and the packages are installed using dnf with the system root set to the mounted container filesystem. Begin this process by mounting the container’s filesystem as follows:

# buildah mount working-container
/var/lib/containers/storage/overlay/20b46cf0e2994d1ecdc4487b89f93f6ccf41f72788da63866b6bf80984081d9a/merged

If the file system was successfully mounted, buildah will output the mount point for the container file system. Now that we have access to the container filesystem, the dnf command can install packages into the container using the --installroot option to point to the mounted container file system. The following command, for example, installs the bash, coreutils, and dnf packages on the container filesystem (where <container_fs_mount> is the mount path output previously by the buildah mount command):

# dnf install --releasever=9.1 --installroot <container_fs_mount> bash coreutils dnf

Note that the --releasever option indicates to dnf that the packages for RHEL 9.1 are to be installed within the container.

After the installation completes, unmount the scratch filesystem as follows:

# buildah umount working-container

Once dnf has performed the package installation, the container can be run, and the bash command prompt can be accessed as follows:

# buildah run working-container bash
bash-5.1#
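
Once the scratch container has been configured with the required packages, it can be saved as an image in local storage using the buildah commit command (the image name my_scratch_image below is arbitrary):

# buildah commit working-container my_scratch_image

The new image will then appear in the output of the podman images command and may be run with podman like any other local image.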

Container Bridge Networking

As outlined in the previous chapter, container networking is implemented using the Container Networking Interface (CNI) bridged network stack. The following command shows the typical network configuration on a host system on which containers are running:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:23:24:52:52:57 brd ff:ff:ff:ff:ff:ff
    altname enp0s25
    inet 192.168.86.35/24 brd 192.168.86.255 scope global dynamic noprefixroute eno1
       valid_lft 68525sec preferred_lft 68525sec
    inet6 fd7f:86f:716a:0:223:24ff:fe52:5257/64 scope global dynamic noprefixroute
       valid_lft 1619sec preferred_lft 1619sec
    inet6 fe80::223:24ff:fe52:5257/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
3: veth0@if2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master podman0 state UP group default qlen 1000
    link/ether 3e:3a:54:b8:d8:e1 brd ff:ff:ff:ff:ff:ff link-netns netns-658a3069-5c69-185f-ef34-621107da5a71
    inet6 fe80::3c3a:54ff:feb8:d8e1/64 scope link
       valid_lft forever preferred_lft forever
6: podman0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 3e:3a:54:b8:d8:e1 brd ff:ff:ff:ff:ff:ff
    inet 10.88.0.1/16 brd 10.88.255.255 scope global podman0
       valid_lft forever preferred_lft forever
    inet6 fe80::45e:83ff:fe68:4067/64 scope link
       valid_lft forever preferred_lft forever

In the above example, the host has an interface named eno1 connected to the external network with an IP address of 192.168.86.35. In addition, a virtual interface has been created named podman0 and assigned the IP address of 10.88.0.1. Running the same ip command on a container running on the host might result in the following output:

# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0@if3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b6:fb:88:16:8b:5e brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 10.88.0.8/16 brd 10.88.255.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::b4fb:88ff:fe16:8b5e/64 scope link
       valid_lft forever preferred_lft forever

In this case, the container has an IP address of 10.88.0.8. Running the ping command on the host will verify that the host and containers are indeed on the same subnet:

# ping 10.88.0.8
PING 10.88.0.8 (10.88.0.8) 56(84) bytes of data.
64 bytes from 10.88.0.8: icmp_seq=1 ttl=64 time=0.056 ms
64 bytes from 10.88.0.8: icmp_seq=2 ttl=64 time=0.039 ms
.
.

We can also use the podman network ls command to obtain a list of container networks currently available on the host system:

# podman network ls
NETWORK ID    NAME        DRIVER
2f259bab93aa  podman      bridge

The following command can be used to display detailed information about a container network, in this case, the above podman network:

# podman network inspect podman
[
     {
          "name": "podman",
          "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
          "driver": "bridge",
          "network_interface": "podman0",
          "created": "2023-04-11T08:49:20.489892751-04:00",
          "subnets": [
               {
                    "subnet": "10.88.0.0/16",
                    "gateway": "10.88.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]

New container networks are created using the following syntax:

podman network create <network name>

For example, to create a new network named demonet, the following command would be used:

# podman network create demonet

The presence of the new network may be verified using the podman network ls command:

# podman network ls
NETWORK ID    NAME        DRIVER
9692930055dc  demonet     bridge
2f259bab93aa  podman      bridge

Once a network has been created, it can be assigned to a container as follows:

# podman network connect demonet mycontainer

Conversely, a container can be disconnected from a network using the podman network disconnect command:

# podman network disconnect demonet mycontainer

When the new container network is created, a configuration file is generated in the /etc/containers/networks directory with the name <network name>.json. In the case of the above example, a file named demonet.json that reads as follows will have been created:

{
     "name": "demonet",
     "id": "9692930055dc6ceea5cc4d0720df548daa521fe62e23ca3659a9815664bca2f6",
     "driver": "bridge",
     "network_interface": "podman1",
     "created": "2023-04-10T16:54:35.618056082-04:00",
     "subnets": [
          {
               "subnet": "10.89.0.0/24",
               "gateway": "10.89.0.1"
          }
     ],
     "ipv6_enabled": false,
     "internal": false,
     "dns_enabled": true,
     "ipam_options": {
          "driver": "host-local"
     }
}

Modifications can be made to this file to change settings such as the subnet address range, IPv6 support, and network type (set to bridge for this example) for implementing different network configurations.
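
Rather than editing the generated file, recent versions of podman also allow options such as the subnet to be declared when the network is created. The following hypothetical example creates a network named demonet2 with a specific address range:

# podman network create --subnet 10.90.0.0/24 demonet2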

Finally, we can use the following command to delete the custom network:

# podman network rm demonet

Managing Containers in Cockpit

In addition to the command-line tools outlined in this chapter, Linux containers may be created and managed using the Cockpit web interface. Assuming that Cockpit is installed and enabled on the system (a topic covered in “An Overview of the Cockpit Web Interface”) and that the steps to log into the Red Hat Container Registry outlined at the start of the chapter have been completed, sign into Cockpit and select the Podman Containers option. If the Podman module is not installed within Cockpit, open a terminal window and run the following command:

# dnf install cockpit-podman

If necessary, click the button to start the Podman service, at which point the screen should resemble Figure 28-1:

Figure 28-1

The first step is to download an image to use as the basis for a container (unless one has already been downloaded using the command-line tools). To perform the download, click the menu button marked by the arrow in Figure 28-1 above and select the Download new image option. When the search dialog appears, enter a keyword. Figure 28-2, for example, shows the result of searching for the RHEL 9 universal base images:

Figure 28-2

Once the image has been located, select it and click the Download button to pull it from the registry. Once downloaded, it will appear in the images list as shown in Figure 28-3:

Figure 28-3

To run the image as a container, click on the Create container button and configure the container in the resulting dialog:

Figure 28-4

Note that options are provided to map ports on the host to ports on the container (useful, for example, if the container is hosting a web server), to limit the memory allocated to the container, and to specify volumes to use as the container storage. An option is also available to specify the program to run when the container starts. Once configured, click the Create and run button to launch the container.

Once running, the container will appear in the Containers section, as illustrated in Figure 28-5 below:

Figure 28-5

The highlighted menu button allows actions such as starting, stopping, pausing, and saving the container to be performed. In addition, the container may also now be accessed and managed from the command line using the steps outlined earlier in the chapter.

Summary

This chapter has worked through creating and managing Linux Containers on RHEL 9 using the podman, skopeo, and buildah tools together with the Cockpit web interface, including using container images obtained from a repository and the creation of a new image built entirely from scratch.

An Introduction to RHEL 9 Containers

The preceding chapters covered the concept of virtualization, emphasizing creating and managing virtual machines using KVM. This chapter will introduce a related technology in the form of Linux Containers. While there are some similarities between virtual machines and containers, key differences will be outlined in this chapter, along with an introduction to the concepts and advantages of Linux Containers. The chapter will also overview some RHEL 9 container management tools. Once the basics of containers have been covered in this chapter, the next chapter will work through some practical examples of creating and running containers on RHEL 9.

Linux Containers and Kernel Sharing

In simple terms, Linux containers are a lightweight alternative to virtualization. A virtual machine contains and runs the entire guest operating system in a virtualized environment. The virtual machine, in turn, runs on top of an environment such as a hypervisor that manages access to the physical resources of the host system.

Containers work by using a concept referred to as kernel sharing, which takes advantage of the architectural design of Linux and UNIX-based operating systems.

To understand how kernel sharing and containers work, it helps first to understand the two main components of Linux or UNIX operating systems. At the core of the operating system is the kernel. The kernel, in simple terms, handles all the interactions between the operating system and the physical hardware. The second key component is the root file system which contains all the libraries, files, and utilities necessary for the operating system to function. Taking advantage of this structure, containers each have their own root file system but share the host operating system’s kernel. This structure is illustrated in the architectural diagram in Figure 27-1 below.

This type of resource sharing is made possible by the ability of the kernel to dynamically change the current root file system (a concept known as change root or chroot) to a different root file system without having to reboot the entire system. Linux containers are essentially an extension of this capability combined with a container runtime, the responsibility of which is to provide an interface for executing and managing the containers on the host system. Several container runtimes are available, including Docker, lxd, containerd, and CRI-O. Earlier versions of RHEL used Docker by default, but Podman has supplanted this as the default in RHEL 9.

Figure 27-1

Container Uses and Advantages

The main advantage of containers is that they require considerably less resource overhead than virtualization, allowing many container instances to be run simultaneously on a single server. They can be started and stopped rapidly and efficiently in response to demand levels. In addition, containers run natively on the host system, providing a level of performance that a virtual machine cannot match.

Containers are also highly portable and can be easily migrated between systems. Combined with a container management system such as Docker, OpenShift, and Kubernetes, it is possible to deploy and manage containers on a vast scale spanning multiple servers and cloud platforms, potentially running thousands of containers.

Containers are frequently used to create lightweight execution environments for applications. In this scenario, each container provides an isolated environment containing the application together with all of the runtime and supporting files required by that application to run. The container can then be deployed to any other compatible host system that supports container execution and runs without any concerns that the target system may not have the necessary runtime configuration for the application – all of the application’s dependencies are already in the container.

Containers are also helpful when bridging the gap between development and production environments. By performing development and QA work in containers, they can be passed to production and launched safely because the applications run in the same container environments in which they were developed and tested.

Containers also promote a modular approach to deploying large and complex solutions. Instead of developing applications as single monolithic entities, containers can be used to design applications as groups of interacting modules, each running in a separate container.

One possible drawback of containers is that the guest operating systems must be compatible with the version of the kernel being shared. It is not, for example, possible to run Microsoft Windows in a container on a Linux system. Nor is it possible for a Linux guest system designed for the 2.6 version of the kernel to share a 2.4 version kernel. These requirements are not, however, what containers were designed for. Rather than being seen as limitations, these restrictions should be considered some of the key advantages of containers in providing a simple, scalable, and reliable deployment platform.

RHEL 9 Container Tools

RHEL 9 provides several tools for creating, inspecting, and managing containers. The main tools are as follows:

  • buildah – A command-line tool for building container images.
  • podman – A command-line based container runtime and management tool. Performs tasks such as downloading container images from remote registries and inspecting, starting, and stopping images.
  • skopeo – A command-line utility used to convert container images, copy images between registries and inspect images stored in registries without downloading them.
  • runc – A lightweight container runtime for launching and running containers from the command line.
  • OpenShift – An enterprise-level container application management platform consisting of command-line and web-based tools.

All of the above tools comply with the Open Container Initiative (OCI), a set of specifications designed to ensure that containers conform to the same standards between competing tools and platforms.

Container Catalogs, Repositories, and Registries

The Red Hat Container Catalog (RHCC) provides a set of pre-built images tested by Red Hat and can be downloaded and used as the basis for your own container images. The RHCC can be accessed at the following URL and allows searches to be performed for specific images:

https://catalog.redhat.com/software/containers/search

After completing a search, the catalog will display a list of matching repositories. A repository in this context is a collection of associated images. Figure 27-2, for example, shows a partial list of the container image repositories available for RHEL 9 related containers:

Figure 27-2

Selecting a repository from the list will display detailed information about the repository. When reviewing a repository in the catalog, key pieces of information are the repository name and the location of the registry where the repository is stored. Both specifications must be referenced when the container image is downloaded for use.

Container Networking

By default, containers are connected to a network using a Container Networking Interface (CNI) bridged network stack. In the bridged configuration, all the containers running on a server belong to the same subnet and, as such, can communicate with each other. The containers are also connected to the external network by bridging the host system’s network connection. Similarly, the host can access the containers via a virtual network interface (usually named podman0) which will have been created as part of the container tool installation.

Summary

Linux Containers offer a lightweight alternative to virtualization and take advantage of the structure of the Linux and Unix operating systems. Linux Containers share the host operating system’s kernel, with each container having its own root file system containing the files, libraries, and applications. As a result, containers are highly efficient and scalable and provide an ideal platform for building and deploying modular enterprise-level solutions. In addition, several tools and platforms are available for building, deploying, and managing containers, including third-party solutions and those provided by Red Hat.

Managing KVM using the RHEL 9 virsh Command-Line Tool

In previous chapters, we have covered the installation and configuration of KVM-based guest operating systems on RHEL 9. This chapter explores additional areas of the virsh tool that have not been covered in previous chapters and how it may be used to manage KVM-based guest operating systems from the command line.

The virsh Shell and Command-Line

The virsh tool is both a command-line tool and an interactive shell environment. When used in the command-line mode, the command is issued at the command prompt with sets of arguments appropriate to the task.

To use the options as command-line arguments, use them at a terminal command prompt, as shown in the following example:

# virsh <option>

The virsh tool, when used in shell mode, provides an interactive environment from which to issue sequences of commands.

To run commands in the virsh shell, run the following command:

# virsh
Welcome to virsh, the virtualization interactive terminal.
 
Type:  'help' for help with commands
       'quit' to quit
 
virsh #

At the virsh # prompt, enter the options you wish to run. The following virsh session, for example, lists the current virtual machines, starts a virtual machine named FedoraVM, and then obtains another listing to verify the VM is running:

# virsh
Welcome to virsh, the virtualization interactive terminal.
 
Type:  'help' for help with commands
       'quit' to quit
 
virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL9VM                       running
 9     CentOS9VM                     running

virsh # start FedoraVM
Domain FedoraVM started
 
virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL9VM                       running
 9     CentOS9VM                     running
10     FedoraVM                      running
 
virsh #

The virsh tool supports a wide range of commands, a complete listing of which may be obtained using the help option:

# virsh help

Additional details on the syntax for each command may be obtained by specifying the command after the help directive:

# virsh help restore
  NAME
    restore - restore a domain from a saved state in a file
 
  SYNOPSIS
    restore <file> [--bypass-cache] [--xml <string>] [--running] [--paused]
 
  DESCRIPTION
    Restore a domain.
 
  OPTIONS
    [--file] <string>  the state to restore
    --bypass-cache   avoid file system cache when restoring
    --xml <string>   filename containing updated XML for the target
    --running        restore domain into running state
    --paused         restore domain into paused state

In the remainder of this chapter, we will look at some of these commands in more detail.

Listing Guest System Status

The status of the guest systems on a RHEL 9 virtualization host may be viewed at any time using the list option of the virsh tool. For example:

# virsh list

The above command will display output containing a line for each guest similar to the following:

 Id    Name                           State
----------------------------------------------------
 8     RHEL9VM                       running
 9     CentOS9VM                     running
10     FedoraVM                      running
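
Note that the list option only displays guests that are currently running. To also include guests that are defined but currently shut down, add the --all flag:

# virsh list --all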

Starting a Guest System

A guest operating system can be started using the virsh tool combined with the start option followed by the name of the guest operating system to be launched. For example:

# virsh start myGuestOS

Shutting Down a Guest System

The shutdown option of the virsh tool, as the name suggests, is used to shut down a guest operating system:

# virsh shutdown guestName

Note that the shutdown option allows the guest operating system to perform an orderly shutdown when it receives the instruction. To instantly stop a guest operating system, the destroy option may be used (with the risk of file system damage and data loss):

# virsh destroy guestName

Suspending and Resuming a Guest System

A guest system can be suspended and resumed using the virsh tool’s suspend and resume options. For example, to suspend a specific system:

# virsh suspend guestName

Similarly, to resume the paused system:

# virsh resume guestName
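
The current state of a guest (for example, running or paused) can be confirmed at any time using the domstate option:

# virsh domstate guestName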

A suspended session will be lost if the host system is rebooted. Also, be aware that a suspended system continues to reside in memory. Therefore, to save a session such that it no longer takes up memory and can be restored to its exact state (even after a reboot), it is necessary to save and restore the guest.

Saving and Restoring Guest Systems

A running guest operating system can be saved and restored using the virsh utility. When saved, the current status of the guest operating system is written to disk and removed from system memory. A saved system may subsequently be restored at any time (including after a host system reboot).

To save a guest:

# virsh save guestName path_to_save_file

To restore a saved guest operating system session:

# virsh restore path_to_save_file
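
For example, to save the FedoraVM guest started earlier in this chapter to a hypothetical file location and later restore it:

# virsh save FedoraVM /tmp/FedoraVM.save
# virsh restore /tmp/FedoraVM.save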

Rebooting a Guest System

To reboot a guest operating system:

# virsh reboot guestName

Configuring the Memory Assigned to a Guest OS

To configure the memory assigned to a guest OS, use the setmem option of the virsh command. For example, the following command reduces the memory allocated to a guest system to 256MB (note that virsh interprets the value in kibibytes unless a unit suffix such as M is appended):

# virsh setmem guestName 256M

Note that the new memory setting must fall within the maximum memory currently allocated to the domain. This maximum may be increased using the setmaxmem option.
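
For example, to raise the maximum to 4096MB in the guest's persistent configuration (a sketch assuming the same guestName domain; the --config flag means the change takes effect the next time the guest boots):

# virsh setmaxmem guestName 4096M --config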

Summary

The virsh tool provides various options for creating, monitoring, and managing guest virtual machines. As outlined in this chapter, the tool can be used in either command-line or interactive modes.

Creating a RHEL 9 KVM Networked Bridge Interface

By default, the KVM virtualization environment on RHEL 9 creates a virtual network to which virtual machines may connect. It is also possible to configure a direct connection using a MacVTap driver. However, as outlined in the chapter entitled An Overview of RHEL 9 Virtualization Techniques, this approach does not allow the host and guest systems to communicate.

This chapter aims to cover the steps involved in creating a network bridge on RHEL 9, enabling guest systems to share one or more of the host system’s physical network connections while still allowing the guest and host systems to communicate.

In the remainder of this chapter, we will explain how to configure an RHEL 9 network bridge for KVM-based guest operating systems.

Getting the Current Network Manager Settings

A network bridge can be created using the NetworkManager command-line interface tool (nmcli). NetworkManager is installed and enabled by default on RHEL 9 systems and is responsible for detecting and connecting to network devices, as well as providing an interface for managing networking configuration.

A list of current network connections on the host system can be displayed as follows:

# nmcli con show
NAME         UUID                                  TYPE      DEVICE
eno1         99d40009-6bb1-4182-baad-a103941c90ff  ethernet  eno1
virbr0       7cb1265e-ffb9-4cb3-aaad-2a6fe5880d38  bridge    virbr0

The above output shows that the host has an Ethernet network connection established via a device named eno1 and the default bridge interface named virbr0, which provides access to the NAT-based virtual network to which KVM guest systems are connected by default.

Similarly, the following command can be used to identify the devices (both virtual and physical) that are currently configured on the system:

# nmcli device show
GENERAL.DEVICE:                         eno1
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         AC:16:2D:11:16:73
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     eno1
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         192.168.86.59/24
IP4.GATEWAY:                            192.168.86.1
IP4.ROUTE[1]:                           dst = 0.0.0.0/0, nh = 192.168.86.1, mt = 100
IP4.ROUTE[2]:                           dst = 192.168.86.0/24, nh = 0.0.0.0, mt = 100
IP4.DNS[1]:                             192.168.86.1
IP4.DOMAIN[1]:                          lan
IP6.ADDRESS[1]:                         fe80::6deb:f739:7d67:2242/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 100
IP6.ROUTE[2]:                           dst = ff00::/8, nh = ::, mt = 256, table=255
 
GENERAL.DEVICE:                         virbr0
GENERAL.TYPE:                           bridge
GENERAL.HWADDR:                         52:54:00:59:30:22
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     virbr0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
IP4.ADDRESS[1]:                         192.168.122.1/24
IP4.GATEWAY:                            --
IP4.ROUTE[1]:                           dst = 192.168.122.0/24, nh = 0.0.0.0, mt = 0
IP6.GATEWAY:                            --
.
.

The above partial output indicates that the host system on which the command was executed contains a physical Ethernet device (eno1) and a virtual bridge (virbr0).

The virsh command may also be used to list the virtual networks currently configured on the system:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

Currently, the only virtual network present is the default network provided by virbr0. Now that some basic information about the current network configuration has been obtained, the next step is to create a network bridge connected to the physical network device (in this case, eno1).

Creating a Network Manager Bridge from the Command-Line

The first step in creating the network bridge is adding a new connection to the configuration. This can be achieved using the nmcli tool, specifying that the connection is to be a bridge and providing names for both the connection and the interface:

# nmcli con add ifname br0 type bridge con-name br0

Once the connection has been added, a bridge slave interface needs to be established between physical device eno1 (the slave) and the bridge connection br0 (the master) as follows:

# nmcli con add type bridge-slave ifname eno1 master br0

At this point, the NetworkManager connection list should read as follows:

# nmcli con show
NAME               UUID                                  TYPE      DEVICE
eno1               66f0abed-db43-4d79-8f5e-2cbf8c7e3aff  ethernet  eno1
virbr0             0fa934d5-0508-47b7-a119-33a232b03f64  bridge    virbr0
br0                59b6631c-a283-41b9-bbf9-56a60ec75653  bridge    br0
bridge-slave-eno1  395bb34b-5e02-427a-ab31-762c9f878908  ethernet  --

The next step is to start up the bridge interface. If the steps to configure the bridge are being performed over a network connection (i.e., via SSH), this step can be problematic because the current eno1 connection must be brought down before the bridge connection can be brought up. This will sever the remote session before the bridge connection is enabled to replace it, potentially leaving the remote host unreachable.

If you are accessing the host system remotely, this problem can be avoided by creating a shell script to perform the network changes. This will ensure that the bridge interface is enabled after the eno1 interface is brought down, allowing you to reconnect to the host after the changes are complete. Begin by creating a shell script file named bridge.sh containing the following commands:

#!/bin/bash
nmcli con down eno1
nmcli con up br0

Once the script has been created, execute it as follows:

# sh ./bridge.sh

When the script executes, the connection will be lost when the eno1 connection is brought down. After waiting a few seconds, however, it should be possible to reconnect to the host once the br0 connection has been activated.
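
After reconnecting, the ip tool can be used, for example, to verify that the br0 interface is up and has been assigned an IP address:

# ip addr show br0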

If you are working locally on the host, the two nmcli commands can be run within a terminal window without any risk of losing connectivity:

# nmcli con down eno1
# nmcli con up br0

Once the bridge is up and running, the connection list should now include both the bridge and the bridge-slave connections:

# nmcli con show
NAME               UUID                                  TYPE      DEVICE
br0                59b6631c-a283-41b9-bbf9-56a60ec75653  bridge    br0
bridge-slave-eno1  395bb34b-5e02-427a-ab31-762c9f878908  ethernet  eno1
virbr0             0fa934d5-0508-47b7-a119-33a232b03f64  bridge    virbr0
eno1               66f0abed-db43-4d79-8f5e-2cbf8c7e3aff  ethernet  --

Note that the eno1 connection is still listed but is no longer active. To exclude inactive connections from the list, use the --active flag when requesting the list:

# nmcli con show --active
NAME               UUID                                  TYPE      DEVICE
br0                c2fa30cb-b1a1-4107-80dd-b1765878ab4f  bridge    br0
bridge-slave-eno1  21e8c945-cb94-4c09-99b0-17af9b5a7319  ethernet  eno1
virbr0             a877302e-ea02-42fe-a3c1-483440aae774  bridge    virbr0

Declaring the KVM Bridged Network

At this point, the bridge connection is on the system but is not visible to the KVM environment. Running the virsh command should still list the default network as being the only available network option:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

Before a virtual machine can use the bridge, it must be declared and added to the KVM network configuration. This involves creating a definition file and, once again, using the virsh command-line tool.

Begin by creating a definition file for the bridge network named bridge.xml that reads as follows:

<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0" />
</network>

Next, use the file to define the new network:

# virsh net-define ./bridge.xml

Once the network has been defined, start it and, if required, configure it to autostart each time the system reboots:

# virsh net-start br0
# virsh net-autostart br0

Once again, list the networks to verify that the bridge network is now accessible within the KVM environment:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 br0                  active     yes           yes
 default              active     yes           yes

Using a Bridge Network in a Virtual Machine

To create a virtual machine that uses the bridge network, use the virt-install --network option and specify the br0 bridge name. For example:

# virt-install --name demo_vm_guest --memory 1024 --disk path=/tmp/demo_vm_guest.img,size=10 --network network=br0 --cdrom /home/demo/rhel-baseos-9.1-x86_64-dvd.iso

When the guest operating system is running, it will appear on the same physical network as the host system and will no longer be on the NAT-based virtual network.
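
This can be verified from a command prompt within the guest, for example by checking that its network interface has obtained an address on the host's physical subnet (192.168.86.0/24 in the earlier example output) rather than the 192.168.122.0/24 NAT range:

$ ip addr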

The bridge may also be selected for virtual machines within the Cockpit interface by editing the virtual machine, locating the Network interfaces section, and clicking the Edit button as highlighted in Figure 25-1 below:

Figure 25-1

Within the resulting interface settings dialog, change the Interface type menu to Bridge to LAN and set the Source to br0 as shown in Figure 25-2:

Figure 25-2

Similarly, when creating a new virtual machine using the virt-manager tool, the bridge will be available within the Network selection menu:

Figure 25-3

To modify an existing virtual machine so that it uses the bridge, use the virsh edit command. This command loads the XML definition file into an editor where changes can be made and saved:

# virsh edit GuestName

By default, the file will be loaded into the vi editor. To use a different editor, change the $EDITOR environment variable, for example:

# export EDITOR=gedit

To change from the default virtual network, locate the <interface> section of the file, which will read as follows for a NAT-based configuration:

<interface type='network'>
      <mac address='<your mac address here>'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Alternatively, if the virtual machine was using a direct connection, the entry may read as follows:

<interface type='direct'>
      <mac address='<your mac address here>'/>
      <source dev='eno1' mode='vepa'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>

To use the bridge, change the interface type to 'network' (if it is not already) and the source property so that the section reads as follows before saving the file:

<interface type='network'>
      <mac address='<your mac address here>'/>
      <source network='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

If the virtual machine is already running, the change will not take effect until it is restarted.
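
For example, to restart the guest so that the new interface configuration takes effect:

# virsh shutdown GuestName
# virsh start GuestName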

Creating a Bridge Network using nm-connection-editor

If either local or remote desktop access is available on the host system, much of the bridge configuration process can be performed using the nm-connection-editor graphical tool. To use this tool, open a Terminal window within the desktop and enter the following command:

# nm-connection-editor

When the tool has loaded, the window shown in Figure 25-4 will appear listing the currently configured network connections (essentially the same output as that generated by the nmcli con show command):

Figure 25-4

To create a new connection, click on the ‘+’ button in the window’s bottom left-hand corner.

Then, from the resulting dialog (Figure 25-5), select the Bridge option from the menu:

Figure 25-5

With the bridge option selected, click the Create button to proceed to the bridge configuration screen. Begin by changing both the connection and interface name fields to br0 before clicking on the Add button located to the right of the Bridge connections list, as highlighted in Figure 25-6:

Figure 25-6

From the connection type dialog (Figure 25-7), change the menu setting to Ethernet before clicking on the Create button:

Figure 25-7

Another dialog will now appear in which the bridge slave connection needs to be configured. Within this dialog, select the physical network to which the bridge is to connect (for example, eno1) from the Device menu:

Figure 25-8

Click on the Save button to apply the changes and return to the Editing br0 dialog (as illustrated in Figure 25-6 above). Within this dialog, click on the Save button to create the bridge. On returning to the main window, the new bridge and slave connections should now be listed:

Figure 25-9

All that remains is to bring down the original eno1 connection and bring up the br0 connection using the steps outlined in the previous chapter (remembering to perform these steps in a shell script if the host is being accessed remotely):

# nmcli con down eno1
# nmcli con up br0

It will also be necessary, as it was when creating the bridge using the command-line tool, to add this bridge to the KVM network configuration. To do so, repeat the steps outlined in the “Declaring the KVM Bridged Network” section above. Once this step has been taken, the bridge is ready to be used by guest virtual machines.

Summary

By default, KVM virtual machines are connected to a virtual network that uses NAT to provide access to the network to which the host system is connected. If the guests are required to appear on the network with their own IP addresses, they need to be configured to share the physical network interface of the host system. As outlined in this chapter, this can be achieved using the nmcli or nm-connection-editor tools to create a networked bridge interface.