Ubuntu 20.04 System and Process Monitoring

An important part of running and administering an Ubuntu system involves monitoring the overall system health in terms of memory, swap, storage and processor usage. This includes knowing how to inspect and manage both the system and user processes that are running in the background. This chapter will outline some of the tools and utilities that can be used to monitor both system resources and processes on an Ubuntu system.

1.1  Managing Processes

Even when an Ubuntu system appears to be idle, many system processes will be running silently in the background to keep the operating system functioning. Each time you execute a command or launch an app, user processes are started which will run until the associated task is completed.

To obtain a list of the active user processes running within the context of the current terminal or command-prompt session, use the ps command as follows:

$ ps
  PID TTY          TIME CMD
10395 pts/1    00:00:00 bash
13218 pts/1    00:00:00 ps

The output from the ps command shows that there are currently two user processes running within the context of the current terminal window or command prompt session: the bash shell into which the command was entered, and the ps command itself.

To list the active processes for the current user across all terminal sessions, use the ps command with the -a flag. This will list running processes associated with a terminal regardless of where they are running (for example, processes started in other terminal windows):

$ ps -a
  PID TTY          TIME CMD
  976 tty1     00:00:22 Xorg
 1026 tty1     00:00:00 gnome-session-b
.
.
13217 pts/0    00:00:00 nano
13265 pts/2    00:00:00 cat
13272 pts/1    00:00:00 ps

As shown in the above output, the user has some processes running that relate to the GNOME desktop in addition to the nano text editor, the cat command and the ps command.

To list the processes for a specific user, run ps with the -u flag followed by the user name:

# ps -u john
  PID TTY          TIME CMD
  914 ?        00:00:00 systemd
  915 ?        00:00:00 (sd-pam)
  970 ?        00:00:00 gnome-keyring-d
  974 tty1     00:00:00 gdm-x-session
.
.

Note that each process is assigned a unique process ID (PID), which can be used to stop the process by sending it a termination (TERM) signal via the kill command. For example:

$ kill 13217

The advantage of ending a process with the TERM signal is that it gives the process the opportunity to exit gracefully, potentially saving any data that might otherwise be lost.
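
If the process ID is not known, processes can also be signaled by name using the pkill command (provided by the procps package); for example, the following is one way to send the TERM signal to the nano process from the earlier listing:

$ pkill -TERM nano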

If the standard termination signal does not terminate the process, repeat the kill command with the -9 option. This sends a KILL signal, which should cause even a frozen process to exit, but it does not give the process a chance to exit gracefully, possibly resulting in data loss:

$ kill -9 13217

To list all of the processes running on a system (including all user and system processes), execute the following command:

$ ps -ax
  PID TTY      STAT   TIME COMMAND
    1 ?        Ss     0:05 /sbin/init splash
    2 ?        S      0:00 [kthreadd]
    3 ?        I<     0:00 [rcu_gp]
    4 ?        I<     0:00 [rcu_par_gp]
.
.

To list all processes and include information about process ownership, CPU and memory use, execute the ps command with the -aux option:

$ ps -aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.2 225860  9540 ?        Ss   10:37   0:05 /sbin/init splash
root         2  0.0  0.0      0     0 ?        S    10:37   0:00 [kthreadd]
root         3  0.0  0.0      0     0 ?        I<   10:37   0:00 [rcu_gp]
root         4  0.0  0.0      0     0 ?        I<   10:37   0:00 [rcu_par_gp]
.
.
demo     13924  0.0  0.1  28272  3836 pts/2    S+   14:57   0:00 man ps
demo     13934  0.0  0.0  16952  1048 pts/2    S+   14:57   0:00 pager
demo     14068  0.0  0.0  46772  3560 pts/1    R+   15:02   0:00 ps aux
.
.

A Linux process can start its own sub-processes (referred to as spawning), resulting in a hierarchical parent-child relationship between processes. To view these relationships, use the ps command with the -f option, which adds a PPID (parent process ID) column to the output. Figure 37-1, for example, shows part of the output of a ps -af command:

Figure 37-1
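
For an ASCII tree rendered directly in the terminal, the --forest option may be added to ps, or the pstree command (from the psmisc package, if installed) may be used; for example:

$ ps -ef --forest
$ pstree -p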

From within the GNOME desktop, process information may also be viewed via the System Monitor tool. This tool can be launched either by searching for “System Monitor” within the desktop environment or from the command-line as follows:

$ gnome-system-monitor

Once the System Monitor has launched, select the Processes button located in the toolbar to list the processes running on the system as shown in Figure 37-2 below:

Figure 37-2

To change the processes listed (for example to list all processes, or just your own processes), use the menu as illustrated in Figure 37-3:

Figure 37-3

To filter the list of processes, click on the search button in the title bar and enter the process name into the search field:

Figure 37-4

To display additional information about a specific process, select it from the list and click on the button located in the bottom right-hand corner (marked A in Figure 37-5) of the dialog:

Figure 37-5

When the button is clicked a dialog similar to that marked B in the above figure will appear. To terminate a process, select it from the list and click on the End Process button (C).

To monitor CPU, memory, swap and network usage, simply click on the Resources button in the title bar to display the screen shown in Figure 37-6:

Figure 37-6

Similarly, a summary of storage space used on the system can be viewed by selecting the File Systems toolbar button:

Figure 37-7

1.2  Real-time System Monitoring with htop

As outlined in the chapter entitled “An Overview of the Ubuntu Cockpit Web Interface”, the Cockpit web interface can be used to perform some basic system monitoring. The previous section also explained how the GNOME System Monitor tool can be used to monitor processes and system resources. In this chapter we have also explored how the ps command can be used to provide a snapshot view of the processes running on an Ubuntu system. The ps command does not, however, provide a real-time view of the processes and resource usage on the system. To monitor system resources and processes in real-time from the command prompt, the htop command is an ideal tool. Though not generally installed by default, htop may be installed as follows:

# apt install htop

Once installed, launch htop as follows:

$ htop

When running, htop will list the processes running on the system ranked by system resource usage (with the most demanding process in the top position). The upper section of the screen displays a graph showing memory and swap usage information together with CPU data for all CPU cores. All of this output is constantly updated, allowing the system to be monitored in real time:

Figure 37-8

To limit the information displayed to the processes belonging to a specific user, start htop with the -u option followed by the user name:

$ htop -u john

For a full listing of the features available in htop, press the F1 key or refer to the man page:

$ man htop

1.3  Command-Line Disk and Swap Space Monitoring

Disk space can, of course, be monitored both from within Cockpit and using the GNOME System Monitor. To identify disk usage from the command line, however, the df command provides a useful overview:

# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           374M  1.7M  372M   1% /run
/dev/sda5        92G  7.2G   80G   9% /
tmpfs           1.9G     0  1.9G   0% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup
.
.

In the above output, the root filesystem (/) is currently using only 7.2GB of space, leaving 80GB of available space.
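
To identify which directories are consuming the space reported by df, the du command is useful. For example, the following (the /var path is just an illustration) summarizes the size of each top-level directory under /var and sorts the results:

# du -h -d 1 /var | sort -h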

To review current swap space and memory usage, run the free command:

# free
              total        used        free      shared  buff/cache   available
Mem:        3823720      879916     1561108      226220     1382696     2476300

To continuously monitor memory and swap levels, use the free command with the -s option, specifying the delay in seconds between each update (keeping in mind that the htop tool may provide a better way to view this data in real-time):

$ free -s 1
Mem:        3823720      879472     1561532      226220     1382716     2476744
Swap:       2097148           0     2097148

              total        used        free      shared  buff/cache   available
Mem:        3823720      879140     1559940      228144     1384640     2475152
Swap:       2097148           0     2097148
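
For more readable figures, the -h option reports the same values in human-readable units and can be combined with -s; for example:

$ free -h -s 10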

To monitor disk I/O from the command-line, consider using the iotop command which can be installed as follows:

# apt install iotop

Once installed and executed (iotop must be run with system administrator privileges), the tool will display a real-time list of disk I/O on a per process basis:

Figure 37-9
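
By default iotop lists every process on the system; to restrict the display to processes that are actually performing I/O, the -o (--only) option may be used:

# iotop -o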

1.4  Summary

Even a system that appears to be doing nothing will have many system processes running in the background. Activities performed by users on the system will result in additional processes being started. Processes can also spawn their own child processes. Each of these processes will use some amount of system resources including memory, swap space, processor cycles, disk storage and network bandwidth. This chapter has explored a set of tools that can be used to monitor both process and system resources on a running system and, when necessary, kill errant processes that may be impacting the performance of a system.

Adding and Managing Ubuntu 20.04 Swap Space

An important part of maintaining the performance of an Ubuntu system involves ensuring that adequate swap space is available to meet the memory demands placed on the system. The goal of this chapter, therefore, is to provide an overview of swap management on Ubuntu.

1.1  What is Swap Space?

Computer systems have a finite amount of physical memory that is made available to the operating system. When the operating system begins to approach the limit of the available memory it frees up space by writing memory pages to disk. When any of those pages are required by the operating system they are subsequently read back into memory. The area of the disk allocated for this task is referred to as swap space.

1.2  Recommended Swap Space for Ubuntu

The amount of swap recommended for Ubuntu depends on a number of factors including the amount of memory in the system, the workload imposed on that memory and whether the system is required to support hibernation. The current guidelines for Ubuntu swap space are as follows:

Amount of installed RAM    Recommended swap space    Recommended swap space if hibernation enabled
1GB                        1GB                       2GB
2GB                        1GB                       3GB
3GB                        2GB                       5GB
4GB                        2GB                       6GB
5GB                        2GB                       7GB
6GB                        2GB                       8GB
8GB                        3GB                       11GB
12GB                       3GB                       15GB
16GB                       4GB                       32GB
24GB                       5GB                       48GB

Table 36-1

For systems with memory configurations exceeding 24GB refer to the following web page for swap space guidelines:

https://help.ubuntu.com/community/SwapFaq

When a system enters hibernation, the current system state is written to the hard disk and the host machine is powered off. When the machine is subsequently powered on, the state of the system is restored from the hard disk drive. This differs from suspension where the system state is stored in RAM. The machine then enters a sleep state whereby power is maintained to the system RAM while other devices are shut down.

1.3  Identifying Current Swap Space Usage

The current amount of swap used by an Ubuntu system may be identified in a number of ways. One option is to view the contents of the /proc/swaps file:

# cat /proc/swaps
Filename               Type       Size        Used     Priority
/dev/dm-1              partition  4169724     41484    -2

Alternatively, the swapon command may be used:

# swapon
NAME      TYPE      SIZE  USED PRIO
/dev/dm-1 partition   4G 40.5M   -2

To view the amount of swap space relative to the overall available RAM, the free command may be used:

# free
        total        used        free      shared  buff/cache   available
Mem:  4035436     1428276     2224596       21968      382564     2360172
Swap: 4169724       41484     4128240

1.4  Adding a Swap File to an Ubuntu System

Additional swap may be added to the system by creating a file and assigning it as swap. Begin by creating the swap file using the dd command. The size of the file can be changed by adjusting the count= parameter. The following command, for example, creates a file of approximately 2.0 GB:

# dd if=/dev/zero of=/newswap bs=1024 count=2000000
2000000+0 records in
2000000+0 records out
2048000000 bytes (2.0 GB, 1.9 GiB) copied, 3.62697 s, 565 MB/s
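
As an aside, on most modern filesystems (ext4 in particular) the file can usually be preallocated more quickly with fallocate instead of dd; a sketch creating a 2 GB file at the same path (note that some filesystems still require the dd approach for swap files):

# fallocate -l 2G /newswap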

Before converting the file to a swap file, it is important to make sure the file has secure permissions set:

# chmod 0600 /newswap

Once a suitable file has been created, it needs to be converted into a swap file using the mkswap command:

# mkswap /newswap
Setting up swapspace version 1, size = 1.9 GiB (2047995904 bytes)
no label, UUID=4ffc238d-7fde-4367-bd98-c5c46407e535

With the swap file created and configured it can be added to the system in real-time using the swapon utility:

# swapon /newswap

Re-running swapon should report that the new file is now being used as swap:

# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   4G   0B   -2
/newswap  file       1.9G   0B   -3

The swap file may be deactivated at any time using the swapoff utility as follows:

# swapoff /newswap

Finally, modify the /etc/fstab file to automatically add the new swap at system boot time by adding the following line:

/newswap    swap    swap   defaults 0 0
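
The new entry can be checked without rebooting, since swapon -a activates any swap listed in /etc/fstab that is not already active; a quick sketch:

# swapon -a
# swapon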

1.5  Adding Swap as a Partition

As an alternative to designating a file as swap space, entire disk partitions may also be designated as swap. The steps to achieve this are largely the same as those for adding a swap file. Before allocating a partition to swap, however, make sure that any existing data on the corresponding filesystem is either backed up or no longer needed and that the filesystem has been unmounted.

Assuming that a partition exists on a disk drive represented by /dev/sdb1, for example, the first step would be to convert this into a swap partition, once again using the mkswap utility:

# mkswap /dev/sdb1
mkswap: /dev/sdb1: warning: wiping old xfs signature.
Setting up swapspace version 1, size = 8 GiB (8587833344 bytes)
no label, UUID=a899c8ec-c410-4569-ba18-ddea03370c7f 

Next, add the new partition to the system swap and verify that it has indeed been added:

# swapon /dev/sdb1
# swapon
NAME      TYPE      SIZE USED PRIO
/dev/dm-1 partition   4G   0B   -2
/dev/sdb1 partition   8G   0B   -3

Once again, the /etc/fstab file may be modified to automatically add the swap partition at boot time as follows:

/dev/sdb1    swap    swap   defaults 0 0
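
As an aside, device names such as /dev/sdb1 can change between reboots, so it is often safer to reference the partition by UUID (as reported by mkswap above, or by running blkid /dev/sdb1). The equivalent fstab entry would then read:

UUID=a899c8ec-c410-4569-ba18-ddea03370c7f    swap    swap   defaults 0 0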

1.6  Adding Space to an Ubuntu LVM Swap Volume

On systems using Logical Volume Management, an alternative to adding swap via file or disk partition is to extend the logical volume used for the swap space.

The first step is to identify the current amount of swap available and the volume group and logical volume used for the swap space using the lvdisplay utility (for more information on LVM, refer to the chapter entitled “Adding a New Disk to an Ubuntu Volume Group and Logical Volume”):

# lvdisplay
.
.
  --- Logical volume ---
  LV Path                /dev/vgubuntu/swap_1
  LV Name                swap_1
  VG Name                vgubuntu
  LV UUID                nJPip0-Q6dx-Mfe3-4Aao-gWAa-swDk-7ZiPdP
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2020-01-13 13:16:18 -0500
  LV Status              available
  # open                 2
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

Clearly the swap resides on a logical volume named swap_1 which is part of the volume group named vgubuntu. The next step is to verify if there is any space available on the volume group that can be allocated to the swap volume:

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree 
  vgubuntu   2   3   0 wz--n- 197.66g <22.00g

If the amount of space available is sufficient to meet additional swap requirements, turn off the swap and extend the swap logical volume to use as much of the available space as needed to meet the system’s swap requirements:

# swapoff /dev/vgubuntu/swap_1
# lvextend -L+8GB /dev/vgubuntu/swap_1
    Logical volume vgubuntu/swap_1 successfully resized.

Next, reformat the swap volume and turn the swap back on:

# mkswap /dev/vgubuntu/swap_1
mkswap: /dev/vgubuntu/swap_1: warning: wiping old swap signature.
Setting up swapspace version 1, size = 12 GiB (12754874368 bytes)
no label, UUID=241a4818-e51c-4b8c-9bc9-1697fc2ce26e
 
# swapon /dev/vgubuntu/swap_1

Having made the changes, check that the swap space has increased:

# swapon
NAME      TYPE       SIZE USED PRIO
/dev/dm-1 partition  12G   0B   -2

1.7  Adding Swap Space to the Volume Group

In the above section we extended the swap logical volume to use space that was already available in the volume group. If no space is available in the volume group then it will need to be added before the swap can be extended.

Begin by checking the status of the volume group:

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  vgubuntu   1   2   0 wz--n- <73.75g    0

The above output indicates that no space is available within the volume group. Suppose, however, that we have a requirement to add 8 GB to the swap on the system. Clearly, this will require the addition of more space to the volume group. For the purposes of this example it will be assumed that a disk that is 10 GB in size and represented by /dev/sdb is available for addition to the volume group. The first step is to turn this disk into a physical volume:

# pvcreate /dev/sdb
  Physical volume "/dev/sdb" successfully created.

If the creation fails with a message similar to “Device /dev/sdb excluded by a filter”, it may be necessary to wipe the disk before creating the physical volume:

# wipefs -a /dev/sdb
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x1fffffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioctl to re-read partition table: Success

Next, the volume group needs to be extended to use this additional physical volume:

# vgextend vgubuntu /dev/sdb
  Volume group "vgubuntu" successfully extended

At this point the vgs command should report the addition of the 10 GB of space to the volume group:

# vgs
  VG        #PV #LV #SN Attr   VSize  VFree  
  vgubuntu   2   2   0 wz--n- 83.74g <10.00g

Now that the additional space is available in the volume group, the swap logical volume may be extended to utilize the space. First, turn off the swap using the swapoff utility:

# swapoff /dev/vgubuntu/swap_1

Next, extend the logical volume to use the new space:

# lvextend -L+9.7GB /dev/vgubuntu/swap_1
  Rounding size to boundary between physical extents: 9.70 GiB.
  Size of logical volume vgubuntu/swap_1 changed from 980.00 MiB (245 extents) to 10.66 GiB (2729 extents).
  Logical volume vgubuntu/swap_1 successfully resized.

Re-create the swap on the logical volume:

# mkswap /dev/vgubuntu/swap_1
mkswap: /dev/vgubuntu/swap_1: warning: wiping old swap signature.
Setting up swapspace version 1, size = 10.7 GiB (11446251520 bytes)
no label, UUID=447fb9e5-5473-4f2c-96f8-839b1457d3ed

Next, turn swap back on:

# swapon /dev/vgubuntu/swap_1

Finally, use the swapon command to verify the addition of the swap space to the system:

# swapon
NAME      TYPE       SIZE USED PRIO
/dev/dm-1 partition 10.7G   0B   -2

1.8  Summary

Swap space is a vital component of just about any operating system in terms of handling situations where memory resources become constrained. By swapping out areas of memory to disk, the system is able to continue to function and meet the needs of the processes and applications running on it.

Ubuntu has a set of guidelines recommending the amount of disk-based swap space that should be allocated depending on the amount of RAM installed in the system. In situations where these recommendations prove to be insufficient, additional swap space can be added to the system, typically without the need to reboot. As outlined in this chapter, swap space can be added in the form of a file, disk or disk partition or by extending existing logical volumes that have been configured as swap space.

Adding a New Disk to an Ubuntu 20.04 Volume Group and Logical Volume

In the previous chapter we looked at adding a new disk drive to an Ubuntu system, creating a partition and file system and then mounting that file system so that the disk can be accessed. An alternative to creating fixed partitions and file systems is to use Logical Volume Management (LVM) to create logical disks made up of space from one or more physical or virtual disks or partitions. The advantage of using LVM is that space can be added to or removed from logical volumes as needed without the need to spread data over multiple file systems.

Let us take, for example, the root (/) file system of an Ubuntu-based server. Without LVM this file system would be created with a certain size when the operating system is installed. If a new disk drive is installed there is no way to allocate any of that space to the / file system. The only option would be to create new file systems on the new disk and mount them at particular mount points. In this scenario you would have plenty of space on the new file system but the / file system would still be nearly full. The only option would be to move files onto the new file system. With LVM, the new disk (or part thereof) can be assigned to the logical volume containing the root file system thereby dynamically extending the space available.

In this chapter we will look at the steps necessary to add new disk space to both a volume group and a logical volume for the purpose of adding additional space to the root file system of an Ubuntu system.

1.1  An Overview of Logical Volume Management (LVM)

LVM provides a flexible and high-level approach to managing disk space. Instead of each disk drive being split into partitions of fixed sizes onto which fixed-size file systems are created, LVM provides a way to group together disk space into logical volumes that can be easily resized and moved. In addition, LVM allows administrators to carefully control disk space assigned to different groups of users by allocating distinct volume groups or logical volumes to those users. When the space initially allocated to the volume is exhausted the administrator can simply add more space without having to move the user files to a different file system. LVM consists of the following components:

1.1.1  Volume Group (VG)

The Volume Group is the high-level container which holds one or more logical volumes and physical volumes.

1.1.2  Physical Volume (PV)

A physical volume represents a storage device such as a disk drive or other storage media.

1.1.3  Logical Volume (LV)

A logical volume is the equivalent to a disk partition and, as with a disk partition, can contain a file system.

1.1.4  Physical Extent (PE)

Each physical volume (PV) is divided into equal size blocks known as physical extents.

1.1.5  Logical Extent (LE)

Each logical volume (LV) is divided into equal size blocks called logical extents.

Suppose we are creating a new volume group called VolGroup001. This volume group needs physical disk space in order to function, so we allocate three disk partitions: /dev/sda1, /dev/sdb1 and /dev/sdb2. These become physical volumes in VolGroup001. We would then create a logical volume called LogVol001 within the volume group made up of the three physical volumes.

If we run out of space in LogVol001 we simply add more disk partitions as physical volumes and assign them to the volume group and logical volume.
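
Each of these LVM layers has a corresponding reporting command, which can be useful throughout this chapter. A quick way to summarize the physical volumes, volume groups and logical volumes on a system (all three require root privileges) is:

# pvs
# vgs
# lvs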

1.2  Getting Information about Logical Volumes

As an example of using LVM with Ubuntu we will work through an example of adding space to the / file system of a standard Ubuntu installation. Anticipating the need for flexibility in the sizing of the root partition (assuming, of course, that the LVM partitioning option was selected during the Ubuntu installation process), Ubuntu sets up the / file system as a logical volume (called root) within a volume group called vgubuntu. Before making any changes to the LVM setup, however, it is important to first gather information.

Running the mount command will output information about a range of mount points, including the following entry for the root filesystem:

/dev/mapper/vgubuntu-root on / type ext4 (rw,relatime,errors=remount-ro) 
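
The lsblk command also provides a convenient view of how this device-mapper volume sits on top of the underlying volume group and disk (the output will vary from system to system):

# lsblk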

Information about the volume group can be obtained using the vgdisplay command:

# vgdisplay
  --- Volume group ---
  VG Name               vgubuntu
  System ID             
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  3
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                2
  Open LV               2
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               <73.75 GiB
  PE Size               4.00 MiB
  Total PE              18879
  Alloc PE / Size       18879 / <73.75 GiB
  Free  PE / Size       0 / 0   
  VG UUID               hqaagb-OgB5-3DhK-qLoN-bRHU-jsFm-LrdXtT

As we can see in the above example, the vgubuntu volume group has a physical extent size of 4.00MiB and has a total of approximately 73GB available for allocation to logical volumes. Currently 18879 physical extents are allocated, equaling the total capacity. If we want to increase the space allocated to any logical volumes in the vgubuntu volume group, therefore, we will need to add one or more physical volumes. The vgs tool is also useful for displaying a quick overview of the space available in the volume groups on a system:

# vgs
  VG        #PV #LV #SN Attr   VSize   VFree
  vgubuntu   1   2   0 wz--n- <73.75g    0

Information about logical volumes in a volume group may similarly be obtained using the lvdisplay command:

# lvdisplay
  --- Logical volume ---
  LV Path                /dev/vgubuntu/root
  LV Name                root
  VG Name                vgubuntu
  LV UUID                iLfsLf-pVzy-yCfd-wKim-EdbW-efvm-J6p1f4
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2020-04-06 11:17:53 -0400
  LV Status              available
  # open                 1
  LV Size                <72.79 GiB
  Current LE             18634
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:0
   
  --- Logical volume ---
  LV Path                /dev/vgubuntu/swap_1
  LV Name                swap_1
  VG Name                vgubuntu
  LV UUID                14Cr74-x5EW-V1k1-c5z8-8NUn-DTqC-PLEg7F
  LV Write Access        read/write
  LV Creation host, time ubuntu, 2020-04-06 11:17:54 -0400
  LV Status              available
  # open                 2
  LV Size                980.00 MiB
  Current LE             245
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     256
  Block device           253:1

As shown in the above example approximately 72 GiB of the space in volume group vgubuntu is allocated to logical volume root (for the / file system) and 980 MiB to swap_1 (for swap space).

Now that we know what space is being used it is often helpful to understand which devices are providing the space (in other words which devices are being used as physical volumes). To obtain this information we need to run the pvdisplay command:

# pvdisplay
  --- Physical volume ---
  PV Name               /dev/sda1
  VG Name               vgubuntu
  PV Size               <73.75 GiB / not usable 2.00 MiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              18879
  Free PE               0
  Allocated PE          18879
  PV UUID               nwp55K-Chay-x5eB-kZcc-sonL-cm3E-3SWnKG

Clearly, the space assigned to the vgubuntu volume group is provided by a single physical volume located on /dev/sda1.

Now that we know a little more about our LVM configuration we can embark on the process of adding space to the volume group and the logical volume contained within.

1.3  Adding Additional Space to a Volume Group from the Command-Line

Just as with the previous steps to gather information about the current Logical Volume Management configuration of an Ubuntu system, changes to this configuration can be made from the command-line.

In the remainder of this chapter we will assume that a new disk has been added to the system and that it is being seen by the operating system as /dev/sdb. We shall also assume that this is a new disk that does not contain any existing partitions. If existing partitions are present they should be backed up and then the partitions deleted from the disk using the fdisk utility. For example, assuming a device represented by /dev/sdb containing one partition as follows:

# fdisk -l /dev/sdb
Disk /dev/sdb: 10 GiB, 10737418240 bytes, 20971520 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 6C7E15CA-C9B1-4FEB-B10A-BE75F8B6D483
 
Device     Start      End  Sectors Size Type
/dev/sdb1   2048 20971486 20969439  10G Linux filesystem

Once any filesystems on this partition have been unmounted, they can be deleted as follows:

# fdisk  /dev/sdb
 
Welcome to fdisk (util-linux 2.31.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
 
Command (m for help): d
Selected partition 1
Partition 1 has been deleted.
 
Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

Before moving to the next step, be sure to remove any entries in the /etc/fstab file for these filesystems so that the system does not attempt to mount them on the next reboot.

Once the disk is ready, the next step is to convert this disk into a physical volume using the pvcreate command (also wiping the dos signature if one exists):

# pvcreate /dev/sdb 
Physical volume "/dev/sdb" successfully created.

If the creation fails with a message that reads “Device /dev/<device> excluded by a filter”, it may be necessary to wipe the disk using the wipefs command before creating the physical volume:

# wipefs -a /dev/sdb
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x1fffffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: calling ioctl to re-read partition table: Success

With the physical volume created we now need to add it to the volume group (in this case vgubuntu) using the vgextend command:

# vgextend vgubuntu /dev/sdb
  Volume group "vgubuntu" successfully extended

The new physical volume has now been added to the volume group and is ready to be allocated to a logical volume. To do this we run the lvextend tool providing the size by which we wish to extend the volume. In this case we want to extend the size of the logical volume by 9 GB. Note that we need to provide the path to the logical volume which can be obtained from the lvdisplay command (in this case /dev/vgubuntu/root):

# lvextend -L+9G /dev/vgubuntu/root
  Size of logical volume vgubuntu/root changed from <72.79 GiB (18634 extents) to <81.79 GiB (20938 extents).
  Logical volume vgubuntu/root successfully resized.
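
As an aside, the extend and filesystem-resize steps can usually be combined by passing the -r (--resizefs) option to lvextend, and -l +100%FREE may be used instead of an explicit size to claim all remaining free space in the volume group; a sketch:

# lvextend -r -l +100%FREE /dev/vgubuntu/root

The manual, two-step approach is shown below.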

The last step in the process is to resize the file system residing on the logical volume so that it uses the additional space. The way this is performed will depend on the filesystem type which can be identified using the following command and checking the Type column:

# df -T /
Filesystem                  Type 1K-blocks    Used Available Use% Mounted on
/dev/mapper/vgubuntu-root   ext4  83890408 5186940  74496972   7% /

If root is formatted using the XFS filesystem, this can be achieved using the xfs_growfs utility:

# xfs_growfs /

If, on the other hand, the filesystem is of type ext2, ext3, or ext4, the resize2fs utility should be used instead when performing the filesystem resize:

# resize2fs /dev/vgubuntu/root

Once the resize completes, the file system will have been extended to use the additional space provided by the new disk drive. All this has been achieved without moving a single file or even having to restart the server. As far as any users on the system are concerned nothing has changed (except, of course, that there is now more disk space).

1.4  Summary

Volume groups and logical volumes provide an abstract layer on top of the physical storage devices on an Ubuntu system to provide a flexible way to allocate the space provided by multiple disk drives. This allows disk space allocations to be made and changed dynamically without the need to repartition disk drives and move data between filesystems. This chapter has outlined the basic concepts of volume groups, logical volumes and physical volumes while demonstrating how to manage these using command-line tools.

Adding a New Disk Drive to an Ubuntu 20.04 System

One of the first problems encountered by users and system administrators these days is that systems tend to run out of disk space to store data. Fortunately disk space is now one of the cheapest IT commodities. In the next two chapters we will look at the steps necessary to configure Ubuntu to use the space provided via the installation of a new physical or virtual disk drive.

1.1  Mounted File Systems or Logical Volumes

There are two ways to configure a new disk drive on an Ubuntu system. One very simple method is to create one or more Linux partitions on the new drive, create Linux file systems on those partitions and then mount them at specific mount points so that they can be accessed. This approach will be covered in this chapter.

Another approach is to add the new space to an existing volume group or create a new volume group. When Ubuntu is installed with the logical volume management option selected, a volume group is created and named vgubuntu. Within this volume group are two logical volumes named root and swap_1 that are used to store the / and swap partitions respectively. By configuring the new disk as part of a volume group we are able to increase the disk space available to the existing logical volumes. Using this approach we are able, therefore, to increase the size of the / file system by allocating some or all of the space on the new disk to the root logical volume. This topic will be discussed in detail in “Adding a New Disk to an Ubuntu Volume Group and Logical Volume”.

1.2  Finding the New Hard Drive

This tutorial assumes that a new physical or virtual hard drive has been installed on the system and is visible to the operating system. Once added, the new drive should automatically be detected by the operating system. Typically, the disk drives in a system are assigned device names beginning with hd or sd, followed by a letter indicating the device. For example, the first device might be /dev/sda, the second /dev/sdb and so on.

The following is output from a typical system with only one disk drive connected to a SATA controller:

# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2

This shows that the disk drive represented by /dev/sda is itself divided into 2 partitions, represented by /dev/sda1 and /dev/sda2.

The following output is from the same system after a second hard disk drive has been installed:

# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb

As shown above, the new hard drive has been assigned to the device file /dev/sdb. Currently the drive has no partitions shown (because we have yet to create any).

At this point we have a choice of creating partitions and file systems on the new drive and mounting them for access or adding the disk as a physical volume as part of a volume group. To perform the former continue with this chapter, otherwise read “Adding a New Disk to an Ubuntu Volume Group and Logical Volume” for details on configuring Logical Volumes.

1.3  Creating Linux Partitions

The next step is to create one or more Linux partitions on the new disk drive. This is achieved using the fdisk utility which takes as a command-line argument the device to be partitioned:

# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.
 
Device does not contain a recognized partition table.
Created a new DOS disklabel with disk identifier 0xbd09c991.
 
Command (m for help):

In order to view the current partitions on the disk enter the p command:

Command (m for help): p
Disk /dev/sdb: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xbd09c991

As we can see from the above fdisk output, the disk currently has no partitions because it is a previously unused disk. The next step is to create a new partition on the disk, a task which is performed by entering n (for new partition) and p (for primary partition):

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1):  

In this example we only plan to create one partition which will be partition 1. Next we need to specify where the partition will begin and end. Since this is the first partition we need it to start at the first available sector and since we want to use the entire disk we specify the last sector as the end. Note that if you wish to create multiple partitions you can specify the size of each partition by sectors, bytes, kilobytes or megabytes.

Partition number (1-4, default 1): 1
First sector (2048-16777215, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-16777215, default 16777215): 
 
Created a new partition 1 of type 'Linux' and of size 8 GiB.
 
Command (m for help): 

Now that we have specified the partition, we need to write it to the disk using the w command:

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.

If we now look at the devices again we will see that the new partition is visible as /dev/sdb1:

# ls /dev/sd*
/dev/sda /dev/sda1 /dev/sda2 /dev/sdb /dev/sdb1

The next step is to create a file system on our new partition.

1.4  Creating a File System on a Disk Partition

We now have a new disk installed, it is visible to Ubuntu and we have configured a Linux partition on the disk. The next step is to create a Linux file system on the partition so that the operating system can use it to store files and data. The easiest way to create a file system on a partition is to use the mkfs.xfs utility:

# apt install xfsprogs
# mkfs.xfs /dev/sdb1
meta-data=/dev/sdb1       isize=512    agcount=4, agsize=524224 blks
         =                sectsz=512   attr=2, projid32bit=1
         =                crc=1        finobt=1, sparse=1, rmapbt=0
         =                reflink=1
data     =                bsize=4096   blocks=2096896, imaxpct=25
         =                sunit=0      swidth=0 blks
naming   =version 2       bsize=4096   ascii-ci=0, ftype=1
log      =internal log    bsize=4096   blocks=2560, version=2
         =                sectsz=512   sunit=0 blks, lazy-count=1
realtime =none            extsz=4096   blocks=0, rtextents=0

In this case we have created an XFS file system. XFS is a high performance file system and includes a number of advantages in terms of parallel I/O performance and the use of journaling.
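
If an ext4 filesystem is preferred instead (for example, to match the filesystem type used for the root partition by the Ubuntu installer), the equivalent step would be:

# mkfs.ext4 /dev/sdb1

In that case, the fstab entry created later in this chapter would specify ext4 as the filesystem type.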

1.5  An Overview of Journaled File Systems

A journaling filesystem keeps a journal, or log, of the changes being made to the filesystem during disk writes. After an event such as a system crash or power outage, this log can be used to rapidly repair any resulting corruption.

There are a number of advantages to using a journaling file system. Both the size and volume of data stored on disk drives has grown exponentially over the years. The problem with a non-journaled file system is that following a crash the fsck (filesystem consistency check) utility has to be run. The fsck utility will scan the entire filesystem validating all entries and making sure that blocks are allocated and referenced correctly. If it finds a corrupt entry it will attempt to fix the problem. The issues here are twofold. First, the fsck utility will not always be able to repair damage and you will end up with data in the lost+found directory. This is data that was being used by an application but the system no longer knows where it was referenced from. The other problem is the issue of time. It can take a very long time to complete the fsck process on a large file system, potentially leading to unacceptable down time.

A journaled file system, on the other hand, records information in a log area on a disk (the journal and log do not need to be on the same device) during each write. This is essentially an “intent to commit” data to the filesystem. The amount of information logged is configurable and ranges from not logging anything, to logging what is known as the “metadata” (i.e., ownership, date stamp information, etc.), to logging the “metadata” and the data blocks that are to be written to the file. Once the log is updated the system then writes the actual data to the appropriate areas of the filesystem and marks an entry in the log to say the data is committed.

After a crash the filesystem can very quickly be brought back on-line using the journal log, thereby reducing what could take minutes using fsck to seconds with the added advantage that there is considerably less chance of data loss or corruption.

1.6  Mounting a File System

Now that we have created a new file system on the Linux partition of our new disk drive we need to mount it so that it is accessible and usable. In order to do this we need to create a mount point. A mount point is simply a directory or folder into which the file system will be mounted. For the purposes of this example we will create a /backup directory to serve as the mount point for the new file system:

# mkdir /backup

The file system may then be manually mounted using the mount command:

# mount /dev/sdb1 /backup

Running the mount command with no arguments shows us all currently mounted file systems (including our new file system):

# mount
sysfs on /sys type sysfs (rw,nosuid,nodev,noexec,relatime,seclabel)
proc on /proc type proc (rw,nosuid,nodev,noexec,relatime)
.
.
/dev/sdb1 on /backup type xfs (rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota)

1.7  Configuring Ubuntu to Automatically Mount a File System

In order to set up the system so that the new file system is automatically mounted at boot time an entry needs to be added to the /etc/fstab file. The format for an fstab entry is as follows:

<device>	<dir>	<type>	<options>	<dump>	<fsck>

These entries can be summarized as follows:

  • <device> – The device on which the filesystem is to be mounted.
  • <dir> – The directory that is to act as the mount point for the filesystem.
  • <type> – The filesystem type (xfs, ext4 etc.)
  • <options> – Additional filesystem mount options, for example making the filesystem read-only or controlling whether the filesystem can be mounted by any user. Run man mount to review a full list of options. Setting this value to defaults will use the default settings for the filesystem (rw, suid, dev, exec, auto, nouser, async).
  • <dump> – Dictates whether the content of the filesystem is to be included in any backups performed by the dump utility. This setting is rarely used and can be disabled with a 0 value.
  • <fsck> – Whether the filesystem is checked by fsck after a system crash and the order in which filesystems are to be checked. For journaled filesystems such as XFS this should be set to 0 to indicate that the check is not required.

The following example shows an fstab file configured to automatically mount our new XFS filesystem on the /dev/sdb1 partition at the /backup mount point:

/dev/mapper/vgubuntu-root   /        ext4   errors=remount-ro 0       1
/dev/mapper/vgubuntu-swap_1 none     swap    sw               0       0
/dev/sdb1                   /backup  xfs     defaults         0       0

The /backup filesystem will now automount each time the system restarts.
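
Rather than waiting for a reboot, the new entry can be verified with mount -a, which attempts to mount every filesystem listed in /etc/fstab that is not already mounted; a quick check:

# umount /backup
# mount -a

If /backup is mounted again afterwards and no errors are reported, the fstab entry is correct.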

1.8  Adding a Disk Using Cockpit

In addition to working with storage using the command-line utilities outlined in this chapter, it is also possible to configure a new storage device using the Cockpit web console. To view the current storage configuration, log into the Cockpit console and select the Storage option as shown in Figure 34-1:

Figure 34-1

To locate the newly added storage, scroll to the bottom of the Storage page until the Drives section comes into view (note that the Drives section may also be located in the top right-hand corner of the screen):

Figure 34-2

In the case of the above figure, the new drive is the 10 GiB drive. Select the new drive to display the Drive screen as shown in Figure 34-3:

Figure 34-3

Click on the Create Partition Table button and, in the resulting dialog, accept the default settings before clicking on the Format button:

Figure 34-4

On returning to the main Storage screen, click on the Create Partition button and use the dialog to specify how much space is to be allocated to this partition, the filesystem type (XFS is recommended) and an optional label, filesystem mount point and mount options. Note that if this new partition does not use all of the available space, additional partitions may subsequently be added to the drive. To change settings such as whether the filesystem is read-only or mounted at boot time, change the Mounting menu option to Custom and adjust the toggle button settings:

Figure 34-5

Once the settings have been selected, click on the Create partition button to commit the change. On completion of the creation process the new partition will be added to the disk, the corresponding filesystem created and mounted at the designated mount point and appropriate changes made to the /etc/fstab file.

1.9  Summary

This chapter has covered the topic of adding an additional physical or virtual disk drive to an existing Ubuntu system. This is a relatively simple process of making sure the new drive has been detected by the operating system, creating one or more partitions on the drive and then making filesystems on those partitions. Although a number of different filesystem types are available on Ubuntu, XFS is generally the recommended option. Once the filesystems are ready, they can be mounted using the mount command. So that the newly created filesystems mount automatically on system startup, additions can be made to the /etc/fstab configuration file.

Configuring an Ubuntu 20.04 Postfix Email Server

Along with acting as a web server, email is one of the primary uses of an Ubuntu system, particularly in business environments. Given both the importance and popularity of email, it is surprising to some people to find out how complex the email structure is on a Linux system, and this complexity can often be a little overwhelming to the Ubuntu newcomer.

The good news is that much of the complexity is there to allow experienced email administrators to implement complicated configurations for large scale enterprise installations. The fact is, for most Linux administrators it is relatively straightforward to set up a basic email system so that users can send and receive electronic mail.

In this chapter of Ubuntu Essentials, we will explain the basics of Linux-based email configuration and step through configuring a basic email environment. In the interests of providing the essentials, we will leave the complexities of the email system for more advanced books on the subject.

1.1  The structure of the Email System

There are a number of components that make up a complete email system. Below is a brief description of each one:

1.1.1  Mail User Agent

This is the part of the system that the typical user is likely to be most familiar with. The Mail User Agent (MUA), or mail client, is the application that is used to write, send and read email messages. Anyone who has written and sent a message on any computer has used a Mail User Agent of one type or another.

Typical graphical MUAs on Linux are Evolution, Thunderbird and KMail. For those who prefer a text-based mail client, there are also the more traditional pine and mail tools.

1.1.2  Mail Transfer Agent

The Mail Transfer Agent (MTA) is the part of the email system that does much of the work of transferring the email messages from one computer to another (either on the same local network or over the internet to a remote system). Once configured correctly, most users will not have any direct interaction with their chosen MTA unless they wish to re-configure it for any reason. There are many choices of MTA available for Linux including Sendmail, Postfix, Qmail and Exim.

1.1.3  Mail Delivery Agent

Another part of the infrastructure that is typically hidden from the user, the Mail Delivery Agent (MDA) sits in the background and performs filtering of the email messages between the Mail Transfer Agent and the mail client (MUA). The most popular form of MDA is a spam filter to remove all the unwanted email messages from the system before they reach the inbox of the user’s mail client (MUA). Popular MDAs are SpamAssassin and Procmail. It is important to note that some Mail User Agent applications (such as Evolution, Thunderbird and KMail) include their own MDA filtering. Others, such as Pine and Balsa, do not. This can be a source of confusion to the Linux beginner.

1.1.4  SMTP

SMTP is an acronym for Simple Mail Transfer Protocol. This is the protocol used by email systems to transfer mail messages from one server to another. This protocol is essentially the communications language that the MTAs use to talk to each other and transfer messages back and forth.

1.1.5  SMTP Relay

An SMTP relay allows an external SMTP server to be used to send emails instead of hosting a local SMTP server. This will typically involve using a service such as Mailjet, SendGrid or Mailgun. These services avoid the need to configure and maintain your own SMTP server and often provide additional benefits such as analytics.

1.2  Configuring an Ubuntu Email Server

Many systems use the Sendmail MTA to transfer email messages, and on many Linux distributions this is the default Mail Transfer Agent. Sendmail is, however, a complex system that can be difficult for beginners and experienced users alike to understand and configure. It is also falling out of favor because it is considered to be slower at processing email messages than many of the more recent MTAs available.

Many system administrators are now using Postfix or Qmail to handle email. Both are faster and easier to configure than Sendmail.

For the purposes of this chapter, therefore, we will look at Postfix as an MTA because of its simplicity and popularity. If you would prefer to use Sendmail there are many books that specialize in the subject and that will do the subject much more justice than we can in this chapter.

As a first step, this chapter will cover the configuration of an Ubuntu system to act as a full email server. Later in the chapter, the steps to make use of an SMTP Relay service will also be covered.

1.3  Postfix Pre-Installation Steps

The first step before installing Postfix is to make sure that Sendmail is not already running on your system. You can check for this using the following command:

# systemctl status sendmail

If sendmail is not installed, the tool will display a message similar to the following:

Unit sendmail.service could not be found.

If sendmail is running on your system it is necessary to stop it before installing and configuring Postfix. To stop sendmail, run the following command:

# systemctl stop sendmail

The next step is to ensure that sendmail does not get restarted automatically when the system is rebooted:

# systemctl disable sendmail

Sendmail is now switched off and configured so that it does not auto start when the system is booted. Optionally, to completely remove sendmail from the system, run the following command:

# apt remove sendmail

1.4  Firewall/Router Configuration

Since the sending and receiving of email messages involves network connections, the firewall will need to be configured to allow SMTP traffic. If firewalld is active, use the firewall-cmd tool as follows:

# firewall-cmd --permanent --add-service=smtp

Alternatively, if ufw is enabled, configure it to allow SMTP traffic using the following command:

# ufw allow Postfix
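
As an aside, the relevant ports can also be opened individually with ufw rather than relying on the Postfix application profile; for example, to open the SMTP (25), IMAP (143) and submission (587) ports:

# ufw allow 25/tcp
# ufw allow 143/tcp
# ufw allow 587/tcp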

It will also be important to configure any other firewall or router between the server and the internet to allow connections on ports 25, 143 and 587 and, if necessary, to configure port forwarding for those ports to the corresponding ports on the email server. With these initial steps completed, we can now move on to installing Postfix.

1.5  Installing Postfix on Ubuntu

By default, the Ubuntu installation process installs postfix for most configurations. To verify if postfix is already installed, use the following apt command:

# apt -qq list postfix

If apt reports that postfix is not installed, it may be installed as follows:

# apt install postfix

In most cases, the Internet Site option will be the most useful. This will configure some basic settings designed to send and receive messages associated with your web site domain name.

With this option selected, press the Enter key to proceed to the next screen:

Figure 33-1

On the above screen, enter your domain name before pressing the Enter key once again.

1.6  Configuring Postfix

The main configuration settings for postfix are located in the /etc/postfix/main.cf file. There are many resources on the internet that provide detailed information on postfix so this section will focus on the basic options required to get email up and running. Even though the apt installation set up some basic configuration options, it tends to miss some settings and guess incorrectly for others so be sure to carefully review the main.cf file. The key options in the main.cf file are:

myhostname = mta1.domain.com
mydomain = domain.com
myorigin = $mydomain
mydestination = $myhostname, localhost.$mydomain, localhost, $mydomain
inet_interfaces = $myhostname
mynetworks = subnet

Other settings will have either been set up for you by the installation process or are not needed unless you are feeling adventurous and want to configure a more sophisticated email system.

The format of myhostname is host.domain.extension. If, for example, your Linux system is named MyLinuxHost and your internet domain is MyDomain.com you would set the myhostname option as follows:

myhostname = mylinuxhost.mydomain.com

The mydomain setting is just the domain part of the above setting. For example:

mydomain = mydomain.com

The myorigin setting defines the domain from which outgoing email appears to originate when it arrives in the recipient’s inbox and should be set to your domain name:

myorigin = $mydomain

Perhaps one of the most crucial parameters, mydestination relates to incoming messages and declares the domains for which this server is the final delivery destination. Any incoming email messages addressed to a domain name not on this list will be considered a relay request which, subject to the mynetworks setting (outlined below), will typically result in a delivery failure.

The inet_interfaces setting defines the network interfaces on the system via which postfix is permitted to receive email and is generally set to all:

inet_interfaces = all

The mynetworks setting defines which external systems are trusted to use the server as an SMTP relay. Possible values for this setting are as follows:

  • host – Only the local system is trusted. Attempts by all external clients to use the server as a relay will be rejected.
  • subnet – Only systems on the same network subnet are permitted to use the server as a relay. If, for example, the server has an IP address of 192.168.1.29, a client system with an IP address of 192.168.1.30 would be able to use the server as a relay.
  • class – Any systems within the same IP address class (A, B and C) may use the server as a relay.

Trusted IP addresses may also be defined manually by specifying subnets, address ranges or referencing pattern files. The following example declares the local host and the 192.168.0.0/24 subnet as trusted IP addresses:

mynetworks = 192.168.0.0/24, 127.0.0.0/8

For this example, set the property to subnet so that any other systems on the same local network as the server are able to send email via SMTP relay while external systems are prevented from doing so.

mynetworks = subnet

The key settings within a main.cf file might, therefore, read as follows:

smtpd_banner = $myhostname ESMTP $mail_name (Ubuntu)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no
readme_directory = no

compatibility_level = 2

smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = hostname.myexample.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydomain = myexample.com
myorigin = $mydomain
mydestination = $myhostname, myexample.com, localhost.localdomain, localhost
relayhost =
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
inet_protocols = all

1.7  Configuring DNS MX Records

When you registered and configured your domain name with a registrar, a number of default values will have been configured in the DNS settings. One of these is the so-called Mail Exchanger (MX) record. This record essentially defines where email addressed to your domain should be sent and is usually set by default to a mail server provided by your registrar. If you are hosting your own mail server, the MX record should be set to your domain or the IP address of your mail server. The steps involved in making this change will depend on your domain registrar but generally involve editing the DNS information for the domain and either adding a new MX record or editing the existing one so that it points to your email server.
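
As an illustration, for registrars that expose zone file style records, an MX entry pointing mail for the domain at this server might resemble the following (the host name, priority value and IP address shown here are hypothetical examples):

; Route mail for mydomain.com to the mail server host
mydomain.com.        IN  MX  10  mta1.mydomain.com.
mta1.mydomain.com.   IN  A       203.0.113.25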

1.8  Starting Postfix on an Ubuntu System

Once the /etc/postfix/main.cf file is configured with the correct settings, it is time to start up postfix. This can be achieved from the command-line as follows:

# systemctl start postfix

If postfix was already running, make sure the configuration changes are loaded using the following command:

# systemctl reload postfix

To configure postfix to start automatically at system startup, run the following command:

# systemctl enable postfix

The postfix process should now start up. The best way to verify that everything is working is to check your mail log. This is typically in the /var/log/mail.log file and should now contain an entry resembling the following output:

Mar 25 11:21:48 demo-server postfix/postfix-script[5377]: starting the Postfix mail system
Mar 25 11:21:48 demo-server postfix/master[5379]: daemon started -- version 3.3.1, configuration /etc/postfix 
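
To watch the log in real time while starting and testing postfix, the tail command may be used:

# tail -f /var/log/mail.log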

As long as no error messages have been logged, you have successfully installed and started postfix and are ready to test the postfix configuration.

1.9  Testing Postfix

An easy way to test the postfix configuration is to send an email message between local users on the system. To perform a quick test, use the mail tool as outlined below (where name and mydomain are replaced by the name of a user on the system and your domain name respectively).

If the mail tool is not available, it can be installed as follows:

# apt install mailutils

When prompted, enter a subject for the email message and then enter message body text. To send the email message, simply press Ctrl-D. For example:

# mail [email protected]
Subject: Test email message
This is a test message.
EOT

Run the mail command again, this time as the other user and verify that the message was sent and received:

$ mail
"/var/mail/demo": 1 message 1 new
>N   1 demo               Wed Apr 15 15:30  13/475   Test email message
?

If the message does not appear, check the log file (/var/log/mail.log) for errors. A successful mail delivery will appear in the log file as follows:

Mar 25 13:41:37 demo-server postfix/pickup[7153]: 94FAF61E8F4A: uid=0 from=<root>
Mar 25 13:41:37 demo-server postfix/cleanup[7498]: 94FAF61E8F4A: message-id=<[email protected]>
Mar 25 13:41:37 demo-server postfix/qmgr[7154]: 94FAF61E8F4A: from=<[email protected]>, size=450, nrcpt=1 (queue active)
Mar 25 13:41:37 demo-server postfix/local[7500]: 94FAF61E8F4A: to=<[email protected]>, relay=local, delay=0.12, delays=0.09/0.01/0/0.02, dsn=2.0.0, status=sent (delivered to mailbox)
Mar 25 13:41:37 demo-server postfix/qmgr[7154]: 94FAF61E8F4A: removed

Once local email is working, try sending an email to an external address (such as a Gmail account). Also, test that incoming mail works by sending an email from an external account to a user on your domain. In each case, check the /var/log/mail.log file for explanations of any errors.
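
An outbound test might, for example, look similar to the following, where the recipient address is a placeholder for a real external mailbox (the -s option provided by mailutils sets the subject directly, and the exact prompts may vary):

# mail -s "Postfix external test" user@example.com
This is an outbound delivery test.
EOT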

1.10  Sending Mail via an SMTP Relay Server

An alternative to configuring a mail server to handle outgoing email messages is to use an SMTP Relay service. As previously discussed, a number of services are available, most of which can be found by performing a web search for “SMTP Relay Service”. Most of these services will require you to verify your domain in some way and will provide MX records with which to update your DNS settings. You will also be provided with a username and password which need to be added to the postfix configuration. The remainder of this section makes the assumption that postfix is already installed on your system and that all of the initial steps required by your chosen SMTP Relay provider have been completed.

Begin by editing the /etc/postfix/main.cf file and configuring the myhostname parameter with your domain name:

myhostname = mydomain.com

Next, create a new file in /etc/postfix named sasl_passwd and add a line containing the mail server host provided by the relay service and the user name and password. For example:

[smtp.myprovider.com]:587 [email protected]:mypassword

Note that port 587 has also been specified in the above entry. Without this setting, postfix will default to using port 25 which is blocked by default by most SMTP relay service providers. With the password file created, use the postmap utility to generate the hash database containing the mail credentials:

# postmap /etc/postfix/sasl_passwd

Before proceeding, take some additional steps to secure your postfix credentials:

# chown root:root /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db
# chmod 0600 /etc/postfix/sasl_passwd /etc/postfix/sasl_passwd.db

Edit the main.cf file once again and add an entry to specify the relay server:

relayhost = [smtp.myprovider.com]:587

Remaining within the main.cf file, add the following lines to configure the authentication settings for the SMTP server:

smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
smtp_sasl_security_options = noanonymous
smtp_sasl_tls_security_options = noanonymous

Finally, restart the postfix service:

# systemctl restart postfix

Once the service has restarted, try sending and receiving mail using either the mail tool or your preferred mail client.

1.11 Summary

A complete, end-to-end email system consists of a Mail User Agent (MUA), Mail Transfer Agent (MTA), Mail Delivery Agent (MDA) and the SMTP protocol. Ubuntu provides a number of options in terms of MTA solutions, one of the more popular being Postfix. This chapter has outlined how to install, configure and test postfix on an Ubuntu system both to act as a mail server and to send and receive email using a third party SMTP relay server.

Setting Up an Ubuntu 20.04 Web Server

Among the many packages that make up the Ubuntu operating system is the Apache web server. In fact, the scalability and resilience of Ubuntu make it an ideal platform for hosting even the most heavily trafficked web sites.

In this chapter we will explain how to configure an Ubuntu system using Apache to act as a web server, including both secure (HTTPS) and insecure (HTTP) configurations.

1.1  Requirements for Configuring an Ubuntu Web Server

To set up your own web site you need a computer (or cloud server instance), an operating system, a web server, a domain name, a name server and an IP address.

In terms of an operating system, we will, of course, assume you are using Ubuntu. As previously mentioned, Ubuntu supports the Apache web server which can easily be installed once the operating system is up and running. A domain name can be registered with any domain name registration service.

If you are running Ubuntu on a cloud instance, the IP address assigned by the provider will be listed in the server overview information. If you are hosting your own server and your internet service provider (ISP) has assigned a static IP address then you will need to associate your domain with that address. This is achieved using a name server and all domain registration services will provide this service for you.

If you do not have a static IP address (i.e. your ISP provides you with a dynamic address which changes frequently) then you can use one of a number of free Dynamic DNS (DDNS or DynDNS for short) services which map your dynamic IP address to your domain name.

Once you have your domain name and your name server configured the next step is to install and configure your web server.

1.2  Installing the Apache Web Server Packages

The current release of Ubuntu typically does not install the Apache web server by default. To check whether the server is already installed, run the following command:

# apt -qq list apache2

If apt generates output similar to the following, the apache server is already installed:

apache2/bionic-updates,bionic-security,now 2.4.29-1ubuntu4.13 amd64 [installed]

If the apt output does not list the package or include the [installed] status, run the following command at the command prompt to perform the Apache installation:

# apt install apache2

1.3  Configuring the Firewall

Before starting and testing the Apache web server, the firewall will need to be modified to allow the web server to communicate with the outside world. By default, the HTTP and HTTPS protocols use ports 80 and 443 respectively so, depending on which protocols are being used, either one or both of these ports will need to be opened. If your Ubuntu system is being protected by the Uncomplicated Firewall, the following command can be used to enable only insecure web traffic (HTTP):

# ufw allow Apache

To enable only secure (HTTPS) traffic:

# ufw allow 'Apache Secure'

Alternatively, enable both secure and insecure web traffic as follows:

# ufw allow 'Apache Full'

If you are using firewalld, the following commands can be used to open the HTTP and HTTPS ports. When opening the ports, be sure to specify the firewall zone that applies to the internet facing network connection:

# firewall-cmd --permanent --zone=<zone> --add-port=80/tcp
# firewall-cmd --permanent --zone=<zone> --add-port=443/tcp

After opening the necessary ports, be sure to reload the firewall settings:

# firewall-cmd --reload

On cloud hosted servers, it may also be necessary to enable the appropriate port for the server instance within the cloud console. Check the documentation for the cloud provider for steps on how to do this.

1.4  Port Forwarding

If the Ubuntu system hosting the web server sits on a network protected by a firewall (either another computer running a firewall, or a router or wireless base station containing built-in firewall protection) you will need to configure the firewall to forward port 80 and/or port 443 to your web server system. The mechanism for performing this differs between firewalls and devices so check your documentation to find out how to configure port forwarding.

1.5  Starting the Apache Web Server

Once the Apache server is installed and the firewall configured, the next step is to verify that the server is running and start it if necessary.

To check the status of the Apache service from the command-line, enter the following at the command-prompt:

# systemctl status apache2

If the above command indicates that the apache2 service is not running, it can be launched from the command-line as follows:

# systemctl start apache2

If you would like the Apache service to start automatically when the system boots, run the following command:

# systemctl enable apache2

1.6  Testing the Web Server

Once the installation is complete the next step is to verify the web server is up and running.

If you have access (either locally or remotely) to the desktop environment of the server, simply start up a web browser and enter http://127.0.0.1 in the address bar (127.0.0.1 is the loop-back network address which tells the system to connect to the local machine). If everything is set up correctly, the browser should load the page shown in Figure 32-1:

Figure 32-1

If the desktop environment is not available, connect either from another system on the same local network as the server, or using the external IP address assigned to the system if it is hosted remotely.
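
If no graphical browser is available, a quick check can also be performed from the command line on the server itself using the curl tool (assuming curl is installed). A working server should respond with an HTTP 200 status:

# curl -I http://127.0.0.1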

1.7  Configuring the Apache Web Server for Your Domain

The next step in setting up your web server is to configure it for your domain name. To configure the web server, begin by changing directory to /etc/apache2 which, in turn, contains a number of files and sub-directories. The main configuration file is named apache2.conf and serves as the central point for organizing the modular configuration files located in the sub-directories. For example, the apache2.conf file includes a line to import the configuration settings declared in the files located in the sites-enabled folder:

# Include the virtual host configurations:
IncludeOptional sites-enabled/*.conf

Similarly, the apache2.conf file imports the ports.conf file, which defines the ports on which the Apache server listens for network traffic.
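
On a default Ubuntu installation, the ports.conf file typically contains entries similar to the following, directing Apache to listen on port 80 and, when the SSL module is loaded, on port 443:

Listen 80

<IfModule ssl_module>
        Listen 443
</IfModule>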

To configure a web site domain on Ubuntu, begin by changing directory to /etc/apache2. In this directory you will find two sub-directories, sites-available and sites-enabled. Change directory into sites-available. In this directory you will find a default file which may be used as a template for your own site.

Copy the default file to a new file with a name which matches your domain name. For example:

# cp 000-default.conf myexample.conf

Edit your myexample.conf file using your preferred editor, where it will appear as follows:

<VirtualHost *:80>
        # The ServerName directive sets the request scheme, hostname and port that
        # the server uses to identify itself. This is used when creating
        # redirection URLs. In the context of virtual hosts, the ServerName
        # specifies what hostname must appear in the request's Host: header to
        # match this virtual host. For the default virtual host (this file) this
        # value is not decisive as it is used as a last resort host regardless.
        # However, you must set it for any further virtual host explicitly.
        #ServerName www.example.com

        ServerAdmin webmaster@localhost
        DocumentRoot /var/www/html

        # Available loglevels: trace8, ..., trace1, debug, info, notice, warn,
        # error, crit, alert, emerg.
        # It is also possible to configure the loglevel for particular
        # modules, e.g.
        #LogLevel info ssl:warn

        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined

        # For most configuration files from conf-available/, which are
        # enabled or disabled at a global level, it is possible to
        # include a line for only one particular virtual host. For example the
        # following line enables the CGI configuration for this host only
        # after it has been globally disabled with "a2disconf".
        #Include conf-available/serve-cgi-bin.conf
</VirtualHost>

The ServerAdmin directive defines an administrative email address for people wishing to contact the webmaster for your site. Change this to an appropriate email address where you can be contacted:

ServerAdmin [email protected]

Next the ServerName directive needs to be uncommented (in other words remove the ‘#’ character prefix) and defined so that the web server knows which virtual host this configuration file refers to:

ServerName myexample.com

In the next stage we need to define where the web site files are going to be located using the DocumentRoot directive. The convention is to use /var/www/domain-name:

DocumentRoot /var/www/myexample.com

Having completed the changes we now need to enable the site as follows:

# a2ensite myexample.conf

This command creates a symbolic link from the myexample.conf file in the sites-available directory to the sites-enabled folder.
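
This can be verified by listing the sites-enabled directory, where the new entry should appear as a symbolic link pointing back to ../sites-available/myexample.conf:

# ls -l /etc/apache2/sites-enabled/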

With the site enabled, run the following command to disable the default test site:

# a2dissite 000-default.conf

Next, create the /var/www/myexample.com directory and place an index.html file in it. For example:

<html>
<head>
<title>Sample Web Page</title>
</head>
<body>
Welcome to MyExample.com
</body>
</html>
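
If the document root directory does not already exist, it can be created before adding the index.html file, for example:

# mkdir -p /var/www/myexample.com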

With these changes made, run the apache2ctl command to check the configuration files for errors:

# apache2ctl configtest
Syntax OK

If no errors are reported, reload the Apache web server to make sure it picks up our new settings:

# systemctl reload apache2

Finally, check that the server configuration is working by opening a browser window and navigating to the site using the domain name instead of the IP address. The web page that loads should be the one defined in the index.html file created above.

1.8  The Basics of a Secure Web Site

The web server and web site created so far in this chapter use the HTTP protocol on port 80 and, as such, are considered to be insecure. The problem is that the traffic between the web server and the client (typically a user’s web browser) is transmitted in clear text. In other words, the data is unencrypted and susceptible to interception. While not a problem for general web browsing, this is a serious weakness when performing tasks such as logging into web sites or transferring sensitive information such as identity or credit card details.

These days, web sites are expected to use HTTPS which uses either Secure Socket Layer (SSL) or Transport Layer Security (TLS) to establish secure, encrypted communication between web server and client. This security is established through the use of public, private and session encryption together with certificates.

To support HTTPS, a web site must have a certificate issued by a trusted authority known as a Certificate Authority (CA). When a browser connects to a secure web site, the web server sends back a copy of the web site’s SSL certificate which also contains a copy of the site’s public key. The browser then validates the authenticity of the certificate with trusted certificate authorities.

If the certificate is found to be valid, the browser uses the public key sent by the server to encrypt a session key and passes it to the server. The server decrypts the session key using the private key and uses it to send an encrypted acknowledgment to the browser. Once this process is complete, the browser and server use the session key to encrypt all subsequent data transmissions until the session ends.
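
Once a site is serving HTTPS, this exchange can be observed from the command line using the openssl tool, which prints the certificate chain presented by the server. This is purely a diagnostic check; replace the host name with your own domain:

$ openssl s_client -connect www.myexample.com:443 -servername www.myexample.com < /dev/null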

1.9  Configuring Apache for HTTPS

By default, the Apache server does not include the necessary module to implement a secure HTTPS web site. The first step, therefore, is to enable the Apache mod_ssl module on the server system as follows:

# a2enmod ssl

Restart the apache2 service after enabling the module to load it into the Apache server:

# systemctl restart apache2

Check that the module has loaded into the server using the following command:

# apache2ctl -M | grep ssl_module
 ssl_module (shared)

Once the ssl module is installed, repeat the steps from the previous section of this chapter to create a configuration file for the website, this time using the sites-available/default-ssl.conf file as the template for the site configuration file.
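
Following the naming used earlier in this chapter, this might look similar to the following (the -ssl suffix in the file name is simply an illustrative choice), after which the file can be edited and enabled as before:

# cd /etc/apache2/sites-available
# cp default-ssl.conf myexample-ssl.conf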

Assuming that the module is installed, the next step is to generate an SSL certificate for the web site.

1.10 Obtaining an SSL Certificate

The certificate for a web site must be obtained from a Certificate Authority. A number of options are available at a range of prices. By far the best option, however, is to obtain a free certificate from Let’s Encrypt at the following URL:

https://letsencrypt.org/

The process of obtaining a certificate from Let’s Encrypt simply involves installing and running the Certbot tool. This tool will scan the Apache configuration files on the server and provide the option to generate certificates for any virtual hosts configured on the system. It will then generate the certificate and add virtual host entries to the Apache configuration specifically for the corresponding web sites.

Use the following steps to install the certbot tool on your Ubuntu system:

# apt update
# apt install software-properties-common
# add-apt-repository universe
# add-apt-repository ppa:certbot/certbot
# apt install certbot python3-certbot-apache

Once certbot is installed, run it as follows:

# certbot --apache

After requesting an email address and seeking terms of service acceptance, Certbot will list the domains found in the sites-available folder and provide the option to select one or more of those sites for which a certificate is to be installed. Certbot will then perform some checks before obtaining and installing the certificate on the system:

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: myexample.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for myexample.com
Enabled Apache rewrite module
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/apache2/sites-available/myexample-le-ssl.conf
Deploying Certificate to VirtualHost /etc/apache2/sites-available/myexample-le-ssl.conf
Enabling available site: /etc/apache2/sites-available/myexample-le-ssl.conf

Certbot will also have created a new file named myexample-le-ssl.conf in the /etc/apache2/sites-available directory containing a secure virtual host entry for each domain name for which a certificate has been generated, and will have enabled the site so that a link to the file is made in the /etc/apache2/sites-enabled directory. These entries will be similar to the following:

<IfModule mod_ssl.c>
<VirtualHost *:443>
.
.
        ServerName myexample.com
        ServerAdmin [email protected]
        DocumentRoot /var/www/myexample.com
.
.
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
.
.
SSLCertificateFile /etc/letsencrypt/live/myexample.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/myexample.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>

Finally, Certbot will ask whether future HTTP web requests should be redirected by the server to HTTPS. In other words, if a user attempts to access http://www.myexample.com the web server will redirect the user to https://www.myexample.com:

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 2

If you are currently testing the HTTPS configuration and would like to keep the HTTP version live until later, select the No redirect option. Otherwise, redirecting to HTTPS is generally recommended.

Once the certificate has been installed, test it in a browser at the following URL (replacing myexample.com with your own domain name):

https://www.ssllabs.com/ssltest/analyze.html?d=www.myexample.com

If the certificate configuration was successful, the SSL Labs report will provide a high rating as shown in Figure 32-2:

Figure 32-2

As a final test, open a browser window and navigate to your domain using the https:// prefix. The page should load as before and the browser should indicate that the connection between the browser and server is secure (usually indicated by a padlock icon in the address bar which can be clicked for additional information):

Figure 32-3

1.11  Summary

An Ubuntu system can be used to host web sites by installing the Apache web server. Both insecure (HTTP) and secure (HTTPS) web sites can be deployed on Ubuntu. Secure web sites use either Secure Socket Layer (SSL) or Transport Layer Security (TLS) to establish encrypted communication between the web server and client through the use of public, private and session encryption together with a certificate issued by a trusted Certificate Authority.

Working with Containers on Ubuntu 20.04

Now that the basics of Linux Containers have been covered in the previous chapter, this chapter will demonstrate how to create and manage containers using the Podman, Skopeo and Buildah tools on Ubuntu. It is intended that by the end of this chapter you will have a clearer understanding of how to create and manage containers on Ubuntu and will have gained a knowledge foundation on which to continue exploring the power of Linux Containers.

1.1   Installing the Container Tools

Before starting with containers, the first step is to install all of the container tools outlined in the previous chapter using the following commands:

# apt install curl
# . /etc/os-release
# sh -c "echo 'deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
# curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
# apt update
# apt install podman skopeo buildah

1.2  Pulling a Container Image

For this example, the most recent Ubuntu release will be pulled from the registry. Before pulling an image, however, information about the image repository can be obtained using the skopeo tool, for example:

$ skopeo inspect docker://docker.io/ubuntu
{
    "Name": "docker.io/library/ubuntu",
    "Digest": "sha256:bec5a2727be7fff3d308193cfde3491f8fba1a2ba392b7546b43a051853a341d",
    "RepoTags": [
        "10.04",
        "12.04.5",
        "12.04",
        "12.10",
        "13.04",
        "13.10",
        "14.04.1",
        "14.04.2",
        "14.04.3",
        "14.04.4",
        "14.04.5",
        "14.04",
        "14.10",
        "15.04",
.
.
    ],
    "Created": "2020-03-20T19:20:22.835345724Z",
    "DockerVersion": "18.09.7",
    "Labels": null,
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:5bed26d33875e6da1d9ff9a1054c5fef3bbeb22ee979e2acf72528de007b",
        "sha256:f11b29a9c7306674a9479158c1b4259938af11b979ac02030cc1095e9ed1",
        "sha256:930bda195c84cf132344bf38edcad255317380503fef234a9ce3bff0f4dd",
        "sha256:78bf9a5ad49e4ae42a83f4995ade4efc08fd38299cf05bc041e8cdda2a36"
    ],
    "Env": 
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ]
}

To pull the latest Ubuntu image, use the podman pull command as follows:

$ podman pull docker://docker.io/ubuntu:latest
Trying to pull docker://docker.io/ubuntu:latest...
Getting image source signatures
Copying blob 5bed26d33875 done
Copying blob f11b29a9c730 done
Copying blob 78bf9a5ad49e done
Copying blob 930bda195c84 done
Copying config 4e5021d210 done
Writing manifest to image destination
Storing signatures
4e5021d210f65ebe915670c7089120120bc0a303b90208592851708c1b8c04bd

Verify that the image has been stored by asking podman to list all local images:

$ podman images
REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
docker.io/library/ubuntu   latest   4e5021d210f6   3 weeks ago   66.6 MB

Details about a local image may be obtained by running the podman inspect command:

$ podman inspect ubuntu:latest

This command should output the same information as the skopeo command performed on the remote image earlier in this chapter.

1.3  Running the Image in a Container

The image pulled from the registry is a fully operational image that is ready to run in a container without modification. To run the image, use the podman run command. In this case the --rm option will be specified to indicate that we want to run the image in a container, execute one command and then have the container exit. Here, the cat tool will be used to output the content of the /etc/passwd file located on the container root filesystem:

$ podman run --rm ubuntu:latest cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin

Compare the content of the /etc/passwd file within the container with the /etc/passwd file on the host system and note that it lacks all of the additional users that are present on the host, confirming that the cat command was executed within the container environment. Also note that the container started, ran the command and exited all within a matter of seconds. Compare this to the amount of time it takes to start a full operating system, perform a task and shut down a virtual machine, and you begin to appreciate the speed and efficiency of containers.

To launch a container, keep it running and access the shell, the following command can be used:

$ podman run --name=mycontainer -it ubuntu:latest /bin/bash
root@4b49ddeb2987:/#

In this case, an additional command-line option has been used to assign the name “mycontainer” to the container. Though optional, this makes the container easier to recognize and reference as an alternative to using the automatically generated container ID.

While the container is running, run podman in a different terminal window to see the status of all containers on the system:

$ podman ps -a
CONTAINER ID  IMAGE                            COMMAND    CREATED             STATUS                 PORTS  NAMES
4b49ddeb2987  docker.io/library/ubuntu:latest  /bin/bash  About a minute ago  Up About a minute ago         mycontainer

To execute a command in a running container from the host, simply use the podman exec command, referencing the name of the running container and the command to be executed. The following command, for example, starts up a second bash session in the container named mycontainer:

$ podman exec -it mycontainer /bin/bash
root@4b49ddeb2987:/#

Note that though the above example referenced the container name the same result can be achieved using the container ID as listed by the podman ps -a command:

$ podman exec -it 4b49ddeb2987 /bin/bash
root@4b49ddeb2987:/#

Alternatively, the podman attach command will also attach to a running container and access the shell prompt:

$ podman attach mycontainer
root@4b49ddeb2987:/#

Once the container is up and running, any additional configuration changes can be made and packages installed just like any other Ubuntu system.
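
For example, packages can be installed from within the container’s shell in the usual way (the choice of package here is purely illustrative):

root@4b49ddeb2987:/# apt update
root@4b49ddeb2987:/# apt install -y nano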

1.4  Managing a Container

Once launched, a container will continue to run until it is stopped via podman, or the command that was launched when the container was run exits. Running the following command on the host, for example, will cause the container to exit:

$ podman stop mycontainer

Alternatively, pressing the Ctrl-D keyboard sequence within the last remaining bash shell of the container would cause both the shell and container to exit. Once it has exited, the status of the container will change accordingly:

$ podman ps -a
CONTAINER ID  IMAGE                            COMMAND    CREATED        STATUS                           PORTS  NAMES
4b49ddeb2987  docker.io/library/ubuntu:latest  /bin/bash  6 minutes ago  Exited (127) About a minute ago         mycontainer

Although the container is no longer running, it still exists and contains all of the changes that were made to the configuration and file system. If you installed packages, made configuration changes or added files, these changes will persist within “mycontainer”. To verify this, simply restart the container as follows:

$ podman start mycontainer

After starting the container, use the podman exec command once again to execute commands within the container as outlined previously. For example, to once again gain access to a shell prompt:

$ podman exec -it mycontainer /bin/bash

A running container may also be paused and resumed using the podman pause and unpause commands as follows:

$ podman pause mycontainer
$ podman unpause mycontainer

1.5  Saving a Container to an Image

Once the container guest system is configured to your requirements there is a good chance that you will want to create and run more than one container of this particular type. To do this, the container needs to be saved as an image to local storage so that it can be used as the basis for additional container instances. This is achieved using the podman commit command combined with the name or ID of the container and the name by which the image will be stored, for example:

$ podman commit mycontainer myubuntu_image

Once the image has been saved, check that it now appears in the list of images in the local repository:

$ podman images
REPOSITORY                 TAG      IMAGE ID       CREATED              SIZE
localhost/myubuntu_image   latest   8ad685d49482   47 seconds ago       66.6 MB
docker.io/library/ubuntu   latest   4e5021d210f6   3 weeks ago          66.6 MB

The saved image can now be used to create additional containers identical to the original:

$ podman run --name=mycontainer2 -it localhost/myubuntu_image /bin/bash

1.6  Removing an Image from Local Storage

To remove an image from local storage once it is no longer needed, simply run the podman rmi command, referencing either the image name or ID as output by the podman images command. For example, to remove the image named myubuntu_image created in the previous section, run podman as follows:

$ podman rmi localhost/myubuntu_image

Note that before an image can be removed, any containers based on that image must first be removed.

1.7  Removing Containers

Even when a container has exited or been stopped, it still exists and can be restarted at any time. If a container is no longer needed, it can be deleted using the podman rm command as follows after the container has been stopped:

# podman rm mycontainer2

1.8  Building a Container with Buildah

Buildah allows new containers to be built either from existing containers, an image or entirely from scratch. Buildah also includes the ability to mount the file system of a container so that it can be accessed and modified from the host.

The following buildah command, for example, will build a container from the Ubuntu Base image (if the image has not already been pulled from the registry, buildah will download it before creating the container):

$ buildah from docker://docker.io/library/ubuntu:latest

The result of running this command will be a container named ubuntu-working-container that is ready to run:

$ buildah run ubuntu-working-container cat /etc/passwd
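
The container’s file system can also be mounted for direct access from the host using the buildah mount command (when running rootless, this needs to be executed within a buildah unshare session). The mount point shown below is illustrative; the actual path will differ:

# buildah mount ubuntu-working-container
/var/lib/containers/storage/overlay/<layer id>/merged
# buildah umount ubuntu-working-container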

1.9  Summary

This chapter has worked through the creation and management of Linux Containers on Ubuntu using the podman, skopeo and buildah tools.

An Introduction to Ubuntu 20.04 Containers

The preceding chapters covered the concept of virtualization with a particular emphasis on creating and managing virtual machines using KVM. This chapter will introduce a related technology in the form of Linux Containers. While there are some similarities between virtual machines and containers, there are also some key differences that will be outlined in this chapter along with an introduction to the concepts and advantages of Linux Containers. The chapter will also provide an overview of some of the Ubuntu container management tools. Once the basics of containers have been covered in this chapter, the next chapter will work through some practical examples of creating and running containers on Ubuntu.

1.1  Linux Containers and Kernel Sharing

In simple terms, Linux containers can be thought of as a lightweight alternative to virtualization. In a virtualized environment, a virtual machine is created that contains and runs the entire guest operating system. The virtual machine, in turn, runs on top of an environment such as a hypervisor that manages access to the physical resources of the host system.

Containers work by using a concept referred to as kernel sharing which takes advantage of the architectural design of Linux and UNIX-based operating systems.

In order to understand how kernel sharing and containers work it helps to first understand the two main components of Linux or UNIX operating systems. At the core of the operating system is the kernel. The kernel, in simple terms, handles all the interactions between the operating system and the physical hardware. The second key component is the root file system which contains all the libraries, files and utilities necessary for the operating system to function. Taking advantage of this structure, containers each have their own root file system but share the kernel of the host operating system. This structure is illustrated in the architectural diagram in Figure 30-1 below.

This type of resource sharing is made possible by the ability of the kernel to dynamically change the current root file system (a concept known as change root or chroot) to a different root file system without having to reboot the entire system. Linux containers are essentially an extension of this capability combined with a container runtime, the responsibility of which is to provide an interface for executing and managing the containers on the host system. A number of container runtimes are available including Docker, lxd, containerd and CRI-O.

Figure 30-1

1.2  Container Uses and Advantages

The main advantage of containers is that they require considerably less resource overhead than virtualization, allowing many container instances to be run simultaneously on a single server, and that they can be started and stopped rapidly and efficiently in response to demand levels. Containers run natively on the host system, providing a level of performance that cannot be matched by a virtual machine.

Containers are also extremely portable and can be migrated between systems quickly and easily. When combined with a container management system such as Docker, OpenShift and Kubernetes, it is possible to deploy and manage containers on a vast scale spanning multiple servers and cloud platforms, potentially running thousands of containers.

Containers are frequently used to create lightweight execution environments for applications. In this scenario, each container provides an isolated environment containing the application together with all of the runtime and supporting files required by that application to run. The container can then be deployed to any other compatible host system that supports container execution and run without any concerns that the target system may not have the necessary runtime configuration for the application – all of the application’s dependencies are already in the container.

Containers are also useful when bridging the gap between development and production environments. By performing development and QA work in containers, those containers can then be passed to production and launched safe in the knowledge that the applications are running in the same container environments in which they were developed and tested.

Containers also promote a modular approach to deploying large and complex solutions. Instead of developing applications as single monolithic entities, containers can be used to design applications as groups of interacting modules, each running in a separate container.

One possible drawback of containers is the fact that the guest operating systems must be compatible with the version of the kernel which is being shared. It is not, for example, possible to run Microsoft Windows in a container on a Linux system. Nor is it possible for a Linux guest system designed for the 2.6 version of the kernel to share a 2.4 version kernel. These requirements are not, however, what containers were designed for. Rather than being seen as limitations, therefore, these restrictions should be viewed as some of the key advantages of containers in terms of providing a simple, scalable and reliable deployment platform.

1.3  Ubuntu Container Tools

There are a number of options available for creating and managing containers on Ubuntu. One option is to download and install the standard tools provided by Docker. In this book, however, we are going to focus on a new set of tools that have been developed by Red Hat, Inc. and are widely used on other Linux distributions such as CentOS, Fedora and Red Hat Enterprise Linux. There are a number of reasons for this choice. First, these tools are fully compatible with the tools supplied by Docker (including using the same command-line options). More importantly, these tools have the advantage that they can be used without the need to have the Docker daemon running in the background. This container tool set consists of the following utilities:

  • buildah – A command-line tool for building container images.
  • podman – A command-line based container runtime and management tool. Performs tasks such as downloading container images from remote registries and inspecting, starting and stopping images.
  • skopeo – A command-line utility used to convert container images, copy images between registries and inspect images stored in registries without the need to download them.
  • runc – A lightweight container runtime for launching and running containers from the command-line.

All of the above tools are compliant with the Open Container Initiative (OCI), a set of specifications designed to ensure that containers conform to the same standards between competing tools and platforms.

1.4  The Docker Registry

Although Ubuntu is provided with a set of tools designed to be used in place of those provided by Docker, those tools still need access to Ubuntu images for use when building containers. For this purpose, the Ubuntu team maintains a set of Ubuntu container images within the Docker Hub. The Docker Hub is an online container registry made up of multiple repositories, each containing a wide range of container images available for download when building containers. The images within a repository are each assigned a repository tag (for example, 20.04, latest etc.) which can be referenced when performing an image download. The following, for example, is the URL of the Ubuntu 20.04 image contained within the Docker Hub:

docker://docker.io/library/ubuntu:20.04

In addition to downloading (referred to as “pulling” in container terminology) container images from Docker Hub and other third-party registries, you can also use registries to store your own images. This can be achieved either by hosting your own registry, or by making use of existing services such as those provided by Docker, Amazon AWS, Google Cloud, Microsoft Azure and IBM Cloud to name a few of the many options.

1.5  Container Networking

By default, containers are connected to a network using a Container Networking Interface (CNI) bridged network stack. In the bridged configuration, all the containers running on a server belong to the same subnet and, as such, are able to communicate with each other. The containers are also connected to the external network by bridging the host system’s network connection. Similarly, the host is able to access the containers via a virtual network interface (usually named cni0) which will have been created as part of the container tool installation.
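
The presence of this bridge interface can be checked from the host using the ip tool (the interface name may differ depending on the container tooling in use):

# ip addr show cni0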

1.6  Summary

Linux Containers offer a lightweight alternative to virtualization and take advantage of the structure of the Linux and Unix operating systems. Linux Containers essentially share the kernel of the host operating system, with each container having its own root file system containing the files, libraries and applications. Containers are highly efficient and scalable and provide an ideal platform for building and deploying modular enterprise level solutions. A number of tools and platforms are available for building, deploying and managing containers including third-party solutions and those provided with Ubuntu.

Managing KVM on Ubuntu 20.04 using the virsh Command-Line Tool

In previous chapters we have covered the installation and configuration of KVM-based guest operating systems on Ubuntu. This chapter is dedicated to exploring some additional areas of the virsh tool that have not been covered in previous chapters, and how it may be used to manage KVM-based guest operating systems from the command-line.

1.1  The virsh Shell and Command-Line

The virsh tool is both a command-line tool and an interactive shell environment. When used in the command-line mode, the command is simply issued at the command prompt with sets of arguments appropriate to the task to be performed.

To use the options as command-line arguments, use them at a terminal command prompt as shown in the following example:

# virsh <option>

The virsh tool, when used in shell mode, provides an interactive environment from which to issue sequences of commands.

To run commands in the virsh shell, run the following command:

# virsh
Welcome to virsh, the virtualization interactive terminal.
 
Type:  'help' for help with commands
       'quit' to quit
 
virsh #

At the virsh # prompt enter the options you wish to run. The following virsh session, for example, lists the current virtual machines, starts a virtual machine named FedoraVM and then obtains another listing to verify the VM is running:

# virsh 
Welcome to virsh, the virtualization interactive terminal.
 
Type:  'help' for help with commands
       'quit' to quit
 
virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL8VM                       running
 9     CentOS7VM                     running
 
virsh # start FedoraVM
Domain FedoraVM started
 
virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL8VM                       running
 9     CentOS7VM                     running
10     FedoraVM                      running
 
virsh #

The virsh tool supports a wide range of commands, a full listing of which may be obtained using the help option:

# virsh help

Additional details on the syntax for each command may be obtained by specifying the command after the help directive:

# virsh help restore
  NAME
    restore - restore a domain from a saved state in a file
 
  SYNOPSIS
    restore <file> [--bypass-cache] [--xml <string>] [--running] [--paused]
 
  DESCRIPTION
    Restore a domain.
 
  OPTIONS
    [--file] <string>  the state to restore
    --bypass-cache   avoid file system cache when restoring
    --xml <string>   filename containing updated XML for the target
    --running        restore domain into running state
    --paused         restore domain into paused state

In the remainder of this chapter we will look at some of these commands in more detail.

1.2  Listing Guest System Status

The status of the guest systems on an Ubuntu virtualization host may be viewed at any time using the list option of the virsh tool. For example:

# virsh list

The above command will display output containing a line for each guest similar to the following:

virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL8VM                       running
 9     CentOS7VM                     running
10     FedoraVM                      running

1.3  Starting a Guest System

A guest operating system can be started using the virsh tool combined with the start option followed by the name of the guest operating system to be launched. For example:

# virsh start myGuestOS

1.4  Shutting Down a Guest System

The shutdown option of the virsh tool, as the name suggests, is used to shutdown a guest operating system:

# virsh shutdown guestName

Note that the shutdown option allows the guest operating system to perform an orderly shutdown when it receives the shutdown instruction. To instantly stop a guest operating system the destroy option may be used (with the risk of file system damage and data loss):

# virsh destroy guestName

1.5  Suspending and Resuming a Guest System

A guest system can be suspended and resumed using the virsh tool’s suspend and resume options. For example, to suspend a specific system:

# virsh suspend guestName

Similarly, to resume the paused system:

# virsh resume guestName

Note that a suspended session will be lost if the host system is rebooted. Also, be aware that a suspended system continues to reside in memory. To save a session such that it no longer takes up memory and can be restored to its exact state (even after a reboot), it is necessary to save and restore the guest.

1.6  Saving and Restoring Guest Systems

A running guest operating system can be saved and restored using the virsh utility. When saved, the current status of the guest operating system is written to disk and removed from system memory. A saved system may subsequently be restored at any time (including after a host system reboot).

To save a guest:

# virsh save guestName path_to_save_file

To restore a saved guest operating system session:

# virsh restore path_to_save_file

1.7  Rebooting a Guest System

To reboot a guest operating system:

# virsh reboot guestName

1.8  Configuring the Memory Assigned to a Guest OS

To configure the memory assigned to a guest OS, use the setmem option of the virsh command. Note that virsh interprets the size in kibibytes unless a unit suffix is provided. For example, the following command reduces the memory allocated to a guest system to 256MB:

# virsh setmem guestName 256M

Note that acceptable memory settings must fall within the memory available to the current Domain. This may be increased using the setmaxmem option.
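
For example, the following command raises the maximum to 512MB in the persistent configuration so that the change takes effect the next time the guest is started (the guest name is a placeholder):

# virsh setmaxmem guestName 512M --config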

1.9  Summary

The virsh tool provides a wide range of options for creating, monitoring and managing guest virtual machines. As outlined in this chapter, the tool can be used in either command-line or interactive modes.

Creating an Ubuntu 20.04 KVM Networked Bridge Interface

By default, the KVM virtualization environment on Ubuntu creates a virtual network to which virtual machines may connect. It is also possible to configure a direct connection using a MacVTap driver, though as outlined in the chapter entitled “An Overview of Virtualization Techniques”, this approach does not allow the host and guest systems to communicate.

The goal of this chapter is to cover the steps involved in creating a network bridge on Ubuntu enabling guest systems to share one or more of the host system’s physical network connections while still allowing the guest and host systems to communicate with each other.

In the remainder of this chapter we will explain how to configure an Ubuntu network bridge for use by KVM-based guest operating systems.

1.1  Identifying the Network Management System

The steps to create a network bridge will differ depending on whether the host system is using Network Manager or Netplan for network management. If you installed Ubuntu using the desktop installation media then you most likely have a system running Network Manager. If, on the other hand, you installed from the server or Network installer image, then your system is most likely using Netplan.

To identify which networking system is being used, open a Terminal window and run the following command:

# networkctl status

If the above command generates output similar to the following then the system is using Netplan:

# networkctl status
●          State: routable                             
         Address: 192.168.86.242 on enp0s3             
                  fe80::a00:27ff:fe52:69a9 on enp0s3   
         Gateway: 192.168.86.1 (Google, Inc.) on enp0s3
             DNS: 192.168.86.1                         
  Search Domains: lan                                  

May 04 15:46:09 demo systemd[1]: Starting Network Service...
May 04 15:46:09 demo systemd-networkd[625]: Enumeration completed
.
.

If, on the other hand, output similar to the following appears, then Netplan is not running:

# networkctl status -a
WARNING: systemd-networkd is not running, output will be incomplete.

Failed to query link bit rates: Unit dbus-org.freedesktop.network1.service not found.
.
.

To identify whether NetworkManager is managing the network connections, change directory to /etc/netplan. If NetworkManager is in use, this directory will contain a file named 01-network-manager-all.yaml with the following content:

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager

Having identified your network management system, follow the corresponding steps in the remainder of this chapter.

1.2  Getting the Netplan Network Settings

Before creating the network bridge on a Netplan based system, begin by obtaining information about the current network configuration using the networkctl command as follows:

# networkctl status -a
● 1: lo
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: loopback
           State: carrier (unmanaged)
         Address: 127.0.0.1
                  ::1
 
● 2: eno1
       Link File: /lib/systemd/network/99-default.link
    Network File: /run/systemd/network/10-netplan-eno1.network
            Type: ether
           State: routable (configured)
            Path: pci-0000:00:19.0
          Driver: e1000e
          Vendor: Intel Corporation
           Model: 82579LM Gigabit Network Connection (Lewisville)
      HW Address: fc:4d:d4:3b:e4:0f (Universal Global Scientific Industrial Co., Ltd.)
         Address: 192.168.86.214
                  fe80::fe4d:d4ff:fe3b:e40f
         Gateway: 192.168.86.1
             DNS: 192.168.86.1
  Search Domains: lan
 
● 3: virbr0
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: ether
           State: no-carrier (unmanaged)
          Driver: bridge
      HW Address: 52:54:00:2d:f4:2a
         Address: 192.168.122.1
 
● 4: virbr0-nic
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: ether
           State: off (unmanaged)
          Driver: tun
      HW Address: 52:54:00:2d:f4:2a

In the above output we can see that the host has an Ethernet network connection established via a device named eno1 and the default bridge interface named virbr0 which provides access to the NAT-based virtual network to which KVM guest systems are connected by default. The output also lists the loopback interface (lo).

1.3  Creating a Netplan Network Bridge

The creation of a network bridge on an Ubuntu system using Netplan involves the addition of an entry to the /etc/netplan/01-netcfg.yaml or /etc/netplan/00-installer-config.yaml file. Using your preferred editor, open the file and add a bridges entry beneath the current content as follows (replacing eno1 with the connection name on your system):

network:
  ethernets:
    eno1:
      dhcp4: true
  version: 2

  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: yes

Note that the bridges: entry must be indented by two spaces so that it is nested beneath the network: key, at the same level as ethernets:. Without this indentation, the netplan tool will fail with an error similar to the following when run:

Error in network definition: unknown key 'bridges'

Once the changes have been made, apply them using the following command:

# netplan apply
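
If there is any risk of losing access to a remote system, the netplan try command is worth considering as an alternative to netplan apply. It applies the new configuration but automatically reverts to the previous settings unless the change is confirmed (by pressing Enter) within a timeout, which defaults to 120 seconds. Note that on some Netplan releases changes involving virtual devices such as bridges may not be fully reverted, so this should be treated as an extra safeguard rather than a guarantee:

# netplan try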

Note that applying the new configuration will switch the network from the current connection to the bridge, resulting in the system being assigned a different IP address by the DHCP server. If you are connected via a remote SSH session, this will cause you to lose contact with the server. If you would prefer to assign a static IP address to the bridge connection, modify the bridge declaration as follows (making sure to turn off DHCP for both IPv4 and IPv6):

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
 
  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: no
      addresses: [192.168.86.230/24]
      gateway4: 192.168.86.1
      nameservers:
        addresses: [192.168.86.1]

After running the netplan apply command, check that the bridge is now configured and ready for use within KVM virtual machines:

# networkctl status -a
● 1: lo
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: loopback
           State: carrier (unmanaged)
         Address: 127.0.0.1
                  ::1
 
● 2: eno1
       Link File: /lib/systemd/network/99-default.link
    Network File: /run/systemd/network/10-netplan-eno1.network
            Type: ether
           State: carrier (configured)
            Path: pci-0000:00:19.0
          Driver: e1000e
          Vendor: Intel Corporation
           Model: 82579LM Gigabit Network Connection (Lewisville)
      HW Address: fc:4d:d4:3b:e4:0f (Universal Global Scientific Industrial Co.,
.
.
● 5: br0
       Link File: /lib/systemd/network/99-default.link
    Network File: /run/systemd/network/10-netplan-br0.network
            Type: ether
           State: routable (configured)
          Driver: bridge
      HW Address: b6:56:ed:e9:d5:75
         Address: 192.168.86.230
                  fe80::b456:edff:fee9:d575
         Gateway: 192.168.86.1
             DNS: 192.168.86.1
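
The bridge membership can also be cross-checked using the iproute2 tools. The following commands (shown here with illustrative output based on the example configuration; the values will differ on your system) should confirm that eno1 is now enslaved to br0 and that the IP address is assigned to the bridge rather than the physical device:

# bridge link show
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master br0 state forwarding priority 32 cost 100

# ip -br addr show br0
br0              UP             192.168.86.230/24 fe80::b456:edff:fee9:d575/64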

1.4  Getting the Current Network Manager Settings

A network bridge can be created using the NetworkManager command-line interface tool (nmcli). NetworkManager is installed and enabled by default on Ubuntu desktop systems and is responsible for detecting and connecting to network devices, in addition to providing an interface for managing networking configurations.

A list of current network connections on the host system can be displayed as follows:

# nmcli con show
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  56f32c14-a4d2-32c8-9391-f51967efa173  ethernet  eno1
virbr0              59bf4111-e0d2-4e6c-b8d4-cb70fa6d695e  bridge    virbr0

In the above output, we can see that the host has an Ethernet network connection established via a device named eno1, together with the default bridge interface named virbr0, which provides access to the NAT-based virtual network to which KVM guest systems are connected by default.

Similarly, the following command can be used to identify the devices (both virtual and physical) that are currently configured on the system:

# nmcli device show
GENERAL.DEVICE:                         eno1
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         FC:4D:D4:3B:E4:0F
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     Wired connection 1
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         192.168.86.207/24
IP4.GATEWAY:                            192.168.86.1
IP4.ROUTE[1]:                           dst = 0.0.0.0/0, nh = 192.168.86.1, mt = 100
IP4.ROUTE[2]:                           dst = 192.168.86.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE[3]:                           dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000
IP4.DNS[1]:                             192.168.86.1
IP4.DOMAIN[1]:                          lan
IP6.ADDRESS[1]:                         fe80::d3e2:c3dc:b69b:cd30/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ff00::/8, nh = ::, mt = 256, table=255
IP6.ROUTE[2]:                           dst = fe80::/64, nh = ::, mt = 256
IP6.ROUTE[3]:                           dst = fe80::/64, nh = ::, mt = 100
 
GENERAL.DEVICE:                         virbr0
GENERAL.TYPE:                           bridge
GENERAL.HWADDR:                         52:54:00:9D:19:E5
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     virbr0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
IP4.ADDRESS[1]:                         192.168.122.1/24
IP4.GATEWAY:                            --
IP4.ROUTE[1]:                           dst = 192.168.122.0/24, nh = 0.0.0.0, mt = 0
IP6.GATEWAY:                            --
.
.

The above partial output indicates that the host system on which the command was executed contains a physical Ethernet device (eno1) and the virtual bridge (virbr0).

The virsh command may also be used to list the virtual networks currently configured on the system:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

At this point, the only virtual network present is the default network provided by virbr0. Now that some basic information about the current network configuration has been obtained, the next step is to create a network bridge connected to the physical network device (in this case the device named eno1).
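
If more detail about the default network is required, the virsh net-info command will display a summary (including the name of the bridge device it uses), while virsh net-dumpxml will output its full XML definition:

# virsh net-info default
# virsh net-dumpxml default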

1.5  Creating a Network Manager Bridge from the Command-Line

The first step in creating the network bridge is to add a new connection to the network configuration. This can be achieved using the nmcli tool, specifying that the connection is to be a bridge and providing names for both the connection and the interface:

# nmcli con add ifname br0 type bridge con-name br0

Once the connection has been added, a bridge slave interface needs to be established between physical device eno1 (the slave) and the bridge connection br0 (the master) as follows:

# nmcli con add type bridge-slave ifname eno1 master br0

At this point, the NetworkManager connection list should read as follows:

# nmcli con show
NAME                UUID                                  TYPE      DEVICE 
Wired connection 1  56f32c14-a4d2-32c8-9391-f51967efa173  ethernet  eno1   
br0                 8416607e-c6c1-4abb-8583-1661689b95a9  bridge    br0    
virbr0              dffab88d-1588-4e69-8d1c-2148090aa5ee  bridge    virbr0 
bridge-slave-eno1   43383092-6434-448f-b735-0cbea39eb38f  ethernet  --
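
Before the bridge is brought up, bridge-specific properties may optionally be adjusted on the new connection. For example, NetworkManager enables the Spanning Tree Protocol on bridge connections by default, which can delay the bridge forwarding traffic for up to 30 seconds or so each time it is activated. If STP is not needed on your network (a judgment that depends on your network topology), it can be disabled as follows:

# nmcli con modify br0 bridge.stp no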

The next step is to start up the bridge interface. If the steps to configure the bridge are being performed over a network connection (i.e. via SSH) this step can be problematic because the current eno1 connection must be closed down before the bridge connection can be brought up. This means that the current connection will be lost before the bridge connection can be enabled to replace it, potentially leaving the remote host unreachable.

If you are accessing the host system remotely this problem can be avoided by creating a shell script to perform the network changes. This will ensure that the bridge interface is enabled after the eno1 interface is brought down, allowing you to reconnect to the host after the changes are complete. Begin by creating a shell script file named bridge.sh containing the following commands:

#!/bin/bash
nmcli con down "Wired connection 1"
nmcli con up br0

Once the script has been created, execute it as follows:

# sh ./bridge.sh

When the script executes, the connection will be lost when the eno1 connection is brought down. After waiting a few seconds, however, it should be possible to reconnect to the host once the br0 connection has been activated.
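
If there is any concern that the script itself might be terminated when the SSH session drops (for example by a hangup signal sent to the shell that launched it), it can instead be started with nohup and placed in the background so that it continues to run independently of the session:

# nohup sh ./bridge.sh > /tmp/bridge.log 2>&1 &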

If you are working locally on the host, the two nmcli commands can be run within a terminal window without any risk of losing connectivity:

# nmcli con down "Wired connection 1"
# nmcli con up br0

Once the bridge is up and running, the connection list should now include both the bridge and the bridge-slave connections:

# nmcli con show
NAME                UUID                                  TYPE      DEVICE 
br0                 8416607e-c6c1-4abb-8583-1661689b95a9  bridge    br0    
bridge-slave-eno1   43383092-6434-448f-b735-0cbea39eb38f  ethernet  eno1   
virbr0              dffab88d-1588-4e69-8d1c-2148090aa5ee  bridge    virbr0 
Wired connection 1  56f32c14-a4d2-32c8-9391-f51967efa173  ethernet  --

Note that the Wired connection 1 connection is still listed but is no longer active. To exclude inactive connections from the list, use the --active flag when requesting the list:

# nmcli con show --active
NAME               UUID                                  TYPE      DEVICE 
br0                8416607e-c6c1-4abb-8583-1661689b95a9  bridge    br0    
bridge-slave-eno1  43383092-6434-448f-b735-0cbea39eb38f  ethernet  eno1   
virbr0             dffab88d-1588-4e69-8d1c-2148090aa5ee  bridge    virbr0

1.6  Declaring the KVM Bridged Network

At this point, the bridge connection is present on the system but is not visible to the KVM environment. Running the virsh command should still list the default network as being the only available network option:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

Before the bridge can be used by a virtual machine it must be declared and added to the KVM network configuration. This involves the creation of a definition file and, once again, the use of the virsh command-line tool.

Begin by creating a definition file for the bridge network named bridge.xml that reads as follows:

<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0" />
</network>
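
Before the file is used, it can optionally be checked against the libvirt network schema using the virt-xml-validate utility, if it is available on your system:

# virt-xml-validate ./bridge.xml network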

Next, use the file to define the new network:

# virsh net-define ./bridge.xml

Once the network has been defined, start it and, if required, configure it to autostart each time the system reboots:

# virsh net-start br0
# virsh net-autostart br0

Once again list the networks to verify that the bridge network is now accessible within the KVM environment:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 br0                  active     yes           yes
 default              active     yes           yes

1.7  Using a Bridge Network in a Virtual Machine

To create a virtual machine that makes use of the bridge network, use the virt-install --network option and specify the br0 bridge name. For example:

# virt-install --name MyFedora --memory 1024 --disk path=/tmp/myFedora.img,size=10 --network network=br0 --os-variant fedora28 --cdrom /home/demo/Downloads/Fedora-Server-dvd-x86_64-29-1.2.iso 

When the guest operating system is running it will appear on the same physical network as the host system and will no longer be on the NAT-based virtual network.

To modify an existing virtual machine so that it uses the bridge, use the virsh edit command. This command loads the XML definition file into an editor where changes can be made and saved:

# virsh edit GuestName

By default, the file will be loaded into the vi editor. To use a different editor, set the $EDITOR environment variable before running virsh edit, for example:

# export EDITOR=gedit

To change from the default virtual network, locate the <interface> section of the file which will read as follows for a NAT based configuration:

<interface type='network'>
      <mac address='<your mac address here>'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Alternatively, if the virtual machine was using a direct connection, the entry may read as follows:

<interface type='direct'>
      <mac address='<your mac address here>'/>
      <source dev='eno1' mode='vepa'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

To use the bridge, change the source network property to read as follows before saving the file:

<interface type='network'>
      <mac address='<your mac address here>'/>
      <source network='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

If the virtual machine is already running, the change will not take effect until it is restarted.
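
Once the guest has been restarted, the interface configuration can be confirmed using the virsh domiflist command, which lists each of the guest's network interfaces together with its type, source network, model and MAC address:

# virsh domiflist GuestName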

1.8  Creating a Bridge Network using nm-connection-editor

If either local or remote desktop access is available on the host system, much of the bridge configuration process can be performed using the nm-connection-editor graphical tool. To use this tool, open a Terminal window within the desktop and enter the following command:

# nm-connection-editor

When the tool has loaded, the window shown in Figure 28-1 will appear listing the currently configured network connections (essentially the same output as that generated by the nmcli con show command):

Figure 28-1

To create a new connection, click on the ‘+’ button located in the bottom left-hand corner of the window. From the resulting dialog (Figure 28-2) select the Bridge option from the menu:

Figure 28-2

With the bridge option selected, click on the Create… button to proceed to the bridge configuration screen. Begin by changing both the connection and interface name fields to br0 before clicking on the Add button located to the right of the Bridge connections list as highlighted in Figure 28-3:

Figure 28-3

From the connection type dialog (Figure 28-4) change the menu setting to Ethernet before clicking on the Create… button:

Figure 28-4

Another dialog will now appear in which the bridge slave connection needs to be configured. Within this dialog, select the physical network to which the bridge is to connect (for example eno1) from the Device menu:

Figure 28-5

Click on the Save button to apply the changes and return to the Editing br0 dialog (as illustrated in Figure 28-3 above). Within this dialog, click on the Save button to create the bridge. On returning to the main window, the new bridge and slave connections should now be listed:

Figure 28-6

All that remains is to bring down the original eno1 connection and bring up the br0 connection using the steps outlined earlier in this chapter (remembering to perform these steps within a shell script if the host is being accessed remotely):

# nmcli con down "Wired connection 1"
# nmcli con up br0

It will also be necessary, as it was when creating the bridge using the command-line tool, to add this bridge to the KVM network configuration. To do so, simply repeat the steps outlined in the section above entitled “Declaring the KVM Bridged Network”. Once this step has been taken, the bridge is ready to be used by guest virtual machines.

1.9  Summary

By default, the KVM virtualization environment on Ubuntu creates a virtual network to which virtual machines may connect. It is also possible to configure a direct connection using a MacVTap driver, though as outlined in the chapter entitled “An Overview of Virtualization Techniques”, this approach does not allow the host and guest systems to communicate. If the guests are required to appear on the network with their own IP addresses, the guests need to be configured to share the physical network interface of the host system. As outlined in this chapter, this can be achieved using either the nmcli or nm-connection-editor tools to create a networked bridge interface.