Working with Containers on Ubuntu 20.04

Now that the basics of Linux Containers have been covered in the previous chapter, this chapter will demonstrate how to create and manage containers using the Podman, Skopeo and Buildah tools on Ubuntu. It is intended that by the end of this chapter you will have a clearer understanding of how to create and manage containers on Ubuntu and will have gained a knowledge foundation on which to continue exploring the power of Linux Containers.

1.1   Installing the Container Tools

Before starting with containers, the first step is to install all of the container tools outlined in the previous chapter using the following commands:

# apt install curl
# . /etc/os-release
# sh -c "echo 'deb https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/ /' > /etc/apt/sources.list.d/devel:kubic:libcontainers:stable.list"
# curl -L https://download.opensuse.org/repositories/devel:/kubic:/libcontainers:/stable/xUbuntu_${VERSION_ID}/Release.key | sudo apt-key add -
# apt update
# apt install podman skopeo buildah
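Once the tools have been installed, the installation can be verified by checking the version reported by each tool (the version numbers displayed will vary):

$ podman --version
$ skopeo --version
$ buildah --version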

1.2  Pulling a Container Image

For this example, the most recent Ubuntu release will be pulled from the registry. Before pulling an image, however, information about the image repository can be obtained using the skopeo tool, for example:

$ skopeo inspect docker://docker.io/ubuntu
{
    "Name": "docker.io/library/ubuntu",
    "Digest": "sha256:bec5a2727be7fff3d308193cfde3491f8fba1a2ba392b7546b43a051853a341d",
    "RepoTags": [
        "10.04",
        "12.04.5",
        "12.04",
        "12.10",
        "13.04",
        "13.10",
        "14.04.1",
        "14.04.2",
        "14.04.3",
        "14.04.4",
        "14.04.5",
        "14.04",
        "14.10",
        "15.04",
.
.
    ],
    "Created": "2020-03-20T19:20:22.835345724Z",
    "DockerVersion": "18.09.7",
    "Labels": null,
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:5bed26d33875e6da1d9ff9a1054c5fef3bbeb22ee979e2acf72528de007b",
        "sha256:f11b29a9c7306674a9479158c1b4259938af11b979ac02030cc1095e9ed1",
        "sha256:930bda195c84cf132344bf38edcad255317380503fef234a9ce3bff0f4dd",
        "sha256:78bf9a5ad49e4ae42a83f4995ade4efc08fd38299cf05bc041e8cdda2a36"
    ],
    "Env": 
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
    ]
}

To pull the latest Ubuntu image using podman, run the following command:

$ podman pull docker://docker.io/ubuntu:latest
Trying to pull docker://docker.io/ubuntu:latest...
Getting image source signatures
Copying blob 5bed26d33875 done
Copying blob f11b29a9c730 done
Copying blob 78bf9a5ad49e done
Copying blob 930bda195c84 done
Copying config 4e5021d210 done
Writing manifest to image destination
Storing signatures
4e5021d210f65ebe915670c7089120120bc0a303b90208592851708c1b8c04bd

Verify that the image has been stored by asking podman to list all local images:

$ podman images
REPOSITORY                 TAG      IMAGE ID       CREATED       SIZE
docker.io/library/ubuntu   latest   4e5021d210f6   3 weeks ago   66.6 MB

Details about a local image may be obtained by running the podman inspect command:

$ podman inspect ubuntu:latest

This command should output the same information as the skopeo command performed on the remote image earlier in this chapter.
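Individual fields may also be extracted from the inspection data by passing a Go template to the --format option. The following is a brief sketch (the output shown assumes the amd64 image pulled above):

$ podman inspect ubuntu:latest --format "{{.Architecture}}"
amd64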

1.3  Running the Image in a Container

The image pulled from the registry is a fully operational image that is ready to run in a container without modification. To run the image, use the podman run command. In this case the --rm option will be specified to indicate that we want to run the image in a container, execute one command and then have the container exit. For this example, the cat tool will be used to output the content of the /etc/passwd file located on the container root filesystem:

$ podman run --rm ubuntu:latest cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
_apt:x:100:65534::/nonexistent:/usr/sbin/nologin

Compare the content of the /etc/passwd file within the container with the /etc/passwd file on the host system and note that it lacks all of the additional users that are present on the host, confirming that the cat command was executed within the container environment. Also note that the container started, ran the command and exited all within a matter of seconds. Compare this to the amount of time it takes to start a full operating system, perform a task and shut down a virtual machine, and you begin to appreciate the speed and efficiency of containers.

To launch a container, keep it running and access the shell, the following command can be used:

$ podman run --name=mycontainer -it ubuntu:latest /bin/bash
root@4b49ddeb2987:/#

In this case, an additional command-line option has been used to assign the name “mycontainer” to the container. Though optional, this makes the container easier to recognize and reference as an alternative to using the automatically generated container ID.

While the container is running, run podman in a different terminal window to see the status of all containers on the system:

$ podman ps -a
CONTAINER ID  IMAGE                            COMMAND    CREATED             STATUS                 PORTS  NAMES
4b49ddeb2987  docker.io/library/ubuntu:latest  /bin/bash  About a minute ago  Up About a minute ago         mycontainer

To execute a command in a running container from the host, simply use the podman exec command, referencing the name of the running container and the command to be executed. The following command, for example, starts up a second bash session in the container named mycontainer:

$ podman exec -it mycontainer /bin/bash
root@4b49ddeb2987:/#

Note that although the above example referenced the container name, the same result can be achieved using the container ID as listed by the podman ps -a command:

$ podman exec -it 4b49ddeb2987 /bin/bash
root@4b49ddeb2987:/#

Alternatively, the podman attach command will also attach to a running container and access the shell prompt:

$ podman attach mycontainer
root@4b49ddeb2987:/#

Once the container is up and running, any additional configuration changes can be made and packages installed just like any other Ubuntu system.
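For example, packages can be installed within the running container using apt in the usual way (the package chosen here is purely illustrative):

root@4b49ddeb2987:/# apt update
root@4b49ddeb2987:/# apt install -y nano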

1.4  Managing a Container

Once launched, a container will continue to run until it is stopped via podman, or the command that was launched when the container was run exits. Running the following command on the host, for example, will cause the container to exit:

$ podman stop mycontainer

Alternatively, pressing the Ctrl-D keyboard sequence within the last remaining bash shell of the container would cause both the shell and container to exit. Once it has exited, the status of the container will change accordingly:

$ podman ps -a
CONTAINER ID  IMAGE                            COMMAND    CREATED        STATUS                           PORTS  NAMES
4b49ddeb2987  docker.io/library/ubuntu:latest  /bin/bash  6 minutes ago  Exited (127) About a minute ago         mycontainer

Although the container is no longer running, it still exists and contains all of the changes that were made to the configuration and file system. If you installed packages, made configuration changes or added files, these changes will persist within “mycontainer”. To verify this, simply restart the container as follows:

$ podman start mycontainer

After starting the container, use the podman exec command once again to execute commands within the container as outlined previously. For example, to once again gain access to a shell prompt:

$ podman exec -it mycontainer /bin/bash

A running container may also be paused and resumed using the podman pause and unpause commands as follows:

$ podman pause mycontainer
$ podman unpause mycontainer

1.5  Saving a Container to an Image

Once the container guest system is configured to your requirements there is a good chance that you will want to create and run more than one container of this particular type. To do this, the container needs to be saved as an image to local storage so that it can be used as the basis for additional container instances. This is achieved using the podman commit command combined with the name or ID of the container and the name by which the image will be stored, for example:

$ podman commit mycontainer myubuntu_image

Once the image has been saved, check that it now appears in the list of images in the local repository:

$ podman images
REPOSITORY                 TAG      IMAGE ID       CREATED              SIZE
localhost/myubuntu_image   latest   8ad685d49482   47 seconds ago       66.6 MB
docker.io/library/ubuntu   latest   4e5021d210f6   3 weeks ago          66.6 MB

The saved image can now be used to create additional containers identical to the original:

$ podman run --name=mycontainer2 -it localhost/myubuntu_image /bin/bash

1.6  Removing an Image from Local Storage

To remove an image from local storage once it is no longer needed, simply run the podman rmi command, referencing either the image name or ID as output by the podman images command. For example, to remove the image named myubuntu_image created in the previous section, run podman as follows:

$ podman rmi localhost/myubuntu_image

Note that before an image can be removed, any containers based on that image must first be removed.

1.7  Removing Containers

Even when a container has exited or been stopped, it still exists and can be restarted at any time. If a container is no longer needed, it can be deleted using the podman rm command as follows after the container has been stopped:

$ podman rm mycontainer2

1.8  Building a Container with Buildah

Buildah allows new containers to be built either from existing containers, an image or entirely from scratch. Buildah also includes the ability to mount the file system of a container so that it can be accessed and modified from the host.

The following buildah command, for example, will build a container from the Ubuntu Base image (if the image has not already been pulled from the registry, buildah will download it before creating the container):

$ buildah from docker://docker.io/library/ubuntu:latest

The result of running this command will be a container named ubuntu-working-container that is ready to run:

$ buildah run ubuntu-working-container cat /etc/passwd
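The file system mounting capability mentioned at the start of this section can be exercised as follows (shown here as a root session for simplicity; when running rootless, the commands must be executed within a buildah unshare session, and the mount path printed will vary):

# buildah mount ubuntu-working-container
/var/lib/containers/storage/overlay/.../merged
# buildah umount ubuntu-working-container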

1.9  Summary

This chapter has worked through the creation and management of Linux Containers on Ubuntu using the podman, skopeo and buildah tools.

An Introduction to Ubuntu 20.04 Containers

The preceding chapters covered the concept of virtualization with a particular emphasis on creating and managing virtual machines using KVM. This chapter will introduce a related technology in the form of Linux Containers. While there are some similarities between virtual machines and containers, there are also some key differences that will be outlined in this chapter along with an introduction to the concepts and advantages of Linux Containers. The chapter will also provide an overview of some of the Ubuntu container management tools. Once the basics of containers have been covered in this chapter, the next chapter will work through some practical examples of creating and running containers on Ubuntu.

1.1  Linux Containers and Kernel Sharing

In simple terms, Linux containers can be thought of as a lightweight alternative to virtualization. In a virtualized environment, a virtual machine is created that contains and runs the entire guest operating system. The virtual machine, in turn, runs on top of an environment such as a hypervisor that manages access to the physical resources of the host system.

Containers work by using a concept referred to as kernel sharing which takes advantage of the architectural design of Linux and UNIX-based operating systems.

In order to understand how kernel sharing and containers work it helps to first understand the two main components of Linux or UNIX operating systems. At the core of the operating system is the kernel. The kernel, in simple terms, handles all the interactions between the operating system and the physical hardware. The second key component is the root file system which contains all the libraries, files and utilities necessary for the operating system to function. Taking advantage of this structure, containers each have their own root file system but share the kernel of the host operating system. This structure is illustrated in the architectural diagram in Figure 30-1 below.

This type of resource sharing is made possible by the ability of the kernel to dynamically change the current root file system (a concept known as change root or chroot) to a different root file system without having to reboot the entire system. Linux containers are essentially an extension of this capability combined with a container runtime, the responsibility of which is to provide an interface for executing and managing the containers on the host system. A number of container runtimes are available including Docker, lxd, containerd and CRI-O.
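For illustration purposes only, the underlying change root concept can be seen in the standard chroot command, which runs a command with a different directory acting as the root file system (the path used here is hypothetical and would need to contain a minimal set of binaries and libraries):

# chroot /srv/ubuntu-rootfs /bin/bash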

Figure 30-1

1.2  Container Uses and Advantages

The main advantage of containers is that they require considerably less resource overhead than virtualization, allowing many container instances to be run simultaneously on a single server, and that they can be started and stopped rapidly and efficiently in response to demand levels. Containers run natively on the host system, providing a level of performance that cannot be matched by a virtual machine.

Containers are also extremely portable and can be migrated between systems quickly and easily. When combined with a container management system such as Docker, OpenShift and Kubernetes, it is possible to deploy and manage containers on a vast scale spanning multiple servers and cloud platforms, potentially running thousands of containers.

Containers are frequently used to create lightweight execution environments for applications. In this scenario, each container provides an isolated environment containing the application together with all of the runtime and supporting files required by that application to run. The container can then be deployed to any other compatible host system that supports container execution and run without any concerns that the target system may not have the necessary runtime configuration for the application – all of the application’s dependencies are already in the container.

Containers are also useful when bridging the gap between development and production environments. By performing development and QA work in containers, those containers can then be passed to production and launched safe in the knowledge that the applications are running in the same container environments in which they were developed and tested.

Containers also promote a modular approach to deploying large and complex solutions. Instead of developing applications as single monolithic entities, containers can be used to design applications as groups of interacting modules, each running in a separate container.

One possible drawback of containers is the fact that the guest operating systems must be compatible with the version of the kernel which is being shared. It is not, for example, possible to run Microsoft Windows in a container on a Linux system. Nor is it possible for a Linux guest system designed for the 2.6 version of the kernel to share a 2.4 version kernel. Such use cases are not, however, what containers were designed for. Rather than being seen as limitations, therefore, these restrictions should be viewed as some of the key advantages of containers in terms of providing a simple, scalable and reliable deployment platform.

1.3  Ubuntu Container Tools

There are a number of options available for creating and managing containers on Ubuntu. One option is to download and install the standard tools provided by Docker. In this book, however, we are going to focus on a new set of tools that have been developed by Red Hat, Inc. and are widely used on other Linux distributions such as CentOS, Fedora and Red Hat Enterprise Linux. There are a number of reasons for this choice. First, these tools are fully compatible with the tools supplied by Docker (including using the same command-line options). More importantly, these tools have the advantage that they can be used without the need to have the Docker daemon running in the background. This container tool set consists of the following utilities:

  • buildah – A command-line tool for building container images.
  • podman – A command-line based container runtime and management tool. Performs tasks such as downloading container images from remote registries and inspecting, starting and stopping images.
  • skopeo – A command-line utility used to convert container images, copy images between registries and inspect images stored in registries without the need to download them.
  • runc – A lightweight container runtime for launching and running containers from the command-line.

All of the above tools are compliant with the Open Container Initiative (OCI), a set of specifications designed to ensure that containers conform to the same standards between competing tools and platforms.

1.4  The Docker Registry

Although Ubuntu is provided with a set of tools designed to be used in place of those provided by Docker, those tools still need access to Ubuntu images for use when building containers. For this purpose, the Ubuntu team maintains a set of Ubuntu container images within the Docker Hub. The Docker Hub is an online container registry made up of multiple repositories, each containing a wide range of container images available for download when building containers. The images within a repository are each assigned a repository tag (for example, 20.04, latest, etc.) which can be referenced when performing an image download. The following, for example, is the URL of the Ubuntu 20.04 image contained within the Docker Hub:

docker://docker.io/library/ubuntu:20.04

In addition to downloading (referred to as “pulling” in container terminology) container images from Docker Hub and other third-party registry hosts, you can also use registries to store your own images. This can be achieved either by hosting your own registry, or by making use of existing services such as those provided by Docker, Amazon AWS, Google Cloud, Microsoft Azure and IBM Cloud, to name a few of the many options.
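For example, a locally stored image could be pushed to a Docker Hub repository using podman (the account, image and repository names below are placeholders, and prior authentication using podman login is assumed):

$ podman login docker.io
$ podman push localhost/<image name> docker://docker.io/<account>/<repository>:latest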

1.5  Container Networking

By default, containers are connected to a network using a Container Networking Interface (CNI) bridged network stack. In the bridged configuration, all the containers running on a server belong to the same subnet and, as such, are able to communicate with each other. The containers are also connected to the external network by bridging the host system’s network connection. Similarly, the host is able to access the containers via a virtual network interface (usually named cni0) which will have been created as part of the container tool installation.
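The presence of this virtual interface on the host can be checked with the ip command (assuming the interface is indeed named cni0 as described above; on some installations it may carry a different name such as cni-podman0):

$ ip addr show cni0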

1.6  Summary

Linux Containers offer a lightweight alternative to virtualization and take advantage of the structure of the Linux and Unix operating systems. Linux Containers essentially share the kernel of the host operating system, with each container having its own root file system containing the files, libraries and applications. Containers are highly efficient and scalable and provide an ideal platform for building and deploying modular enterprise level solutions. A number of tools and platforms are available for building, deploying and managing containers including third-party solutions and those provided with Ubuntu.

Managing KVM on Ubuntu 20.04 using the virsh Command-Line Tool

In previous chapters we have covered the installation and configuration of KVM-based guest operating systems on Ubuntu. This chapter is dedicated to exploring some additional areas of the virsh tool that have not been covered in previous chapters, and how it may be used to manage KVM-based guest operating systems from the command-line.

1.1  The virsh Shell and Command-Line

The virsh tool is both a command-line tool and an interactive shell environment. When used in the command-line mode, the command is simply issued at the command prompt with sets of arguments appropriate to the task to be performed.

To use the options as command-line arguments, use them at a terminal command prompt as shown in the following example:

# virsh <option>

The virsh tool, when used in shell mode, provides an interactive environment from which to issue sequences of commands.

To run commands in the virsh shell, run the following command:

# virsh
Welcome to virsh, the virtualization interactive terminal.
 
Type:  'help' for help with commands
       'quit' to quit
 
virsh #

At the virsh # prompt enter the options you wish to run. The following virsh session, for example, lists the current virtual machines, starts a virtual machine named FedoraVM and then obtains another listing to verify the VM is running:

# virsh 
Welcome to virsh, the virtualization interactive terminal.
 
Type:  'help' for help with commands
       'quit' to quit
 
virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL8VM                       running
 9     CentOS7VM                     running
 
virsh # start FedoraVM
Domain FedoraVM started
 
virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL8VM                       running
 9     CentOS7VM                     running
10     FedoraVM                      running
 
virsh #

The virsh tool supports a wide range of commands, a full listing of which may be obtained using the help option:

# virsh help

Additional details on the syntax for each command may be obtained by specifying the command after the help directive:

# virsh help restore
  NAME
    restore - restore a domain from a saved state in a file
 
  SYNOPSIS
    restore <file> [--bypass-cache] [--xml <string>] [--running] [--paused]
 
  DESCRIPTION
    Restore a domain.
 
  OPTIONS
    [--file] <string>  the state to restore
    --bypass-cache   avoid file system cache when restoring
    --xml <string>   filename containing updated XML for the target
    --running        restore domain into running state
    --paused         restore domain into paused state

In the remainder of this chapter we will look at some of these commands in more detail.

1.2  Listing Guest System Status

The status of the guest systems on an Ubuntu virtualization host may be viewed at any time using the list option of the virsh tool. For example:

# virsh list

The above command will display output containing a line for each guest similar to the following:

virsh # list
 Id    Name                           State
----------------------------------------------------
 8     RHEL8VM                       running
 9     CentOS7VM                     running
10     FedoraVM                      running

1.3  Starting a Guest System

A guest operating system can be started using the virsh tool combined with the start option followed by the name of the guest operating system to be launched. For example:

# virsh start myGuestOS

1.4  Shutting Down a Guest System

The shutdown option of the virsh tool, as the name suggests, is used to shutdown a guest operating system:

# virsh shutdown guestName

Note that the shutdown option allows the guest operating system to perform an orderly shutdown when it receives the shutdown instruction. To instantly stop a guest operating system the destroy option may be used (with the risk of file system damage and data loss):

# virsh destroy guestName

1.5  Suspending and Resuming a Guest System

A guest system can be suspended and resumed using the virsh tool’s suspend and resume options. For example, to suspend a specific system:

# virsh suspend guestName

Similarly, to resume the paused system:

# virsh resume guestName

Note that a suspended session will be lost if the host system is rebooted. Also, be aware that a suspended system continues to reside in memory. To save a session such that it no longer takes up memory and can be restored to its exact state (even after a reboot), it is necessary to save and restore the guest.

1.6  Saving and Restoring Guest Systems

A running guest operating system can be saved and restored using the virsh utility. When saved, the current status of the guest operating system is written to disk and removed from system memory. A saved system may subsequently be restored at any time (including after a host system reboot).

To save a guest:

# virsh save guestName path_to_save_file

To restore a saved guest operating system session:

# virsh restore path_to_save_file

1.7  Rebooting a Guest System

To reboot a guest operating system:

# virsh reboot guestName

1.8  Configuring the Memory Assigned to a Guest OS

To configure the memory assigned to a guest OS, use the setmem option of the virsh command. Note that virsh interprets the size as kibibytes unless a unit suffix is supplied. For example, the following command reduces the memory allocated to a guest system to 256MB:

# virsh setmem guestName 256M

Note that acceptable memory settings must fall within the memory available to the current domain. This may be increased using the setmaxmem option.
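A brief sketch of increasing the maximum memory follows (as with setmem, a unit suffix avoids the default kibibyte interpretation; the --config option writes the change to the persistent configuration and typically takes effect the next time the guest is started):

# virsh setmaxmem guestName 512M --config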

1.9  Summary

The virsh tool provides a wide range of options for creating, monitoring and managing guest virtual machines. As outlined in this chapter, the tool can be used in either command-line or interactive modes.

Creating an Ubuntu 20.04 KVM Networked Bridge Interface

By default, the KVM virtualization environment on Ubuntu creates a virtual network to which virtual machines may connect. It is also possible to configure a direct connection using a MacVTap driver, though as outlined in the chapter entitled “An Overview of Virtualization Techniques”, this approach does not allow the host and guest systems to communicate.

The goal of this chapter is to cover the steps involved in creating a network bridge on Ubuntu enabling guest systems to share one or more of the host system’s physical network connections while still allowing the guest and host systems to communicate with each other.

In the remainder of this chapter we will explain how to configure an Ubuntu network bridge for use by KVM-based guest operating systems.

1.1  Identifying the Network Management System

The steps to create a network bridge will differ depending on whether the host system is using Network Manager or Netplan for network management. If you installed Ubuntu using the desktop installation media then you most likely have a system running Network Manager. If, on the other hand, you installed from the server or Network installer image, then your system is most likely using Netplan.

To identify which networking system is being used, open a Terminal window and run the following command:

# networkctl status

If the above command generates output similar to the following then the system is using Netplan:

# networkctl status
●          State: routable                             
         Address: 192.168.86.242 on enp0s3             
                  fe80::a00:27ff:fe52:69a9 on enp0s3   
         Gateway: 192.168.86.1 (Google, Inc.) on enp0s3
             DNS: 192.168.86.1                         
  Search Domains: lan                                  

May 04 15:46:09 demo systemd[1]: Starting Network Service...
May 04 15:46:09 demo systemd-networkd[625]: Enumeration completed
.
.

If, on the other hand, output similar to the following appears, then Netplan is not running:

# networkctl status -a
WARNING: systemd-networkd is not running, output will be incomplete.

Failed to query link bit rates: Unit dbus-org.freedesktop.network1.service not found.
.
.

To identify if NetworkManager is running, change directory to /etc/netplan. If you are using NetworkManager this directory will contain a file named 01-network-manager-all.yaml with the following content:

# Let NetworkManager manage all devices on this system
network:
  version: 2
  renderer: NetworkManager

Having identified your network management system, follow the corresponding steps in the remainder of this chapter.

1.2  Getting the Netplan Network Settings

Before creating the network bridge on a Netplan based system, begin by obtaining information about the current network configuration using the networkctl command as follows:

# networkctl status -a
● 1: lo
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: loopback
           State: carrier (unmanaged)
         Address: 127.0.0.1
                  ::1
 
● 2: eno1
       Link File: /lib/systemd/network/99-default.link
    Network File: /run/systemd/network/10-netplan-eno1.network
            Type: ether
           State: routable (configured)
            Path: pci-0000:00:19.0
          Driver: e1000e
          Vendor: Intel Corporation
           Model: 82579LM Gigabit Network Connection (Lewisville)
      HW Address: fc:4d:d4:3b:e4:0f (Universal Global Scientific Industrial Co., Ltd.)
         Address: 192.168.86.214
                  fe80::fe4d:d4ff:fe3b:e40f
         Gateway: 192.168.86.1
             DNS: 192.168.86.1
  Search Domains: lan
 
● 3: virbr0
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: ether
           State: no-carrier (unmanaged)
          Driver: bridge
      HW Address: 52:54:00:2d:f4:2a
         Address: 192.168.122.1
 
● 4: virbr0-nic
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: ether
           State: off (unmanaged)
          Driver: tun
      HW Address: 52:54:00:2d:f4:2a

In the above output we can see that the host has an Ethernet network connection established via a device named eno1 and the default bridge interface named virbr0 which provides access to the NAT-based virtual network to which KVM guest systems are connected by default. The output also lists the loopback interface (lo).

1.3  Creating a Netplan Network Bridge

The creation of a network bridge on an Ubuntu system using Netplan involves the addition of an entry to the /etc/netplan/01-netcfg.yaml or /etc/netplan/00-installer-config.yaml file. Using your preferred editor, open the file and add a bridges entry beneath the current content as follows (replacing eno1 with the connection name on your system):

network:
  ethernets:
    eno1:
      dhcp4: true
  version: 2

  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: yes

Note that the bridges: line must be indented by two spaces. Without this indentation, the netplan tool will fail with the following error when run:

Error in network definition: unknown key ‘bridges’

Once the changes have been made, apply them using the following command:

# netplan apply

Note that this command will switch the network from the current connection to the bridge resulting in the system being assigned a different IP address by the DHCP server. If you are connected via a remote SSH session this will cause you to lose contact with the server. If you would prefer to assign a static IP address to the bridge connection, modify the bridge declaration as follows (making sure to turn off DHCP for both IPv4 and IPv6):

network:
  version: 2
  renderer: networkd
  ethernets:
    eno1:
      dhcp4: no
      dhcp6: no
 
  bridges:
    br0:
      interfaces: [eno1]
      dhcp4: no
      addresses: [192.168.86.230/24]
      gateway4: 192.168.86.1
      nameservers:
        addresses: [192.168.86.1]

After running the netplan apply command, check that the bridge is now configured and ready for use within KVM virtual machines:

# networkctl status -a
● 1: lo
       Link File: /lib/systemd/network/99-default.link
    Network File: n/a
            Type: loopback
           State: carrier (unmanaged)
         Address: 127.0.0.1
                  ::1
 
● 2: eno1
       Link File: /lib/systemd/network/99-default.link
    Network File: /run/systemd/network/10-netplan-eno1.network
            Type: ether
           State: carrier (configured)
            Path: pci-0000:00:19.0
          Driver: e1000e
          Vendor: Intel Corporation
           Model: 82579LM Gigabit Network Connection (Lewisville)
      HW Address: fc:4d:d4:3b:e4:0f (Universal Global Scientific Industrial Co.,
.
.
● 5: br0
       Link File: /lib/systemd/network/99-default.link
    Network File: /run/systemd/network/10-netplan-br0.network
            Type: ether
           State: routable (configured)
          Driver: bridge
      HW Address: b6:56:ed:e9:d5:75
         Address: 192.168.86.230
                  fe80::b456:edff:fee9:d575
         Gateway: 192.168.86.1
             DNS: 192.168.86.1
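If required, the interfaces attached to the bridge can also be confirmed using the iproute2 bridge utility (a brief sketch; the exact output will vary between systems):

# bridge link show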

1.4  Getting the Current Network Manager Settings

A network bridge can be created using the NetworkManager command-line interface tool (nmcli). The NetworkManager is installed and enabled by default on Ubuntu desktop systems and is responsible for detecting and connecting to network devices in addition to providing an interface for managing networking configurations.

A list of current network connections on the host system can be displayed as follows:

# nmcli con show
NAME                UUID                                  TYPE      DEVICE
Wired connection 1  56f32c14-a4d2-32c8-9391-f51967efa173  ethernet  eno1
virbr0              59bf4111-e0d2-4e6c-b8d4-cb70fa6d695e  bridge    virbr0

In the above output we can see that the host has an Ethernet network connection established via a device named eno1 and the default bridge interface named virbr0 which provides access to the NAT-based virtual network to which KVM guest systems are connected by default.

Similarly, the following command can be used to identify the devices (both virtual and physical) that are currently configured on the system:

# nmcli device show
GENERAL.DEVICE:                         eno1
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         FC:4D:D4:3B:E4:0F
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     Wired connection 1
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         192.168.86.207/24
IP4.GATEWAY:                            192.168.86.1
IP4.ROUTE[1]:                           dst = 0.0.0.0/0, nh = 192.168.86.1, mt = 100
IP4.ROUTE[2]:                           dst = 192.168.86.0/24, nh = 0.0.0.0, mt = 100
IP4.ROUTE[3]:                           dst = 169.254.0.0/16, nh = 0.0.0.0, mt = 1000
IP4.DNS[1]:                             192.168.86.1
IP4.DOMAIN[1]:                          lan
IP6.ADDRESS[1]:                         fe80::d3e2:c3dc:b69b:cd30/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ff00::/8, nh = ::, mt = 256, table=255
IP6.ROUTE[2]:                           dst = fe80::/64, nh = ::, mt = 256
IP6.ROUTE[3]:                           dst = fe80::/64, nh = ::, mt = 100
 
GENERAL.DEVICE:                         virbr0
GENERAL.TYPE:                           bridge
GENERAL.HWADDR:                         52:54:00:9D:19:E5
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     virbr0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
IP4.ADDRESS[1]:                         192.168.122.1/24
IP4.GATEWAY:                            --
IP4.ROUTE[1]:                           dst = 192.168.122.0/24, nh = 0.0.0.0, mt = 0
IP6.GATEWAY:                            --
.
.

The above partial output indicates that the host system on which the command was executed contains a physical Ethernet device (eno1) and the virtual bridge (virbr0).

The virsh command may also be used to list the virtual networks currently configured on the system:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

At this point, the only virtual network present is the default network provided by virbr0. Now that some basic information about the current network configuration has been obtained, the next step is to create a network bridge connected to the physical network device (in this case the device named eno1).

1.5  Creating a Network Manager Bridge from the Command-Line

The first step in creating the network bridge is to add a new connection to the network configuration. This can be achieved using the nmcli tool, specifying that the connection is to be a bridge and providing names for both the connection and the interface:

# nmcli con add ifname br0 type bridge con-name br0

Once the connection has been added, a bridge slave interface needs to be established between physical device eno1 (the slave) and the bridge connection br0 (the master) as follows:

# nmcli con add type bridge-slave ifname eno1 master br0

At this point, the NetworkManager connection list should read as follows:

# nmcli con show
NAME                UUID                                  TYPE      DEVICE 
Wired connection 1  56f32c14-a4d2-32c8-9391-f51967efa173  ethernet  eno1   
br0                 8416607e-c6c1-4abb-8583-1661689b95a9  bridge    br0    
virbr0              dffab88d-1588-4e69-8d1c-2148090aa5ee  bridge    virbr0 
bridge-slave-eno1   43383092-6434-448f-b735-0cbea39eb38f  ethernet  --

The next step is to start up the bridge interface. If the steps to configure the bridge are being performed over a network connection (i.e. via SSH) this step can be problematic because the current eno1 connection must be closed down before the bridge connection can be brought up. This means that the current connection will be lost before the bridge connection can be enabled to replace it, potentially leaving the remote host unreachable.

If you are accessing the host system remotely this problem can be avoided by creating a shell script to perform the network changes. This will ensure that the bridge interface is enabled after the eno1 interface is brought down, allowing you to reconnect to the host after the changes are complete. Begin by creating a shell script file named bridge.sh containing the following commands:

#!/bin/bash
nmcli con down "Wired connection 1"
nmcli con up br0

Once the script has been created, execute it as follows:

# sh ./bridge.sh

When the script executes, the connection will be lost when the eno1 connection is brought down. After waiting a few seconds, however, it should be possible to reconnect to the host once the br0 connection has been activated.

If you are working locally on the host, the two nmcli commands can be run within a terminal window without any risk of losing connectivity:

# nmcli con down "Wired connection 1"
# nmcli con up br0

Once the bridge is up and running, the connection list should now include both the bridge and the bridge-slave connections:

# nmcli con show
NAME                UUID                                  TYPE      DEVICE 
br0                 8416607e-c6c1-4abb-8583-1661689b95a9  bridge    br0    
bridge-slave-eno1   43383092-6434-448f-b735-0cbea39eb38f  ethernet  eno1   
virbr0              dffab88d-1588-4e69-8d1c-2148090aa5ee  bridge    virbr0 
Wired connection 1  56f32c14-a4d2-32c8-9391-f51967efa173  ethernet  --

Note that the Wired connection 1 connection is still listed but is actually no longer active. To exclude inactive connections from the list, simply use the --active flag when requesting the list:

# nmcli con show --active
NAME               UUID                                  TYPE      DEVICE 
br0                8416607e-c6c1-4abb-8583-1661689b95a9  bridge    br0    
bridge-slave-eno1  43383092-6434-448f-b735-0cbea39eb38f  ethernet  eno1   
virbr0             dffab88d-1588-4e69-8d1c-2148090aa5ee  bridge    virbr0

1.6  Declaring the KVM Bridged Network

At this point, the bridge connection is present on the system but is not visible to the KVM environment. Running the virsh command should still list the default network as being the only available network option:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 default              active     yes           yes

Before the bridge can be used by a virtual machine it must be declared and added to the KVM network configuration. This involves the creation of a definition file and, once again, the use of the virsh command-line tool.

Begin by creating a definition file for the bridge network named bridge.xml that reads as follows:

<network>
  <name>br0</name>
  <forward mode="bridge"/>
  <bridge name="br0" />
</network>

Next, use the file to define the new network:

# virsh net-define ./bridge.xml

Once the network has been defined, start it and, if required, configure it to autostart each time the system reboots:

# virsh net-start br0
# virsh net-autostart br0

Once again list the networks to verify that the bridge network is now accessible within the KVM environment:

# virsh net-list --all
 Name                 State      Autostart     Persistent
----------------------------------------------------------
 br0                  active     yes           yes
 default              active     yes           yes

1.7  Using a Bridge Network in a Virtual Machine

To create a virtual machine that makes use of the bridge network, use the virt-install --network option and specify the br0 bridge name. For example:

# virt-install --name MyFedora --memory 1024 --disk path=/tmp/myFedora.img,size=10 --network network=br0 --os-variant fedora28 --cdrom /home/demo/Downloads/Fedora-Server-dvd-x86_64-29-1.2.iso 

When the guest operating system is running it will appear on the same physical network as the host system and will no longer be on the NAT-based virtual network.

To modify an existing virtual machine so that it uses the bridge, use the virsh edit command. This command loads the XML definition file into an editor where changes can be made and saved:

# virsh edit GuestName

By default, the file will be loaded into the vi editor. To use a different editor, simply change the $EDITOR environment variable, for example:

# export EDITOR=gedit

To change from the default virtual network, locate the <interface> section of the file which will read as follows for a NAT based configuration:

<interface type='network'>
      <mac address='<your mac address here>'/>
      <source network='default'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

Alternatively, if the virtual machine was using a direct connection, the entry may read as follows:

<interface type='direct'>
      <mac address='<your mac address here>'/>
      <source dev='eno1' mode='vepa'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>

To use the bridge, change the source network property to read as follows before saving the file:

<interface type='network'>
      <mac address='<your mac address here>'/>
      <source network='br0'/>
      <model type='virtio'/>
      <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>

If the virtual machine is already running, the change will not take effect until it is restarted.
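For example, the restart can be performed using the virsh commands covered earlier in this book:

# virsh shutdown GuestName
# virsh start GuestName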

1.8  Creating a Bridge Network using nm-connection-editor

If either local or remote desktop access is available on the host system, much of the bridge configuration process can be performed using the nm-connection-editor graphical tool. To use this tool, open a Terminal window within the desktop and enter the following command:

# nm-connection-editor

When the tool has loaded, the window shown in Figure 28-1 will appear listing the currently configured network connections (essentially the same output as that generated by the nmcli con show command):

Figure 28-1

To create a new connection, click on the ‘+’ button located in the bottom left-hand corner of the window. From the resulting dialog (Figure 28-2) select the Bridge option from the menu:

Figure 28-2

With the bridge option selected, click on the Create… button to proceed to the bridge configuration screen. Begin by changing both the connection and interface name fields to br0 before clicking on the Add button located to the right of the Bridge connections list as highlighted in Figure 28-3:

Figure 28-3

From the connection type dialog (Figure 28-4) change the menu setting to Ethernet before clicking on the Create… button:

Figure 28-4

Another dialog will now appear in which the bridge slave connection needs to be configured. Within this dialog, select the physical network to which the bridge is to connect (for example eno1) from the Device menu:

Figure 28-5

Click on the Save button to apply the changes and return to the Editing br0 dialog (as illustrated in Figure 28-3 above). Within this dialog, click on the Save button to create the bridge. On returning to the main window, the new bridge and slave connections should now be listed:

Figure 28-6

All that remains is to bring down the original eno1 connection and bring up the br0 connection using the steps outlined in the previous chapter (remembering to perform these steps in a shell script if the host is being accessed remotely):

# nmcli con down "Wired connection 1"
# nmcli con up br0

It will also be necessary, as it was when creating the bridge using the command-line tool, to add this bridge to the KVM network configuration. To do so, simply repeat the steps outlined in the section above entitled “Declaring the KVM Bridged Network”. Once this step has been taken, the bridge is ready to be used by guest virtual machines.

1.9  Summary

By default, the KVM virtualization environment on Ubuntu creates a virtual network to which virtual machines may connect. It is also possible to configure a direct connection using a MacVTap driver, though as outlined in the chapter entitled “An Overview of Virtualization Techniques”, this approach does not allow the host and guest systems to communicate. If the guests are required to appear on the network with their own IP addresses, the guests need to be configured to share the physical network interface of the host system. As outlined in this chapter, this can be achieved using either the nmcli or nm-connection-editor tools to create a networked bridge interface.

Creating Ubuntu 20.04 KVM Virtual Machines with virt-install and virsh

In the previous chapter we explored the creation of KVM guest operating systems on an Ubuntu host using Cockpit and the virt-manager graphical tool. In this chapter we will turn our attention to the creation of KVM-based virtual machines using the virt-install and virsh command-line tools. These tools provide all the capabilities of the virt-manager and Cockpit options with the added advantage that they can be used within scripts to automate virtual machine creation. In addition, the virsh command allows virtual machines to be created based on a specification contained within a configuration file.

The virt-install tool is supplied to allow new virtual machines to be created by providing a list of command-line options. This chapter assumes that the necessary KVM tools are installed. For details on these requirements read the chapter entitled “Installing KVM Virtualization on Ubuntu”.

1.1  Running virt-install to build a KVM Guest System

The virt-install utility accepts a wide range of command-line arguments that are used to provide configuration information related to the virtual machine being created. Some of these command-line options are mandatory (specifically name, memory and disk storage must be provided) while others are optional.

At a minimum, a virt-install command will typically need the following arguments:

  • --name – The name to be assigned to the virtual machine.
  • --memory – The amount of memory to be allocated to the virtual machine.
  • --disk – The name and location of an image file to be used as storage for the virtual machine. This file will be created by virt-install during the virtual machine creation unless the --import option is specified to indicate an existing image file is to be used.
  • --cdrom or --location – Specifies the local path or the URL of a remote ISO image containing the installation media for the guest operating system.

A summary of all the arguments available for use when using virt-install can be found in the man page:

$ man virt-install

1.2  An Example Ubuntu virt-install Command

With reference to the above command-line argument list, we can now look at an example command-line construct using the virt-install tool.

Note that in order to be able to display the virtual machine and complete the installation, a virt-viewer instance will need to be connected to the virtual machine after it is started by the virt-install utility. By default, virt-install will attempt to launch virt-viewer automatically once the virtual machine starts running. If virt-viewer is not available, virt-install will wait until a virt-viewer connection is established. The virt-viewer session may be running locally on the host system if it has a graphical desktop, or a connection may be established from a remote client as outlined in the chapter entitled “Creating KVM Virtual Machines using Cockpit and virt-manager”.
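By way of illustration, a remote virt-viewer connection of the kind described above might take a form similar to the following (the user and host names are placeholders, and SSH access to the virtualization host is assumed):

$ virt-viewer --connect qemu+ssh://root@kvmhost/system MyFedora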

The following command creates a new KVM virtual machine configured to run Fedora using KVM para-virtualization. It creates a new 10GB disk image, assigns 1024MB of RAM to the virtual machine and configures a virtual CD device for the installation media ISO image:

# virt-install --name MyFedora --memory 1024 --disk path=/tmp/myFedora.img,size=10 --network network=default --os-variant fedora28 --cdrom /tmp/Fedora-Server-dvd-x86_64.iso

As the creation process runs, the virt-install command will display status updates of the creation progress:

Starting install...
Allocating 'MyFedora.img'                                |  10 GB  00:00:01     
Domain installation still in progress. Waiting for installation to complete.

Once the guest system has been created, the virt-viewer screen will appear containing the operating system installer loaded from the specified installation media:

Figure 27-1

From this point, follow the standard installation procedure for the guest operating system.

1.3  Starting and Stopping a Virtual Machine from the Command-Line

Having created the virtual machine from the command-line it stands to reason that you may also need to start it from the command-line in the future. This can be achieved using the virsh command-line utility, referencing the name assigned to the virtual machine during the creation process. For example:

# virsh start MyFedora

Similarly, the virtual machine may be sent a shutdown signal as follows:

# virsh shutdown MyFedora

If the virtual machine fails to respond to the shutdown signal and does not begin a graceful shutdown the virtual machine may be destroyed (with the attendant risks of data loss) using the destroy directive:

# virsh destroy MyFedora

1.4  Creating a Virtual Machine from a Configuration File

The virsh create command can take as an argument the name of a configuration file on which to base the creation of a new virtual machine. The configuration file uses XML format. Arguably the easiest way to create a configuration file is to dump out the configuration of an existing virtual machine and modify it for the new one. This can be achieved using the virsh dumpxml command. The following command outputs the configuration data for a virtual machine domain named MyFedora to a file named MyFedora.xml:

# virsh dumpxml MyFedora > MyFedora.xml

Once the file has been generated, load it into an editor to review and change the settings for the new virtual machine.

At the very least, the <name>, <uuid> and image file path <source file> must be changed in order to avoid conflict with the virtual machine from which the configuration was taken. In the case of the UUID, this line can simply be deleted from the file.
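As a guide, the elements to be edited within the dumped XML file will look similar to the following fragment (the values shown correspond to the MyFedora example used earlier in this chapter, with unrelated elements omitted):

<name>MyFedora</name>
<uuid>...</uuid>
.
.
<disk type='file' device='disk'>
  ...
  <source file='/tmp/myFedora.img'/>
  ...
</disk>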

The virtualization type, memory allocation and number of CPUs to name but a few options may also be changed if required. Once the file has been modified, the new virtual machine may be created as follows:

# virsh create MyFedora.xml

1.5  Summary

KVM provides the virt-install and virsh command-line tools as a quick and efficient alternative to using the Cockpit and virt-manager tools to create and manage virtual machine instances. These tools have the advantage that they can be used from within scripts to automate the creation and management of virtual machines. The virsh command also includes the option to create VM instances from XML-based configuration files.

An Overview of Ubuntu 20.04 Virtualization Techniques

Virtualization is generically defined as the ability to run multiple operating systems simultaneously on a single computer system. While not necessarily a new concept, virtualization has come to prominence in recent years because it provides a way to fully utilize the CPU and resource capacity of a server system while providing stability (in that if one virtualized guest system crashes, the host and any other guest systems continue to run).

Virtualization is also useful in terms of trying out different operating systems without having to configure dual boot environments. For example, you can run Windows in a virtual machine without having to re-partition the disk, shut down Ubuntu and then boot from Windows. You simply start up a virtualized version of Windows as a guest operating system. Similarly, virtualization allows you to run other Linux distributions from within an Ubuntu system, providing concurrent access to both operating systems.

When deciding on the best approach to implementing virtualization it is important to have a clear understanding of the different virtualization solutions that are currently available. The purpose of this chapter, therefore, is to describe in general terms the virtualization techniques in common use today.

1.1  Guest Operating System Virtualization

Guest OS virtualization, also referred to as application-based virtualization, is perhaps the easiest concept to understand. In this scenario the physical host computer system runs a standard unmodified operating system such as Windows, Linux, UNIX or macOS. Running on this operating system is a virtualization application which executes in much the same way as any other application such as a word processor or spreadsheet would run on the system. It is within this virtualization application that one or more virtual machines are created to run the guest operating systems on the host computer.

The virtualization application is responsible for starting, stopping and managing each virtual machine and essentially controlling access to physical hardware resources on behalf of the individual virtual machines. The virtualization application also engages in a process known as binary rewriting which involves scanning the instruction stream of the executing guest system and replacing any privileged instructions with safe emulations. This has the effect of making the guest system think it is running directly on the system hardware, rather than in a virtual machine within an application.

The following figure provides an illustration of guest OS based virtualization:

Figure 24-1

As outlined in the above diagram, the guest operating systems operate in virtual machines within the virtualization application which, in turn, runs on top of the host operating system in the same way as any other application. Clearly, the multiple layers of abstraction between the guest operating systems and the underlying host hardware are not conducive to high levels of virtual machine performance. This technique does, however, have the advantage that no changes are necessary to either host or guest operating systems and no special CPU hardware virtualization support is required.

1.2  Hypervisor Virtualization

In hypervisor virtualization, the hypervisor handles resource and memory allocation for the virtual machines in addition to providing interfaces for higher-level administration and monitoring tools. Hypervisor-based solutions are categorized as being either Type-1 or Type-2.

Type-2 hypervisors (sometimes referred to as hosted hypervisors) are installed as software applications that run on top of the host operating system, providing virtualization capabilities by coordinating access to resources such as the CPU, memory and network for guest virtual machines. Figure 24-2 illustrates the typical architecture of a system using Type-2 hypervisor virtualization:

Figure 24-2

To understand how Type-1 hypervisors work, it helps to understand a little about Intel x86 processor architecture. The x86 family of CPUs provides a range of protection levels known as rings in which code can execute. Ring 0 has the highest level privilege and it is in this ring that the operating system kernel normally runs. Code executing in ring 0 is said to be running in system space, kernel mode or supervisor mode. All other code such as applications running on the operating system operate in less privileged rings, typically ring 3.

In contrast to Type-2 hypervisors, Type-1 hypervisors (also referred to as bare metal or native hypervisors) run directly on the hardware of the host system in ring 0. Clearly, with the hypervisor occupying ring 0 of the CPU, the kernels for any guest operating systems running on the system must run in less privileged CPU rings. Unfortunately, most operating system kernels are written explicitly to run in ring 0 for the simple reason that they need to perform tasks that are only available in that ring, such as the ability to execute privileged CPU instructions and directly manipulate memory. A number of different solutions to this problem have been devised in recent years, each of which is described below:

1.2.1  Paravirtualization

Under paravirtualization, the kernel of the guest operating system is modified specifically to run on the hypervisor. This typically involves replacing any privileged operations that will only run in ring 0 of the CPU with calls to the hypervisor (known as hypercalls). The hypervisor, in turn, performs the task on behalf of the guest kernel. This typically limits support to open source operating systems such as Linux, which may be freely altered, and proprietary operating systems where the owners have agreed to make the necessary code modifications to target a specific hypervisor. These issues notwithstanding, the ability of the guest kernel to communicate directly with the hypervisor results in greater performance levels compared to other virtualization approaches.

1.2.2  Full Virtualization

Full virtualization provides support for unmodified guest operating systems. The term unmodified refers to operating system kernels which have not been altered to run on a hypervisor and therefore still execute privileged operations as though running in ring 0 of the CPU. In this scenario, the hypervisor provides CPU emulation to handle and modify privileged and protected CPU operations made by unmodified guest operating system kernels. Unfortunately this emulation process requires both time and system resources to operate resulting in inferior performance levels when compared to those provided by paravirtualization.

1.2.3  Hardware Virtualization

Hardware virtualization leverages virtualization features built into the latest generations of CPUs from both Intel and AMD. These technologies, known as Intel VT and AMD-V respectively, provide extensions necessary to run unmodified guest virtual machines without the overheads inherent in full virtualization CPU emulation. In very simplistic terms these processors provide an additional privilege mode (referred to as ring -1) above ring 0 in which the hypervisor can operate, thereby leaving ring 0 available for unmodified guest operating systems. The following figure illustrates the Type-1 hypervisor approach to virtualization:

Figure 24-3

As outlined in the above illustration, in addition to the virtual machines, an administrative operating system and/or management console also runs on top of the hypervisor allowing the virtual machines to be managed by a system administrator.

1.3  Virtual Machine Networking

Virtual machines will invariably need to be connected to a network to be of any practical use. One option is for the guest to be connected to a virtual network running within the operating system of the host computer. In this configuration any virtual machines on the virtual network can see each other but access to the external network is provided by Network Address Translation (NAT). When using the virtual network and NAT, each virtual machine is represented on the external network (the network to which the host is connected) using the IP address of the host system. This is the default behavior for KVM virtualization on Ubuntu and generally requires no additional configuration. Typically, a single virtual network is created by default, represented by the name default and the device virbr0.
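
To verify that the default network is present and active, the libvirt command-line tools may be used. The following is a minimal sketch (the network and bridge names shown are the usual defaults but may differ on your system):

# virsh net-list --all
# virsh net-info default
# ip addr show virbr0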

In order for guests to appear as individual and independent systems on the external network (i.e. with their own IP addresses), they must be configured to share a physical network interface on the host. The quickest way to achieve this is to configure the virtual machine to use the “direct connection” network configuration option (also referred to as MacVTap) which will provide the guest system with an IP address on the same network as the host. Unfortunately, while this gives the virtual machine access to other systems on the network, it is not possible to establish a connection between the guest and the host when using the MacVTap driver.

A better option is to configure a network bridge interface on the host system to which the guests can connect. This provides the guest with an IP address on the external network while also allowing the guest and host to communicate, a topic which is covered in the chapter entitled “Creating an Ubuntu KVM Networked Bridge Interface”.

1.4  Summary

Virtualization is defined as the ability to run multiple guest operating systems within a single host operating system. A number of approaches to virtualization have been developed including guest operating system and hypervisor virtualization. Hypervisor virtualization falls into two categories known as Type-1 and Type-2. Type-1 virtualization solutions are categorized as paravirtualization, full virtualization and hardware virtualization, the latter making use of special virtualization features of some Intel and AMD processor models.

Virtual machine guest operating systems have a number of options in terms of networking including NAT, direct connection (MacVTap) and network bridge configurations.

Creating Ubuntu 20.04 KVM Virtual Machines using Cockpit and virt-manager

KVM-based virtual machines can easily be configured on Ubuntu using either the virt-install command-line tool, the virt-manager GUI tool or the Virtual Machines module of the Cockpit web console. For the purposes of this chapter we will use Cockpit and the virt-manager tool to install a Fedora distribution as a KVM guest on an Ubuntu host.

The command-line approach to virtual machine creation will be covered in the next chapter entitled “Creating KVM Virtual Machines with virt-install and virsh”.

1.1  Installing the Cockpit Virtual Machines Module

The Virtual Machines module is not necessarily included in a standard Cockpit installation. Assuming that Cockpit is installed and configured, the module may be installed as follows:

# apt install cockpit-machines

Once installed, the Virtual Machines option (marked A in Figure 26-1) will appear in the navigation panel next time you log into the Cockpit interface:

Figure 26-1

1.2  Creating a Virtual Machine in Cockpit

To create a virtual machine in Cockpit, simply click on the Create VM button marked B in Figure 26-1 to display the creation dialog.

Within the dialog, enter a name for the machine and choose whether the installation media is in the form of an ISO accessible via a URL or a local filesystem path. Ideally, also select the vendor and operating system type information for the guest. While not essential, this will aid the system in optimizing the virtual machine for the guest.

Also specify the size of the virtual disk drive to be used for the operating system installation and the amount of memory to be allocated to the virtual machine:

Figure 26-2

Note that Cockpit provides the choice of running the guest with a Session or System connection. If the System option is selected, the guest will connect to the system instance of the libvirtd service, which is already running in the background with root privileges. The Session option, however, starts a new libvirtd service that is owned by the current user and connects the guest to it. A session guest will, by default, use a storage pool that is local to the user’s account (for example /home/demo/.local/share/libvirt/images) and will be accessible only to the owner. A system guest, on the other hand, will be accessible to all users with appropriate privileges and will, by default, use storage located in /var/lib/libvirt/images.
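
The same distinction applies when working with libvirt from the command-line. As a rough guide, the following virsh commands (run as root and as an ordinary user respectively) list the virtual machines known to the system and session instances:

# virsh --connect qemu:///system list --all
$ virsh --connect qemu:///session list --all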

For this example, select the System option, leave the Immediately Start VM option unselected and, once the new virtual machine has been configured, click on the Create button to build the virtual machine. After the creation process is complete, the new VM will appear in Cockpit as shown in Figure 26-3:

Figure 26-3

As described in “An Overview of Virtualization Techniques”, KVM provides virtual machines with a number of options in terms of network configuration. To view and change the network settings of a virtual machine, click on the Network interfaces tab as shown in Figure 26-4 followed by the Edit button located next to the network entry:

Figure 26-4

In the resulting dialog, the Network Type menu may be used to change the type of network connection, for example from virtual network (NAT) to direct (MacVTap).

1.3  Starting the Installation

To start the new virtual machine and begin installing the guest operating system from the designated installation media, click on the Install button highlighted in Figure 26-3 above. Cockpit will start the virtual machine and switch to the Consoles view where the guest OS screen will appear:

Figure 26-5

If the installation fails, check the message to see if it reads as follows:

unsupported configuration: CPU mode ‘custom’ for x86_64 kvm domain on x86_64 host is not supported by hypervisor

To resolve this issue, delete the newly created virtual machine, reboot the system and then recreate the machine.

Alternatively, check whether the message reads as follows:

Could not open ‘<path to iso image>’: Permission denied
Domain installation does not appear to have been successful.

This usually occurs because the QEMU emulator runs as a user named qemu which does not have access to the directory in which the ISO installation image is located. To resolve this issue, open a terminal window (or connect with SSH if the system is remote), change directory to the location of the ISO image file and add the qemu user to the access control list (ACL) of the parent directory as follows:

# cd /path/to/iso/directory
# setfacl --modify u:qemu:x ..

After making this change, check the setting as follows:

# getfacl ..
# file: ..
# owner: demo
# group: demo
user::rwx
user:qemu:--x
group::---
mask::--x
other::---

Once these changes have been made, click on the Install button once again to complete the installation.

To complete the installation, interact with the screen in the Consoles view just as you would if you were installing the operating system on physical hardware.

It is also possible to connect with and display the graphical console for the VM from outside the Cockpit browser session using the virt-viewer tool. To install virt-viewer on an Ubuntu system, run the following command:

# apt install virt-viewer

The virt-viewer tool is also available for Windows systems and can be downloaded from the following URL:

https://virt-manager.org/download/

To connect with a virtual machine running on the local host, simply run virt-viewer and select the virtual machine to which you wish to connect from the resulting dialog:

Figure 26-6

The above command will list system-based virtual machines. To list and access session-based guests, launch virt-viewer as follows:

$ virt-viewer --connect qemu:///session

Alternatively, it is also possible to specify the virtual machine name and bypass the selection dialog entirely, for example:

$ virt-viewer myFedoraGuest
$ virt-viewer --connect qemu:///session myFedoraGuest

To connect a virt-viewer instance to a virtual machine running on a remote host using SSH, the following command can be used:

$ virt-viewer --connect qemu+ssh://<user>@<host>/system <guest name>

For example:

$ virt-viewer --connect qemu+ssh://root@192.168.1.122/system MyFedoraGuest

When using this technique it is important to note that you will be prompted twice for the user password before the connection will be fully established.

Once the virtual machine has been created, the Cockpit interface can be used to monitor the machine and perform tasks such as rebooting, shutting down or deleting the guest system. An option is also included on the Disks panel to add additional disks to the virtual machine configuration.

1.4  Working with Storage Volumes and Storage Pools

When a virtual machine is created it will usually have associated with it at least one virtual disk drive. The images that represent these virtual disk drives are stored in storage pools. A storage pool can take the form of an existing directory on a local filesystem, a filesystem partition, physical disk device, Logical Volume Management (LVM) volume group or even a remote network file system (NFS).

Each storage pool is divided into one or more storage volumes. Storage volumes are typically individual image files, each representing a single virtual disk drive, but can also take the form of physical disk partitions, entire disk drives or LVM volume groups.

When the virtual machine was created using the previous steps, a default storage pool was created into which virtual machine images may be stored. This default storage pool occupies space on the root filesystem and can be reviewed from within the Cockpit Virtual Machines interface by selecting the Storage Pools option at the top of the panel marked C in Figure 26-1 above.

When selected, the screen shown in Figure 26-7 below will appear containing a list of storage pools currently configured on the system:

Figure 26-7

In the above example, the default storage pool is located on the root filesystem and stores the virtual machine image in the /var/lib/libvirt/images directory. To view the storage volumes contained within the pool, select the Storage Volumes tab highlighted in Figure 26-8:

Figure 26-8

In the case of the Fedora guest, the storage volume takes the form of an image file named myFedoraGuest.qcow2. To find out which storage volume a particular virtual machine uses, return to the main Virtual Machine Cockpit screen, select the virtual machine and display the Disks panel as shown in Figure 26-9:

Figure 26-9
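
The same information may also be obtained from the command-line. Assuming a system connection and the guest name used in this chapter, the following sketch lists the disks assigned to the virtual machine together with the volumes contained in the default pool:

# virsh domblklist myFedoraGuest
# virsh vol-list default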

Although using the default storage pool is acceptable for testing purposes and early experimentation, it is recommended that additional pools be created for general virtualization use. To create a new storage pool, display the Storage Pools screen within Cockpit and click on the Create New Storage Pool button to display the dialog shown in Figure 26-10:

Figure 26-10

In the above example, a new storage pool is being created named MyPool using a file system partition mounted as /MyPool within the local filesystem (the topic of disk drives, partitions and mount points is covered later in the chapter entitled “Adding a New Disk Drive to an Ubuntu System”). Once created, the pool will now be listed within the Cockpit storage pool screen and can be used to contain storage volumes as new virtual machines are created.

At the time of writing, it was not possible to create a new storage volume within a custom storage pool from within the Cockpit interface. It is, however, possible to do this from within the Virtual Machine manager as outlined in the following section.
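
If a command-line alternative is preferred, the same result can generally be achieved using virsh. The following sketch defines and starts a directory-based pool located at /MyPool and then creates a 10GB qcow2 volume within it (the pool path and volume name are examples only):

# virsh pool-define-as MyPool dir --target /MyPool
# virsh pool-build MyPool
# virsh pool-start MyPool
# virsh pool-autostart MyPool
# virsh vol-create-as MyPool MyVolume.qcow2 10G --format qcow2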

1.5  Creating a Virtual Machine using virt-manager

With the caveat that virt-manager may one day be discontinued once the Virtual Machines Cockpit extension is fully implemented, the remainder of this chapter will explore the use of this tool to create new virtual machines.

1.6  Starting the Virtual Machine Manager

Begin by launching Virtual Machine Manager from the command-line in a terminal window by running virt-manager. Once loaded, the virtual machine manager will prompt for the password of the currently active user prior to displaying the following screen:

Figure 26-11

The main screen lists the current virtual machines running on the system. By default the manager should be connected to the system libvirtd instance. If it is not, connect to the host system by right-clicking on the entry in the list and selecting Connect from the popup menu. To manage session-based virtual machines, select the File -> Add Connection… menu option to display the dialog shown in Figure 26-12:

Figure 26-12

Within this dialog, select QEMU/KVM user session from the Hypervisor menu and click on the Connect button. On returning to the main virt-manager screen, the user session hypervisor should now be listed:

Figure 26-13

To create a new virtual system, click on the new virtual machine button (the far left button on the toolbar) or right-click on the hypervisor entry and select New from the resulting menu to display the first screen of the New VM wizard. In the Name field enter a suitably descriptive name for the virtual system. On this screen, also select the location of the media from which the guest operating system will be installed. This can either be a CD or DVD drive, an ISO image file accessible to the local host, a network install using HTTP, FTP, NFS or PXE or the disk image from an existing virtual machine:

Figure 26-14

1.7  Configuring the KVM Virtual System

Clicking Forward will display a screen seeking additional information about the installation process. The screen displayed and information required will depend on selections made in the preceding screen. For example, if a CD, DVD or ISO was selected, this screen will ask for the specific location of the ISO file or physical media device. This screen also attempts to identify the type and version of the guest operating system to be installed (for example the Windows version or Linux distribution) based on the installation media specified. If it is unable to do so, uncheck the Automatically detect from installation media / source option, type in the first few characters of the operating system name and select an option from the list of possible matches:

Figure 26-15

Once these settings are complete, click the Forward button to configure CPU and memory settings. The optimal settings will depend on the number of CPUs and amount of physical memory present in the host together with the requirements of other applications and virtual machines that will run in parallel with the new virtual machine:

Figure 26-16

On the next screen, options are available to create an image disk of a specified size, select a pre-existing volume or to create a storage volume of a specified format (raw, vmdk, ISO, etc.). Unless you have a specific need to use a particular format (for example you might need to use vmdk to migrate to a VMware-based virtualization environment at a later date) or need to use a dedicated disk or partition, it is generally adequate to simply specify a size on this screen:

Figure 26-17

If the default settings are used here, the virtual machine will use a storage volume within the default storage pool for the virtual disk drive. To make use of the custom “MyPool” storage pool created earlier in the chapter, enable the Select or create custom storage option before clicking on the Manage… button.

In the storage volume dialog, select the MyPool entry in the left hand panel, followed by the + button in the main panel to create a new storage volume:

Figure 26-18

Note that the + button located in the bottom left-hand corner of the dialog may also be used to create new storage pools as an alternative to using the Cockpit interface.

In the configuration screen (Figure 26-19), name the storage volume, select the volume size and click on the Finish button to create the volume and assign it to the virtual machine:

Figure 26-19

Once these settings are configured, select the new volume and click on the Choose Volume button. Click the Forward button once more. The final screen displays a summary of the configuration. Review the information displayed. Advanced options are also available to change the virtual network configuration for the guest as shown in Figure 26-20:

Figure 26-20

1.8  Starting the KVM Virtual Machine

Click on the Finish button to begin the creation process. The virtualization manager will create the disk and configure the virtual machine before starting the guest system. The new virtual machine will appear in the main virt-manager window with the status set to Running as illustrated in Figure 26-21:

Figure 26-21

By default, the console for the virtual machine should appear in the virtual machine viewer window. To view the console of the running machine at any future time, ensure that it is selected in the virtual machine list and select the Open button from the toolbar. The virtual machine viewer should be ready for the installation process to begin:

Figure 26-22

From this point on, simply follow the operating system installation instructions to install the guest OS in the KVM virtual machine.

1.9  Summary

This chapter has outlined two different ways to create new KVM-based virtual machines on an Ubuntu host system. The first option covered involves the use of the Cockpit web-based interface to create and manage virtual machines. This has the advantage of not requiring access to a desktop environment running on the host system. An alternative option is to use the virt-manager graphical tool. With these basics covered, the next chapter will cover the creation of virtual machines from the command-line.

Installing KVM Virtualization on Ubuntu 20.04

Earlier versions of Ubuntu provided two virtualization platforms in the form of Kernel-based Virtual Machine (KVM) and Xen. In recent releases, support for Xen has been removed leaving KVM as the only bundled virtualization option supplied with Ubuntu. In addition to KVM, third party solutions are available in the form of products such as VMware and Oracle VirtualBox. Since KVM is supplied with Ubuntu, however, this is the virtualization solution that will be covered in this and subsequent chapters.

Before plunging into installing and running KVM it is worth taking a little time to talk about how it fits into the various types of virtualization outlined in the previous chapter.

1.1  An Overview of KVM

KVM is categorized as a Type-1 hypervisor virtualization solution that implements full virtualization with support for unmodified guest operating systems using Intel VT and AMD-V hardware virtualization support.

KVM differs from many other Type-1 solutions in that it turns the host Linux operating system itself into the hypervisor, allowing bare metal virtualization to be implemented while still running a full, enterprise level host operating system.

1.2  KVM Hardware Requirements

Before proceeding with this chapter we need to take a moment to discuss the hardware requirements for running virtual machines within a KVM environment. First and foremost, KVM virtualization is only available on certain processor types. As previously discussed, these processors must include either Intel VT or AMD-V technology.

To check for virtualization support, run the following command in a terminal window:

# lscpu | grep Virtualization:

If the system contains a CPU with Intel VT support, the above command will provide the following output:

Virtualization: VT-x

Alternatively, the following output will be displayed when a CPU with AMD-V support is detected:

Virtualization: AMD-V

If the CPU does not support virtualization, no output will be displayed by the above lscpu command.

Note that while the above command reports whether the processor supports the respective feature, it does not indicate whether the feature is currently enabled in the BIOS. In practice, virtualization support is typically disabled by default in the BIOS of most systems. It is recommended, therefore, that you check your BIOS settings to ensure the appropriate virtualization technology is enabled before proceeding with this tutorial.
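
On Ubuntu, the cpu-checker package provides the kvm-ok tool, which performs a quick check of whether KVM acceleration can actually be used (in other words, whether the feature is both present and enabled), for example:

# apt install cpu-checker
# kvm-ok

If KVM can be used, output similar to the following will be displayed:

INFO: /dev/kvm exists
KVM acceleration can be used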

Unlike a dual booting environment, a virtualized environment involves the running of two or more complete operating systems concurrently on a single computer system. This means that the system must have enough physical memory, disk space and CPU processing power to comfortably accommodate all these systems in parallel. Before beginning the configuration and installation process check on the minimum system requirements for both Ubuntu and your chosen guest operating systems and verify that your host system has sufficient resources to handle the requirements of both systems.

1.3  Preparing Ubuntu for KVM Virtualization

Unlike Xen, it is not necessary to run a special version of the kernel in order to support KVM. As a result KVM support is already available for use with the standard kernel via the installation of a KVM kernel module, thereby negating the need to install and boot from a special kernel. To avoid conflicts, however, if a Xen enabled kernel is currently running on the system, reboot the system and select a non-Xen kernel from the boot menu before proceeding with the remainder of this chapter.

The tools required to setup and maintain a KVM-based virtualized system are not installed by default unless specifically selected during the Ubuntu operating system installation process. To install the KVM tools from the command prompt, execute the following command in a terminal window:

# apt install qemu-kvm libvirt-clients libvirt-daemon-system bridge-utils

If you have access to a graphical desktop environment the virt-manager package is also recommended:

# apt install virt-manager

1.4  Verifying the KVM Installation

It is worthwhile checking that the KVM installation worked correctly before moving forward. When KVM is installed and running, two modules will have been loaded into the kernel. The presence or otherwise of these modules can be verified in a terminal window by running the following command:

# lsmod | grep kvm

Assuming that the installation was successful the above command should generate output similar to the following:

# lsmod | grep kvm
kvm_intel             237568  0
kvm                   737280  1 kvm_intel
irqbypass              16384  1 kvm

Note that if the system contains an AMD processor the kvm module will likely read kvm_amd rather than kvm_intel.

The installation process should also have configured the libvirtd daemon to run in the background. Once again using a terminal window, run the following command to ensure libvirtd is running:

# systemctl status libvirtd
 libvirtd.service - Virtualization daemon
   Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2019-03-06 14:41:22 EST; 3min 54s ago

If the process is not running, it may be started as follows:

# systemctl enable --now libvirtd
# systemctl start libvirtd

If the desktop environment is available, run the virt-manager tool by selecting Activities and entering “virt” into the search box. When the Virtual Machine Manager icon appears, click on it to launch it. When loaded, the manager should appear as illustrated in the following figure:

Figure 25-1

If the QEMU/KVM entry is not listed, select the File -> Add Connection menu option and, in the resulting dialog, select the QEMU/KVM Hypervisor before clicking on the Connect button:

Figure 25-2

If the manager is not currently connected to the virtualization processes, right-click on the entry listed and select Connect from the popup menu.

1.5  Summary

KVM is a Type-1 hypervisor virtualization solution that implements full virtualization with support for unmodified guest operating systems using Intel VT and AMD-V hardware virtualization support. It is the default virtualization solution bundled with Ubuntu and can be installed quickly and easily on any Ubuntu system with appropriate processor support. With KVM support installed and enabled, the next few chapters will outline some of the options for installing and managing virtual machines on an Ubuntu host.

Sharing Files between Ubuntu 20.04 and Windows Systems with Samba

Although Linux has made some inroads into the desktop market, its origins and future are very much server-based. It is not surprising therefore that Ubuntu has the ability to act as a file server. It is also extremely common for Ubuntu and Windows systems to be used side by side in networked environments. It is a common requirement, therefore, that files on an Ubuntu system be accessible to Linux, UNIX and Windows-based systems over network connections. Similarly, shared folders and printers residing on Windows systems may also need to be accessible from Ubuntu based systems.

Windows systems share resources such as file systems and printers using a protocol known as Server Message Block (SMB). In order for an Ubuntu system to serve such resources over a network to a Windows system and vice versa it must, therefore, support SMB. This is achieved using technology called Samba. In addition to providing integration between Linux and Windows systems, Samba may also be used to provide folder sharing between Linux systems (as an alternative to NFS which was covered in the previous chapter).

In this chapter we will look at the steps necessary to share file system resources and printers on an Ubuntu system with remote Windows and Linux systems, and to access Windows resources from Ubuntu.

1.1  Accessing Windows Resources from the GNOME Desktop

Before getting into more details of Samba sharing, it is worth noting that if all you want to do is access Windows shared folders from within the Ubuntu GNOME desktop then support is already provided within the GNOME Files application. The Files application is located in the dash as highlighted in Figure 23-1:

Figure 23-1

Once launched, select the Other Locations option in the left-hand navigation panel followed by the Windows Network icon in the main panel to browse available windows resources:

Figure 23-2

1.2  Samba and Samba Client

Samba allows both Ubuntu resources to be shared with Windows systems and Windows resources to be shared with Ubuntu systems. Ubuntu accesses Windows resources using the Samba client. Ubuntu resources, on the other hand, are shared with Windows systems by installing and configuring the Samba service.

1.3  Installing Samba on an Ubuntu System

The default settings used during the Ubuntu installation process do not typically install the necessary Samba packages. Unless you specifically requested that Samba be installed it is unlikely that you have Samba installed on your system. To check whether Samba is installed, open a terminal window and run the following command:

# apt -qq list samba-common samba smbclient

Any missing packages can be installed using the apt command-line tool:

# apt install samba-common samba smbclient

1.4  Configuring the Ubuntu Firewall to Enable Samba

Next, the firewall currently protecting the Ubuntu system needs to be configured to allow Samba traffic.

If you are using the Uncomplicated Firewall (ufw) run the following command:

# ufw allow samba

Alternatively, if you are using firewalld, run the firewall-cmd command as follows:

# firewall-cmd --permanent --add-port={139/tcp,445/tcp}
# firewall-cmd --reload

Before starting the Samba service a number of configuration steps are necessary to define how the Ubuntu system will appear to Windows systems, and the resources which are to be shared with remote clients. The majority of these configuration tasks take place within the /etc/samba/smb.conf file.

1.5  Configuring the smb.conf File

Samba is a highly flexible and configurable system that provides many different options for controlling how resources are shared on Windows networks. This flexibility can lead to the sense that Samba is overly complex to work with. In reality, however, many of the configuration options are not needed by the typical installation, and the learning curve to set up a basic configuration is actually quite short.

For the purposes of this chapter we will look at joining an Ubuntu system to a Windows workgroup and setting up a directory as a shared resource that can be accessed by a specific user. This is a configuration known as a standalone Samba server. More advanced configurations such as integrating Samba within an Active Directory environment are also available, though these are outside the scope of this book.

The first step in configuring Samba is to edit the /etc/samba/smb.conf file.

1.5.1  Configuring the [global] Section

The smb.conf file is divided into sections. The first section is the [global] section where settings can be specified that apply to the entire Samba configuration. While these settings are global, each option may be overridden within other sections of the configuration file.

The first task is to define the name of the Windows workgroup on which the Ubuntu resources are to be shared. This is controlled via the workgroup = directive of the [global] section which by default is configured as follows:

workgroup = WORKGROUP

Begin by changing this to the actual name of the workgroup if necessary.

In addition to the workgroup setting, the other settings indicate that this is a standalone server on which the shared resources will be protected by user passwords. Before moving on to configuring the resources to be shared, other parameters also need to be added to the [global] section as follows:

[global]
.
.
        netbios name = LinuxServer
.
.

The “netbios name” property specifies the name by which the server will be visible to other systems on the network.

1.5.2  Configuring a Shared Resource

The next step is to configure the shared resources (in other words the resources that will be accessible from other systems on the Windows network). In order to achieve this, the section is given a name by which it will be referred to when shared. For example, if we plan to share the /sampleshare directory of our Ubuntu system, we might entitle the section [sampleshare]. In this section a variety of configuration options are possible. For the purposes of this example, however, we will simply define the directory that is to be shared, indicate that the directory is both browsable and writable and declare the resource public so that guest users are able to gain access:

[sampleshare]
        comment = Example Samba share
        path = /sampleshare
        browseable = Yes
        public = yes
        writable = yes

To restrict access to specific users, the “valid users” property may be used, for example:

valid users = demo, bobyoung, marcewing

1.5.3  Removing Unnecessary Shares

The smb.conf file is pre-configured with sections for sharing printers and the home folders of the users on the system. If these resources do not need to be shared, the corresponding sections can be commented out so that they are ignored by Samba. In the following example, the [homes] section has been commented out:

.
.
#[homes]
#       comment = Home Directories
#       valid users = %S, %D%w%S
#       browseable = No
#       read only = No
#       inherit acls = Yes
.
.

1.6  Creating a Samba User

Any user that requires access to a Samba shared resource must be configured as a Samba User and assigned a password. This task is achieved using the smbpasswd command-line tool. Consider, for example, that a user named demo is required to be able to access the /sampleshare directory of our Ubuntu system from a Windows system. In order to fulfill this requirement we must add demo as a Samba user as follows:

# smbpasswd -a demo
New SMB password:
Retype new SMB password:
Added user demo.
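
To verify that the user has been added to the Samba password database, the pdbedit tool may be used to list the currently configured Samba users:

# pdbedit -L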

Now that we have completed the configuration of a very basic Samba server, it is time to test our configuration file and then start the Samba services.

1.7  Testing the smb.conf File

The settings in the smb.conf file may be checked for errors using the testparm command-line tool as follows:

# testparm
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
WARNING: The "syslog" option is deprecated
Processing section "[printers]"
Processing section "[print$]"
Processing section "[sampleshare]"
Loaded services file OK.
Server role: ROLE_STANDALONE
 
Press enter to see a dump of your service definitions
 
# Global parameters
[global]
	dns proxy = No
	log file = /var/log/samba/log.%m
	map to guest = Bad User
	max log size = 1000
	netbios name = LINUXSERVER
	obey pam restrictions = Yes
	pam password change = Yes
	panic action = /usr/share/samba/panic-action %d
	passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
	passwd program = /usr/bin/passwd %u
	security = USER
	server role = standalone server
	server string = %h server (Samba, Ubuntu)
	syslog = 0
	unix password sync = Yes
	usershare allow guests = Yes
	wins support = Yes
	idmap config * : backend = tdb
 
[printers]
	browseable = No
	comment = All Printers
	create mask = 0700
	path = /var/spool/samba
	printable = Yes
 
[print$]
	comment = Printer Drivers
	path = /var/lib/samba/printers
 
[sampleshare]
	comment = Example Samba share
	guest ok = Yes
	path = /sampleshare
	read only = No

1.8  Starting the Samba and NetBIOS Name Services

In order for an Ubuntu server to operate within a Windows network both the Samba (SMB) and NetBIOS name service (NMB) services must be started. Optionally, also enable the services so that they start each time the system boots:

# systemctl enable smbd
# systemctl start smbd
# systemctl enable nmbd
# systemctl start nmbd

Before attempting to connect from a Windows system, use the smbclient utility to verify that the share is configured:

# smbclient -U demo -L localhost 
Enter WORKGROUP\demo's password: 
 
	Sharename       Type      Comment
	---------       ----      -------
	print$          Disk      Printer Drivers
	sampleshare     Disk      Example Samba share
	IPC$            IPC       IPC Service (demo-server2 server (Samba, Ubuntu))
	Officejet_Pro_8600_C7C718_ Printer   
	Officejet_6600_971B9B_ Printer   
Reconnecting with SMB1 for workgroup listing.
 
	Server               Comment
	---------            -------
 
	Workgroup            Master
	---------            -------
	WORKGROUP            LINUXSERVER

1.9  Accessing Samba Shares

Now that the Samba resources are configured and the services are running, it is time to access the shared resource from a Windows system. On a suitable Windows system on the same workgroup as the Ubuntu system, open Windows Explorer and navigate to the Network panel. At this point, Explorer should search the network and list any systems using the SMB protocol that it finds. The following figure illustrates an Ubuntu system named LINUXSERVER located using Windows Explorer on a Windows 10 system:

Figure 23-3

Double clicking on the LINUXSERVER host will prompt for the name and password of a user with access privileges. In this case it is the demo account that we configured using the smbpasswd tool:

Figure 23-4

Entering the username and password will result in the shared resources configured for that user appearing in the explorer window, including the previously configured /sampleshare resource:

Figure 23-5

Double clicking on the /sampleshare shared resource will display a listing of the files and directories contained therein.

If you are unable to see the Linux system or have problems accessing the shared folder, try mapping the Samba share to a local Windows drive as follows:

  1. Open Windows File Explorer, right-click on the Network entry in the left-hand panel and select Map network drive… from the resulting menu.
  2. From the Map Network Drive dialog, select a drive letter before entering the path to the shared folder. For example:
\\LinuxServer\sampleshare

Enable the checkbox next to Connect using different credentials. If you do not want the drive to be mapped each time you log into the Windows system, turn off the corresponding check box:

Figure 23-6

With the settings entered, click on the Finish button to map the drive, entering the username and password for the Samba user configured earlier in the chapter when prompted. After a short delay the content of the Samba share will appear in a new File Explorer window.
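
Alternatively, the mapping may be performed from a Windows command prompt using the net use command. The following is a sketch based on the share and demo user configured earlier in the chapter (the S: drive letter is arbitrary):

C:\> net use S: \\LinuxServer\sampleshare /user:demo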

1.10  Accessing Windows Shares from Ubuntu

As previously mentioned, Samba is a two way street, allowing not only Windows systems to access files and printers hosted on an Ubuntu system, but also allowing the Ubuntu system to access shared resources on Windows systems. This is achieved using the smbclient package which was installed at the start of this chapter. If it is not currently installed, install it from a terminal window as follows:

# apt install smbclient

Shared resources on a Windows system can be accessed either from the Ubuntu desktop using the Files application, or from the command-line prompt using the smbclient and mount tools. The steps in this section assume that appropriate network sharing settings have been enabled on the Windows system.

To access any shared resources on a Windows system using the GNOME desktop, begin by launching the Files application and selecting the Other Locations option. This will display the screen shown in Figure 23-7 below including an icon for the Windows Network (if one is detected):

Figure 23-7

Selecting the Windows Network option will display the Windows systems detected on the network and allow access to any shared resources.

Figure 23-8

Alternatively, the Connect to Server option may be used to connect to a specific system. Note that the name or IP address of the remote system must be prefixed by smb:// and may be followed by the path to a specific shared resource, for example:

smb://WinServer10/Documents
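
Windows shares may also be accessed from the command-line. The smbclient tool provides an interactive, FTP-like session with a share, while the mount command can be used to mount a share into the local filesystem using the cifs filesystem type (which typically requires the cifs-utils package). The server, share, user and mount point names below are hypothetical and match the example above:

$ smbclient //WinServer10/Documents -U winuser
# apt install cifs-utils
# mkdir -p /mnt/windocs
# mount -t cifs //WinServer10/Documents /mnt/windocs -o username=winuser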

1.11  Summary

In this chapter we have looked at how to configure an Ubuntu system to act as both a Samba client and server allowing the sharing of resources with Windows systems. Topics covered included the installation of Samba client and server packages and configuration of Samba as a standalone server.

Using NFS to Share Ubuntu 20.04 Files with Remote Systems

Ubuntu provides two mechanisms for sharing files and folders with other systems on a network. One approach is to use technology called Samba. Samba is based on Microsoft Windows Folder Sharing and allows Linux systems to make folders accessible to Windows systems, and also to access Windows based folder shares from Linux. This approach can also be used to share folders between other Linux and UNIX based systems as long as they too have Samba support installed and configured. This is by far the most popular approach to sharing folders in heterogeneous network environments. The topic of folder sharing using Samba is covered in “Sharing Files between Ubuntu and Windows Systems with Samba”.

Another option, which is targeted specifically at sharing folders between Linux and UNIX based systems, uses technology called Network File System (NFS). NFS allows the file system on one Linux computer to be accessed over a network connection by another Linux or UNIX system. NFS was originally developed by Sun Microsystems (now part of Oracle Corporation) in the 1980s and remains the standard mechanism for sharing of remote Linux/UNIX file systems to this day.

NFS is very different to the Windows SMB resource sharing technology used by Samba. In this chapter we will be looking at network based sharing of folders between Ubuntu and other UNIX/Linux based systems using NFS.

1.1  Ensuring NFS Services are Running on Ubuntu

The first task is to verify that the NFS services are installed and running on your Ubuntu system. This can be achieved either from the command-line, or using the Cockpit interface.

Begin by installing the NFS service by running the following command from a terminal window:

# apt install nfs-kernel-server

Next, configure the service to automatically start at boot time:

# systemctl enable nfs-kernel-server

Once the service has been enabled, start it as follows:

# systemctl start nfs-kernel-server

1.2  Configuring the Ubuntu Firewall to Allow NFS Traffic

Next, the firewall needs to be configured to allow NFS traffic.

If the Uncomplicated Firewall is enabled, run the following command to add a rule to allow NFS traffic:

# ufw allow nfs

If, on the other hand, you are using firewalld, run the following firewall-cmd commands where <zone> is replaced by the appropriate zone for your firewall and system configuration:

# firewall-cmd --zone=<zone> --permanent --add-service=mountd
# firewall-cmd --zone=<zone> --permanent --add-service=nfs
# firewall-cmd --zone=<zone> --permanent --add-service=rpc-bind
# firewall-cmd --reload

1.3  Specifying the Folders to be Shared

Now that NFS is running and the firewall has been configured, we need to specify which parts of the Ubuntu file system may be accessed by remote Linux or UNIX systems. These settings can be declared in the /etc/exports file, which will need to be modified to export the directories for remote access via NFS. The syntax for an export line in this file is as follows:

<export> <host1>(<options>) <host2>(<options>)...

In the above line, <export> is replaced by the directory to be exported, <host1> is the name or IP address of the system to which access is being granted and <options> represents the restrictions that are to be imposed on that access (read only, read write etc). Multiple host and options entries may be placed on the same line if required. For example, the following line grants read only permission to the /datafiles directory to a host with the IP address of 192.168.2.38:

/datafiles 192.168.2.38(ro,no_subtree_check)

The use of wildcards is permitted in order to apply an export to multiple hosts. For example, the following line permits read write access to /home/demo to all external hosts:

/home/demo *(rw)

A full list of options supported by the exports file may be found by reading the exports man page:

# man exports

For the purposes of this chapter, we will configure the /etc/exports file as follows:

/tmp       *(rw,sync,no_subtree_check)
/vol1      192.168.2.21(ro,sync,no_subtree_check)

Once configured, the table of exported file systems maintained by the NFS server needs to be updated with the latest /etc/exports settings using the exportfs command as follows:

# exportfs -a

It is also possible to view the current share settings from the command-line using the exportfs tool:

# exportfs

The above command will generate the following output:

/tmp            <world>
/vol1           192.168.2.21

1.4  Accessing Shared Ubuntu Folders

The shared folders may be accessed from a client system by mounting them manually from the command-line. Before attempting to mount a remote NFS folder, the nfs-common package should first be installed on the client system:

# apt install nfs-common
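
Once the package is installed, the folders exported by the remote server may optionally be listed from the client using the showmount tool (the IP address below is that of the example NFS server used in this chapter). Based on the /etc/exports settings shown earlier, the output would resemble the following:

# showmount -e 192.168.1.115
Export list for 192.168.1.115:
/tmp  *
/vol1 192.168.2.21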

To mount a remote folder from the command-line, open a terminal window and create a directory where you would like the remote shared folder to be mounted:

# mkdir /home/demo/tmp

Next enter the command to mount the remote folder using either the IP address or hostname of the remote NFS server, for example:

# mount -t nfs 192.168.1.115:/tmp /home/demo/tmp

The remote /tmp folder will then be mounted on the local system. Once mounted, the /home/demo/tmp folder will contain the remote folder and all its contents.

Options may also be specified when mounting a remote NFS filesystem. The following command, for example, mounts the same folder, but configures it to be read-only:

# mount -t nfs -o ro 192.168.1.115:/tmp /home/demo/tmp

1.5  Mounting an NFS Filesystem on System Startup

It is also possible to configure an Ubuntu system to automatically mount a remote file system each time the system starts up by editing the /etc/fstab file. When loaded into an editor, it will likely resemble the following:

UUID=84982a2e-0dc1-4612-9ffa-13baf91ec558 /     ext4    errors=remount-ro 0  1
/swapfile                        none            swap    sw              0       0

To mount, for example, a folder with the path /tmp which resides on a system with the IP address 192.168.1.115 in the local folder with the path /home/demo/tmp (note that this folder must already exist) add the following line to the /etc/fstab file:

192.168.1.115:/tmp      /home/demo/tmp           nfs     rw              0 0

Next time the system reboots the /tmp folder located on the remote system will be mounted on the local /home/demo/tmp mount point. All the files in the remote folder can then be accessed as if they reside on the local hard disk drive.
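
Rather than rebooting to test the new entry, the mount can be triggered immediately from the command-line, which will also report any errors in the fstab line:

# mount /home/demo/tmp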

1.6  Unmounting an NFS Mount Point

Once a remote file system is mounted using NFS it can be unmounted using the umount command with the local mount point as the command-line argument. The following command, for example, will unmount our example filesystem mount point:

# umount /home/demo/tmp

1.7  Accessing NFS Filesystems in Cockpit

In addition to mounting a remote NFS file system on a client using the command-line, it is also possible to perform mount operations from within the Cockpit web interface. Assuming that Cockpit has been installed and configured on the client system, log into the Cockpit interface from within a web browser and select the Storage option from the left-hand navigation panel. If the Storage option is not listed, the cockpit-storaged package will need to be installed:

# apt install cockpit-storaged

Once the Cockpit service has restarted, log back into the Cockpit interface at which point the Storage option should now be visible.

Once selected, the main storage page will include a section listing any currently mounted NFS file systems as illustrated in Figure 22-1:

Figure 22-1

To mount a remote filesystem, click on the ‘+’ button highlighted above and enter information about the remote NFS server and file system share together with the local mount point and any necessary options into the resulting dialog before clicking on the Add button:

Figure 22-2

To modify, unmount or remove an NFS filesystem share, select the corresponding mount in the NFS Mounts list (Figure 22-1 above) to display the page shown in Figure 22-3 below:

Figure 22-3

1.8  Summary

The Network File System (NFS) is a client/server-based system, originally developed by Sun Microsystems, which provides a way for Linux and Unix systems to share filesystems over a network. NFS allows a client system to access and (subject to permissions) modify files located on a remote server as though those files are stored on a local filesystem. This chapter has provided an overview of NFS and outlined the options available for configuring both client and server systems using the command-line or the Cockpit web interface.