In the previous chapters, we explored the creation of KVM guest operating systems on a Rocky Linux 9 host using Cockpit and the virt-manager graphical tool. This chapter will focus on creating KVM-based virtual machines using the virt-install and virsh command-line tools. These tools provide all the capabilities of the virt-manager and Cockpit options with the added advantage of being used within scripts to automate virtual machine creation. In addition, the virsh command allows virtual machines to be created based on a specification contained within a configuration file.
The virt-install tool allows new virtual machines to be created by providing a list of command-line options. This chapter assumes that the necessary KVM tools are installed. Read the chapter Installing KVM Virtualization on Rocky Linux for details on these requirements.
Running virt-install to build a KVM Guest System
The virt-install utility accepts a wide range of command-line arguments that provide configuration information related to the virtual machine being created. Some command-line options are mandatory (specifically, the name, memory, and disk storage must be provided), while others are optional.
At a minimum, a virt-install command will typically need the following arguments:
--name – The name to be assigned to the virtual machine.
--memory – The amount of memory allocated to the virtual machine.
--disk – The name and location of an image file to act as storage for the virtual machine. This file will be created by virt-install during the virtual machine creation unless the --import option is specified to indicate an existing image file is to be used.
--cdrom or --location – Specifies the local path or the URL of a remote ISO image containing the installation media for the guest operating system.
A summary of all the arguments available for use when using virt-install can be found in the man page:
$ man virt-install
An Example Rocky Linux 9 virt-install Command
With reference to the above command-line argument list, we can now look at an example command-line construct using the virt-install tool.
Note that to display the virtual machine and complete the installation, a virt-viewer instance must be connected to the virtual machine after the virt-install utility starts it. By default, virt-install will attempt to launch virt-viewer automatically once the virtual machine starts running. However, if virt-viewer is unavailable, virt-install will wait until a virt-viewer connection is established. For example, the virt-viewer session may be running locally on the host system if it has a graphical desktop, or a connection may be established from a remote client as outlined in the chapter entitled Creating KVM Virtual Machines on Rocky Linux 9 using virt-manager.
The following command creates a new KVM virtual machine configured to run Rocky Linux 9 using KVM para-virtualization. It creates a new 10GB disk image, assigns 2048MB of RAM to the virtual machine, and configures a virtual CD device for the installation media ISO image:
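# virt-install \
  --name demo_vm_guest \
  --memory 2048 \
  --disk path=/var/lib/libvirt/images/demo_vm_guest.img,size=10 \
  --cdrom /path/to/Rocky-9-x86_64-dvd.iso \
  --os-variant rocky9

The above is a representative construct rather than a definitive command: the disk image path, the ISO location, and the --os-variant value are placeholders and should be adjusted to match your environment (valid operating system identifiers can be listed with the osinfo-query os command).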
Once the guest system has been created, the virt-viewer screen will appear containing the operating system installer loaded from the specified installation media:
Figure 24-1
From this point, follow the standard installation procedure for the guest operating system.
Starting and Stopping a Virtual Machine from the Command-Line
Having created the virtual machine from the command line, it stands to reason that you may also need to start it from the command line in the future. This can be achieved using the virsh command-line utility, referencing the name assigned to the virtual machine during creation. For example:
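# virsh start demo_vm_guest

Similarly, a graceful shutdown may be requested using the shutdown directive:

# virsh shutdown demo_vm_guest

(The name demo_vm_guest is used here for illustration; substitute the name assigned to your own virtual machine.)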
Suppose the virtual machine fails to respond to the shutdown signal and does not begin a graceful shutdown. In that case, the virtual machine may be destroyed (with the attendant risks of data loss) using the destroy directive:
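# virsh destroy demo_vm_guest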
Creating a Virtual Machine from a Configuration File
The virsh create command can take as an argument the name of a configuration file on which to base the creation of a new virtual machine. The configuration file uses XML format. The easiest way to create a configuration file is to dump out the configuration of an existing virtual machine and modify it for the new one. This can be achieved using the virsh dumpxml command. For example, the following command outputs the configuration data for a virtual machine domain named demo_vm_guest to a file named demo_vm_guest.xml:
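# virsh dumpxml demo_vm_guest > demo_vm_guest.xml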
Once the file has been generated, load it into an editor to review and change the settings for the new virtual machine.
At the very least, the <name>, <uuid>, and image file path <source file> must be changed to avoid conflict with the virtual machine from which the configuration was taken. In the case of the UUID, this line can be deleted from the file.
The virtualization type, memory allocation, and number of CPUs, to name but a few options, may also be changed if required. Once the file has been modified, the new virtual machine may be created as follows:
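# virsh create demo_vm_guest.xml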
Summary
KVM provides the virt-install and virsh command-line tools as a quick and efficient alternative to using the Cockpit and virt-manager tools to create and manage virtual machine instances. These tools have the advantage that they can be used from within scripts to automate the creation and management of virtual machines. The virsh command also includes the option to create VM instances from XML-based configuration files.
The previous chapter explored how to create KVM virtual machines on Rocky Linux 9 using the Cockpit web tool. With the caveat that virt-manager may one day be discontinued once the Virtual Machines Cockpit extension is fully implemented, this chapter will cover using this tool to create new virtual machines.
Starting the Virtual Machine Manager
If you have not already done so, install the virt-manager package as follows:
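# dnf install virt-manager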
Next, launch Virtual Machine Manager from the command line in a terminal window by running virt-manager. Once loaded, the virtual machine manager will prompt for the password of the currently active user before displaying the following screen:
Figure 23-1
The main screen lists the current virtual machines running on the system. By default, the manager should be connected to the system libvirtd instance. If it is not, connect to the host system by right-clicking on the entry in the list and selecting Connect from the popup menu. To manage session-based virtual machines, select the File -> Add Connection… menu option to display the dialog shown in Figure 23-2:
Figure 23-2
Select QEMU/KVM user session from the Hypervisor menu and click the Connect button within this dialog. On returning to the main virt-manager screen, the QEMU/KVM session should now be listed as shown in Figure 23-1 above.
To create a virtual system, click on the new virtual machine button (the far left button in the toolbar) to display the first screen of the New VM wizard. In the Name field, enter a suitably descriptive name for the virtual system. On this screen, also select the location of the media from which the guest operating system will be installed. This can either be a CD or DVD drive, an ISO image file accessible to the local host, a network install using HTTP, FTP, NFS, or PXE, or the disk image from an existing virtual machine:
Figure 23-3
Configuring the KVM Virtual System
Clicking Forward will display a screen seeking additional information about the installation process. The displayed screen and information required will depend on selections made on the initial screen. For example, if a CD, DVD, or ISO is selected, this screen will ask for the specific location of the ISO file or physical media device. This screen also attempts to identify the type and version of the guest operating system (for example, the Windows version or Linux distribution) based on the specified installation media. If it is unable to do so, uncheck the Automatically detect from installation media/source option, type in the first few characters of the operating system name, and select an option from the list of possible matches:
Figure 23-4
Once these settings are complete, click the Forward button to configure CPU and memory settings. The optimal settings will depend on the number of CPUs and amount of physical memory present in the host, together with the requirements of other applications and virtual machines that will run in parallel with the new virtual machine:
Figure 23-5
On the next screen, options are available to create an image disk of a specified size, select a preexisting volume, or create a storage volume of a specified format (raw, vmdk, ISO, etc.). Unless you have a specific need to use a particular format (for example, you might need to use vmdk to migrate to a VMware-based virtualization environment at a later date) or need to use a dedicated disk or partition, it is generally adequate to specify a size on this screen:
Figure 23-6
If the default settings are used here, the virtual machine will use a storage volume within the default storage pool for the virtual disk drive. To use the custom “MyPool” storage pool created earlier in the chapter, enable the Select or create custom storage option before clicking the Manage… button. In the storage volume dialog, select the MyPool entry in the left-hand panel, followed by the + button in the main panel to create a new storage volume:
Figure 23-7
Note that the + button in the bottom left-hand corner of the dialog may also be used to create new storage pools as an alternative to using the Cockpit interface.
In the configuration screen (Figure 23-8), name the storage volume, select the volume size, and click on the Finish button to create the volume and assign it to the virtual machine:
Figure 23-8
Once these settings are configured, select the new volume and click the Choose Volume button. Then, click the Forward button once more. The final screen displays a summary of the configuration. Review the information displayed. Advanced options are also available to change the virtual network configuration for the guest, as shown in Figure 23-9:
Figure 23-9
Starting the KVM Virtual Machine
Click on the Finish button to begin the creation process. The virtualization manager will create the disk and configure the virtual machine before starting the guest system. Finally, the new virtual machine will appear in the main virt-manager window with the status set to Running as illustrated in Figure 23-10:
Figure 23-10
By default, the console for the virtual machine should appear in the virtual machine viewer window. To view the console of the running machine at any future time, ensure that it is selected in the virtual machine list and select the Open button from the toolbar. The virtual machine viewer should be ready for the installation process to begin:
Figure 23-11
From now on, follow the installation instructions to install the guest OS in the KVM virtual machine.
Summary
There are several ways to create new KVM-based virtual machines on a Rocky Linux 9 host system. This chapter uses the virt-manager graphical tool to create, configure, and run a guest operating system, including creating a new storage volume. With these basics covered, the next chapter will cover the creation of virtual machines from the command line.
KVM-based virtual machines can easily be configured on Rocky Linux 9 using the virt-install command-line tool, the virt-manager GUI tool, or the Virtual Machines module of the Cockpit web console. This chapter will use Cockpit to install an operating system as a KVM guest on a Rocky 9 host. The chapter titled Creating KVM Virtual Machines on Rocky Linux 9 using virt-manager will cover using the virt-manager tool to create new virtual machines.
The virtual machines module may not be included in a standard Cockpit installation by default. Assuming that Cockpit is installed and configured, the virtual machines module may be installed as follows:
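# dnf install cockpit-machines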
Once installed, the Virtual Machines option (marked A in Figure 22-1) will appear in the navigation panel the next time you log into the Cockpit interface:
Figure 22-1
Creating a Virtual Machine in Cockpit
To create a virtual machine in Cockpit, click the Create VM button marked B in Figure 22-1 to display the creation dialog.
Within the dialog, enter a name for the machine and choose whether the installation media is in the form of an ISO accessible via a URL or a local filesystem path, or select the vendor and operating system type information for the guest and choose the Download an OS option to have the installation image downloaded automatically during the installation process.
Also, specify the size of the virtual disk drive to be used for the operating system installation and the amount of memory to be allocated to the virtual machine:
Figure 22-2
Click on the Create and edit button to build the virtual machine. After the creation process is complete, details of the new VM will appear in Cockpit, as shown in Figure 22-3:
Figure 22-3
As described inAn Overview of Rocky Linux 9 Virtualization Techniques, KVM provides virtual machines with several options in terms of network configuration. To view and change the network settings of a virtual machine, scroll down to the Network interfaces section of the VM Overview screen and click the Edit button:
Figure 22-4
In the resulting dialog, the Network Type menu may be used to change the type of network connection, for example, from Virtual network (NAT) to direct attachment (MacVTap) or Bridge to LAN.
Starting the Installation
To start the new virtual machine and install the guest operating system from the designated installation media, click the Install button at the top of the overview page. Cockpit will start the virtual machine and scroll down to the Console view where the guest OS screen will appear:
Figure 22-5
If the installation fails, check the message to see if an error occurred when opening the installation image. This usually occurs because the QEMU emulator runs as a user named qemu, which does not have access to the directory in which the ISO installation image is located. To resolve this issue, open a terminal window (or connect with SSH if the system is remote), change directory to the location of the ISO image file, and add the qemu user to the access control list (ACL) of the parent directory as follows:
# cd /path/to/iso/directory
# setfacl --modify u:qemu:x ..
After making this change, check the setting as follows:
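# getfacl ..

(This assumes the current directory is still the one containing the ISO image; the output should include an entry of the form user:qemu:--x.)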
Once these changes have been made, click the Install button again to complete the installation.
To complete the installation, interact with the screen in the Consoles view just as you would if you were installing the operating system on physical hardware. If the console is too small to accommodate the entire guest operating system screen, click the Expand button in the top right-hand corner.
It is also possible to connect with and display the graphical console for the VM from outside the Cockpit browser session using the virt-viewer tool. To install virt-viewer on a Rocky 9 system, run the following command:
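# dnf install virt-viewer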
To connect with a virtual machine running on the local host, run virt-viewer and select the virtual machine to which you wish to connect from the resulting dialog:
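$ virt-viewer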
Figure 22-6
The above command will list system-based virtual machines. To list and access session-based guests, launch virt-viewer as follows:
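$ virt-viewer --connect qemu:///session

(The --connect option accepts a libvirt connection URI; qemu:///session selects the session instance of the current user on the local host.)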
When using this technique, it is important to note that you will be prompted twice for the user password before the connection is fully established.
Once the virtual machine has been created, the Cockpit interface can monitor the machine and perform tasks such as rebooting, shutting down, or deleting the guest system. An option is also included on the Disks panel to add disks to the virtual machine configuration.
Working with Storage Volumes and Storage Pools
When a virtual machine is created, it will usually have at least one virtual disk drive. The images that represent these virtual disk drives are stored in storage pools. A storage pool can be an existing directory on a local filesystem, a filesystem partition, a physical disk device, Logical Volume Management (LVM) volume group, or even a remote network file system (NFS).
Each storage pool is divided into one or more storage volumes. Storage volumes are typically individual image files, each representing a single virtual disk drive, but they can also take the form of physical disk partitions, entire disk drives, or LVM volume groups.
When a virtual machine was created using the previous steps, a default storage pool was created to store virtual machine images. This default storage pool occupies space on the root filesystem and can be reviewed from within the Cockpit Virtual Machine interface by selecting the Storage Pools option at the top of the panel marked C in Figure 22-1 above.
When selected, the screen shown in Figure 22-7 below will appear containing a list of storage pools currently configured on the system:
Figure 22-7
In the above example, the default storage pool is located on the root filesystem and stores the virtual machine image in the /var/lib/libvirt/images directory. To view the storage volumes contained within the pool, select the Storage Volumes tab highlighted in Figure 22-8:
Figure 22-8
In the case of the demo guest, the storage volume takes the form of an image file named demovm-guest.qcow2. In addition, the pool also includes a storage volume containing the installation ISO image. To find out which storage volume a particular virtual machine uses, return to the main Virtual Machine Cockpit screen, select the virtual machine, and scroll to the Disks panel, as shown in Figure 22-9:
Figure 22-9
Although using the default storage pool is acceptable for testing purposes and early experimentation, it is recommended that additional pools be created for general virtualization use. To create a new storage pool, display the Storage Pools screen within Cockpit and click on the Create storage pool button to display the dialog shown in Figure 22-10:
Figure 22-10
In the above example, a new storage pool is being created named MyPool using a file system partition mounted as /MyPool within the local filesystem (the topic of disk drives, partitions, and mount points is covered later in the chapter entitled Adding a New Disk Drive to a Rocky Linux 9 System). Once created, the pool will be listed within the Cockpit storage pool screen and can contain storage volumes as new virtual machines are created.
Summary
This chapter has outlined using the Cockpit web-based interface to create and manage KVM-based virtual machines. The Cockpit interface has the advantage of not requiring access to a desktop environment running on the host system. An alternative option is using the virt-manager graphical tool outlined in the next chapter.
Earlier versions of Rocky Linux provided two virtualization platforms: Kernel-based Virtual Machine (KVM) and Xen. In recent releases, support for Xen has been removed, leaving KVM as the only bundled virtualization option supplied with Rocky Linux 9. In addition to KVM, third-party solutions are available in products such as VMware and Oracle VirtualBox. Since KVM is supplied with Rocky 9, however, this virtualization solution will be covered in this and subsequent chapters.
Before plunging into installing and running KVM, it is worth discussing how it fits into the various types of virtualization outlined in the previous chapter.
An Overview of KVM
KVM is categorized as a Type-1 hypervisor virtualization solution that implements full virtualization with support for unmodified guest operating systems using Intel VT and AMD-V hardware virtualization support.
KVM differs from many other Type-1 solutions in that it turns the host Linux operating system into the hypervisor, allowing bare metal virtualization to be implemented while running a complete, enterprise-level host operating system.
KVM Hardware Requirements
Before proceeding with this chapter, we must discuss the hardware requirements for running virtual machines within a KVM environment. First and foremost, KVM virtualization is only available on certain processor types. As previously discussed, these processors must include either Intel VT or AMD-V technology.
To check for virtualization support, run the lscpu command in a terminal window:
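# lscpu | grep -i virtualization

On a system with an Intel VT-capable processor, the Virtualization line should report VT-x, while AMD-V systems will report AMD-V.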
If the CPU does not support virtualization, no output will be displayed by the above lscpu command.
Note that the above command only reports whether the processor supports the respective feature; it does not indicate whether the feature is currently enabled in the BIOS. In practice, virtualization support is typically disabled by default in the BIOS of many systems, so you should check your BIOS settings to ensure the appropriate virtualization technology is enabled before proceeding with this tutorial.
Unlike a dual-booting environment, a virtualized environment involves running two or more complete operating systems concurrently on a single computer system. This means the system must have enough physical memory, disk space, and CPU processing power to comfortably accommodate all these systems in parallel. Therefore, before beginning the configuration and installation process, check on the minimum system requirements for Rocky 9 and your chosen guest operating systems and verify that your host system has sufficient resources to handle the requirements of both systems.
Preparing Rocky 9 for KVM Virtualization
Unlike Xen, it is not necessary to run a special version of the kernel to support KVM. As a result, KVM support is already available for use with the standard kernel via installing a KVM kernel module, thereby negating the need to install and boot from a special kernel.
To avoid conflicts, however, if a Xen-enabled kernel is currently running on the system, reboot the system and select a non-Xen kernel from the boot menu before proceeding with the remainder of this chapter.
The tools required to set up and maintain a KVM-based virtualized system are only installed by default if selected explicitly during the Rocky 9 operating system installation process. To install the KVM tools from the command prompt, execute the following command in a terminal window:
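# dnf install qemu-kvm libvirt libvirt-client virt-install virt-viewer

(This is a typical package selection rather than a definitive list; adjust it to match the tools you intend to use.)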
It is worthwhile checking that the KVM installation worked correctly before moving forward. When KVM is installed and running, two modules will have been loaded into the kernel. The presence or otherwise of these modules can be verified in a terminal window by running the lsmod command:
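# lsmod | grep kvm

If KVM is operational, the output should list both the kvm and kvm_intel modules.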
Note that if the system contains an AMD processor, the kvm module will likely read kvm_amd rather than kvm_intel.
The installation process should also have configured the libvirtd daemon to run in the background. Once again, using a terminal window, run the following command to ensure libvirtd is running:
# systemctl status libvirtd
libvirtd.service - Virtualization daemon
Loaded: loaded (/usr/lib/systemd/system/libvirtd.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-03-06 14:41:22 EST
If the process is not running, it may be started as follows:
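# systemctl start libvirtd

Optionally, enable the service so that it starts automatically at boot time:

# systemctl enable libvirtd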
If the desktop environment is available, run the virt-manager tool by selecting Activities and entering “virt” into the search box. When the Virtual Machine Manager icon appears, click it to launch it. When loaded, the manager should appear as illustrated in the following figure:
Figure 21-1
If the QEMU/KVM entry is not listed, select the File -> Add Connection menu option and, in the resulting dialog, select the QEMU/KVM Hypervisor before clicking on the Connect button:
Figure 21-2
If the manager is not currently connected to the virtualization processes, right-click on the entry listed and select Connect from the popup menu.
Summary
KVM is a Type-1 hypervisor virtualization solution that implements full virtualization with support for unmodified guest operating systems using Intel VT and AMD-V hardware virtualization support. It is the default virtualization solution bundled with Rocky 9 and can be installed quickly and easily on any Rocky 9 system with appropriate processor support. With KVM support installed and enabled, the following chapters will outline some options for installing and managing virtual machines on a Rocky 9 host.
Virtualization is the ability to run multiple operating systems simultaneously on a single computer system. While not necessarily a new concept, Virtualization has come to prominence in recent years because it provides a way to fully utilize the CPU and resource capacity of a server system while providing stability (in that if one virtualized guest system crashes, the host and any other guest systems continue to run).
Virtualization is also helpful in trying out different operating systems without configuring dual boot environments. For example, you can run Windows in a virtual machine without re-partitioning the disk, shut down Rocky 9, and boot from Windows. Instead, you start up a virtualized version of Windows as a guest operating system. Similarly, virtualization allows you to run other Linux distributions within a Rocky 9 system, providing concurrent access to both operating systems.
When deciding on the best approach to implementing virtualization, clearly understanding the different virtualization solutions currently available is essential. Therefore, this chapter’s purpose is to describe in general terms the virtualization techniques in common use today.
Guest Operating System Virtualization
Guest OS virtualization, also called application-based virtualization, is the most straightforward concept to understand. In this scenario, the physical host computer runs a standard unmodified operating system such as Windows, Linux, UNIX, or macOS. Running on this operating system is a virtualization application that executes in much the same way as any other application, such as a word processor or spreadsheet, would run on the system. Within this virtualization application, one or more virtual machines are created to run the guest operating systems on the host computer. The virtualization application is responsible for starting, stopping, and managing each virtual machine and essentially controlling access to physical hardware resources on behalf of the individual virtual machines. The virtualization application also engages in a process known as binary rewriting, which involves scanning the instruction stream of the executing guest system and replacing any privileged instructions with safe emulations. This makes the guest system think it is running directly on the system hardware rather than in a virtual machine within an application.
The following figure illustrates guest OS-based virtualization:
Figure 20-1
As outlined in the above diagram, the guest operating systems operate in virtual machines within the virtualization application, which, in turn, runs on top of the host operating system in the same way as any other application. The multiple layers of abstraction between the guest operating systems and the underlying host hardware are not conducive to high levels of virtual machine performance. However, this technique has the advantage that no changes are necessary to host or guest operating systems, and no special CPU hardware virtualization support is required.
Hypervisor Virtualization
In hypervisor virtualization, the task of a hypervisor is to handle resource and memory allocation for the virtual machines and provide interfaces for higher-level administration and monitoring tools. Hypervisor-based solutions are categorized as being either Type-1 or Type-2.
Type-2 hypervisors (sometimes called hosted hypervisors) are installed as software applications that run on top of the host operating system, providing virtualization capabilities by coordinating access to resources such as the CPU, memory, and network for guest virtual machines. Figure 20-2 illustrates the typical architecture of a system using Type-2 hypervisor virtualization:
Figure 20-2
To understand how Type-1 hypervisors work, it helps to understand Intel x86 processor architecture. The x86 family of CPUs provides a range of protection levels known as rings in which code can execute. Ring 0 has the highest level privilege, and it is in this ring that the operating system kernel normally runs. Code executing in ring 0 is said to be running in system space, kernel mode, or supervisor mode. All other code, such as applications running on the operating system, operate in less privileged rings, typically ring 3.
In contrast to Type-2 hypervisors, Type-1 hypervisors (also referred to as bare metal or native hypervisors) run directly on the hardware of the host system in ring 0. With the hypervisor occupying ring 0 of the CPU, the kernels for any guest operating systems running on the system must run in less privileged CPU rings. Unfortunately, most operating system kernels are written explicitly to run in ring 0 because they need to perform tasks only available in that ring, such as the ability to execute privileged CPU instructions and directly manipulate memory. Several different solutions to this problem have been devised in recent years, each of which is described below:
Paravirtualization
Under paravirtualization, the kernel of the guest operating system is modified specifically to run on the hypervisor. This typically involves replacing privileged operations that only run in ring 0 of the CPU with calls to the hypervisor (known as hypercalls). The hypervisor, in turn, performs the task on behalf of the guest kernel. Unfortunately, this typically limits support to open-source operating systems such as Linux, which may be freely altered, and proprietary operating systems where the owners have agreed to make the necessary code modifications to target a specific hypervisor. These issues notwithstanding, the ability of the guest kernel to communicate directly with the hypervisor results in greater performance levels than other virtualization approaches.
Full Virtualization
Full virtualization provides support for unmodified guest operating systems. The term unmodified refers to operating system kernels that have not been altered to run on a hypervisor and, therefore, still execute privileged operations as though running in ring 0 of the CPU. In this scenario, the hypervisor provides CPU emulation to handle and modify privileged and protected CPU operations made by unmodified guest operating system kernels. Unfortunately, this emulation process requires both time and system resources to operate, resulting in inferior performance levels when compared to those provided by paravirtualization.
Hardware Virtualization
Hardware virtualization leverages virtualization features built into the latest generations of CPUs from both Intel and AMD. These technologies, called Intel VT and AMD-V, respectively, provide extensions necessary to run unmodified guest virtual machines without the overheads inherent in full virtualization CPU emulation. In very simplistic terms, these processors provide an additional privilege mode (ring -1) above ring 0 in which the hypervisor can operate, thereby leaving ring 0 available for unmodified guest operating systems.
The following figure illustrates the Type-1 hypervisor approach to virtualization:
Figure 20-3
As outlined in the above illustration, in addition to the virtual machines, an administrative operating system or management console also runs on top of the hypervisor allowing the virtual machines to be managed by a system administrator.
Virtual Machine Networking
Virtual machines will invariably need to be connected to a network to be of any practical use. One option is for the guest to be connected to a virtual network running within the host computer’s operating system. In this configuration, any virtual machines on the virtual network can see each other, but Network Address Translation (NAT) provides access to the external network. When using the virtual network and NAT, each virtual machine is represented on the external network (the network to which the host is connected) using the IP address of the host system. This is the default behavior for KVM virtualization on Rocky 9 and generally requires no additional configuration. Typically, a single virtual network is created by default, represented by the name default and the device virbr0.
For guests to appear as individual and independent systems on the external network (i.e., with their own IP addresses), they must be configured to share a physical network interface on the host. The quickest way to achieve this is to configure the virtual machine to use the “direct connection” network configuration option (also called MacVTap), which will provide the guest system with an IP address on the same network as the host. Unfortunately, while this gives the virtual machine access to other systems on the network, it is not possible to establish a connection between the guest and the host when using the MacVTap driver.
A better option is to configure a network bridge interface on the host system to which the guests can connect. This provides the guest with an IP address on the external network while also allowing the guest and host to communicate, a topic covered in the chapter entitled Creating a Rocky Linux 9 KVM Networked Bridge Interface.
Summary
Virtualization is the ability to run multiple guest operating systems within a single host operating system. Several approaches to virtualization have been developed, including guest operating system and hypervisor virtualization. Hypervisor virtualization falls into two categories known as Type-1 and Type-2. Type-1 virtualization solutions are categorized as paravirtualization, full virtualization, and hardware virtualization, the latter using special virtualization features of some Intel and AMD processor models.
Virtual machine guest operating systems have several options in terms of networking, including NAT, direct connection (MacVTap), and network bridge configurations.
Although Linux has made some inroads into the desktop market, its origins and future are very much server-based. It is unsurprising, therefore, that Rocky 9 can act as a file server. It is also common for Rocky Linux and Windows systems to be used side by side in networked environments. Therefore, it is a common requirement that files on a Rocky 9 system be accessible to Linux, UNIX, and Windows-based systems over network connections. Similarly, shared folders and printers residing on Windows systems may also need to be accessible from Rocky 9-based systems.
Windows systems share resources such as file systems and printers using a protocol known as Server Message Block (SMB). For a Rocky 9 system to serve such resources over a network to a Windows system and vice versa, it must support SMB. This is achieved using a technology called Samba. In addition to providing integration between Linux and Windows systems, Samba may also provide folder sharing between Linux systems (as an alternative to NFS covered in the previous chapter).
In this chapter, we will look at the steps necessary to share file system resources and printers on a Rocky 9 system with remote Windows and Linux systems and to access Windows resources from Rocky 9.
Accessing Windows Resources from the GNOME Desktop
Before getting into more details of Samba sharing, it is worth noting that if all you want to do is access Windows shared folders from within the GNOME desktop, then support is already provided within the GNOME Files application. The Files application is located in the dash as highlighted in Figure 19-1:
Figure 19-1
Once launched, select the Other Locations option in the left-hand navigation panel, followed by the Windows Network icon in the main panel to browse available Windows resources:
Figure 19-2
Samba and Samba Client
Samba allows both Rocky 9 resources to be shared with Windows systems and Windows resources to be shared with Rocky 9 systems. Rocky Linux accesses Windows resources using the Samba client. Rocky Linux resources, on the other hand, are shared with Windows systems by installing and configuring the Samba service.
Installing Samba on Rocky 9
The default settings used during the Rocky 9 installation process do not typically install the necessary Samba packages. Unless you specifically requested that Samba be installed, it is unlikely that you have Samba installed on your system. To check whether Samba is installed, open a terminal window and run the following command:
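# rpm -q samba samba-client samba-common

If any of these packages are reported as not installed, they may be installed as follows:

# dnf install samba samba-client samba-common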
Next, the firewall protecting the Rocky 9 system must be configured to allow Samba traffic. This can be achieved using the firewall-cmd command as follows:
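# firewall-cmd --permanent --add-service=samba
# firewall-cmd --reload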
Before starting the Samba service, some configuration steps are necessary to define how the Rocky Linux system will appear to Windows systems and the resources to be shared with remote clients.
Most configuration tasks occur within the /etc/samba/smb.conf file.
Configuring the smb.conf File
Samba is a highly flexible and configurable system that provides many options for controlling how resources are shared on Windows networks. Unfortunately, this flexibility can lead to the sense that Samba is overly complex. In reality, however, the typical installation does not need many configuration options, and the learning curve to set up a basic configuration is relatively short.
For this chapter, we will look at joining a Rocky 9 system to a Windows workgroup and setting up a directory as a shared resource that a specific user can access. This is a configuration known as a standalone Samba server. More advanced configurations, such as integrating Samba within an Active Directory environment, are also available, though these are outside the scope of this book.
The first step in configuring Samba is to edit the /etc/samba/smb.conf file.
Configuring the [global] Section
The smb.conf file is divided into sections. The first section is the [global] section, where settings that apply to the entire Samba configuration can be specified. While these settings are global, each option may be overridden within other configuration file sections.
The first task is defining the Windows workgroup name on which the Rocky 9 resources will be shared. This is controlled via the workgroup = directive of the [global] section, which by default is configured as follows:
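workgroup = SAMBA

(On an unmodified Rocky 9 smb.conf file, the default workgroup name is SAMBA, though your file may differ.)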
Begin by changing this to the actual name of the workgroup if necessary.
In addition to the workgroup setting, the other settings indicate that this is a standalone server on which user passwords will protect the shared resources. Before moving on to configuring the resources to be shared, other parameters also need to be added to the [global] section as follows:
[global]
.
.
netbios name = LinuxServer
The “netbios name” property specifies the name by which the server will be visible to other systems on the network.
Configuring a Shared Resource
The next step is configuring the shared resources (in other words, the resources that will be accessible from other systems on the Windows network). To achieve this, the section is given a name by which it will be referred when shared. For example, if we plan to share the /sampleshare directory of our Rocky 9 system, we might entitle the section [sampleshare]. In this section, a variety of configuration options are possible. For this example, however, we will define the directory that is to be shared, indicate that the directory is both browsable and writable, and declare the resource public so that guest users can gain access:
[sampleshare]
comment = Example Samba share
path = /sampleshare
browseable = Yes
public = yes
writable = yes
To restrict access to specific users, the “valid users” property may be used, for example:
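valid users = demo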
The smb.conf file is pre-configured with sections for sharing printers and the home folders of the users on the system. If these resources do not need to be shared, the corresponding sections can be commented out so that Samba ignores them. In the following example, the [homes] section has been commented out:
.
.
#[homes]
# comment = Home Directories
# valid users = %S, %D%w%S
# browseable = No
# read only = No
# inherit acls = Yes
.
.
Configuring SELinux for Samba
SELinux is a system integrated by default into the Linux kernel on all Rocky 9 systems, providing an extra layer of security and protection to the operating system and user files.
Traditionally, Linux security has been based on allowing users to decide who has access to their files and other resources they own. Consider, for example, a file located in the home directory of, and owned by, a particular user. That user can control the access permissions of that file in terms of whether other users on the system can read and write to or, in the case of a script or binary, execute the file. This type of security is called discretionary access control since resource access is left to the user’s discretion.
With SELinux, however, access is controlled by the system administrator and cannot be overridden by the user. This is called mandatory access control and is defined by the administrator using the SELinux policy. To continue the previous example, the owner of a file can only perform tasks on that file if the SELinux policy, defined either by default by the system or by the administrator, permits it.
The current status of SELinux on a Rocky 9 system may be identified using the sestatus tool as follows:
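# sestatus

The output will indicate, among other details, whether SELinux is enabled and whether it is currently running in enforcing or permissive mode.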
SELinux can be run in either enforcing or permissive mode. When enabled, enforcing mode denies all actions that are not permitted by SELinux policy. On the other hand, permissive mode allows actions that would generally have been denied to proceed but records the violation in a log file.
SELinux security is based on the concept of context labels. All resources on a system (including processes and files) are assigned SELinux context labels consisting of user, role, type, and optional security level. The SELinux context of files or folders, for example, may be viewed as follows:
$ ls -Z /home/demo
unconfined_u:object_r:user_home_t:s0 Desktop
unconfined_u:object_r:user_home_t:s0 Documents
Similarly, the ps command may be used to identify the context of a running process, in this case, the ls command:
When a process (such as the above ls command) attempts to access a file or folder, the SELinux system will check the policy to identify whether or not access is permitted. Now consider the context of the Samba service:
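# ps -eZ | grep smbd
system_u:system_r:smbd_t:s0     1327 ?   00:00:00 smbd

(Illustrative output only; the process ID will differ, and the smb service must be running for the process to appear.)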
SELinux implements security in several ways, the most common of which is called type enforcement. In basic terms, when a process attempts to perform a task on an object (for example, writing to a file), SELinux checks the context types of both the process and the object and verifies that the security policy allows the action to be taken. Suppose a process of type A, for example, attempts to write to a file of type B. In that case, it will only be permitted if SELinux policy states explicitly that a process of type A may perform a write operation to a file of type B. In SELinux enforcement, all actions are denied by default unless a rule specifically allows the action to be performed.
The issue with SELinux and Samba is that SELinux policy is not configured to allow processes of type smbd_t to perform actions on files of any type other than samba_share_t. For example, the /home/demo directory listed above will be inaccessible to the Samba service because it has a type of user_home_t. To make files or folders on the system accessible to the Samba service, the enforcement type of those specific resources must be changed to samba_share_t.
For this example, we will create the /sampleshare directory referenced previously in the smb.conf file and change the enforcement type to make it accessible to the Samba service. Begin by creating the directory as follows:
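# mkdir /sampleshare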
Next, check the current SELinux context on the directory:
$ ls -aZ /sampleshare/
unconfined_u:object_r:root_t:s0 .
In this instance, the context label of the folder has been assigned a type of root_t. To make the folder sharable by Samba, the enforcement type needs to be set to samba_share_t using the semanage tool as follows:
# semanage fcontext -a -t samba_share_t "/sampleshare(/.*)?"
Note the use of a wildcard in the semanage command to ensure that the type is applied to any sub-directories and files contained within the /sampleshare directory. Once added, the change needs to be applied using the restorecon command, making use of the -R flag to apply the change recursively through any sub-directories:
# restorecon -R -v /sampleshare
Relabeled /sampleshare from unconfined_u:object_r:default_t:s0 to unconfined_u:object_r:samba_share_t:s0
Once these changes have been made, the folder is configured to comply with SELinux policy for the smb process and is ready to be shared by Samba.
Creating a Samba User
Any user that requires access to a Samba shared resource must be configured as a Samba User and assigned a password. This task is achieved using the smbpasswd command-line tool. Consider, for example, that a user named demo is required to be able to access the /sampleshare directory of our Rocky 9 system from a Windows system. To fulfill this requirement, we must add demo as a Samba user as follows:
# smbpasswd -a demo
New SMB password:
Retype new SMB password:
Added user demo.
Now that we have completed the configuration of an elementary Samba server, it is time to test our configuration file and then start the Samba services.
Testing the smb.conf File
The settings in the smb.conf file may be checked for errors using the testparm command-line tool as follows:
# testparm
Load smb config files from /etc/samba/smb.conf
Loaded services file OK.
Weak crypto is allowed
Server role: ROLE_STANDALONE
Press enter to see a dump of your service definitions
# Global parameters
[global]
log file = /var/log/samba/%m.log
netbios name = LINUXSERVER
printcap name = cups
security = USER
wins support = Yes
idmap config * : backend = tdb
cups options = raw
[sampleshare]
comment = Example Samba share
guest ok = Yes
path = /sampleshare
read only = No
[homes]
browseable = No
comment = Home Directories
inherit acls = Yes
read only = No
valid users = %S %D%w%S
[printers]
browseable = No
comment = All Printers
create mask = 0600
path = /var/tmp
printable = Yes
.
.
Starting the Samba and NetBIOS Name Services
For a Rocky 9 server to operate within a Windows network, the Samba (SMB) and NetBIOS nameservice (NMB) services must be started. Optionally, also enable the services so that they start each time the system boots:
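# systemctl start smb nmb
# systemctl enable smb nmb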
Before attempting to connect from a Windows system, use the smbclient utility to verify that the share is configured:
# smbclient -U demo -L localhost
Enter WORKGROUP\demo’s password:
Sharename Type Comment
--------- ---- -------
sampleshare Disk Example Samba share
print$ Disk Printer Drivers
IPC$ IPC IPC Service (Samba 4.9.1)
demo Disk Home Directories
Accessing Samba Shares
Now that the Samba resources are configured and the services are running, it is time to access the shared resource from a Windows system. On a suitable Windows system on the same workgroup as the Rocky 9 system, open Windows Explorer and navigate to the Network panel. At this point, explorer should search the network and list any systems using the SMB protocol that it finds. The following figure illustrates a Rocky 9 system named LINUXSERVER located using Windows Explorer on a Windows system:
Figure 19-3
Double-clicking on the LINUXSERVER host will prompt for the name and password of a user with access privileges. In this case, it is the demo account that we configured using the smbpasswd tool:
Figure 19-4
Entering the username and password will result in the shared resources configured for that user appearing in the explorer window, including the previously configured /sampleshare resource:
Figure 19-5
Double-clicking on the /sampleshare shared resource will display a listing of the files and directories contained therein.
If you are unable to see the Linux system or have problems accessing the shared folder, try mapping the Samba share to a local Windows drive as follows:
Open Windows File Explorer, right-click on the Network entry in the left-hand panel and select Map network drive… from the resulting menu.
Select a drive letter from the Map Network Drive dialog before entering the path to the shared folder. For example:
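\\LINUXSERVER\sampleshare

(This assumes the NetBIOS name and share name configured earlier in the chapter; substitute the values used on your own system.)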
Enable the checkbox next to Connect using different credentials. If you do not want the drive to be mapped each time you log into the Windows system, turn off the corresponding checkbox:
Figure 19-6
With the settings entered, click the Finish button to map the drive, entering the username and password for the Samba user configured earlier in the chapter when prompted. After a short delay, the content of the Samba share will appear in a new File Explorer window.
Accessing Windows Shares from Rocky 9
As previously mentioned, Samba is a two-way street, allowing not only Windows systems to access files and printers hosted on a Rocky 9 system but also allowing the Rocky 9 system to access shared resources on Windows systems. This is achieved using the samba-client package, installed at this chapter’s start. If it is not currently installed, install it from a terminal window as follows:
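# dnf install samba-client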
Shared resources on a Windows system can be accessed from the Rocky Linux desktop using the Files application or from the command-line prompt using the smbclient and mount tools. The steps in this section assume that the Windows system has enabled appropriate network-sharing settings.
To access any shared resources on a Windows system using the GNOME desktop, launch the Files application and select the Other Locations option. This will display the screen shown in Figure 19-7 below, including an icon for the Windows Network (if one is detected):
Figure 19-7
Selecting the Windows Network option will display the Windows systems detected on the network and allow access to any shared resources.
Figure 19-8
Alternatively, the Connect to Server option may be used to connect to a specific system. Note that the name or IP address of the remote system must be prefixed by smb:// and may be followed by the path to a specific shared resource, for example:
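smb://WinServer/Documents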
Without a desktop environment, a remote Windows share may be mounted from the command line using the mount command and specifying the cifs filesystem type. The following command, for example, mounts a share named Documents located on a Windows system named WinServer at a local mount point named /winfiles:
# mount -t cifs //WinServer/Documents /winfiles -o user=demo
Summary
In this chapter, we have looked at how to configure a Rocky 9 system to act as both a Samba client and server, allowing the sharing of resources with Windows systems. Topics covered included the installation of Samba client and server packages and configuring Samba as a standalone server. In addition, the basic concepts of SELinux were introduced together with the steps to provide Samba access to a shared resource.
Rocky Linux 9 provides two mechanisms for sharing files and folders with other systems on a network. One approach is to use a technology called Samba. Samba is based on Microsoft Windows Folder Sharing and allows Linux systems to make folders accessible to Windows systems and access Windows-based folder shares from Linux. This approach can also be used to share folders between other Linux and UNIX-based systems if they have Samba support installed and configured. This is the most popular approach to sharing folders in heterogeneous network environments. Folder sharing using Samba is covered in Sharing Files between Rocky Linux 9 and Windows Systems with Samba.
Another option, explicitly targeted at sharing folders between Linux and UNIX-based systems, uses Network File System (NFS). NFS allows the file system on one Linux computer to be accessed over a network connection by another Linux or UNIX system. NFS was originally developed by Sun Microsystems (now part of Oracle Corporation) in the 1980s and remains the standard mechanism for sharing remote Linux/UNIX file systems.
NFS is very different from the Windows SMB resource-sharing technology used by Samba. This chapter will look at the network-based sharing of folders between Rocky 9 and other UNIX/Linux-based systems using NFS.
Ensuring NFS Services are running on Rocky Linux 9
The first task is to verify that the NFS services are installed and running on your Rocky 9 system. This can be achieved from the command line or the Cockpit interface.
Behind the scenes, NFS uses Remote Procedure Calls (RPC) to share filesystems over a network between different computers in the form of the rpcbind service. Begin by installing both rpcbind and the NFS service by running the following command from a terminal window:
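# dnf install rpcbind nfs-utils

Once installed, the services can be started and, optionally, enabled so that they launch at boot time:

# systemctl enable --now rpcbind nfs-server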
Configuring the Rocky Linux 9 Firewall to Allow NFS Traffic
Next, the firewall needs to be configured to allow NFS traffic. To achieve this, run the following firewall-cmd commands where <zone> is replaced by the appropriate zone for your firewall and system configuration:
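# firewall-cmd --zone=<zone> --permanent --add-service=nfs
# firewall-cmd --zone=<zone> --permanent --add-service=mountd
# firewall-cmd --zone=<zone> --permanent --add-service=rpc-bind
# firewall-cmd --reload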
Now that NFS is running and the firewall has been configured, we need to specify which parts of the Rocky 9 file system may be accessed by remote Linux or UNIX systems. These settings can be declared in the /etc/exports file, which must be modified to export the directories for remote access via NFS. The syntax for an export line in this file is as follows:
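<export> <host1>(<options>) <host2>(<options>)...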
In the above line, <export> is replaced by the directory to be exported, <host1> is the name or IP address of the system to which access is being granted, and <options> represents the restrictions that are to be imposed on that access (read-only, read-write, etc.). Multiple host and options entries may be placed on the same line if required. For example, the following line grants read-only permission to the /datafiles directory to a host with the IP address 192.168.2.38:
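/datafiles 192.168.2.38(ro)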
The use of wildcards is permitted to apply an export to multiple hosts. For example, the following line permits read-write access to /home/demo to all external hosts:
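/home/demo *(rw)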
Once configured, the table of exported file systems maintained by the NFS server needs to be updated with the latest /etc/exports settings using the exportfs command as follows:
# exportfs -a
It is also possible to view the current share settings from the command line using the exportfs tool:
# exportfs
The above command will generate the following output:
The shared folders may be accessed from a client system by mounting them manually from the command line. However, before attempting to mount a remote NFS folder, the nfs-utils package must first be installed on the client system:
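# dnf install nfs-utils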
To mount a remote folder from the command line, open a terminal window and create a directory where you would like the remote shared folder to be mounted:
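$ mkdir /home/demo/tmp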
Next, enter the command to mount the remote folder using either the IP address or hostname of the remote NFS server, for example:
$ sudo mount -t nfs 192.168.86.24:/tmp /home/demo/tmp
The remote /tmp folder will then be mounted on the local system. Once mounted, the /home/demo/tmp folder will contain the remote folder and all its contents.
Options may also be specified when mounting a remote NFS filesystem. The following command, for example, mounts the same folder but configures it to be read-only:
$ sudo mount -t nfs -o ro 192.168.86.24:/tmp /home/demo/tmp
Mounting an NFS Filesystem on System Startup
It is also possible to configure a Rocky 9 system to automatically mount a remote file system each time it starts up by editing the /etc/fstab file. When loaded into an editor, it will likely resemble the following:
To mount, for example, a folder with the path /tmp, which resides on a system with the IP address 192.168.86.24 in the local folder with the path /home/demo/tmp (note that this folder must already exist), add the following line to the /etc/fstab file:
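192.168.86.24:/tmp      /home/demo/tmp      nfs      defaults      0 0

(The defaults option is used here as an example; other NFS mount options may be specified as required.)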
Next time the system reboots, the /tmp folder on the remote system will be mounted on the local /home/demo/tmp mount point. All the files in the remote folder can then be accessed as if they reside on the local hard disk drive.
Unmounting an NFS Mount Point
Once a remote file system is mounted using NFS, it can be unmounted using the umount command with the local mount point as the command-line argument. The following command, for example, will unmount our example filesystem mount point:
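$ sudo umount /home/demo/tmp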
In addition to mounting a remote NFS file system on a client using the command line, it is also possible to perform mount operations from within the Cockpit web interface. Assuming that Cockpit has been installed and configured on the client system, log into the Cockpit interface from within a web browser and select the Storage option from the left-hand navigation panel. If the Storage option is not listed, the cockpit-storaged package will need to be installed:
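# dnf install cockpit-storaged
After installation, restart the Cockpit service so that the new module is loaded, for example:
# systemctl restart cockpit.socket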
Once the Cockpit service has restarted, log back into the Cockpit interface, at which point the Storage option should now be visible.
Once selected, the main storage page will include a section listing any currently mounted NFS file systems, as illustrated in Figure 18-1:
Figure 18-1
To mount a remote filesystem, click on the ‘+’ button highlighted above and enter information about the remote NFS server and file system share together with the local mount point and any necessary options into the resulting dialog before clicking on the Add button:
Figure 18-2
To modify, unmount or remove an NFS filesystem share, select the corresponding mount in the NFS Mounts list (Figure 18-1 above) to display the page shown in Figure 18-3 below:
Figure 18-3
Within this screen, perform tasks such as changing the server or mount points or unmounting the file system. For example, the Remove option unmounts the file system and deletes the entry from the /etc/fstab file so that it does not re-mount the next time the system reboots.
Summary
The Network File System (NFS) is a client/server-based system, originally developed by Sun Microsystems, which provides a way for Linux and Unix systems to share filesystems over a network. NFS allows a client system to access and (subject to permissions) modify files located on a remote server as though those files are stored on a local filesystem. This chapter has provided an overview of NFS and outlined the options for configuring client and server systems using the command line or the Cockpit web interface.
In the previous chapter, we looked at how to display the entire Rocky Linux 9 desktop on a remote computer. While this works well if you need to display the entire desktop remotely, it could be considered overkill if you only want to display a single application. Therefore, this chapter will look at displaying individual applications on a remote system.
Requirements for Remotely Displaying Rocky Linux 9 Applications
There are some prerequisites to running an application on one Rocky 9 system and displaying it on another. First, the system on which the application is to be displayed must be running an X server. If the system is a Linux or UNIX-based system with a desktop environment running, then this is no problem. However, if the system is running Windows or macOS, you must install an X server on it before you can display applications from a remote system. Several commercial and free Windows-based X servers are available for this purpose, and a web search should provide you with a list of options.
Second, the system on which the application is being run (as opposed to the system on which the application is to be displayed) must be configured to allow SSH access. Details on configuring SSH on a Rocky 9 system can be found in the chapter Configuring SSH Key-based Authentication on Rocky Linux 9. This system must also run the X Window System from X.org instead of Wayland. To enable the X.org system, edit the /etc/gdm/custom.conf file and uncomment the WaylandEnable line as follows and restart the system:
# Uncomment the line below to force the login screen to use Xorg
WaylandEnable=false
Finally, SSH must be configured to allow X11 forwarding. This is achieved by adding the following directive to the SSH configuration on the system from which forwarding is to occur. By default on Rocky 9, the /etc/ssh/sshd_config file contains a directive to include all of the configuration files contained in the /etc/ssh/sshd_config.d directory:
Include /etc/ssh/sshd_config.d/*.conf
A file named 50-redhat.conf will have been created on a newly installed system in the /etc/ssh/sshd_config.d folder. Edit this file and ensure that the X11Forwarding property is enabled as follows:
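X11Forwarding yes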
Once the above requirements are met, it should be possible to display an X-based desktop application remotely.
Displaying a Rocky Linux 9 Application Remotely
The first step in remotely displaying an application is to move to the system where the application is to be displayed. At this system, establish an SSH connection to the remote system so that you have a command prompt. This can be achieved using the ssh command. When using the ssh command, we need to use the -X flag to tell it that we plan to tunnel X11 traffic through the connection:
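$ ssh -X user@hostname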
In the above example, user is the user name to use to log into the remote system, and hostname is the hostname or IP address of the remote system. Enter your password at the login prompt and, once logged in, run the following command to see the DISPLAY setting:
$ echo $DISPLAY
The command should output something similar to the following:
localhost:10.0
To display an application, run it from the command prompt. For example:
$ gedit
When executed, the above command should run the gedit tool on the remote system but display the user interface on the local system.
Trusted X11 Forwarding
If the /etc/ssh/sshd_config.d/50-redhat.conf file on the remote system contains the following line, then it is possible to use trusted X11 forwarding:
Trusted X11 forwarding is slightly faster than untrusted forwarding but is less secure since it does not engage the X11 security controls. The -Y flag is needed when using trusted X11 forwarding:
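$ ssh -Y user@hostname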
To display Rocky 9-based apps on Windows, an SSH client and an X server must be installed on the Windows system. The subject of installing and using the PuTTY client on Windows was covered earlier in the book in the Configuring SSH Key-based Authentication on Rocky Linux 9 chapter.
Refer to this chapter if you have not already installed PuTTY on your Windows system.
In terms of the X server, several options are available, though a popular choice appears to be VcXsrv, which is available for free from the following URL:
Once the VcXsrv X server has been installed, an application named XLaunch will appear on the desktop and in the start menu. Start XLaunch and select a display option (the most flexible being the Multiple windows option which allows each client app to appear in its own window):
Figure 17-1
Click the Next button to proceed through the remaining screens, accepting the default configuration settings. On the final screen, click the Finish button to start the X server. If the Windows Defender dialog appears, click the button to allow access to your chosen networks.
Once running, XLaunch will appear in the taskbar and can be exited by right-clicking on the icon and selecting the Exit… menu option:
Figure 17-2
With the X server installed and running, launch PuTTY and either enter the connection information for the remote host or load a previously saved session profile. Before establishing the connection, however, X11 forwarding needs to be enabled. Therefore, within the PuTTY main window, scroll down the options in the left-hand panel, unfold the SSH section, and select the X11 option, as shown in Figure 17-3:
Figure 17-3
Turn on the Enable X11 forwarding checkbox highlighted in Figure 17-4, return to the sessions screen, and open the connection (saving the session beforehand if you plan to use it again):
Figure 17-4
Log into the Rocky 9 system within the PuTTY session window and run a desktop app. After a short delay, the app will appear on the Windows desktop in its own window. Any dialogs that the app opens will also appear in separate windows, just as they would on the Rocky 9 GNOME desktop. Figure 17-5, for example, shows the Rocky 9 nm-connection-editor tool displayed on a Windows 11 system:
Figure 17-5
Summary
For situations where remote access to individual Rocky Linux 9 desktop applications is required as opposed to the entire GNOME desktop, X11 forwarding provides a lightweight solution to remotely displaying graphical applications. The system on which the applications are to appear must be running an X Window System-based desktop environment (such as GNOME) or have an X server installed and running. Once X11 forwarding has been enabled on the remote server and a secure SSH connection established from the local system using the X11 forwarding option, most applications can be displayed remotely on the local X server.
Rocky Linux 9 can be configured to provide remote access to the graphical desktop environment over a network or internet connection. Although not enabled by default, displaying and accessing a Rocky 9 desktop from a system anywhere else on a network or the internet is relatively straightforward. This can be achieved regardless of whether that system runs Linux, Windows, or macOS. There are even apps available for Android and iOS that will allow you to access your Rocky 9 desktop from just about anywhere that a data signal is available.
Remote desktop access can be helpful in many scenarios. For example, it enables you or another person to view and interact with your Rocky 9 desktop environment from another computer system on the same network or over the internet. This is useful if you need to work on your computer when you are away from your desk, such as while traveling. It is also helpful when a coworker or IT support technician needs access to your desktop to resolve a problem.
When the Rocky 9 system runs on a cloud-based server, it also allows access to the desktop environment as an alternative to performing administrative tasks using the command-line prompt or Cockpit web console.
The Rocky 9 remote desktop functionality is based on a technology known as Virtual Network Computing (VNC). This chapter will cover the key aspects of configuring and using remote desktops within Rocky 9.
Secure and Insecure Remote Desktop Access
In this chapter, we will cover both secure and insecure remote desktop access methods. Assuming you are accessing one system from another within a secure internal network, using the insecure access method is generally safe. If, on the other hand, you plan to access your desktop remotely over any public network, you must use the secure method of access to avoid your system and data being compromised.
Installing the GNOME Desktop Environment
It is, of course, only possible to access the desktop environment if the desktop itself has been installed. If, for example, the system was initially configured as a server, it is unlikely that the desktop packages were installed. The easiest way to install the packages necessary to run the GNOME desktop is to perform a group install. The key to installing groups of packages to enable a specific feature is knowing the group’s name. At the time of writing, there are two groups for installing the desktop environment on Rocky 9: “Server with GUI” and “Workstation”. As the group names tend to change from one Rocky Linux release to another, it is helpful to know that the list of groups that are either installed or available to be installed can be obtained using the dnf utility as follows:
# dnf grouplist
Available Environment Groups:
Server with GUI
Minimal Install
Workstation
Custom Operating System
Virtualization Host
Installed Environment Groups:
Server
Installed Groups:
Container Management
Headless Management
Available Groups:
Legacy UNIX Compatibility
Graphical Administration Tools
Smart Card Support
RPM Development Tools
.NET Development
System Tools
Development Tools
Console Internet Tools
Security Tools
Network Servers
Scientific Support
The Workstation environment group is listed as available (and therefore not already installed) in the above example. To find out more information about the contents of a group before installation, use the following command:
# dnf groupinfo workstation
Environment Group: Workstation
Description: Workstation is a user-friendly desktop system for laptops and PCs.
Mandatory Groups:
Common NetworkManager submodules
Core
Fonts
GNOME
Guest Desktop Agents
Hardware Support
Internet Browser
Multimedia
Printing Client
Standard
Workstation product core
base-x
Optional Groups:
Backup Client
GNOME Applications
Headless Management
Internet Applications
Office Suite and Productivity
Remote Desktop Clients
Smart Card Support
Having confirmed that this is the correct group, it can be installed as follows:
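# dnf group install workstation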
Once installed, and assuming that the system has a display attached, the desktop can be launched using the following startx command:
$ startx
If, on the other hand, the system is a server with no directly connected display, the only way to run and access the desktop will be to configure VNC support on the system.
Installing VNC on Rocky Linux 9
Access to a remote desktop requires a VNC server installed on the remote system, a VNC viewer on the system from which access is being established, and, optionally, a secure SSH connection. While several VNC server and viewer implementations are available, Red Hat has standardized on TigerVNC, which provides both server and viewer components for Linux-based operating systems. VNC viewer clients for non-Linux platforms include RealVNC and TightVNC. To install the TigerVNC server package on Rocky 9, run the following command:
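# dnf install tigervnc-server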
Once the server has been installed, the system must be configured to run one or more VNC services and open the appropriate ports on the firewall.
Configuring the VNC Server
With the VNC server packages installed, the next step is to configure the server. The first task is to specify a password for the user who will be accessing the remote desktop environment. While logged in as root, switch to that user's account and execute the vncpasswd command (where the user name is assumed to be demo):
# su - demo
[demo@demoserver ~]$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
A view-only password is not used
[demo@demoserver ~]$ exit
#
Next, a VNC server configuration file named vncserver@:1.service needs to be created in the /etc/systemd/system directory. The content of this file should read as follows, where all instances of <USER> are replaced with the username referenced when the VNC password was set:
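(The listing below is a representative example based on the template unit file supplied with TigerVNC; the geometry setting is an assumption and may be adjusted or omitted.)
[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
User=<USER>
PIDFile=/home/<USER>/.vnc/%H%i.pid
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/usr/bin/vncserver %i -geometry 1280x1024
ExecStop=/usr/bin/vncserver -kill %i

[Install]
WantedBy=multi-user.target
After saving the file, reload systemd, enable and start the new service, and open port 5901 on the firewall, for example:
# systemctl daemon-reload
# systemctl enable --now vncserver@:1.service
# firewall-cmd --permanent --add-port=5901/tcp
# firewall-cmd --reload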
Note that :1 is included in the service name to indicate that this is the service for VNC server display number 1. This matches port 5901, which was previously opened in the firewall. Check that the service has started successfully as follows:
# systemctl status vncserver@:1.service
If the service fails to start, run the journalctl command to check for error messages:
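# journalctl -xe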
Also, try again after rebooting the system. If the service continues to fail, the VNC server can be started manually by logging in as the designated user and running the vncserver command:
$ vncserver :1
Connecting to a VNC Server
VNC viewer implementations are available for a wide range of operating systems. Therefore, a quick internet search will likely provide numerous links containing details on obtaining and installing this tool on your chosen platform.
From the desktop of a Linux system on which a VNC viewer such as TigerVNC is installed, a remote desktop connection can be established as follows from a Terminal window:
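$ vncviewer <hostname>:<display number>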
In the above example, <hostname> is either the hostname or IP address of the remote system, and <display number> is the display number of the VNC server desktop, for example:
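$ vncviewer 192.168.1.115:1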
Alternatively, run the command without any options to be prompted for the details of the remote server:
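$ vncviewer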
Figure 16-1
Enter the hostname or IP address followed by the display number (for example, 192.168.1.115:1) into the VNC server field and click on the Connect button. The viewer will prompt for the user’s VNC password to complete the connection, at which point a new window containing the remote desktop will appear.
This section assumed that the remote desktop was accessed from a Linux or UNIX system; the same steps apply to most other operating systems.
Connecting to a remote VNC server using the steps in this section results in an insecure, unencrypted connection between the client and server. This means the data transmitted during the remote session is vulnerable to interception. Therefore, a few extra steps are necessary to establish a secure and encrypted connection.
Establishing a Secure Remote Desktop Session
The remote desktop configurations explored in this chapter are considered insecure because no encryption is used. This is acceptable when the remote connection does not extend outside an internal network protected by a firewall. However, a more secure option is needed when a remote session is required over an internet connection. This is achieved by tunneling the remote desktop through a secure shell (SSH) connection. This section will cover how to do this on Linux, UNIX, and macOS client systems.
The SSH server is typically installed and activated by default on Rocky 9 systems. If this is not the case on your system, refer to the chapter Configuring SSH Key-based Authentication on Rocky Linux 9. Assuming the SSH server is installed and active, it is time to move to the other system. At the other system, log in to the remote system using the following command, which will establish the secure tunnel between the two systems:
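$ ssh -l <username> -L 5901:localhost:5901 <remotehost>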
In the above example, <username> references the user account on the remote system for which VNC access has been configured, and <remotehost> is either the hostname or IP address of the remote system, for example:
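For example, using the demo account and the server address from the earlier examples:
$ ssh -l demo -L 5901:localhost:5901 192.168.1.115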
When prompted, log in using the account password. With the secure connection established, it is time to launch vncviewer to use the secure tunnel. Leaving the SSH session running in the other terminal window, launch another terminal and enter the following command:
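$ vncviewer localhost:5901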
The vncviewer session will prompt for a password if one is required, and then launch the VNC viewer providing secure access to your desktop environment.
Although the connection is now secure and encrypted, the VNC viewer will most likely still report that the connection is insecure. Figure 16-2, for example, shows the warning dialog displayed by the RealVNC viewer running on a macOS system:
Figure 16-2
Unfortunately, although the connection is now secure, the VNC viewer software has no way of knowing this and consequently continues to issue warnings. However, rest assured that as long as the SSH tunnel is being used, the connection is indeed secure.
In the above example, we left the SSH tunnel session running in a terminal window. If you would prefer to run the session in the background, this can be achieved by using the -f and -N flags when initiating the connection:
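$ ssh -l <username> -f -N -L 5901:localhost:5901 <remotehost>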
The above command will prompt for a password for the remote server and then establish the connection in the background, leaving the terminal window available for other tasks.
If you are connecting to the remote desktop from outside the firewall, keep in mind that the IP address for the SSH connection will be the external IP address provided by your ISP or cloud hosting provider, not the LAN IP address of the remote system (since this IP address is not visible to those outside the firewall). Therefore, you will also need to configure your firewall to forward port 22 (for the SSH connection) to the IP address of the system running the desktop. It is not necessary to forward port 5900. Steps to perform port forwarding differ between firewalls, so refer to the documentation for your firewall, router, or wireless base station for details specific to your configuration.
Establishing a Secure Tunnel on Windows using PuTTY
A similar approach is taken to establishing a secure desktop session from a Windows system to a Rocky 9 server. Assuming you already have a VNC client such as TightVNC installed, the remaining requirement is a Windows SSH client (in this case, PuTTY).
Once PuTTY is downloaded and installed, the first step is establishing a secure connection between the Windows system and the remote Rocky 9 system with appropriate tunneling configured. When launched, PuTTY displays the following screen:
Figure 16-3
Enter the IP address or hostname of the remote host (or the external IP address of the gateway if you are connecting from outside the firewall). The next step is to set up the tunnel. Click on the + next to SSH in the Category tree on the left-hand side of the dialog and select Tunnels. The screen should subsequently appear as follows:
Figure 16-4
Enter 5901 as the Source port and localhost:5901 as the Destination, and click the Add button. Finally, return to the main screen by clicking on the Session category. Enter a name for the session in the Saved Sessions text field and press Save. Click on Open to establish the connection. A terminal window will appear with the login prompt from the remote system. Enter the appropriate user login and password credentials.
The SSH connection is now established. Launch the TightVNC viewer, enter localhost:5901 in the VNC Server text field, and click Connect. The viewer will establish the connection, prompt for the password, and then display the desktop. You are now accessing the remote desktop of a Linux system from Windows over a secure SSH tunnel connection.
Shutting Down a Desktop Session
To shut down a VNC Server hosted desktop session, use the -kill command-line option and the number of the desktop to be terminated. For example, to kill desktop :1:
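$ vncserver -kill :1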
With so much happening in the background, VNC can sometimes seem opaque, particularly when problems arise, and the server fails to start or connect, resulting in error messages. There are, however, some techniques for tracking down and resolving VNC problems:
If the VNC service fails to start, check the systemctl status of the service and check for error messages:
# systemctl status vncserver@:1.service
For more detailed information, check the systemd journal by running the journalctl command:
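# journalctl -u vncserver@:1.service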
Check the output and log file for errors that may help identify the problem. Then, if the server starts successfully, try connecting again with a VNC viewer.
If the VNC server appears to be running, but attempts to connect from a viewer fail, it may be worth checking that the correct firewall ports are open. Begin by identifying the default zone as follows:
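# firewall-cmd --get-default-zone
The ports open in that zone can then be listed, and port 5901 added if it is missing, for example:
# firewall-cmd --zone=<zone> --list-ports
# firewall-cmd --permanent --zone=<zone> --add-port=5901/tcp
# firewall-cmd --reload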
Summary
Remote access to the GNOME desktop environment of a Rocky 9 system can be enabled by using Virtual Network Computing (VNC). Comprising the VNC server running on the remote server and a corresponding client on the local host, VNC allows remote access to multiple desktop instances running on the server.
When the VNC connection is being used over a public connection, SSH tunneling is recommended to ensure that the communication between the client and server is encrypted and secure.
When a Rocky Linux 9 system is first installed, it is configured by default to allow remote command-line access via Secure Shell (SSH) connections. SSH provides password-protected and encrypted access to the system for the root account and any other users added during the installation phase. However, this level of security is inadequate and should be upgraded to SSH key-based authentication as soon as possible.
This chapter will outline the steps to increase the security of a Rocky 9 system by implementing key-based SSH authentication.
An Overview of Secure Shell (SSH)
SSH allows secure remote access to systems in order to gain shell access and transfer files and data. As covered in the previous chapter, SSH can also provide a secure tunnel through which remote access to the GNOME desktop can be achieved over a network connection.
A basic SSH configuration consists of a client (used on the computer establishing the connection) and a server (running on the system to which the connection is to be established). A user might, for example, use an SSH client running on a Linux, Windows, or macOS system to connect to the SSH server running on a Rocky 9 system to gain access to a shell command-line prompt or to perform file transfers. All communications between the client and server, including the password entered to gain access, are encrypted to prevent outside parties from intercepting the data.
The inherent weakness in a basic SSH implementation is that it depends entirely on the strength of the passwords assigned to the accounts on the system. If a malicious party is able to identify the password for an account (either through guesswork, deception, or a brute force attack), the system becomes vulnerable. This weakness can be addressed by implementing SSH key-based authentication.
SSH Key-based Authentication
SSH key-based authentication uses asymmetric public key encryption to add an extra layer of security to remote system access. The concept of public key encryption was devised in 1975 by Whitfield Diffie and Martin Hellman and is based on using a pair of private and public keys. In a public key encryption system, the public key is used to encrypt data that can only be decrypted by the owner of the private key.
In the case of SSH key-based authentication, the private key is held on the host on which the SSH client is located, while the corresponding public key resides on the system on which the SSH server is running. It is, therefore, vital to protect the private key, since anyone who obtains it will be able to log into the remote system. As an added layer of protection, the private key may also be encrypted and protected by a passphrase, which must be entered each time a connection is established to the server.
Setting Up Key-based Authentication
There are four steps to setting up key-based SSH authentication, which can be summarized as follows:
Generate the public and private keys.
Install the public key on the server.
Test authentication.
Disable password-based authentication on the server.
The remainder of this chapter will outline these steps in greater detail for Linux, macOS, and Windows-based client operating systems.
Installing and Starting the SSH Service
If the SSH server is not already installed and running on the system, it can be added using the following commands:
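# dnf install openssh-server
# systemctl enable --now sshd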
SSH Key-based Authentication from Linux and macOS Clients
The first step in setting up SSH key-based authentication is to generate the key pairs on the client system. If the client system is running Linux or macOS, this is achieved using the ssh-keygen utility:
# ssh-keygen
This command will result in output similar to the following:
Generating public/private rsa key pair.
Enter file in which to save the key (/home/<username>/.ssh/id_rsa):
Press the Enter key to accept the default location for the key files. This will place two files in the .ssh sub-directory of the current user’s home directory. The private key will be stored in a file named id_rsa while the public key will reside in the file named id_rsa.pub.
Next, ssh-keygen will prompt for a passphrase with which to protect the private key. If a passphrase is provided, the private key will be encrypted on the local disk, and the passphrase will be required to access the remote system. Therefore, for better security, the use of a passphrase is recommended.
Enter passphrase (empty for no passphrase):
Finally, the ssh-keygen tool will generate the following output indicating that the keys have been generated:
Your identification has been saved in /home/neil/.ssh/id_rsa.
Your public key has been saved in /home/neil/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FOLGWEEGFIjWnCT5wtTOv5VK4hdimzWghZizUEMYbfo <username>@<hostname>
The key’s randomart image is:
+---[RSA 2048]----+
|.BB+=+*.. |
|o+B= * . . |
|===.. + . |
|*+ * . . |
|.++ o S |
|..E+ * o |
| o B * |
| + + |
| . |
+----[SHA256]-----+
The next step is to install the public key onto the remote server system. This can be achieved using the ssh-copy-id utility as follows:
$ ssh-copy-id [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/neil/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]’s password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.
Once the key is installed, test that the authentication works by attempting a remote login using the ssh client:
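$ ssh [email protected]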
If the private key is encrypted and protected with a passphrase, enter the phrase when prompted to complete the authentication and establish remote access to the Rocky 9 system:
Enter passphrase for key '/home/neil/.ssh/id_rsa':
Last login: Fri Mar 31 14:29:28 2023 from 192.168.86.21
[neil@demosystem02 ~]$
Repeat these steps for any other accounts on the server for which remote access is required. If access is also required from other client systems, copy the id_rsa private key file to the .ssh subdirectory of your home folder on the other systems.
As currently configured, access to the remote system can still be achieved using less secure password authentication. Once you have verified that key-based authentication works, password authentication will need to be disabled on the system. To understand how to change this setting, begin by opening the /etc/ssh/sshd_config file and locating the following line:
Include /etc/ssh/sshd_config.d/*.conf
This tells us that sshd configuration settings are controlled by files in the /etc/ssh/sshd_config.d directory. These filenames must be prefixed with a number and have a .conf filename extension, for example:
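50-redhat.conf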
The number prefix designates the priority assigned to the file relative to the other files in the folder, with 01 being the highest priority. This ensures that if a configuration file contains a setting conflicting with another file, the one with the highest priority will always take precedence. Within the /etc/ssh/sshd_config.d folder, create a new file named 02-nopasswordlogin.conf with content that reads as follows:
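PasswordAuthentication no
After saving the file, reload the SSH server so that the new setting takes effect, for example:
# systemctl reload sshd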
From this point on, it will only be possible to remotely access the system using SSH key-based authentication, and when doing so, you won’t be required to enter a password.
Managing Multiple Keys
It is common for multiple private keys to reside on a client system, each providing access to a different server. As a result, several options exist for selecting a specific key when establishing a connection. It is possible, for example, to specify the private key file to be used when launching the ssh client as follows:
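$ ssh -l <username> -i <path to private key file> <hostname>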
Alternatively, the SSH client user configuration file may associate key files with servers. The configuration file is named config, must reside in the .ssh directory of the user’s home directory, and can be used to configure a wide range of options, including the private key file, the default port to use when connecting, the default user name, and an abbreviated nickname via which to reference the server. The following example config file defines different key files for two servers and allows them to be referenced by the nicknames home and work. In the case of the work system, the file also specifies the user name to be used when authenticating:
Host work
HostName 35.194.18.119
IdentityFile ~/.ssh/id_work
User neilsmyth
Host home
HostName 192.168.0.21
IdentityFile ~/.ssh/id_home
Before setting up the configuration file, the user would have used the following command to connect to the work system:
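$ ssh -l neilsmyth -i ~/.ssh/id_work 35.194.18.119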
Now, however, the command can be shortened as follows:
$ ssh work
A full listing of configuration file options can be found by running the following command:
$ man ssh_config
SSH Key-based Authentication from Windows Clients
Recent releases of Windows include a subset of the OpenSSH implementation used by most Linux and macOS systems as part of Windows PowerShell. This allows SSH key-based authentication to be set up from a Windows client using similar steps to those outlined above for Linux and macOS.
On Windows, search for Windows PowerShell and select it from the results. Once running, the PowerShell window will appear as shown in Figure 15-1:
Figure 15-1
If you already have a private key from another client system, copy the id_rsa file to a folder named .ssh on the Windows system. Once the file is in place, test the authentication within the PowerShell window as follows:
PS C:\Users\neil> ssh -l neil 192.168.1.101
Enter passphrase for key 'C:\Users\neil\.ssh\id_rsa':
Enter the passphrase when prompted and complete the authentication process.
If the private key does not yet exist, generate a new private and public key pair within the PowerShell window using the ssh-keygen utility using the same steps outlined for Linux and macOS. Once the keys have been generated, they will again be located in the .ssh directory of the current user’s home folder, and the public key file id_rsa.pub will need to be installed on the remote Rocky 9 system. Unfortunately, Windows PowerShell does not include the ssh-copy-id utility, so this task must be performed manually.
Within the PowerShell window, change directory into the .ssh sub-directory and display the content of the public key id_rsa.pub file:
PS C:\Users\neil> cd .ssh
PS C:\Users\neil\.ssh> type id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFgx1vzu59lll6/uQw7FbmKVsQ3fzLz9MW1fgo4sdsxXp81wCHNAlqcjx1Pgr9BJPXWUMInQOi7BQ5I+vc2xQ2AS0kMq3ZH9ybWuQe/U2GjueXZd0FKrEXrT55wM36Rm6Ii3roUCoGCzGR8mn95JvRB3VtCyDdzTWSi8JBpK5gV5oOxNTNPsewlLzouBlCT1qW3CKwEiIwu8S9MTL7m3nrcaNeLewTTHevvHw4QDwzFQ+B0PDg96fzsYoTXVhzyHSWyo6H0gqrft7aK+gILBtEIhWTkSVEMAzy1piKtCr1IYTmVK6engv0aoGtMUq6FnOeGp5FjvKkF4aQkh1QR28r neil@DESKTOP-S8P8D3N
Highlight the file’s content and copy it using the Ctrl-C keyboard shortcut.
Remaining within the PowerShell window, log into the remote system using password authentication:
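PS C:\Users\neil> ssh -l neil 192.168.1.100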
Once signed in, check if the .ssh sub-directory exists. If it does not, create it as follows:
$ mkdir .ssh
Change directory into .ssh and check whether a file named authorized_keys already exists. If it does not, create it and paste the content of the public key file from the Windows system into it.
If the authorized_keys file already exists, it likely contains other keys. If this is the case, edit the file and paste the new public key at the end of the file. The following file, for example, contains two keys:
Once the public key is installed on the server, test the authentication by logging in to the server from within the PowerShell window, for example:
PS C:\Users\neil\.ssh> ssh -l neil 192.168.1.100
Enter passphrase for key 'C:\Users\neil\.ssh\id_rsa':
When key-based authentication has been set up for all the accounts and verified, disable password authentication on the Rocky 9 system as outlined at the end of the previous section.
SSH Key-based Authentication using PuTTY
For Windows systems that do not have OpenSSH available or as a more flexible alternative to using PowerShell, the PuTTY tool is a widely used alternative. The first step in using PuTTY is downloading and installing it on any Windows system that needs an SSH client. PuTTY is a free utility and can be downloaded using the following link:
Download the Windows installer executable that matches your Windows system (32-bit and 64-bit versions are available), then execute the installer to complete the installation.
If a private key already exists on another system, create the .ssh folder in the current user’s home folder and copy the private id_rsa key into it.
Next, the private key file must be converted to a PuTTY private key format file using the PuTTYgen tool. Locate this utility by typing “PuTTY Key Generator” into the search bar of the Windows Start menu and launch it:
Figure 15-2
Once launched, click on the Load button located in the Actions section and navigate to the private key file previously copied to the .ssh folder (note that it may be necessary to change the file type filter to All Files (*.*) for the key file to be visible). Once located, select the file and load it into PuTTYgen. When prompted, enter the passphrase used initially to encrypt the file. Once the private key has been imported, save it as a PuTTY key file by clicking the Save Private Key button. For consistency, save the key file to the .ssh folder but give it a different name to differentiate it from the original key file.
Launch PuTTY from the Start menu and enter the IP address or hostname of the remote server into the main screen before selecting the Connection -> SSH -> Auth -> Credentials category in the left-hand panel, as highlighted in Figure 15-3:
Figure 15-3
Click the Browse button next to the Private key for authentication field and navigate to and select the previously saved PuTTY private key file. Then, optionally, scroll to the top of the left-hand panel, select the Session entry, and enter a name for the session in the Saved Sessions field before clicking on the Save button. This will save the session configuration for future use without reentering the settings each time.
Finally, click on the Open button to establish the connection to the remote server, entering the user name and passphrase when prompted to do so to complete the authentication.
Generating a Private Key with PuTTYgen
The previous section explored using existing private and public keys when working with PuTTY. If keys do not exist, they can be created using the PuTTYgen tool, which is included in the main PuTTY installation.
To create new keys, launch PuTTYgen and click on the Generate button highlighted in Figure 15-4:
Figure 15-4
Move the mouse pointer to generate random data as instructed, then enter an optional passphrase to encrypt the private key. Once the keys have been generated, save the files to suitable locations using the Save public key and Save private key buttons. As outlined in the previous section, the private key can be used with PuTTY. To install the public key on the remote server, use the steps covered in the earlier section on SSH within PowerShell on Windows.
Summary
Any remote access to a Rocky 9 system must be implemented in a way that provides a high level of security. By default, SSH allows remote system access using password-based authentication. However, this leaves the system vulnerable to anyone who can guess a password or find out the password through other means. For this reason, key-based authentication is recommended to protect system access. Key-based authentication uses public key encryption involving public and private keys. When implemented, users can only connect to a server if they are using a client with a private key that matches a public key on the server. As an added layer of security, the private key may also be encrypted and password protected. Once key-based encryption has been implemented, the server system is configured to disable support for the less secure password-based authentication.
This chapter has provided an overview of SSH key-based authentication and outlined the steps involved in generating keys and configuring clients on macOS, Linux, and Windows, as well as installing and managing public keys on a Rocky 9 server.