Using NFS to Share Ubuntu 20.04 Files with Remote Systems

Ubuntu provides two mechanisms for sharing files and folders with other systems on a network. One approach is to use a technology called Samba. Samba is an implementation of the SMB protocol that underlies Microsoft Windows folder sharing, and it allows Linux systems both to make folders accessible to Windows systems and to access Windows-based folder shares from Linux. This approach can also be used to share folders between other Linux and UNIX-based systems as long as they too have Samba support installed and configured. This is by far the most popular approach to sharing folders in heterogeneous network environments. The topic of folder sharing using Samba is covered in “Sharing Files between Ubuntu and Windows Systems with Samba”.

Another option, which is targeted specifically at sharing folders between Linux and UNIX-based systems, uses a technology called Network File System (NFS). NFS allows the file system on one Linux computer to be accessed over a network connection by another Linux or UNIX system. NFS was originally developed by Sun Microsystems (now part of Oracle Corporation) in the 1980s and remains the standard mechanism for sharing remote Linux/UNIX file systems to this day.

NFS is very different from the Windows SMB resource sharing technology used by Samba. In this chapter we will be looking at network-based sharing of folders between Ubuntu and other UNIX/Linux-based systems using NFS.

1.1  Ensuring NFS Services are running on Ubuntu

The first task is to verify that the NFS services are installed and running on your Ubuntu system. This can be achieved either from the command-line, or using the Cockpit interface.

Begin by installing the NFS service by running the following command from a terminal window:

# apt install nfs-kernel-server

Next, configure the service to automatically start at boot time:

# systemctl enable nfs-kernel-server

Once the service has been enabled, start it as follows:

# systemctl start nfs-kernel-server

1.2  Configuring the Ubuntu Firewall to Allow NFS Traffic

Next, the firewall needs to be configured to allow NFS traffic.

If the Uncomplicated Firewall is enabled, run the following command to add a rule to allow NFS traffic:

# ufw allow nfs

If, on the other hand, you are using firewalld, run the following firewall-cmd commands where <zone> is replaced by the appropriate zone for your firewall and system configuration:

# firewall-cmd --zone=<zone> --permanent --add-service=mountd
# firewall-cmd --zone=<zone> --permanent --add-service=nfs
# firewall-cmd --zone=<zone> --permanent --add-service=rpc-bind
# firewall-cmd --reload
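
Since the firewall-cmd invocations above differ only in the service name, a small loop can generate them for review before they are run. This is a minimal sketch; the "public" zone is an assumed placeholder, and the commands are echoed rather than executed so the snippet is safe to run anywhere:

```shell
# Generate (but do not run) the firewall-cmd commands for each
# NFS-related firewalld service. "public" is a placeholder zone.
zone=public
for svc in mountd nfs rpc-bind; do
  echo "firewall-cmd --zone=$zone --permanent --add-service=$svc"
done
```

Replacing echo with direct execution would apply the rules, after which the firewall still needs to be reloaded as shown above.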

1.3  Specifying the Folders to be Shared

Now that NFS is running and the firewall has been configured, we need to specify which parts of the Ubuntu file system may be accessed by remote Linux or UNIX systems. These settings can be declared in the /etc/exports file, which will need to be modified to export the directories for remote access via NFS. The syntax for an export line in this file is as follows:

<export> <host1>(<options>) <host2>(<options>)...

In the above line, <export> is replaced by the directory to be exported, <host1> is the name or IP address of the system to which access is being granted, and <options> represents the restrictions that are to be imposed on that access (read-only, read-write, etc.). Multiple host and options entries may be placed on the same line if required. For example, the following line grants read-only permission on the /datafiles directory to a host with the IP address 192.168.2.38:

/datafiles 192.168.2.38(ro,no_subtree_check)

The use of wildcards is permitted in order to apply an export to multiple hosts. For example, the following line permits read-write access to /home/demo for all external hosts:

/home/demo *(rw)
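
Host entries with different options may also be combined on a single export line, following the syntax shown earlier. The line below is a hypothetical illustration (the /projects path, host address and subnet are invented for the example): one host receives read-write access while the remainder of the subnet is limited to read-only:

```
/projects 192.168.2.21(rw,sync,no_subtree_check) 192.168.2.0/24(ro,sync,no_subtree_check)
```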

A full list of options supported by the exports file may be found by reading the exports man page:

# man exports

For the purposes of this chapter, we will configure the /etc/exports file as follows:

/tmp       *(rw,sync,no_subtree_check)
/vol1      192.168.2.21(ro,sync,no_subtree_check)

Once configured, the table of exported file systems maintained by the NFS server needs to be updated with the latest /etc/exports settings using the exportfs command as follows:

# exportfs -a

It is also possible to view the current share settings from the command-line using the exportfs tool:

# exportfs

The above command will generate the following output:

/tmp            <world>
/vol1           192.168.2.21


1.4  Accessing Shared Ubuntu Folders

The shared folders may be accessed from a client system by mounting them manually from the command-line. Before attempting to mount a remote NFS folder, the nfs-common package should first be installed on the client system:

# apt install nfs-common

To mount a remote folder from the command-line, open a terminal window and create a directory where you would like the remote shared folder to be mounted:

# mkdir /home/demo/tmp

Next enter the command to mount the remote folder using either the IP address or hostname of the remote NFS server, for example:

# mount -t nfs 192.168.1.115:/tmp /home/demo/tmp

The remote /tmp folder will then be mounted on the local system. Once mounted, the /home/demo/tmp folder will contain the remote folder and all its contents.

Options may also be specified when mounting a remote NFS filesystem. The following command, for example, mounts the same folder, but configures it to be read-only:

# mount -t nfs -o ro 192.168.1.115:/tmp /home/demo/tmp

1.5  Mounting an NFS Filesystem on System Startup

It is also possible to configure an Ubuntu system to automatically mount a remote file system each time the system starts up by editing the /etc/fstab file. When loaded into an editor, it will likely resemble the following:

UUID=84982a2e-0dc1-4612-9ffa-13baf91ec558 /     ext4    errors=remount-ro 0  1
/swapfile                        none            swap    sw              0       0

To mount, for example, a folder with the path /tmp which resides on a system with the IP address 192.168.1.115 in the local folder with the path /home/demo/tmp (note that this folder must already exist) add the following line to the /etc/fstab file:

192.168.1.115:/tmp      /home/demo/tmp           nfs     rw              0 0

Next time the system reboots the /tmp folder located on the remote system will be mounted on the local /home/demo/tmp mount point. All the files in the remote folder can then be accessed as if they reside on the local hard disk drive.
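
The fstab entry above relies on default NFS mount behavior. NFS-specific mount options may also be listed in the options field; the variant below is a sketch only, and the soft, timeo and retrans values are illustrative rather than recommended settings:

```
192.168.1.115:/tmp   /home/demo/tmp   nfs   rw,soft,timeo=100,retrans=3   0 0
```

With soft, a failed request eventually returns an error rather than retrying indefinitely, timeo sets the retransmission timeout in tenths of a second, and retrans limits the number of retries (see the nfs man page for the full list of options).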

1.6  Unmounting an NFS Mount Point

Once a remote file system is mounted using NFS it can be unmounted using the umount command with the local mount point as the command-line argument. The following command, for example, will unmount our example filesystem mount point:

# umount /home/demo/tmp

1.7  Accessing NFS Filesystems in Cockpit

In addition to mounting a remote NFS file system on a client using the command-line, it is also possible to perform mount operations from within the Cockpit web interface. Assuming that Cockpit has been installed and configured on the client system, log into the Cockpit interface from within a web browser and select the Storage option from the left-hand navigation panel. If the Storage option is not listed, the cockpit-storaged package will need to be installed:

# apt install cockpit-storaged

Once the Cockpit service has restarted, log back into the Cockpit interface at which point the Storage option should now be visible.

Once selected, the main storage page will include a section listing any currently mounted NFS file systems as illustrated in Figure 22-1:

Figure 22-1

To mount a remote filesystem, click on the ‘+’ button highlighted above and enter information about the remote NFS server and file system share together with the local mount point and any necessary options into the resulting dialog before clicking on the Add button:

Figure 22-2

To modify, unmount or remove an NFS filesystem share, select the corresponding mount in the NFS Mounts list (Figure 22-1 above) to display the page shown in Figure 22-3 below:


Figure 22-3

1.8  Summary

The Network File System (NFS) is a client/server-based system, originally developed by Sun Microsystems, which provides a way for Linux and Unix systems to share filesystems over a network. NFS allows a client system to access and (subject to permissions) modify files located on a remote server as though those files are stored on a local filesystem. This chapter has provided an overview of NFS and outlined the options available for configuring both client and server systems using the command-line or the Cockpit web interface.

Displaying Ubuntu 20.04 Applications Remotely (X11 Forwarding)

In the previous chapter we looked at how to display the entire Ubuntu desktop on a remote computer. While this works well if you actually need to remotely display the entire desktop, it could be considered overkill if all you want to do is display a single application. In this chapter, therefore, we will look at displaying individual applications on a remote system.

1.1  Requirements for Remotely Displaying Ubuntu Applications

In order to run an application on one Ubuntu system and have it display on another system there are a couple of prerequisites. First, the system on which the application is to be displayed must be running an X server. If the system is a Linux or UNIX-based system with a desktop environment running then this is no problem. If the system is running Windows or macOS, however, then you must install an X server on it before you can display applications from a remote system. A number of commercial and free Windows-based X servers are available for this purpose and a web search should provide you with a list of options.

Second, the system on which the application is being run (as opposed to the system on which the application is to be displayed) must be configured to allow SSH access. Details on configuring SSH on an Ubuntu system can be found in the chapter entitled “Configuring SSH Key-based Authentication on Ubuntu”. This system must also be running the X Window system from X.org instead of Wayland. To find out which system is being used, open a terminal window and run the following command:

$ echo $XDG_SESSION_TYPE
x11

If the above command outputs “wayland” instead of “x11”, edit the /etc/gdm3/custom.conf file and uncomment the WaylandEnable line as follows and restart the system:

# Uncomment the line below to force the login screen to use Xorg
WaylandEnable=false

Finally, SSH must be configured to allow X11 forwarding. This is achieved by adding the following directive to the SSH configuration on the system from which forwarding is to occur. Edit the /etc/ssh/ssh_config file, uncomment the ForwardX11 entry (in other words remove the ‘#’ at the beginning of the line) and change the value to yes as follows:

.
.
Host *
#   ForwardAgent no
    ForwardX11 yes
.
.

After making the change, save the file and restart the SSH service:

# systemctl restart ssh

Once the above requirements are met it should be possible to remotely display an X-based desktop application.

1.2  Remotely Displaying an Ubuntu Application

The first step in remotely displaying an application is to move to the system where the application is to be displayed. At this system, establish an SSH connection to the remote system so that you have a command prompt. This can be achieved using the ssh command. When using the ssh command we need to use the -X flag to tell it that we plan to tunnel X11 traffic through the connection:

$ ssh -X user@hostname

In the above example user is the user name to use to log into the remote system and hostname is the hostname or IP address of the remote system. Enter your password at the login prompt and, once logged in, run the following command to see the DISPLAY setting:

$ echo $DISPLAY

The command should output something similar to the following:

localhost:10.0
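
The DISPLAY value takes the form host:display[.screen], and its parts can be separated with ordinary shell parameter expansion. A minimal sketch using the example value above (the DISPLAY_VALUE variable name is arbitrary):

```shell
# Split a DISPLAY-style value into host and display parts using
# parameter expansion only (no external commands required).
DISPLAY_VALUE="localhost:10.0"
host=${DISPLAY_VALUE%%:*}     # text before the first ":"
display=${DISPLAY_VALUE#*:}   # text after the first ":"
echo "$host"
echo "$display"
```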

To display an application simply run it from the command prompt. For example:

$ gedit

When executed, the above command should run the gedit tool on the remote system, but display the user interface on the local system.

1.3  Trusted X11 Forwarding

If the /etc/ssh/ssh_config file on the remote system contains the following line, then it is possible to use trusted X11 forwarding:

ForwardX11Trusted yes

Trusted X11 forwarding is slightly faster than untrusted forwarding but is less secure since it does not engage the X11 security controls. The -Y flag is needed when using trusted X11 forwarding:

$ ssh -Y user@hostname


1.4  Compressed X11 Forwarding

When using slower connections the X11 data can be compressed using the -C flag to improve performance:

$ ssh -X -C user@hostname

1.5  Displaying Remote Ubuntu Apps on Windows

To display Ubuntu based apps on Windows an SSH client and an X server will need to be installed on the Windows system. The subject of installing and using the PuTTY client on Windows was covered earlier in the book in the “Configuring SSH Key-based Authentication on Ubuntu” chapter. Refer to this chapter if you have not already installed PuTTY on your Windows system.

In terms of the X server, a number of options are available, though a popular choice appears to be VcXsrv which is available for free from the following URL:

https://sourceforge.net/projects/vcxsrv/

Once the VcXsrv X server has been installed, an application named XLaunch will appear on the desktop and in the start menu. Start XLaunch and select a display option (the most flexible being the Multiple windows option which allows each client app to appear in its own window):

Figure 21-1

Click the Next button to proceed through the remaining screens, accepting the default configuration settings. On the final screen, click on the Finish button to start the X server. If the Windows Defender dialog appears click on the button to allow access to your chosen networks.

Once running, XLaunch will appear in the taskbar and can be exited by right-clicking on the icon and selecting the Exit… menu option:

Figure 21-2

With the X server installed and running, launch PuTTY and either enter the connection information for the remote host or load a previously saved session profile. Before establishing the connection, however, X11 forwarding needs to be enabled. Within the PuTTY main window, scroll down the options in the left-hand panel, unfold the SSH section and select the X11 option as shown in Figure 21-3:

Figure 21-3

Turn on the Enable X11 forwarding checkbox highlighted in Figure 21-4, return to the sessions screen and open the connection (saving the session beforehand if you plan to use it again).


Figure 21-4

Log into the Ubuntu system within the PuTTY session window and run a desktop app. After a short delay, the app will appear in the Windows desktop in its own window. Any dialogs that are opened by the app will also appear in separate windows, just as they would on the Ubuntu GNOME desktop. Figure 21-5, for example, shows the Ubuntu nm-connection-editor tool displayed on a Windows 10 system:

Figure 21-5

1.6  Summary

For situations where remote access to individual Ubuntu desktop applications is required as opposed to the entire GNOME desktop, X11 forwarding provides a lightweight solution to remotely displaying graphical applications. The system on which the applications are to appear must be running an X Window System based desktop environment (such as GNOME) or have an X server installed and running. Once X11 forwarding has been enabled on the remote server and a secure SSH connection established from the local system using the X11 forwarding option, most applications can be displayed remotely on the local X server.

Ubuntu 20.04 Remote Desktop Access with VNC

The chapter entitled “Ubuntu Remote Desktop Access with Vino” explored remote access to the Ubuntu GNOME desktop using the Vino server, an approach that is intended solely for situations where the remote system is already running a GNOME desktop session. In this chapter we will cover launching and accessing GNOME desktop sessions that run in the background, allowing multiple desktop sessions to be accessed remotely, including on server-based systems that do not have a graphical console attached.

1.1  Installing the GNOME Desktop Environment

It is, of course, only possible to access the desktop environment if the desktop itself has been installed. If, for example, the system was initially configured as a server it is unlikely that the desktop packages were installed. The easiest way to install the packages necessary to run the GNOME desktop is via the apt command as follows:

# apt install ubuntu-gnome-desktop

To prevent the desktop from attempting to launch automatically each time the system reboots, change the default systemd target back to multi-user:

# systemctl set-default multi-user.target

If the system has a graphical display attached, the desktop can be launched using the following command:

$ startx

If, on the other hand, the system is a server with no directly connected display, the only way to run and access the desktop will be to configure VNC support on the system.

1.2  Installing VNC on Ubuntu

Access to a remote desktop requires a VNC server installed on the remote system, a VNC viewer on the system from which access is being established and, optionally, a secure SSH connection. While a number of VNC server and viewer implementations are available, this chapter will make use of TigerVNC which provides both server and viewer components for Linux-based operating systems. VNC viewer clients for non-Linux platforms include RealVNC and TightVNC.

To install the TigerVNC server package on Ubuntu, simply run the following command:

# apt install tigervnc-standalone-server

If required, the TigerVNC viewer may also be installed as follows:

# apt install tigervnc-viewer

Once the server has been installed the system will need to be configured to run one or more VNC services and to open the appropriate ports on the firewall.

1.3  Configuring the VNC Server

With the VNC server packages installed, the next step is to configure the server. The first step is to specify a password for the user that will be accessing the remote desktop environment. While logged in as root (or with superuser privileges), execute the vncpasswd command (where the user name is assumed to be demo):

# su - demo
$ vncpasswd
Password:
Verify:
Would you like to enter a view-only password (y/n)? n
A view-only password is not used

The above command will create a file named passwd in the .vnc directory of the user’s home directory. Next, change directory to the .vnc directory and create a new file named xstartup containing the following:

#!/bin/sh
# Start Gnome 3 Desktop 
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
dbus-launch --exit-with-session gnome-session &

These are the commands that will be executed to start the GNOME desktop when the VNC server is launched.

1.4  Starting the VNC Server

With the necessary packages installed and configured for the user’s account, the VNC server can be started as follows (making sure to run the command as the user and without superuser privileges):

$ vncserver

This will start the first desktop session running on the system. Since this is the first session, it will be configured to use port 5901 (which may be abbreviated to :1). Running the command a second time while the first session is running will create a VNC server listening on port 5902 (:2) and so on. The following command may be used to obtain a list of desktop sessions currently running:
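
As described above, the port for a given session is simply 5900 plus the display number, which can be confirmed with shell arithmetic:

```shell
# A VNC session on display :N listens on TCP port 5900 + N.
display=2
port=$((5900 + display))
echo ":$display -> port $port"
```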

$ vncserver -list
TigerVNC server sessions:

X DISPLAY #	PROCESS ID
:1		1607
:2		4726

To terminate a session, use the vncserver command with the -kill option referencing the corresponding port. For example:

$ vncserver -kill :2
Killing Xtigervnc process ID 4726... success!

Alternatively, use the following command to kill all currently running VNC server sessions:

$ vncserver -kill :*
Killing Xtigervnc process ID 1607... success!
Killing Xtigervnc process ID 5287... success!

To manually specify the port to be used by the VNC server session, include the number in the command-line as follows:

$ vncserver :5

In the above example, the session will listen for a remote connection on port 5905.
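
Sessions started this way end when the server process is killed or the system reboots. A common pattern, offered here only as an untested sketch, is to wrap vncserver in a templated systemd unit so that, for example, systemctl start vncserver@1 brings up display :1. The unit name, user account (demo) and binary path below are assumptions that may need adjusting for your system:

```
# /etc/systemd/system/vncserver@.service (hypothetical)
[Unit]
Description=TigerVNC server on display :%i
After=network.target

[Service]
Type=forking
User=demo
ExecStart=/usr/bin/vncserver :%i
ExecStop=/usr/bin/vncserver -kill :%i

[Install]
WantedBy=multi-user.target
```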

1.5  Connecting to a VNC Server

For details on remotely connecting to a desktop session from another system, follow the steps outlined in the sections titled “Establishing a Secure Remote Desktop Session” and “Establishing a Secure Tunnel on Windows using PuTTY” in the previous chapter.

1.6  Summary

In this and the preceding chapter we have explored two different ways to remotely access the GNOME desktop environment of an Ubuntu system. While the previous chapter explored access to an existing desktop session, this chapter has focused on launching GNOME desktop sessions as background processes, thereby allowing remote access to multiple desktop sessions. This is a particularly useful technique for running and remotely accessing desktop sessions on “headless” server-based systems.

Ubuntu 20.04 Remote Desktop Access with Vino

Ubuntu can be configured to provide remote access to the graphical desktop environment over a network or internet connection. Although not enabled by default, it is relatively straightforward to display and access an Ubuntu desktop from a system anywhere else on a network or the internet. This can be achieved regardless of whether that system is running Linux, Windows or macOS. In fact, there are even apps available for Android and iOS that will allow you to access your Ubuntu desktop from just about anywhere that a data signal is available.

Remote desktop access can be useful in a number of scenarios. It enables you or another person, for example, to view and interact with your Ubuntu desktop environment from another computer system either on the same network or over the internet. This is useful if you need to work on your computer when you are away from your desk such as while traveling. It is also useful in situations where a co-worker or IT support technician needs access to your desktop to resolve a problem.

The Ubuntu remote desktop functionality is based on technology known as Virtual Network Computing (VNC) and in this and the next chapter we will cover the key aspects of configuring and using remote desktops within Ubuntu.

1.1  Remote Desktop Access Types

Before starting it is important to understand that there are essentially two types of remote desktop access. The approach covered in this chapter is useful if you primarily use Ubuntu as a desktop operating system and require remote access to your usual desktop session. When configured, you will take over your desktop session and view and control it remotely.

The second option, covered in the next chapter entitled “Ubuntu Remote Desktop Access with VNC”, is intended for situations where you need to start and access one or more remote desktop sessions on a remote server-based system, regardless of whether the remote system has a graphical console attached. This allows you to launch multiple desktop sessions in the background on the remote system and view and control those desktops over a network or internet connection.

1.2  Secure and Insecure Remote Desktop Access

In this chapter we will cover both secure and insecure remote desktop access methods. Assuming that you are accessing one system from another within the context of a secure internal network then it is generally safe to use the insecure access method. If, on the other hand, you plan to access your desktop remotely over any kind of public network you must use the secure method of access to avoid your system and data being compromised.

1.3  Enabling Remote Desktop Access

Remote desktop access on Ubuntu is provided by the Vino package. Vino is a VNC server that was developed specifically for use with the GNOME desktop.

The first step in enabling remote access is to install this package:

# apt install vino

Once Vino has been installed, the next step is to enable remote desktop access from within GNOME. Begin by opening the settings app as shown in Figure 19-1:

Figure 19-1

From within the Settings application, select the Sharing option (marked A in Figure 19-2):

Figure 19-2

Turn on the Sharing switch (B) and click on the Screen Sharing option (C) to display the dialog shown in Figure 19-3 below:

Figure 19-3

The Screen Sharing dialog provides the following configuration options to manage remote desktop access:

  • Allow connections to control the screen – If enabled, the remote session will be able to use the mouse and keyboard to interact with the desktop environment. If disabled, the remote session will only allow the desktop to be viewed.
  • New connections must ask for access – When selected, a prompt will appear on the host screen asking to give permission to the remote user to access the desktop. Do not select this option if you plan to access your screen remotely and nobody will be at the host system to accept the connection request.
  • Require a password – Requires the user to enter the specified password prior to gaining access to the desktop.
  • Networks – The network connections on the host system via which remote access is to be permitted.

After configuring the settings, close both the Screen Sharing and Settings dialogs.

1.4  Connecting to the Shared Desktop

Although VNC viewer implementations are available for a wide range of operating systems, a tool such as the Remmina Desktop Client is recommended when connecting from Ubuntu or other Linux-based systems. Remmina is a user-friendly tool with a graphical interface that supports the encryption used by Vino to ensure a secure remote connection.

To install this tool, open the Ubuntu Software application and search for and install Remmina:

Figure 19-4

After installing and launching Remmina, change the connection type menu (marked A in Figure 19-5) to VNC and enter into the address field (B) the IP address or hostname of the remote system to which you wish to connect:

Figure 19-5

To establish the connection, tap the keyboard Enter key to begin the connection process. After a short delay, a second screen will appear requesting the desktop access password (if one was entered when screen sharing was enabled earlier in the chapter):

Figure 19-6

After entering the password, click on OK to access the remote screen:

Figure 19-7

The default settings for Remmina prioritize speed over image quality. If you find that the quality of the desktop rendering is unacceptably poor, click on the settings button (Figure 19-8) in the left-hand toolbar within the remote viewer window and experiment with different settings until you find the ideal balance of performance and image quality:

Figure 19-8

1.5  Connecting from Non-Linux Clients

The previous section assumed that the remote desktop was being accessed from a Linux or UNIX system. It is important to understand that Vino, by default, requires that the remote connection be encrypted to ensure security. One of the reasons for using Remmina is that it fully supports the encryption used by Vino.

If you need to connect from a non-Linux system such as Windows or macOS you will need to install a third-party VNC viewer such as TightVNC, TigerVNC or RealVNC. Unfortunately, these viewers do not support the encryption that Vino uses by default. To experience this in action, download the RealVNC viewer for your macOS or Windows system from the following URL:

https://www.realvnc.com/en/connect/download/viewer/

Once installed, launch the viewer and enter the hostname or IP address of your remote Ubuntu system. On attempting to connect, a failure dialog will appear similar to the one shown in Figure 19-9:

Figure 19-9

To allow a connection to be established from a non-Linux system it is necessary to turn off the encryption requirement for the remote desktop connection. To do this, open a terminal window on the Ubuntu system and run the following command (using your account and without sudo privileges):

$ gsettings set org.gnome.Vino require-encryption false

After disabling the Vino encryption requirement, attempt to connect from the RealVNC viewer once again. This time a warning will appear indicating that the connection is not encrypted:

Figure 19-10

Clicking the Continue button will dismiss the warning dialog and establish the remote connection.

Clearly, connecting to a remote VNC server using the steps in this section results in an insecure, unencrypted connection between the client and server. This means that the data transmitted during the remote session is vulnerable to interception. To establish a secure and encrypted connection from an Ubuntu system to a non-Linux client a few extra steps are necessary.

1.6  Establishing a Secure Remote Desktop Session

The remote desktop connection from macOS and Windows in the previous section is considered to be insecure because no encryption is used. This is acceptable when the remote connection does not extend outside of an internal network protected by a firewall. When a remote session is required over an internet connection, however, a more secure option is needed. This is achieved by tunneling the remote desktop through a secure shell (SSH) connection. This section will cover how to do this on Linux, UNIX and macOS client systems.

When a remote desktop session is invoked on an Ubuntu system a connection is made using TCP/IP network port 5900. To verify this, establish a connection to your remote Ubuntu system specifying port 5900 after the hostname or IP address, for example, and note that the connection is still established:

192.168.86.218:5900

To implement an encrypted remote desktop session from a non-Linux system, the session needs to be tunneled through a secure SSH connection.

If the SSH server has not yet been installed on your Ubuntu system, refer to the chapter entitled “Configuring SSH Key-based Authentication on Ubuntu”.

Assuming the SSH server is installed and active, it is time to move to the client system. There, log in to the remote system using the following command, which will establish the secure tunnel between the two systems. This assumes the client system is running macOS, Linux or UNIX (instructions for Windows systems are covered in the next section):

$ ssh -l <username> -L 5900:localhost:5900 <remotehost>

In the above example, <username> references the user account on the remote system for which VNC access has been configured, and <remotehost> is either the host name or IP address of the remote system, for example:

$ ssh -l demo -L 5900:localhost:5900 192.168.86.218
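
The -L specification has three parts: the local listening port, the destination as seen from the SSH server, and the destination port. Composing the command from named variables makes that structure explicit; this is a sketch using the placeholder values from the example above, with the command printed for review rather than executed:

```shell
# Build (but do not run) the SSH tunnel command from its parts.
user=demo
remote_host=192.168.86.218
local_port=5900    # the port the VNC viewer connects to locally
remote_port=5900   # the port the VNC server listens on remotely
cmd="ssh -l $user -L $local_port:localhost:$remote_port $remote_host"
echo "$cmd"
```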

When prompted, log in using the account password. With the secure connection established it is time to launch vncviewer so that it uses the secure tunnel. Leaving the SSH session running in the terminal window, launch the VNC viewer and enter the following into the address field:

localhost:5900

The vncviewer session will prompt for a password if one is required, and then launch the VNC viewer providing secure access to your desktop environment.

Although the connection is now secure and encrypted, the VNC viewer will most likely still report that the connection is insecure. Unfortunately, although the connection is now secure, the VNC viewer software has no way of knowing this and consequently continues to issue warnings. Rest assured that as long as the SSH tunnel is being used, the connection is indeed secure.

In the above example we left the SSH tunnel session running in a terminal window. If you would prefer to run the session in the background, this can be achieved by using the -f and -N flags when initiating the connection:

$ ssh -l <username> -f -N -L 5900:localhost:5900 <remotehost>

The above command will prompt for a password for the remote server and then establish the connection in the background, leaving the terminal window available for other tasks.
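
The tunnel command can also be assembled from variables when scripting the connection. The following sketch only builds and prints the command string (the account name and host address shown are hypothetical placeholders; substitute your own values before running the resulting command):

```shell
# Hypothetical values; substitute your own account and host.
VNC_USER="demo"
REMOTE_HOST="192.168.86.218"

# Assemble the background tunnel command (-f: run in background, -N: no remote
# command, -L: forward local port 5900 to port 5900 on the remote system).
TUNNEL_CMD="ssh -l ${VNC_USER} -f -N -L 5900:localhost:5900 ${REMOTE_HOST}"
echo "${TUNNEL_CMD}"
```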

If you are connecting to the remote desktop from outside the firewall keep in mind that the IP address for the SSH connection will be the external IP address provided by your ISP or cloud hosting provider, not the LAN IP address of the remote system (since this IP address is not visible to those outside the firewall). You will also need to configure your firewall to forward port 22 (for the SSH connection) to the IP address of the system running the desktop. It is not necessary to forward port 5900. Steps to perform port forwarding differ between firewalls, so refer to the documentation for your firewall, router or wireless base station for details specific to your configuration.

1.7  Establishing a Secure Tunnel on Windows using PuTTY

A similar approach is taken to establishing a secure desktop session from a Windows system to an Ubuntu server. Assuming that you already have a VNC client such as TightVNC installed, the one remaining requirement is a Windows SSH client (in this case PuTTY).

Once PuTTY is downloaded and installed, the first step is to establish a secure connection between the Windows system and the remote system with appropriate tunneling configured. When launched, PuTTY displays the following screen:

Figure 19-11

Enter the IP address or host name of the remote host (or the external IP address of the gateway if you are connecting from outside the firewall). The next step is to set up the tunnel. Click on the + next to SSH in the Category tree on the left-hand side of the dialog and click on Tunnels. The screen should subsequently appear as follows:

Figure 19-12

Enter 5900 as the Source port and localhost:5900 as the Destination and click on the Add button. Finally return to the main screen by clicking on the Session category. Enter a name for the session in the Saved Sessions text field and press Save. Click on Open to establish the connection. A terminal window will appear with the login prompt from the remote system. Enter the appropriate user login and password credentials.

The SSH connection is now established. Launch the VNC viewer, enter localhost:5900 in the VNC Server text field and click on Connect. The viewer will establish the connection, prompt for the password and then display the desktop. You are now accessing the remote desktop of a Linux system from Windows over a secure SSH tunnel connection.

1.8  Summary

Remote access to the GNOME desktop environment of an Ubuntu system can be enabled by making use of Virtual Network Computing (VNC). Comprising the VNC server running on the remote server and a corresponding client on the local host, VNC allows remote access to multiple desktop instances running on the server.

The standard remote server solution for the GNOME desktop is Vino. Once installed, remote desktop sessions can be established from other Linux systems using a remote desktop viewer such as Remmina.

When connecting from non-Linux systems such as Windows or macOS, it is necessary to disable Vino’s encryption requirements. Once disabled, connections from client systems should be established using SSH tunneling.

When the VNC connection is being used over a public connection with Vino encryption disabled, the use of SSH tunneling is recommended when connecting to ensure that the communication between client and server is encrypted and secure.

Configuring SSH Key-based Authentication on Ubuntu 20.04

When an Ubuntu system is first installed, it is not configured by default to allow remote command-line access via Secure Shell (SSH) connections. When installed, SSH provides password protected and encrypted access to the system for the root account and any other users added during the installation phase. This level of security is far from adequate and should be upgraded to SSH key-based authentication as soon as possible.

This chapter will outline the steps to increase the security of an Ubuntu system by implementing key-based SSH authentication.

1.1  An Overview of Secure Shell (SSH)

SSH is designed to allow secure remote access to systems for the purposes of gaining shell access and transferring files and data. As will be covered in “Ubuntu Remote Desktop Access with Vino”, SSH can also be used to provide a secure tunnel through which remote access to the GNOME desktop can be achieved over a network connection.

A basic SSH configuration consists of a client (used on the computer establishing the connection) and a server (running on the system to which the connection is to be established). A user might, for example, use an SSH client running on a Linux, Windows or macOS system to connect to the SSH server running on an Ubuntu system to gain access to a shell command-line prompt or to perform file transfers. All of the communications between client and server, including the password entered to gain access, are encrypted to prevent outside parties from intercepting the data.

The inherent weakness in a basic SSH implementation is that it depends entirely on the strength of the passwords assigned to the accounts on the system. If a malicious party is able to identify the password for an account (either through guess work, subterfuge or a brute force attack) the system becomes vulnerable. This weakness can be addressed by implementing SSH key-based authentication.

1.2  SSH Key-based Authentication

SSH key-based authentication makes use of asymmetric public key encryption to add an extra layer of security to remote system access. The concept of public key encryption was devised in 1975 by Whitfield Diffie and Martin Hellman and is based on the concept of using a pair of keys, one private and one public.

In a public key encryption system, the public key is used to encrypt data that can only be decrypted by the owner of the private key.

In the case of SSH key-based authentication, the private key is held on the host on which the SSH client is located, while the corresponding public key resides on the system on which the SSH server is running. It is important to protect the private key, since anyone in possession of it will be able to log into the remote system. As an added layer of protection, therefore, the private key may also be encrypted and protected by a passphrase which must be entered each time a connection is established to the server.

1.3  Setting Up Key-based Authentication

There are four steps to setting up key-based SSH authentication which can be summarized as follows:

  1. Generate the public and private keys.
  2. Install the public key on the server.
  3. Test authentication.
  4. Disable password-based authentication on the server.

The remainder of this chapter will outline these steps in greater detail for Linux, macOS and Windows-based client operating systems.

1.4  Installing and Starting the SSH Service

If the SSH server is not already installed and running on the system, it can be added using the following commands:

# apt install openssh-server
# systemctl start ssh.service
# systemctl enable ssh.service

1.5  SSH Key-based Authentication from Linux and macOS Clients

The first step in setting up SSH key-based authentication is to generate the key pairs on the client system. If the client system is running Linux or macOS, this is achieved using the ssh-keygen utility:

# ssh-keygen

This will result in output similar to the following:

Generating public/private rsa key pair.
Enter file in which to save the key (/home/<username>/.ssh/id_rsa):

Press the Enter key to accept the default location for the key files. This will place two files in the .ssh sub-directory of the current user’s home directory. The private key will be stored in a file named id_rsa while the public key will reside in the file named id_rsa.pub.

Next, ssh-keygen will prompt for a passphrase with which to protect the private key. If a passphrase is provided, the private key will be encrypted on the local disk and the passphrase will be required each time access to the remote system is needed. For better security, use of a passphrase is recommended.

Enter passphrase (empty for no passphrase):

Finally, the ssh-keygen tool will generate the following output indicating that the keys have been generated:

Your identification has been saved in /home/neil/.ssh/id_rsa.
Your public key has been saved in /home/neil/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:FOLGWEEGFIjWnCT5wtTOv5VK4hdimzWghZizUEMYbfo <username>@<hostname>
The key's randomart image is:
+---[RSA 2048]----+
|.BB+=+*..        |
|o+B= * . .       |
|===.. + .        |
|*+ * . .         |
|.++ o   S        |
|..E+ * o         |
|  o B *          |
|   + +           |
|    .            |
+----[SHA256]-----+
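
For scripted use, the same key pair can be generated non-interactively. The following is a minimal sketch, assuming the OpenSSH client tools are installed; it writes the keys to a temporary directory purely for illustration (in normal use the default ~/.ssh location would be accepted, and a passphrase supplied):

```shell
# Skip quietly if the OpenSSH tools are not installed.
command -v ssh-keygen >/dev/null || exit 0

# Generate a 2048-bit RSA key pair into a temporary directory.
# -N "" creates the key without a passphrase (a passphrase is recommended
# for real keys); -q suppresses the fingerprint and randomart output.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -b 2048 -N "" -f "${KEYDIR}/id_rsa" -q
ls "${KEYDIR}"
```

The directory listing will show the id_rsa private key and id_rsa.pub public key files described above.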

The next step is to install the public key onto the remote server system. This can be achieved using the ssh-copy-id utility as follows:

$ ssh-copy-id [email protected]_hostname 

For example:

$ ssh-copy-id [email protected]
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/neil/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password: 
 
Number of key(s) added: 1
 
Now try logging into the machine, with:   "ssh '[email protected]'"
and check to make sure that only the key(s) you wanted were added.

Once the key is installed, test that the authentication works by attempting a remote login using the ssh client:

$ ssh -l <username> <hostname>

If the private key is encrypted and protected with a passphrase, enter the phrase when prompted to complete the authentication and establish remote access to the Ubuntu system:

Enter passphrase for key '/home/neil/.ssh/id_rsa': 
Last login: Thu Feb 21 13:41:23 2019 from 192.168.1.101
[[email protected] ~]$

Repeat these steps for any other accounts on the server for which remote access is required. If access is also required from other client systems, simply copy the id_rsa private key file to the .ssh sub-directory of your home folder on the other systems.
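
When copying the private key to another system, keep in mind that SSH is strict about file permissions and will refuse to use a private key that is readable by other users. The following sketch shows the permissions the client expects, using a temporary directory in place of the real home folder:

```shell
# Temporary directory standing in for the user's home folder.
FAKE_HOME=$(mktemp -d)
mkdir -p "${FAKE_HOME}/.ssh"
touch "${FAKE_HOME}/.ssh/id_rsa"

# SSH expects the .ssh directory to be mode 700 and the private key 600;
# with looser permissions the key will be rejected as unprotected.
chmod 700 "${FAKE_HOME}/.ssh"
chmod 600 "${FAKE_HOME}/.ssh/id_rsa"
stat -c '%a' "${FAKE_HOME}/.ssh" "${FAKE_HOME}/.ssh/id_rsa"
```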

As currently configured, access to the remote system can still be achieved using the less secure password authentication. Once you have verified that key-based authentication works, log into the remote system, edit the /etc/ssh/sshd_config file and change the PasswordAuthentication setting to no:

PasswordAuthentication no

Save the file and restart the sshd service to implement the change:

# systemctl restart ssh.service

From this point on, it will only be possible to remotely access the system using SSH key-based authentication.

1.6  Managing Multiple Keys

It is not uncommon for multiple private keys to reside on a client system, each providing access to a different server. There are a number of options for selecting a specific key when establishing a connection. It is possible, for example, to specify the private key file to be used when launching the ssh client as follows:

$ ssh -l neilsmyth -i ~/.ssh/id_work 35.194.18.119

Alternatively, the SSH client user configuration file may be used to associate key files with servers. The configuration file is named config, must reside in the .ssh directory of the user’s home directory and can be used to configure a wide range of options including the private key file, the default port to use when connecting, the default user name, and an abbreviated nickname via which to reference the server. The following example config file defines different key files for two servers and allows them to be referenced by the nicknames home and work. In the case of the work system, the file also specifies the user name to be used when authenticating:

Host work
  HostName 35.194.18.119
  IdentityFile ~/.ssh/id_work
  User neilsmyth
 
Host home
  HostName 192.168.0.21
  IdentityFile ~/.ssh/id_home

Prior to setting up the configuration file, the user would have used the following command to connect to the work system:

$ ssh -l neilsmyth -i ~/.ssh/id_work 35.194.18.119

Now, however, the command can be shortened as follows:

$ ssh work

A full listing of configuration file options can be found by running the following command:

$ man ssh_config
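
The way the client resolves a nickname can be checked without connecting by using the -G option, which prints the effective configuration for a host. The sketch below writes the example configuration to a temporary file and points ssh at it with -F, so it does not touch your real ~/.ssh/config:

```shell
# Skip quietly if the OpenSSH client is not installed.
command -v ssh >/dev/null || exit 0

# Write the example configuration to a temporary file. -F selects an
# alternate configuration file; -G prints the resolved options and exits
# without making a connection.
CFG=$(mktemp)
cat > "${CFG}" <<'EOF'
Host work
  HostName 35.194.18.119
  IdentityFile ~/.ssh/id_work
  User neilsmyth
EOF
ssh -F "${CFG}" -G work | grep '^hostname '
```

The output confirms that the nickname work resolves to the 35.194.18.119 address defined in the file.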

1.7  SSH Key-based Authentication from Windows 10 Clients

Recent releases of Windows 10 include, accessible from Windows PowerShell, a subset of the OpenSSH implementation used by most Linux and macOS systems. This allows SSH key-based authentication to be set up from a Windows 10 client using similar steps to those outlined above for Linux and macOS.

To open Windows PowerShell on a Windows 10 system press the Win+X keyboard combination and select it from the menu, or locate and select it from the Start menu. Once running, the PowerShell window will appear as shown in Figure 18-1:

Figure 18-1

If you already have a private key from another client system, simply copy the id_rsa file to a folder named .ssh on the Windows 10 system. Once the file is in place, test the authentication within the PowerShell window as follows:

$ ssh -l <username> <hostname>

For example:

PS C:\Users\neil> ssh -l neil 192.168.1.101
Enter passphrase for key 'C:\Users\neil/.ssh/id_rsa':

Enter the passphrase when prompted and complete the authentication process.

If a private key does not yet exist, generate a new private and public key pair within the PowerShell window using the ssh-keygen utility, following the same steps as those outlined for Linux and macOS. Once the keys have been generated, they will once again be located in the .ssh directory of the current user’s home folder, and the public key file id_rsa.pub will need to be installed on the remote Ubuntu system. Unfortunately, Windows PowerShell does not include the ssh-copy-id utility, so this task will need to be performed manually.

Within the PowerShell window, change directory into the .ssh sub-directory and display the content of the public key id_rsa.pub file:

PS C:\Users\neil> cd .ssh
PS C:\Users\neil\.ssh> type id_rsa.pub
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFgx1vzu59lll6/uQw7FbmKVsQ3fzLz9MW1fgo4sdsxXp81wCHNAlqcjx1Pgr9BJPXWUMInQOi7BQ5I+vc2xQ2AS0kMq3ZH9ybWuQe/U2GjueXZd0FKrEXrT55wM36Rm6Ii3roUCoGCzGR8mn95JvRB3VtCyDdzTWSi8JBpK5gV5oOxNTNPsewlLzouBlCT1qW3CKwEiIwu8S9MTL7m3nrcaNeLewTTHevvHw4QDwzFQ+B0PDg96fzsYoTXVhzyHSWyo6H0gqrft7aKgILBtEIhWTkSVEMAzy1piKtCr1IYTmVK6engv0aoGtMUq6FnOeGp5FjvKkF4aQkh1QR28r [email protected]

Highlight the content of the file and copy it using the Ctrl-C keyboard shortcut.

Remaining within the PowerShell window, log into the remote system using password authentication:

PS C:\Users\neil\.ssh> ssh -l <username> <hostname>

Once signed in, check if the .ssh sub-directory exists. If it does not, create it as follows:

$ mkdir .ssh

Change directory into .ssh and check whether a file named authorized_keys already exists. If it does not, create it and paste the content of the public key file from the Windows 10 system into it.

If the authorized_keys file already exists it most likely already contains other keys. If this is the case, edit the file and paste the new public key at the end of the file. The following file, for example, contains two keys:

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCzRWH27Xs8ZA5rIbZXKgxFY5XXauMv+6F5PljBLJ6j+9nkmykVe3GjZTp3oD+KMRbT2kTEPbDpFD67DNL0eiX2ZuEEiYsxZfGCRCPBGYmQttFRHEAFnlS1Jx/G4W5UNKvhAXWyMwDEKiWvqTVy6syB2Ritoak+D/Sc8nJflQ6dtw0jBs+S7Aim8TPfgpi4p5XJGruXNRScamk68NgnPfTL3vT726EuABCk6C934KARd+/AXa8/5rNOh4ETPstjBRfFJ0tpmsWWhhNEnwJRqS2LD0ug7E3yFI2qsNKGEzvAYUC8Up45MRP7liR3aMlCBil1tsy9R+IB7oMEycZAe/qj [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDFgx1vzu59lll6/uQw7FbmKVsQ3fzLz9MW1fgo4sdsxXp81wCHNAlqcjx1Pgr9BJPXWUMInQOi7BQ5I+vc2xQ2AS0kMq3ZH9ybWuQe/U2GjueXZd0FKrEXrT55wM36Rm6Ii3roUCoGCzGR8mn95JvRB3VtCyDdzTWSi8JBpK5gV5oOxNTNPsewlLzouBlCT1qW3CKwEiIwu8S9MTL7m3nrcaNeLewTTHevvHw4QDwzFQ+B0PDg96fzsYoTXVhzyHSWyo6H0gqrft7aK+gILBtEIhWTkSVEMAzy1piKtCr1IYTmVK6engv0aoGtMUq6FnOeGp5FjvKkF4aQkh1QR28r [email protected]
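
The manual installation steps above can also be sketched as a small script run on the server. Here a temporary directory stands in for the user's home directory, and the key value is a shortened hypothetical placeholder rather than a real public key:

```shell
# Temporary directory standing in for the server-side home directory.
SERVER_HOME=$(mktemp -d)

# Hypothetical placeholder; in practice paste the full content of id_rsa.pub.
PUBKEY="ssh-rsa AAAAB3NzaC1yc2EexampleKeyData user@host"

# Create the .ssh directory if needed, append the key to authorized_keys,
# and apply the permissions sshd expects (700 directory, 600 file).
mkdir -p "${SERVER_HOME}/.ssh"
chmod 700 "${SERVER_HOME}/.ssh"
printf '%s\n' "${PUBKEY}" >> "${SERVER_HOME}/.ssh/authorized_keys"
chmod 600 "${SERVER_HOME}/.ssh/authorized_keys"
```

Appending with >> rather than overwriting preserves any keys already present in the file.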

Once the public key is installed on the server, test the authentication by logging in to the server from within the Windows 10 PowerShell window, for example:

PS C:\Users\neil\.ssh> ssh -l neil 192.168.1.100
Enter passphrase for key 'C:\Users\neil/.ssh/id_rsa':

When key-based authentication has been set up for all the accounts and verified, disable password authentication on the Ubuntu system as outlined at the end of the previous section.

1.8  SSH Key-based Authentication using PuTTY

For Windows systems that do not have OpenSSH available, or as a more flexible alternative to using PowerShell, the PuTTY tool is a widely used alternative. The first step in using PuTTY is to download and install it on any Windows systems that need an SSH client. PuTTY is a free utility and can be downloaded using the following link:

https://www.chiark.greenend.org.uk/~sgtatham/putty/latest.html

Download the Windows installer executable that matches your Windows system (32-bit and 64-bit versions are available) then execute the installer to complete installation.

If a private key already exists on another system, create the .ssh folder in the home folder of the current user and copy the private id_rsa key into it.

Next, the private key file needs to be converted to a PuTTY private key format file using the PuTTYgen tool. Locate this utility in the Windows Start menu and launch it:

Figure 18-2

Once launched, click on the Load button located in the Actions section and navigate to the private key file previously copied to the .ssh folder (note that it may be necessary to change the file type filter to All Files (*.*) in order for the key file to be visible). Once located, select the file and load it into PuTTYgen. When prompted, enter the passphrase originally used to encrypt the file. Once the private key has been imported, save it as a PuTTY key file by clicking on the Save Private Key button. For consistency, save the key file to the .ssh folder but give it a different name to differentiate it from the original key file.

Launch PuTTY from the Start menu and enter the IP address or host name of the remote server into the main screen before selecting the Connection -> SSH -> Auth category in the left-hand panel as highlighted in Figure 18-3:

Figure 18-3

Click on the Browse button next to the Private key for authentication field and navigate to and select the previously saved PuTTY private key file. Optionally, scroll to the top of the left-hand panel, select the Session entry and enter a name for the session in the Saved Sessions field before clicking on the Save button. This will save the session configuration so that it can be used in future without having to re-enter the settings each time.

Finally, click on the Open button to establish the connection to the remote server, entering the user name and passphrase when prompted to do so to complete the authentication.

1.9  Generating a Private Key with PuTTYgen

The previous section explored the use of existing private and public keys when working with PuTTY. If keys do not already exist, they can be created using the PuTTYgen tool which is included with the main PuTTY installation.

To create new keys, launch PuTTYgen and click on the Generate button highlighted in Figure 18-4:

Figure 18-4

Move the mouse pointer around to generate random data as instructed, then enter an optional passphrase with which to encrypt the private key. Once the keys have been generated, save the files to suitable locations using the Save public key and Save private key buttons. The private key can be used with PuTTY as outlined in the previous section. To install the public key on the remote server use the steps covered in the earlier section on using SSH within PowerShell on Windows 10.

1.10  Installing the Public Key for a Google Cloud Instance

If your Ubuntu system is hosted by Google Cloud, for example as a Compute Engine instance, there are a number of different ways to gain SSH access to the server using key-based authentication. Perhaps the most straightforward is to add your public key to the metadata for your Google Cloud account. This will make the public key available for all Virtual Machine instances that you create within Google Cloud. To add the public key, log into the Google Cloud Platform console, select the Metadata option from the left-hand navigation panel as highlighted in Figure 18-5 followed by the SSH keys tab:

Figure 18-5

On the SSH Keys screen, click on the Edit button (also highlighted in Figure 18-5) to edit the list of keys. Scroll down to the bottom of the current list and click on the + Add Item button. A new field will appear into which you will need to paste the entire public key as it appears in your id_rsa.pub file. Once the key has been entered, click on the Save button to add the key.

The public key will now appear in the list of SSH Keys. Note that the key entry also includes the username which must be used when logging into any Google Cloud instances:

Figure 18-6

With the public key added to the metadata it should be possible to access any virtual machine instance from any client on which the corresponding private key has been installed and on which the user has an account. In fact, behind the scenes, all Google Cloud has done to enable this is add the public key to the .ssh/authorized_keys file in the user’s home directory on any virtual machines on which the account exists.

1.11  Summary

It is important that any remote access to an Ubuntu system be implemented in a way that provides a high level of security. By default, SSH allows remote system access using password-based authentication. This leaves the system vulnerable to anyone who can either guess a password, or find out the password through other means. For this reason, the use of key-based authentication is recommended to protect system access. Key-based authentication uses the concept of public key encryption involving public and private keys. When implemented, users are only able to connect to a server if they are using a client which has a private key that matches a public key on the server. As an added layer of security, the private key may also be encrypted and password protected. Once key-based encryption has been implemented, the server system is then configured to disable support for the less secure password-based authentication.

This chapter has provided an overview of SSH key-based authentication and outlined the steps involved in generating keys and configuring clients on macOS, Linux and Windows, in addition to the installation and management of public keys on an Ubuntu server.

Basic Ubuntu 20.04 Firewall Configuration with firewalld

All Linux distributions are provided with a firewall solution of some form. In the case of Ubuntu this takes the form of the Uncomplicated Firewall outlined in the previous chapter. This chapter will introduce a more advanced firewall solution available for Ubuntu in the form of firewalld.

1.1  An Introduction to firewalld

Originally developed for Red Hat-based Linux distributions, the firewalld service uses a set of rules to control incoming network traffic, defining which traffic is to be blocked and which is allowed to pass through to the system. The service is built on top of a more complex firewall tool named iptables.

The firewalld system provides a flexible way to manage incoming traffic. The firewall could, for example, be configured to block traffic arriving from a specific external IP address, or to prevent all traffic arriving on a particular TCP/IP port. Rules may also be defined to forward incoming traffic to different systems or to act as an internet gateway to protect other computers on a network.

In keeping with common security practices, a default firewalld installation is configured to block all access with the exception of SSH remote login and the DHCP service used by the system to obtain a dynamic IP address (both of which are essential if the system administrator is to be able to gain access to the system after completing the installation).

The key elements of firewall configuration on Ubuntu are zones, interfaces, services and ports.

1.1.1  Zones

By default, firewalld is installed with a range of pre-configured zones. A zone is a preconfigured set of rules which can be applied to the system at any time to quickly implement firewall configurations for specific scenarios. The block zone, for example, blocks all incoming traffic, while the home zone imposes less strict rules on the assumption that the system is running in a safer environment where a greater level of trust is expected. New zones may be added to the system, and existing zones modified to add or remove rules. Zones may also be deleted entirely from the system. Table 17-1 lists the set of zones available by default on an Ubuntu system:

Zone      Description
drop      The most secure zone. Only outgoing connections are permitted and all incoming connections are dropped without any notification to the connecting client.
block     Similar to the drop zone with the exception that incoming connections are rejected with an icmp-host-prohibited or icmp6-adm-prohibited notification.
public    Intended for use when connected to public networks or the internet where other computers are not known to be trustworthy. Allows select incoming connections.
external  When a system is acting as the internet gateway for a network of computers, the external zone is applied to the interface that is connected to the internet. This zone is used in conjunction with the internal zone when implementing masquerading or network address translation (NAT) as outlined later in this chapter. Allows select incoming connections.
internal  Used with the external zone and applied to the interface that is connected to the internal network. Assumes that the computers on the internal network are trusted. Allows select incoming connections.
dmz       For use when the system is running in the demilitarized zone (DMZ). These are generally computers that are publicly accessible but isolated from other parts of your internal network. Allows select incoming connections.
work      For use when running a system on a network in a work environment where other computers are trusted. Allows select incoming connections.
home      For use when running a system on a home network where other computers are trusted. Allows select incoming connections.
trusted   The least secure zone. All incoming connections are accepted.

Table 17-1

To review specific settings for a zone, refer to the corresponding XML configuration file located on the system in the /usr/lib/firewalld/zones directory. The following, for example, lists the content of the public.xml zone configuration file:

<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <description>For use in public areas. You do not trust the other computers on networks to not harm your computer. Only selected incoming connections are accepted.</description>
  <service name="ssh"/>
  <service name="mdns"/>
  <service name="dhcpv6-client"/>
</zone>
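
A quick way to see which services a zone file enables is to extract the service names from the XML. The sketch below writes the zone content to a temporary file so it is self-contained; on a real system the same sed command could be pointed at /usr/lib/firewalld/zones/public.xml instead:

```shell
# Write the zone definition to a temporary file for illustration.
ZONEFILE=$(mktemp)
cat > "${ZONEFILE}" <<'EOF'
<?xml version="1.0" encoding="utf-8"?>
<zone>
  <short>Public</short>
  <service name="ssh"/>
  <service name="mdns"/>
  <service name="dhcpv6-client"/>
</zone>
EOF

# Print the value of each service name attribute, one per line.
sed -n 's/.*<service name="\([^"]*\)".*/\1/p' "${ZONEFILE}"
```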

1.1.2   Interfaces

Any Ubuntu system connected to the internet or a network (or both) will contain at least one interface in the form of either a physical or virtual network device. When firewalld is active, each of these interfaces is assigned to a zone allowing different levels of firewall security to be assigned to different interfaces. Consider a server containing two interfaces, one connected externally to the internet and the other to an internal network. In such a scenario, the external facing interface would most likely be assigned to the more restrictive external zone while the internal interface might use the internal zone.

1.1.3  Services

TCP/IP defines a set of services that communicate on standard ports. Secure HTTPS web connections, for example, use port 443, while the SMTP email service uses port 25. To selectively enable incoming traffic for specific services, firewalld rules can be added to zones. The home zone, for example, does not permit incoming HTTPS connections by default. This traffic can be enabled by adding rules to a zone to allow incoming HTTPS connections without having to reference the specific port number.

1.1.4  Ports

Although common TCP/IP services can be referenced when adding firewalld rules, situations will arise where incoming connections need to be allowed on a specific port that is not allocated to a service. This can be achieved by adding rules that reference specific ports instead of services.

1.2  Checking firewalld Status

The firewalld service is not usually installed or enabled by default on Ubuntu installations. The status of the service can be checked via the following command:

# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2020-04-10 10:17:54 EDT; 20s ago
     Docs: man:firewalld(1)
 Main PID: 4488 (firewalld)
    Tasks: 2 (limit: 4915)
   CGroup: /system.slice/firewalld.service
           └─4488 /usr/bin/python3 -Es /usr/sbin/firewalld --nofork --nopid
 
Apr 10 10:17:54 demo-server systemd[1]: Starting firewalld - dynamic firewall daemon...
Apr 10 10:17:54 demo-server systemd[1]: Started firewalld - dynamic firewall daemon.

If necessary, the firewalld service may be installed as follows:

# apt install firewalld

The firewalld service is enabled by default, so it will start automatically both after installation is complete and each time the system boots.

1.3  Configuring Firewall Rules with firewall-cmd

The firewall-cmd command-line utility allows information about the firewalld configuration to be viewed and changes to be made to zones and rules from within a terminal window.

When making changes to the firewall settings, it is important to be aware of the concepts of runtime and permanent configurations. By default, any rule changes are considered to be runtime configuration changes. This means that while the changes will take effect immediately, they will be lost next time the system restarts or the firewalld service reloads, for example by issuing the following command:

# firewall-cmd --reload

To make a change permanent, the --permanent command-line option must be used. Permanent changes do not take effect until the firewalld service reloads but will remain in place until manually changed.

1.3.1  Identifying and Changing the Default Zone

To identify the default zone (in other words the zone to which all interfaces will be assigned unless a different zone is specifically selected) use the firewall-cmd tool as follows:

# firewall-cmd --get-default-zone
public

To change the default to a different zone:

# firewall-cmd --set-default-zone=home
success

1.3.2  Displaying Zone Information

To list all of the zones available on the system:

# firewall-cmd --get-zones
block dmz drop external home internal public trusted work

Obtain a list of zones currently active together with the interfaces to which they are assigned as follows:

# firewall-cmd --get-active-zones
external
  interfaces: eth0
internal
  interfaces: eth1

All of the rules currently configured for a specific zone may be listed as follows:

# firewall-cmd --zone=home --list-all
home (active)
  target: default
  icmp-block-inversion: no
  interfaces: eth0
  sources: 
  services: cockpit dhcpv6-client http mdns samba-client ssh
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules: 

Use the following command to list the services currently available for inclusion in a firewalld rule:

# firewall-cmd --get-services
RH-Satellite-6 amanda-client amanda-k5-client amqp amqps apcupsd audit bacula bacula-client bgp bitcoin bitcoin-rpc bitcoin-testnet bitcoin-testnet-rpc ceph ceph-mon cfengine cockpit ...

To list the services currently enabled for a zone:

# firewall-cmd --zone=public --list-services
cockpit dhcpv6-client ssh

A list of port rules can be obtained as follows:

# firewall-cmd --zone=public --list-ports
9090/tcp

1.3.3  Adding and Removing Zone Services

To add a service to a zone, in this case adding HTTPS to the public zone, the following command would be used:

# firewall-cmd --zone=public --add-service=https
success

By default this is a runtime change, so the added rule will be lost after a system reboot. To add a service permanently so that it remains in effect next time the system restarts, use the --permanent flag:

# firewall-cmd --zone=public --permanent --add-service=https
success

To verify that a service has been added permanently, be sure to include the --permanent flag when requesting the service list:

# firewall-cmd --zone=public --permanent --list-services
cockpit dhcpv6-client http https ssh

Note that as a permanent change, this new rule will not take effect until the system restarts or firewalld reloads:

# firewall-cmd --reload

Remove a service from a zone using the --remove-service option. Since this is a runtime change, the rule will be re-instated the next time the system restarts:

# firewall-cmd --zone=public --remove-service=https

To remove a service permanently use the --permanent flag, remembering to reload firewalld if the change is required to take immediate effect:

# firewall-cmd --zone=public --permanent --remove-service=https

1.3.4  Working with Port-based Rules

To enable a specific port, use the --add-port option. Note that when manually defining the port, both the port number and protocol (TCP or UDP) will need to be provided:

# firewall-cmd --zone=public --permanent --add-port=5000/tcp

It is also possible to specify a range of ports when adding a rule to a zone:

# firewall-cmd --zone=public --permanent --add-port=5900-5999/udp

1.3.5  Creating a New Zone

An entirely new zone may be created by running the following command. Once created, the zone may be managed in the same way as any of the predefined zones:

# firewall-cmd --permanent --new-zone=myoffice
success

After adding a new zone, firewalld will need to be reloaded before the zone becomes available:

# firewall-cmd --reload
success

1.3.6  Changing Zone/Interface Assignments

As previously discussed, each interface on the system must be assigned to a zone. The zone to which an interface is assigned can also be changed using the firewall-cmd tool. In the following example, the eth0 interface is assigned to the public zone:

# firewall-cmd --zone=public --change-interface=eth0
success

1.3.7  Masquerading

Masquerading is better known in networking administration circles as Network Address Translation (NAT). When using an Ubuntu system as a gateway to the internet for a network of computers, masquerading allows all of the internal systems to use the IP address of that Ubuntu system when communicating over the internet. This has the advantage of hiding the internal IP addresses of any systems from malicious external entities and also avoids the necessity to allocate a public IP address to every computer on the network.

Use the following command to check whether masquerading is already enabled on the firewall:

# firewall-cmd --zone=external --query-masquerade

Use the following command to enable masquerading (remembering to use the --permanent flag if the change is to be permanent):

# firewall-cmd --zone=external --add-masquerade

1.3.8  Adding ICMP Rules

The Internet Control Message Protocol (ICMP) is used by client systems on networks to send information such as error messages to each other. It is also the foundation of the ping command which is used by network administrators and users alike to detect whether a particular client is alive on a network. The ICMP category allows for the blocking of specific ICMP message types. For example, an administrator might choose to block incoming ping (Echo Request) ICMP messages to prevent the possibility of a ping based denial of service (DoS) attack (where a server is maliciously bombarded with so many ping messages that it becomes unable to respond to legitimate requests).

To view the ICMP types available for inclusion in firewalld rules, run the following command:

# firewall-cmd --get-icmptypes
address-unreachable bad-header beyond-scope communication-prohibited destination-unreachable echo-reply ...

The following command, for example, permanently adds a rule to block echo-reply (ping response) messages for the public zone:

# firewall-cmd --zone=public --permanent --add-icmp-block=echo-reply

1.3.9  Implementing Port Forwarding

Port forwarding is used in conjunction with masquerading when the Ubuntu system is acting as a gateway to the internet for an internal network of computer systems. Port forwarding allows traffic arriving at the firewall via the internet on a specific port to be forwarded to a particular system on the internal network. This is perhaps best described by way of an example.

Suppose that an Ubuntu system is acting as the firewall for an internal network of computers and one of the systems on the network is configured as a web server. Let’s assume the web server system has an IP address of 192.168.2.20. The domain record for the web site hosted on this system is configured with the public IP address behind which the Ubuntu firewall system sits. When an HTTP web page request arrives on port 80 the Ubuntu system acting as the firewall needs to know what to do with it. By configuring port forwarding it is possible to direct all web traffic to the internal system hosting the web server (in this case, IP address 192.168.2.20), either continuing to use port 80 or diverting the traffic to a different port on the destination server. In fact, port forwarding can even be configured to forward the traffic to a different port on the same system as the firewall (a concept known as local forwarding).

To use port forwarding, begin by enabling masquerading as follows (in this case the assumption is made that the interface connected to the internet has been assigned to the external zone):

# firewall-cmd --zone=external --add-masquerade

To forward from a port to a different local port, a command similar to the following would be used:

# firewall-cmd --zone=external --add-forward-port=port=22:proto=tcp:toport=2750

In the above example, any TCP traffic arriving on port 22 will be forwarded to port 2750 on the local system. The following command, on the other hand, forwards port 20 on the local system to port 22 on the system with the IP address of 192.168.0.19:

# firewall-cmd --zone=external \
          --add-forward-port=port=20:proto=tcp:toport=22:toaddr=192.168.0.19

Similarly, the following command forwards local port 20 to port 2750 on the system with IP address 192.168.0.18:

# firewall-cmd --zone=external \
          --add-forward-port=port=20:proto=tcp:toport=2750:toaddr=192.168.0.18
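Once added, forward-port rules for a zone can be reviewed using the --list-forward-ports option (assuming, as above, that the rules were added to the external zone):

```shell
# List the forward-port rules currently configured for the external zone:
firewall-cmd --zone=external --list-forward-ports
```

Remember that rules added without the --permanent flag are runtime changes and will not appear in the permanent configuration listing.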

1.4   Managing firewalld using firewall-config

If you have access to the graphical desktop environment, the firewall may also be configured using the firewall-config tool. Though not installed by default, firewall-config may be installed as follows:

# apt install firewall-config

When launched, the main firewall-config screen appears as illustrated in Figure 17-1:

Figure 17-1

The key areas of the tool can be summarized as follows:

  • A – Displays all of the currently active interfaces and the zones to which they are assigned. To assign an interface to a different zone, select it from this panel, click on the Change Zone button and select the required zone from the resulting dialog.
  • B – Controls whether the information displayed and any changes made within the tool apply to the runtime or permanent rules.
  • C – The list of zones, services or IPSets configured on the system. The information listed in this panel depends on the selection made from toolbar F. Selecting an item from the list in this panel updates the main panel marked D.
  • D – The main panel containing information about the current category selection in toolbar E. In this example, the panel is displaying services for the public zone. The checkboxes next to each service control whether the service is enabled or not within the firewall. It is within these category panels that new rules can be added or existing rules configured or removed.
  • E – Controls the content displayed in panel D. Selecting items from this bar displays the current rule for the chosen category.
  • F – Controls the list displayed in panel C.

The firewall-config tool is straightforward and intuitive to use and allows many of the tasks available with firewall-cmd to be performed in a visual environment.

1.5   Summary

A carefully planned and implemented firewall is a vital component of any secure system. In the case of Ubuntu, the firewalld service provides a firewall system that is both flexible and easy to administer.

The firewalld service uses the concept of zones to group together sets of firewall rules and includes a suite of pre-defined zones designed to meet a range of firewall protection requirements. These zones may be modified to add or remove rules, or entirely new zones created and configured. The network devices on the system that connect to networks or the internet are referred to as interfaces. Each interface, in turn, is assigned to a zone. The primary tools for working with firewalld are the firewall-cmd command-line tool and the firewall-config graphical utility.

Using gufw and ufw to Configure an Ubuntu 20.04 Firewall

In the previous chapter we looked at ports and services on an Ubuntu system. We also briefly looked at iptables firewall rules on Ubuntu including the creation of a few very simple rules from the command line. In this chapter we will look at a more user friendly approach to iptables configuration using two tools named gufw and ufw. As we will see, gufw and ufw provide a high level of control over both inbound and outbound network traffic and connections without the need to understand the lower level iptables syntax.

1.1  An Overview of gufw and ufw

Included with Ubuntu is a package called ufw which is an acronym for Uncomplicated Firewall. This package provides a command line interface for managing and configuring rules for the Netfilter iptables based firewall. The gufw tool provides a user friendly graphical interface to ufw designed to make firewall management possible without the need to issue ufw commands at the command line.

1.2  Installing gufw on Ubuntu

Whilst ufw is installed on Ubuntu by default, the gufw package is not. To install gufw, therefore, open a Terminal window (Ctrl-Alt-T) and enter the following command at the resulting prompt:

# apt install gufw

1.3  Running and Enabling gufw

Once installed, launch gufw by pressing Alt-F2 within the GNOME desktop and entering gufw into the Run a command text box. When invoked for the first time it is likely that the firewall will be disabled as illustrated in Figure 16-1.

To enable the firewall, move the Status switch (A) to the on position. By default, the main panel (D) will be displaying the gufw home page containing some basic information about the tool. Selecting options from the row of buttons (C) will change the information displayed in the panel. For example, select the Rules button to add, remove and view rules.

The gufw tool is provided with a small set of pre-configured profiles for work, home and public environments. To change the profile and view the settings simply select the profile from the menu (B). To modify an existing profile, select it from the menu and use the Incoming and Outgoing menus to change the selections. To configure specific rules, display the Rules screen and add, remove and modify rules as required. These will then be applied to the currently selected profile.

Figure 16-1

The currently selected profile dictates how the firewall handles traffic in the absence of any specific policy rules. By default the Home profile, for example, is configured to deny all incoming traffic and allow all outgoing traffic. These default policy settings are changed using the Incoming: and Outgoing: menus (E).

Exceptions to the default policy are defined through the creation of additional rules. With the Home profile denying incoming traffic, for example, rules would need to be added to enable certain acceptable types of incoming connections. Such rules are referred to in the security community as a whitelist.

If, on the other hand, the incoming policy was changed to Allow all traffic then all incoming traffic would be permitted unless rules were created for specific types of connections that must be blocked. These rules, unsurprisingly, are referred to as a blacklist. The blacklist/whitelist approach applies equally to incoming and outgoing connections.

1.4  Creating a New Profile

While it is possible to modify the pre-defined profiles, it will typically make more sense to create one or more profiles to configure the firewall for your specific needs. New profiles are created by selecting the Edit -> Preferences… menu option to display the dialog shown in Figure 16-2:

Figure 16-2

To add a new profile, click on the ‘+’ button located beneath the list of profiles. A new profile named Profile 1 will appear in the list. To give the profile a more descriptive name, double-click on the entry to enter edit mode and enter a new name:

Figure 16-3

Once the profile has been created and named, click on the Close button to return to the main screen and select it from the Profile menu:

Figure 16-4

With the custom profile selected, you are ready to set up some custom rules that override the default incoming and outgoing settings.

1.5  Adding Preconfigured Firewall Rules

New rules are created by clicking on the Rules button followed by the + button located at the bottom of the rules panel as highlighted in Figure 16-5:

Figure 16-5

Once selected the dialog shown in Figure 16-6 will appear with the Preconfigured tab selected.

The Preconfigured panel allows rules to be selected that match specific applications and services. For example, traffic from tools such as Skype and BitTorrent may be managed by selecting the corresponding entry from the Application menu and setting the Policy and Direction menus accordingly to allow or restrict traffic.

To help find a specific application or service, use the Category and Subcategory menus to filter the items that appear in the Application menu. Alternatively, simply enter the application or service name in the Application Filter field to filter the menu items:

Figure 16-6

The Policy menu provides the following options for controlling traffic for the selected application or service:

  • Allow – Traffic is permitted on the port.
  • Deny – Traffic is not permitted on the port. The requesting system is not notified of the denial. The data packet is simply dropped.
  • Reject – Traffic is not permitted on the port. The data packet is dropped and the requesting system is notified of the rejection.
  • Limit – Connections are denied if the same IP address has attempted to establish 6 or more connections over a 30 second time frame.

Once a rule has been defined, clicking the Add button will implement the rule and dismiss the Add Rule dialog, and the new rule will be listed in the main screen of the gufw tool (with rules for both IPv4 and IPv6 protocols):

Figure 16-7

To delete a rule, select it within the list and click on the ‘-’ button located at the bottom of the dialog. Similarly, edit an existing rule by selecting it and clicking on the gear button to the right of the ‘-’ button.

1.6  Adding Simple Firewall Rules

Whereas preconfigured rules allow the firewall to be configured based on well known services and applications, the Simple tab of the Add Rule dialog allows incoming and outgoing rules to be defined simply by referencing the corresponding TCP/IP port. The ports used by known applications and services represent only a small subset of the ports available for use by applications and for which firewall rules may need to be defined. A third party application might for example use port 5700 to communicate with a remote server. That being the case, it may be necessary to allow traffic on this specific port using the Simple panel:

Figure 16-8

The rule may be configured to filter TCP traffic, UDP traffic, or both. In addition, the port may be specified as a single port number or as a range of ports with the start and end ports separated by a colon (1000:1500, for example, would apply the rule to all ports between 1000 and 1500). Alternatively, enter the name of the service associated with the port (for example, https or ssh).

1.7  Adding Advanced Rules

So far we have looked at rules to control only the type of traffic to block (incoming traffic on port 22 for example) regardless of the source or destination of the traffic. It is often the case, however, that rules will need to be defined to allow or deny traffic based on an IP address or range of IP addresses.

For the purposes of an example, assume that the local system has an IP address of 192.168.0.102. The firewall may be configured to only allow access on port 22 from a system with the IP address of, for example, 192.168.0.105. To achieve this, the From: field of the Advanced settings panel should be set to the IP address of the system from which the connection request is originating (in this case 192.168.0.105).

The To: fields provide the option to specify the IP address and port of the system to which the connection is being made. In this example this would be port 22 on the local system (192.168.0.102). The To: IP address is actually optional and may be left blank:

Figure 16-9

Assuming that the incoming default policy is still set to Deny or Reject on the main screen, the above rule will allow SSH access via port 22 to the local system only from the remote system with the IP address of 192.168.0.105. SSH access attempts from systems with other IP addresses will fail. Note that if the target system is the local system the To: IP address field may be left blank. The Insert field in the above dialog allows the new rule to be inserted at the specified position in the list of existing rules, thereby allowing you to control the order in which the rules are applied within the firewall.

It is also possible to specify a range of addresses by using the IP address bitmask. For example, to create a rule for a range of IP addresses from 192.168.0.1 to 192.168.0.255 the IP address should be entered into the From: field as 192.168.0.0/24.

Similarly, to specify a rule covering IP address range 192.168.0.1 to 192.168.0.30, a bitmask of 27 would be used, i.e. 192.168.0.0/27.

A useful calculator for identifying the address range covered by each bit mask value is available online at http://subnet-calculator.com. 
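The relationship between the bitmask and the number of addresses covered can also be worked out directly: a /n prefix spans 2^(32-n) IPv4 addresses. The following short shell sketch (the function name is illustrative) performs the calculation:

```shell
#!/bin/sh
# Number of IPv4 addresses spanned by a given prefix length:
# /24 covers 2^(32-24) = 256 addresses, /27 covers 2^(32-27) = 32.
addresses_in_prefix() {
    echo $((1 << (32 - $1)))
}

addresses_in_prefix 24   # 256
addresses_in_prefix 27   # 32
```

This confirms the examples above: 192.168.0.0/24 spans 256 addresses (.0 through .255), while 192.168.0.0/27 spans 32 (.0 through .31).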

1.8  Configuring the Firewall from the Command Line using ufw

All of the firewall configuration options available through the graphical gufw tool are also available from the underlying command line using the ufw command. To enable or disable the firewall:

# ufw enable
# ufw disable

The current status of the firewall may be obtained using the status option:

# ufw status
Status: active
 
To                         Action      From
--                         ------      ----
22                         ALLOW       192.168.86.30  

For more detailed status information, combine the above command with the verbose option:

# ufw status verbose
Status: active
Logging: on (full)
Default: deny (incoming), allow (outgoing), deny (routed)
New profiles: skip
 
To                         Action      From
--                         ------      ----
22                         ALLOW IN    192.168.86.30

The output in the above example shows that the default policy for the firewall is to deny all incoming and routed connections while allowing all outgoing connections. To change any of these default settings, use the ufw command with the default option using the following syntax:

# ufw default <policy> <direction>

For example, to enable all incoming connections by default:

# ufw default allow incoming

To allow or block traffic on a specific port use the following syntax:

# ufw <policy> <port number>/<optional protocol>

For example, to allow both TCP and UDP incoming traffic on port 30:

# ufw allow 30

Similarly, to deny incoming TCP traffic on port 5700:

# ufw deny 5700/tcp

Rules may also be declared by referencing the name of the service corresponding to the port. For example to enable SSH access on port 22:

# ufw allow ssh

As with the gufw tool, ufw also allows access to be controlled from specific external IP addresses. For example, to allow all incoming traffic from IP address 192.168.0.20:

# ufw allow from 192.168.0.20

To specifically deny traffic from an IP address:

# ufw deny 192.168.0.20

To limit access for IP address 192.168.0.20 to only port 22:

# ufw allow from 192.168.0.20 to any port 22

To further restrict access for the IP address to only TCP packets, use the following syntax:

# ufw allow from 192.168.0.20 to any port 22 proto tcp

The position of a new rule within the list of existing rules may be declared using the insert option. For example to create a new rule and insert it at position 3 in the rules list:

# ufw insert 3 allow from 192.168.0.123 to any port 22

To display the list of rules with associated numbers:

# ufw status numbered
Status: active
 
     To                         Action      From
     --                         ------      ----
[ 1] 22                         ALLOW IN    192.168.86.30             
[ 2] 30                         ALLOW IN    Anywhere                  
[ 3] 22                         ALLOW IN    192.168.0.123             
[ 4] 5700/tcp                   DENY IN     Anywhere                  
[ 5] 22/tcp                     ALLOW IN    Anywhere                  
[ 6] Anywhere                   ALLOW IN    192.168.0.20              
[ 7] 22                         ALLOW IN    192.168.0.4               
[ 8] 30 (v6)                    ALLOW IN    Anywhere (v6)             
[ 9] 5700/tcp (v6)              DENY IN     Anywhere (v6)             
[10] 22/tcp (v6)                ALLOW IN    Anywhere (v6)  

Having identified the number assigned to a rule, that number may be used to remove the rule from the firewall:

# ufw delete 4

To obtain a full listing of the capabilities of the ufw tool, run the command with the --help argument:

# ufw --help

Logging of firewall activity can be enabled and disabled using the logging command-line option:

# ufw logging on
# ufw logging off

The amount of logging performed by ufw may also be declared by including a low, medium, high or full setting when turning on logging, for example:

# ufw logging on low

The ufw log file can be found at:

/var/log/ufw.log

Use the reload option to restart the firewall and reload all current settings:

# ufw reload

Finally, to reset the firewall to the default settings (thereby removing all existing rules and settings):

# ufw reset
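The commands covered in this section can be combined into a short bootstrap sequence. The following sketch (run as root; the --force flag suppresses the interactive confirmation prompts) resets the firewall and establishes a minimal deny-by-default configuration that still permits SSH access:

```shell
# Reset to defaults, then build a minimal whitelist-style configuration.
ufw --force reset              # remove all existing rules and settings
ufw default deny incoming      # block all inbound traffic by default
ufw default allow outgoing     # permit all outbound traffic
ufw allow ssh                  # re-open port 22 before enabling
ufw --force enable             # activate the firewall
ufw status verbose             # confirm the resulting configuration
```

Note that allowing SSH before enabling the firewall is important when administering a remote system, since a deny-by-default policy would otherwise sever the administrative connection.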

1.9  Summary

Any computer system that is connected to other systems, either via an internal network or through an internet connection should implement some form of firewall to mitigate the risk of attack. The Uncomplicated Firewall provided with Ubuntu provides a simple yet effective firewall that allows basic rules to be defined either using the command-line or the graphical gufw tool. This chapter has outlined the basics of enabling the firewall and configuring firewall rules.

Ubuntu 20.04 Firewall Basics

A firewall is a vital component in protecting an individual computer system or network of computers from external attack (typically from an internet connection). Any computer connected directly to an internet connection should ideally run a firewall to protect against malicious activity. Similarly, any internal network must have some form of firewall between it and an external internet connection.

Ubuntu is supplied with powerful built-in firewall technology known as iptables. Entire books can, and indeed have, been written about configuring iptables. If you would like to learn more about iptables we recommend:

https://www.linuxtopia.org/Linux_Firewall_iptables/index.html

The goal of this chapter is to cover some of the basic concepts of firewalls, TCP/IP ports and services. The configuration of a firewall on an Ubuntu system will be covered in “Using gufw and ufw to Configure an Ubuntu Firewall”. For more complex firewall requirements, a detailed overview of the firewalld firewall will be covered in the chapter entitled “Basic Ubuntu Firewall Configuration with firewalld”.

1.1  Understanding Ports and Services

The predominant network communications protocol in use these days is TCP/IP. It is the protocol used by the internet and as such has swept away most of the formerly popular protocols used for local area networks (LANs).

TCP/IP defines a total of 65,535 ports, of which ports 0 through 1023 are considered to be well known ports. It is important to understand that these are not physical ports into which network cables are connected, but rather virtual ports on each network connection which can be used by applications and services to communicate over a TCP/IP network connection. In reality the number of ports that are used by popular network clients and services comprises an even smaller subset of the well known group of ports.

There are a number of different TCP/IP services that can be provided by an operating system. A comprehensive list of such services is provided in the table at the end of this chapter, but such services include HTTPS for running a secure web server, FTP for allowing file transfers, SSH for providing secure remote login access and file transfer and SMTP for the transport of email messages. Each service is, in turn, assigned to a standard TCP/IP port. For example, HTTPS is assigned to port 443 while SSH communication takes place on port 22.
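On most Linux systems these standard service-to-port assignments are recorded in the /etc/services file, which can be queried with the getent command. A quick check (exact output spacing varies between systems):

```shell
# Look up the port assigned to a named service in /etc/services.
# getent prints the service name followed by its port and protocol.
getent services ssh     # e.g. "ssh 22/tcp"
getent services https   # e.g. "https 443/tcp"
```

This provides a convenient way to confirm a port number before writing a firewall rule that references a service by name.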

1.2  Securing Ports and Services

A large part of securing servers involves defining roles, and based on the roles, defining which services and ports should be enabled. For example, a server that is to act solely as a web server should only run the HTTPS service (in addition to perhaps SSH for remote administration access). All other services should be disabled and, ideally, removed entirely from the operating system (thereby making it harder for an intruder to re-enable the service).

Securing a system involves both removing any unnecessary services from the operating system and ensuring that the ports associated with the non-essential services are blocked using a firewall. The rules that define which ports are accessible and under what circumstances are defined using iptables.

Many operating systems are installed with a number of services installed and activated by default. Before installing a new operating system it is essential that the installation be carefully planned. This involves deciding which services are not required and identifying which services have been installed and enabled by default. Deployment of new operating system installations should never be rushed. The fewer services and open ports available on a system, the smaller the attack surface and the fewer the opportunities available to attackers.

1.3  Ubuntu Services and iptables Rules

By default, a newly installed Ubuntu system does not have any iptables rules defined to restrict access to ports. To view the current iptables settings, the following command may be executed in a terminal window:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
 
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
 
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

As illustrated in the above output, no rules are currently defined. Whilst this may appear to be an unsafe configuration it is important to keep in mind that a newly installed Ubuntu system also has few services running by default, making the ports essentially useless to a potential attacker. It is not possible, for example, to remotely log into a newly installed Ubuntu system or access a web server for the simple reason that neither the ssh nor web server services are installed or running by default. Once services begin to be activated on the system, however, it will be important to begin to establish a firewall strategy by defining iptables rules.

A number of methods are available for defining iptables rules, including the use of command line tools and configuration files. For example, to block access to port 25 (used by the SMTP mail transfer protocol) from IP address 192.168.2.76, the following command could be issued in a terminal window:

# iptables -A INPUT -s 192.168.2.76 -p tcp --destination-port 25 -j DROP

The new rule will then appear in the iptables listing:

# iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
DROP       tcp  --  192.168.2.76         anywhere             tcp dpt:smtp
 
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         
 
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

The rule may subsequently be removed as follows:

# iptables -D INPUT -s 192.168.2.76 -p tcp --destination-port 25 -j DROP

Given the complexity of iptables it is not surprising that a number of user friendly graphical configuration tools have been created to ease the rule creation process. One such tool is the Uncomplicated Firewall with its ufw command-line tool and graphical equivalent (gufw) which will be covered in the chapter entitled “Using gufw and ufw to Configure an Ubuntu Firewall”. For more advanced firewall configurations, firewalld will be covered in “Basic Ubuntu Firewall Configuration with firewalld”.

1.4  Well Known Ports and Services

Before moving on to cover more complex firewall rules, it is first worth taking time to outline some of the key services that can be provided by an Ubuntu system, together with the corresponding port numbers:

Port – Assignment – Description

20 – FTP – File Transfer Protocol (Data): The File Transfer Protocol provides a mechanism for transferring specific files between network connected computer systems. Transfer is typically performed using the ftp client. Most modern web browsers also have the ability to browse and download files located on a remote FTP server. FTP uses TCP (rather than UDP) to transfer files, so is considered to be a highly reliable transport mechanism. FTP does not encrypt data and is not considered to be a secure file transfer protocol. The use of Secure Copy Protocol (SCP) and Secure File Transfer Protocol (SFTP) is strongly recommended in place of FTP.

21 – FTP – File Transfer Protocol (Control): Traditionally FTP has two ports assigned (port 20 and port 21). Port 20 was originally considered the data transfer port, while port 21 was assigned to communicate control information. In modern implementations port 20 is now rarely used, with all communication taking place on port 21.

22 – SSH – Secure Shell: The Secure Shell is used to provide a secure, encrypted, remote logon session to a remote host over a TCP/IP network. The original mechanism for remote access was the Telnet protocol. Because Telnet transmits data in plain text, its use is now strongly discouraged in favor of the Secure Shell, which encrypts all communications, including login and password credentials. SSH also provides the mechanism by which files can be securely transferred using the Secure Copy Protocol (SCP), and is the basis for the Secure File Transfer Protocol (SFTP). SSH also replaces both the rsh and rlogin clients.

23 – Telnet – Telnet: Telnet is a terminal emulation protocol that provides the ability to log into a remote system over a TCP/IP connection. The access is text based, allowing the user to type into a command prompt on the remote host; text displayed by the remote host is displayed on the local Telnet client. Telnet encrypts neither the password nor the text communicated between the client and server. As such, the use of Telnet is strongly discouraged. Most modern systems will have port 23 closed and the telnet service disabled to prevent its use. SSH should be used in place of Telnet.

25 – SMTP – Simple Mail Transfer Protocol: SMTP defines the mechanism by which email messages are sent from one network host to another. SMTP is a very simple protocol and requires that the mail service always be available at the receiving host. Typically the receiving host will store incoming messages in a spool for subsequent access by the recipient using the POP3 or IMAP protocols. SMTP uses the TCP transport protocol to ensure error free message delivery.

53 – DNS – Domain Name Server: The service used by TCP/IP networks to translate host names and Fully Qualified Domain Names (FQDN) to IP addresses.

69 – TFTP – Trivial File Transfer Protocol: TFTP is a stripped down version of the File Transfer Protocol (FTP). It has a reduced command-set and lacks authentication. The most significant feature of TFTP is that it uses UDP to transfer data. This results in extremely fast transfer speeds but, consequently, lacks data reliability. TFTP is typically used in network based booting for diskless workstations.

80 – HTTP – Hypertext Transfer Protocol: HTTP is the protocol used to download text, graphics and multimedia from a web server to a web browser. Essentially it defines the command and control mechanism between the browser and server, defining client requests and server responses. HTTP is based on the TCP transport protocol and, as such, is a connection-oriented protocol.

110 – POP3 – Post Office Protocol: The POP3 protocol is a mechanism for storage and retrieval of incoming email messages from a server. In most corporate environments incoming email is stored on an email server and then downloaded to an email client running on the user’s desktop or laptop when the user checks email. POP3 downloads all new messages to the client, and does not provide the user the option of choosing which messages to download, viewing headers, or downloading only parts of messages. It is for this reason that the IMAP protocol is increasingly being used in place of POP3.

119 – NNTP – Network News Transfer Protocol: The protocol responsible for posting and retrieving messages to and from Usenet News Servers (i.e. newsgroups and discussion forums hosted on remote servers). NNTP operates at the Application layer of the OSI stack and uses TCP to ensure error free message retrieval and transmission.

123 – NTP – Network Time Protocol: A protocol designed to synchronize computer clocks with an external time source. Using this protocol an operating system or application can request the current time from a remote NTP server. The remote NTP server is usually based on the time provided by an atomic clock. NTP is useful for ensuring that all systems in a network are set to the same, accurate time of day. This is of particular importance in security situations when, for example, the time a file was accessed or modified on a client or server is in question.

143 – IMAP4 – Internet Message Access Protocol, Version 4: IMAP4 is an advanced and secure email retrieval protocol. IMAP is similar to POP3 in that it provides a mechanism for users to access email messages stored on an email server, although IMAP includes many additional features such as the ability to selectively download messages, view message headers, search messages and download part of a message. IMAP4 uses authentication and fully supports Kerberos authentication.

161 – SNMP – Simple Network Management Protocol: Provides a mechanism whereby network administrators are able to collect information about the devices (such as hubs, bridges, routers and switches) on a network. The SNMP protocol enables agents running on network devices to communicate their status to a central manager and, in turn, enables the manager to send new configuration parameters to the device agent. The agents can further be configured to notify the manager when certain events, known as traps, occur. SNMP uses UDP to send and receive data.

443 – HTTPS – Hypertext Transfer Protocol Secure: The standard HTTP (non-secure) protocol transfers data in clear text (i.e. with no encryption and visible to anyone who might intercept the traffic). Whilst this is acceptable for most web browsing purposes, it poses a serious security risk when confidential information such as credit card details needs to be transmitted from the browser to the web server. HTTPS addresses this by using the Secure Sockets Layer (SSL) to send encrypted data between the client and server.

2049 – NFS – Network File System: Originally developed by Sun Microsystems and subsequently widely adopted throughout the industry, NFS allows a file system on a remote system to be accessed over the network by another system as if the file system were on a local disk drive. NFS is widely used on UNIX and Linux based systems. Later versions of Microsoft Windows also possess the ability to access NFS shared file systems on UNIX and Linux based systems.
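A quick way to relate this list to a running system is to check which ports are currently in a listening state. The following is a minimal sketch that extracts the local TCP port numbers from the output of the ss tool; it assumes the default ss column layout, where the fourth column holds the local address and port:

```shell
# listening_ports: print the local TCP port numbers from `ss -tln` style
# output read from stdin (the header line is skipped).
listening_ports() {
  awk 'NR > 1 { n = split($4, a, ":"); print a[n] }'
}

# Typical usage:
#   ss -tln | listening_ports
```

Any unexpected port appearing in this list is a candidate for closer inspection when deciding which firewall rules are required.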

1.5  Summary

A newly installed Ubuntu system is generally considered to be secure due to the absence of any services running on the system ports. Once the system begins to be configured for use, however, it is important to ensure that it is protected from attack through the implementation of a firewall. When configuring firewalls, it is important to have an understanding of the various ports and the corresponding services.

A number of firewall options are available, the most basic being command-line configuration of the iptables firewall interface. More intuitive and advanced options are available via the Uncomplicated Firewall (ufw) and firewalld, both of which will be covered in the chapters that follow.

Ubuntu 20.04 Network Management

It is difficult to envisage an Ubuntu system that does not have at least one network connection, and harder still to imagine how such an isolated system could be of much practical use. The simple fact is that Ubuntu is designed to provide enterprise level services over network and internet connections. A key part of learning how to administer an Ubuntu system involves learning how to configure and manage the network interfaces installed on the system.

This chapter is intended to provide an overview of network management on Ubuntu including the NetworkManager service and tools together with some other useful utilities.

1.1  An Introduction to NetworkManager

NetworkManager is a service and set of tools designed specifically to make it easier to manage the networking configuration on Linux systems and is the default network management service on Ubuntu desktop installations.

In addition to a service that runs in the background, NetworkManager also includes the following tools:

  • nmcli – A tool for working with NetworkManager via the command-line. This tool is useful when access to a graphical environment is not available and can also be used within scripts to make network configuration changes.
  • nmtui – A basic text-based user interface for managing NetworkManager. This tool can be run within any terminal window and allows changes to be made by making menu selections and entering data. While useful for performing basic tasks, nmtui lacks many of the features provided by the nmcli tool.
  • nm-connection-editor – A full graphical management tool providing access to most of the NetworkManager configuration options.
  • GNOME Settings – The Network screen of the GNOME desktop Settings application allows basic network management tasks to be performed.
  • Cockpit Network Settings – The Network screen of the Cockpit web interface allows a range of network management tasks to be performed.

Although there are a number of different ways to manage the network environment on an Ubuntu system, for the purposes of this chapter we will focus on the nmcli command. While the graphical tools are certainly useful when you have access to a desktop environment or Cockpit has been enabled, understanding the command-line interface is essential for situations where a command prompt is all that is available. Also, the graphical tools (Cockpit included) do not include all of the capabilities of the nmcli tool. Finally, once you have gained some familiarity with NetworkManager and nmcli, those skills will translate easily when using the more intuitive tool options. The same cannot be said of the graphical tool options. It is harder to use nmcli if, for example, you have only ever used nm-connection-editor.

1.2  Installing and Enabling NetworkManager

NetworkManager should be installed by default for most Ubuntu installations if the Desktop installation image was used. Use the apt command to find out if it needs to be installed:

# apt -qq list network-manager
network-manager/focal,now 1.22.10-1ubuntu1 amd64 [installed,automatic]

If necessary, install the package as follows:

# apt install network-manager

Once the package is installed, the NetworkManager daemon will need to be enabled so that it starts each time the system boots:

# systemctl enable network-manager

Finally, start the service running and check the status to verify that the launch was successful:

# systemctl start network-manager
# systemctl status network-manager
● NetworkManager.service - Network Manager
   Loaded: loaded (/lib/systemd/system/NetworkManager.service; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2020-04-08 14:31:58 EDT; 19h ago
     Docs: man:NetworkManager(8)
 Main PID: 704 (NetworkManager)
    Tasks: 4 (limit: 4915)
   CGroup: /system.slice/NetworkManager.service
           ├─704 /usr/sbin/NetworkManager --no-daemon
.
.

1.3  Basic nmcli Commands

The nmcli tool will have been installed as part of the NetworkManager package and can be executed from the command-line using the following syntax:

# nmcli [Options] Object {Command | help}

In the above syntax, Object will be one of general, networking, radio, connection, monitor, device or agent, all of which can be abbreviated to a few letters of the word (for example con, or even just the letter c, for connection). For example, all of the following commands will output help information relating to the device object:

# nmcli device help
# nmcli dev help
# nmcli d help

To check the overall status of NetworkManager on the system, use the following command:

# nmcli general status
STATE      CONNECTIVITY  WIFI-HW  WIFI     WWAN-HW  WWAN    
connected  full          enabled  enabled  enabled  enabled

To check the status of the devices installed on a system, the following command can be used:

# nmcli dev status
DEVICE           TYPE      STATE      CONNECTION         
eno1             ethernet  connected  Wired connection 1 
wlxc83a35cad517  wifi      connected  zoneone          
virbr0           bridge    connected  virbr0             
lo               loopback  unmanaged  --                 
virbr0-nic       tun       unmanaged  --

The output may also be modified by using the -p (pretty) option to make the output more human friendly:

# nmcli -p dev status
=====================
  Status of devices
=====================
DEVICE           TYPE      STATE      CONNECTION         
-------------------------------------------------------------------------------
eno1             ethernet  connected  Wired connection 1 
wlxc83a35cad517  wifi      connected  zoneone          
virbr0           bridge    connected  virbr0             
lo               loopback  unmanaged  --                 
virbr0-nic       tun       unmanaged  --

Conversely, the -t option may be used to make the output more terse and suitable for automated processing:

# nmcli -t dev status
eno1:ethernet:connected:Wired connection 1
wlxc83a35cad517:wifi:connected:zoneone
virbr0:bridge:connected:virbr0
lo:loopback:unmanaged:
virbr0-nic:tun:unmanaged:

From the status output, we can see that the system has two physical devices installed, one Ethernet and the other a WiFi device.
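Because each field in the terse output is separated by a colon, the output is easy to process with standard tools such as awk. The following sketch, for example, extracts the names of all devices currently in the connected state (the field positions match the -t output shown above):

```shell
# connected_devices: print the device names from `nmcli -t dev status` style
# output read from stdin, keeping only devices whose state is "connected".
connected_devices() {
  awk -F: '$3 == "connected" { print $1 }'
}

# Typical usage:
#   nmcli -t dev status | connected_devices
```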

The bridge (virbr) entries are virtual devices used to provide networking for virtual machines (the topic of virtualization will be covered starting with the chapter entitled “An Overview of Virtualization Techniques”). The loopback interface is a special virtual device that allows the system to communicate with itself and is typically used to perform network diagnostics.

When working with NetworkManager, it is important to understand the difference between a device and a connection. As described above, a device is either a physical or virtual network device while a connection is a network configuration that the device connects to.

The following command displays information about the connections configured on the system:

# nmcli con show
NAME               UUID                                  TYPE     DEVICE          
zoneone            bbd6e294-5d0c-4eac-b3c2-4dfd44becc9c  wifi      wlxc83a35cad517 
Wired connection 1 56f32c14-a4d2-32c8-9391-f51967efa173  ethernet eno1            
virbr0             f2d3494f-6ea4-4c90-936c-5eda9ac96a85  bridge   virbr0          
zonetwo            f2a20df5-aa5e-4576-8379-579d154c3e0d  wifi      --              
zonethree          45beac50-8741-41a6-abff-415640e24071  wifi      --

From the above output, we can see that the WiFi device (wlxc83a35cad517) is connected to a wireless network named zoneone while the Ethernet device (eno1) is connected to a connection named Wired connection 1. In addition to zoneone, NetworkManager has also listed two other WiFi connections named zonetwo and zonethree, neither of which currently have a device connected.

To find out the IP address allocated to a connection, the ip tool can be used with the address option:

# ip address

This can also be abbreviated:

# ip a

.
.
3: wlxc83a35cad517: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether c8:3a:35:ca:d5:17 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.121/24 brd 192.168.1.255 scope global dynamic noprefixroute wlxc83a35cad517
       valid_lft 86076sec preferred_lft 86076sec
.
.

The ip command will output information for all of the devices detected on the system. The above output shows that the WiFi device has been assigned an IP address of 192.168.1.121.
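When only the address itself is needed, for example within a script, the relevant field can be extracted from the ip output. A minimal sketch, assuming the WiFi device name used in the examples above:

```shell
# ipv4_of: print the first IPv4 address (without the prefix length) found in
# `ip -o -4 addr show <device>` style output read from stdin.
ipv4_of() {
  awk '{ for (i = 1; i < NF; i++) if ($i == "inet") { split($(i + 1), a, "/"); print a[1]; exit } }'
}

# Typical usage (assumes the WiFi device name from the output above):
#   ip -o -4 addr show wlxc83a35cad517 | ipv4_of
```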

If we only wanted to list active connections, the nmcli command could have been used with the -a option:

# nmcli con show -a
NAME               UUID                                  TYPE     DEVICE          
zoneone            bbd6e294-5d0c-4eac-b3c2-4dfd44becc9c  wifi      wlxc83a35cad517 
Wired connection 1 56f32c14-a4d2-32c8-9391-f51967efa173  ethernet eno1            
virbr0             f2d3494f-6ea4-4c90-936c-5eda9ac96a85  bridge   virbr0 

To switch the WiFi device connection from zoneone to zonetwo, we can run the following command:

# nmcli device wifi connect zonetwo --ask
Password:

The --ask flag causes nmcli to prompt the user to enter the password for the WiFi network. To include the WiFi password on the command-line (particularly useful if the command is being executed in a script), use the password option:

# nmcli device wifi connect zonetwo password <password here>

The nmcli tool may also be used to scan for available WiFi networks as follows:

# nmcli device wifi list
IN-USE  SSID        MODE   CHAN  RATE        SIGNAL  BARS  SECURITY  
        zoneone     Infra  6     195 Mbit/s  80            WPA2      
*       zonetwo     Infra  11    130 Mbit/s  74            WPA1 WPA2

A currently active connection can be deactivated as follows:

# nmcli con down <connection name>

Similarly, an inactive connection can be brought back up at any time:

# nmcli con up <connection name>

When a connection is brought down, NetworkManager automatically searches for another connection, activates it and assigns it to the device to which the previous connection was established. To prevent a connection from being used in this situation, disable the autoconnect option as follows:

# nmcli con mod <connection name> connection.autoconnect no
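The same technique can be applied to a whole class of connections by combining it with the terse output format. The following sketch identifies the names of all WiFi connection profiles; the loop that would then disable autoconnect for each is shown as a comment because it modifies live configuration:

```shell
# wifi_profiles: print the names of WiFi connection profiles from
# `nmcli -t -f NAME,TYPE con show` style output read from stdin.
wifi_profiles() {
  awk -F: '$2 == "802-11-wireless" { print $1 }'
}

# Typical usage (modifies live configuration, so shown here as a comment):
#   nmcli -t -f NAME,TYPE con show | wifi_profiles | \
#     while IFS= read -r name; do
#       nmcli con mod "$name" connection.autoconnect no
#     done
```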

The following command may be used to obtain additional information about a specific connection. This includes the current values for all the connection properties:

# nmcli con show "Wired connection 1"
connection.id:                          Wired connection 1
connection.uuid:                        56f32c14-a4d2-32c8-9391-f51967efa173
connection.stable-id:                   --
connection.type:                        802-3-ethernet
connection.interface-name:              --
connection.autoconnect:                 yes
connection.autoconnect-priority:        -999
connection.autoconnect-retries:         -1 (default)
connection.auth-retries:                -1
connection.timestamp:                   1586442354
connection.read-only:                   no
connection.permissions:                 --
connection.zone:                        --
connection.master:                      --
connection.slave-type:                  --
connection.autoconnect-slaves:          -1 (default)
.
.

All of these properties can be modified using nmcli with the modify option and the following syntax:

# nmcli con mod <connection name> connection.<property name> <setting>

1.4  Working with Connection Profiles

So far we have explored the use of connections without explaining how a connection is configured. The configuration of a connection is referred to as a connection profile and is stored in a file located in the /etc/NetworkManager/system-connections directory, the contents of which might read as follows:

# ls /etc/NetworkManager/system-connections
 zoneone.nmconnection   zonetwo.nmconnection  zonethree.nmconnection

Each of the files is an interface configuration file containing the connection profile for the corresponding connection.

Consider, for example, the contents of our hypothetical zoneone connection:

[connection]
id=zoneone
uuid=2842f180-1969-4dda-b473-6c641c25308d
type=wifi
permissions=
 
[wifi]
mac-address=C8:3A:35:CA:D5:17
mac-address-blacklist=
mode=infrastructure
ssid=zoneone
 
[wifi-security]
auth-alg=open
key-mgmt=wpa-psk
psk=MyPassword
 
[ipv4]
dns-search=
method=auto
 
[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto

The file contains basic information about the connection, including the type (wifi), and the SSID and WPA password key for the WiFi network. For both IPv4 and IPv6 the method property is set to auto (in other words, the IP address for the connection will be obtained dynamically using DHCP). Changes to the connection profile can be implemented by modifying this file and instructing nmcli to reload the connection configuration files:

# nmcli con reload

New connection profiles can also be created manually or generated automatically by nmcli. As an example, assume that a new network device has been installed on the system. When this happens, the NetworkManager service will detect the new hardware and create a device for it. In the example below, the new device has been assigned the name enp0s8:

# nmcli dev status
DEVICE  TYPE      STATE      CONNECTION         
enp0s3  ethernet  connected  Wired connection 1 
enp0s8  ethernet  connected  Wired connection 2 

NetworkManager automatically detected the device, activated it and assigned it to a connection named “Wired connection 2”. This is a default connection over which we have no configuration control because there is no interface configuration file for it in /etc/NetworkManager/system-connections. The next steps are to delete the “Wired connection 2” connection and use nmcli to create a new connection and assign it to the device. The command to delete a connection is as follows:

# nmcli con delete "Wired connection 2"

Next, nmcli can be used to create a new connection profile configured either with a static IP address, or a dynamic IP address obtained from a DHCP server. To create a dynamic connection profile named dyn_ip, the following command would be used:

# nmcli connection add type ethernet con-name dyn_ip ifname enp0s8
Connection 'dyn_ip' (160d9e10-bbc8-439a-9c47-a2ec52990472) successfully added.

After the connection has been created, a file named dyn_ip.nmconnection will have been added to the /etc/NetworkManager/system-connections directory and will read as follows:

[connection]
id=dyn_ip
uuid=3dc0bb6b-33dc-4cf8-b5da-5b9fd560342a
type=ethernet
interface-name=enp0s8
permissions=
 
[ethernet]
mac-address-blacklist=
 
[ipv4]
dns-search=
method=auto
 
[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto

Checking the device status should now verify that the enp0s8 device is using the dyn_ip connection profile:

# nmcli dev status
DEVICE  TYPE      STATE      CONNECTION         
enp0s8  ethernet  connected  dyn_ip             
enp0s3  ethernet  connected  Wired connection 1 

At this point it is worth noting that the enp0s3 device is also using a default connection profile for which there is no interface file through which to modify the connection settings. The same steps used to create the dyn_ip profile can also be used for the enp0s3 device. For example, to create a connection named static_ip with a static IP address (in this case 192.168.1.200) assigned to the enp0s3 device, the following command would be used (keeping in mind that if you are connected remotely to the system via the Wired connection 1 interface you will lose the connection):

# nmcli con delete "Wired connection 1"
# nmcli con add type ethernet con-name static_ip ifname enp0s3 ip4 192.168.1.200/24 gw4 192.168.1.1
Connection 'static_ip' (3fccafb3-e761-4271-b310-ad0f28ee8606) successfully added.
# nmcli con reload

The corresponding static_ip.nmconnection file will read as follows:

[connection]
id=static_ip
uuid=6e03666b-26a1-476e-b5b2-77c8eac6006c
type=ethernet
interface-name=enp0s3
permissions=
 
[ethernet]
mac-address-blacklist=
 
[ipv4]
address1=192.168.1.200/24,192.168.1.1
dns-search=
method=manual
 
[ipv6]
addr-gen-mode=stable-privacy
dns-search=
method=auto

The command to add a new connection may be altered slightly to also assign both IPv4 and IPv6 static addresses:

# nmcli con add type ethernet con-name static_ip ifname enp0s3 ip4 192.168.1.200/24 gw4 192.168.1.1 ip6 cabf::4532 gw6 2010:dfa::1

1.5  Interactive Editing

In addition to using nmcli with command-line options, the tool also includes an interactive mode that can be used to create and modify connection profiles. The following transcript, for example, shows interactive mode being used to create a new Ethernet connection named demo_con:

# nmcli con edit
Valid connection types: 6lowpan, 802-11-olpc-mesh (olpc-mesh), 802-11-wireless (wifi), 802-3-ethernet (ethernet), adsl, bluetooth, bond, bridge, cdma, dummy, generic, gsm, infiniband, ip-tunnel, macsec, macvlan, ovs-bridge, ovs-interface, ovs-port, pppoe, team, tun, vlan, vpn, vxlan, wimax, wpan, bond-slave, bridge-slave, team-slave
Enter connection type: ethernet
 
===| nmcli interactive connection editor |===
 
Adding a new '802-3-ethernet' connection
 
Type 'help' or '?' for available commands.
Type 'print' to show all the connection properties.
Type 'describe [<setting>.<prop>]' for detailed property description.
 
You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, dcb, sriov, ethtool, match, ipv4, ipv6, tc, proxy
nmcli> set connection.id demo_con
nmcli> set connection.interface-name enp0s8
nmcli> set connection.autoconnect yes
nmcli> set ipv4.method auto 
nmcli> set 802-3-ethernet.mtu auto
nmcli> set ipv6.method auto
nmcli> save
Saving the connection with 'autoconnect=yes'. That might result in an immediate activation of the connection.
Do you still want to save? (yes/no) [yes] yes
Connection 'demo_con' (cb837408-6c6f-4572-9548-4932f88b9275) successfully saved.
nmcli> quit

The following transcript, on the other hand, modifies the previously created static_ip connection profile to use a different static IP address to the one originally specified:

# nmcli con edit static_ip
 
===| nmcli interactive connection editor |===
 
Editing existing '802-3-ethernet' connection: 'static_ip'
 
Type 'help' or '?' for available commands.
Type 'print' to show all the connection properties.
Type 'describe [<setting>.<prop>]' for detailed property description.
 
You may edit the following settings: connection, 802-3-ethernet (ethernet), 802-1x, dcb, sriov, ethtool, match, ipv4, ipv6, tc, proxy
nmcli> print ipv4.addresses
ipv4.addresses: 192.168.1.200/24
nmcli> set ipv4.addresses 192.168.1.201/24
nmcli> save
Connection 'static_ip' (3fccafb3-e761-4271-b310-ad0f28ee8606) successfully updated.
nmcli> quit

After modifying an existing connection, remember to instruct NetworkManager to reload the configuration profiles:

# nmcli con reload

When using interactive mode, it is useful to know that there is an extensive built-in help system available to learn how to use the tool. The help topics can be accessed by typing help or ? at the nmcli> prompt:

nmcli> ?
------------------------------------------------------------------------------
---[ Main menu ]---
goto     [<setting> | <prop>]        :: go to a setting or property
remove   <setting>[.<prop>] | <prop> :: remove setting or reset property value
set      [<setting>.<prop> <value>]  :: set property value
describe [<setting>.<prop>]          :: describe property
print    [all | <setting>[.<prop>]]  :: print the connection
verify   [all | fix]                 :: verify the connection
save     [persistent|temporary]      :: save the connection
activate [<ifname>] [/<ap>|<nsp>]    :: activate the connection
back                                 :: go one level up (back)
help/?   [<command>]                 :: print this help
nmcli    <conf-option> <value>       :: nmcli configuration
quit                                 :: exit nmcli
------------------------------------------------------------------------------

1.6  Configuring NetworkManager Permissions

In addition to making it easier to manage networks on Ubuntu, NetworkManager also allows permissions to be specified for connections. The following command, for example, restricts a connection profile to root and user accounts named john and caitlyn:

# nmcli con mod static_ip connection.permissions user:root,john,caitlyn

Once the connection profiles have been reloaded by NetworkManager, the static_ip connection will only be active and accessible to other users when at least one of the designated users is logged in to an active session on the system. As soon as the last of these users logs out, the connection will go down and remain inactive until one of the users signs back in.

In addition, only users with permission are able to make changes to the connection status or configuration.

1.7  Summary

Network management on Ubuntu is handled by the NetworkManager service. NetworkManager views a network as consisting of network interface devices and connections. A network device can be a physical Ethernet or WiFi device or a virtual device used by a virtual machine guest. Connections represent the network to which the devices connect and are configured by connection profiles. A configuration profile will, among other settings, define whether the connection has a static or dynamic IP address, the IP address of any gateway used by the network and whether or not the connection should be established automatically each time the system starts up.

NetworkManager can be administered using a number of different tools including the nmcli and nmtui command-line tools, the nm-connection-editor graphical tool and the network settings section of the Cockpit web interface. In general, the nmcli command-line tool provides the most features and flexibility.

Ubuntu 20.04 Snap Package Management

The previous chapter explored the use of the Advanced Packaging Tool (APT) to install and update software packages on an Ubuntu system. In recent years a new package management system called Snap has been under development by the Ubuntu team at Canonical, Ltd. Although there are no official plans to replace APT entirely with Snap, the list of packages that can now be installed as “snaps” continues to grow.

The goal of this chapter is to introduce the Snap system, highlight the key advantages it has over the APT system and to outline how to use the snap command-line tool to install and manage snap-based software packages.

1.1  Managing Software with Snap

The apt tool installs software that is packaged in .deb files. A package installed using apt will often be dependent upon other packages that will also need to be installed in order to function. During an installation, apt will also download and install these additional package dependencies. Consider a graphics design app which depends on a particular imaging library. During installation, apt will install the graphics app package in addition to the package containing the library on which it depends. Now, assume that the user decides to install a different graphics tool that also relies on the same graphics library. Usually this would not be a problem since the apps will both share the same library, but problems may occur if the two apps rely on different versions of the library. Installing the second app may, therefore, stop the first app from working correctly. Another limitation of apt and .deb packages is that it is difficult to have two different versions of the same tool or app installed in parallel on a system. A user might, for example, want to keep version 1.0 of the graphics app installed while also trying out the latest beta release of the 2.0 version. After trying out version 2.0, the user may then want to remove version 1.0, leaving the new version installed, a task that would be hard to achieve using apt.

The snap system has been designed specifically to address these types of shortcomings. The snap tool installs .snap packages that contain all of the libraries and assets that are required for the software to function. This avoids the need to install any dependencies as separate, independent packages. Once a snap has been installed it is placed in a self-contained location so that no dependencies are shared with other packages. Our hypothetical graphics apps, for example, each have their own copies of the exact imaging library version used by the app developer which cannot be deleted, replaced with an incompatible version or overwritten by any other package installations.

Of course, the use of snaps results in larger package files, which leads to longer package download times, slower installation performance and increased disk space usage. That being said, these shortcomings are generally more than outweighed by the advantages of snap packages.

Snap also supports the concept of channels which allow app developers to publish different versions of the same app. Snap channels are the mechanism by which multiple versions of the same software are able to be installed in parallel.

1.2  Basic Snap Commands

Although many software packages are still provided in .deb format and installed using apt, the number of apps and tools now provided in snap format is increasing rapidly. In fact, all of the software listed in the Ubuntu Software tool (outlined previously in the chapter entitled “A Guided Tour of the GNOME 3 Desktop”) is packaged and installed using snap. Snap-based software may also be installed using the snap command-line tool, the basics of which will be covered in this section.

To list the snap packages that are available for a specific category of software, run a command similar to the following:

# snap find "image editor"
Name             Version       Publisher          Notes  Summary 
gimp             2.10.18       snapcrafters       -      GNU Image Manipulation Program 
paintsupreme-3d  1.0.41        braindistrict      -      PaintSupreme 3D
.
.

The above command will list all snap packages available for download and installation containing software related in some way to image editing. One such result will be the gimp image editor. Details about the gimp snap can be found as follows:

$ snap info gimp
name:      gimp
summary:   GNU Image Manipulation Program
publisher: Snapcrafters
store-url: https://snapcraft.io/gimp
contact:   https://github.com/snapcrafters/gimp/issues
license:   GPL-3.0+
description: |
  Whether you are a graphic designer, photographer, illustrator, or scientist,
  GIMP provides you with sophisticated tools to get your job done. You can
  further enhance your productivity with GIMP thanks to many customization
  options and 3rd party plugins.

  This snap is maintained by the Snapcrafters community, and is not necessarily
  endorsed or officially maintained by the upstream developers.

  Upstream Project: https://www.gimp.org/
  snapcraft.yaml Build Definition:
  https://github.com/snapcrafters/gimp/blob/master/snap/snapcraft.yaml
snap-id:   KDHYbyuzZukmLhiogKiUksByRhXD2gYV
channels:
  latest/stable:    2.10.18 2020-03-03 (252) 182MB -
  latest/candidate: ↑
  latest/beta:      ↑
  latest/edge:      2.11.02 2020-04-28 (265) 184MB -

The snap find command can also be used to find a specific package by name, together with other packages that provide similar features. Searching for the VLC media player app, for example, also lists similar software packages:

# snap find vlc
Name            Version                 Publisher  Notes  Summary
vlc             3.0.10                  videolan   -      The ultimate media player
mjpg-streamer   2.0                     ogra       -      UVC webcam streaming tool
audio-recorder  3.0.5+rev1432+pkg-7b07  brlin      -      A free audio-recorder for Linux (EXTREMELY BUGGY)
tundra          0.1.0                   m4tx       -      MyAnimeList scrobbler
dav1d           0.6.0                   videolan   -      AV1 decoder from VideoLAN
peerflix        v0.39.0+git1.df28e20    pmagill    -      Streaming torrent client for Node.js

The snap list command can be used to obtain a list of snap packages that are already installed on a system:

$ snap list
Name                 Version                     Rev   Tracking         Publisher   Notes
canonical-livepatch  9.5.5                       95    latest/stable    canonical  
core                 16-2.44.3                   9066  latest/stable    canonical  core
core18               20200427                    1754  latest/stable    canonical  base
gnome-3-28-1804      3.28.0-16-g27c9498.27c9498  116   latest/stable    canonical 
.
.

To install a snap package (for example to install the Remmina remote desktop tool), run the snap command with the install option followed by the name of the package to be installed:

$ snap install remmina

To remove a snap package, simply specify the package name when running snap with the remove option:

# snap remove remmina

1.3  Working with Snap Channels

If no channel is specified when performing an installation, snap will default to the stable channel. This ensures that the latest reliable version of the software is installed. To perform the installation from a different channel, begin by identifying the channels that are currently available for the required package using the snap info option:

# snap info remmina
name:      remmina
summary:   Remote Desktop Client
.
.
channels:
  latest/stable:    v1.4.3+git13.688f5f75 2020-04-20 (4139) 37MB -
  latest/candidate: ↑                                            
  latest/beta:      ↑                                            
  latest/edge:      v1.4.3+git27.1bd753df 2020-05-01 (4150) 37MB -

From the above output we can see that while the stable version of the Remmina app is v1.4.3+git13.688f5f75, a more recent version is available in the edge channel.

Of course, the candidate, beta and edge channels provide access to the software in increasingly unstable forms (referred to as risk levels). If you would like to try out an early access version of the upcoming features of a package, install it from one of the higher risk channels. For example:

# snap install --channel=edge remmina

The channel selection may also be abbreviated to --stable, --candidate, --beta or --edge, for example:

# snap install --edge remmina

If the package is already installed, the risk level can be changed using the switch option:

# snap switch --channel=edge remmina

This will change the channel that snap is tracking for the specified package. The current channel being tracked for a package can be identified using the snap info command:

# snap info remmina 
name:      remmina 
. 
.
tracking:     latest/edge 
. 
.

Simply running a snap switch command will not immediately refresh the package to use the new channel. To understand how this works it will help to explore the snap refresh schedule.

1.4  Snap Refresh Schedule

The snap system includes a background service named snapd which is responsible for refreshing installed snaps based on the channels that they are tracking. By default, snapd performs refresh operations at regular intervals (typically four times a day). To identify when the last refresh was performed and the next is due, run the following command:

# snap refresh --time
timer: 00:00~24:00/4
last: today at 07:23 EDT
next: today at 14:25 EDT

The above output also includes timer information which indicates that the refresh will be performed four times within each 24 hour period:

.
.
timer: 00:00~24:00/4
.
.

The snap command can also be used to force a refresh of all installed snap packages as follows:

# snap refresh

Alternatively, to refresh a specific package:

# snap refresh remmina

To switch a package to a different channel without having to wait for the next snapd service refresh, simply run the snap refresh command as follows, specifying the target channel:

# snap refresh remmina --channel=edge

The snap system also has a set of four properties that may be modified to adjust the refresh schedule used by snapd:

  • refresh.timer: Stores the current refresh schedule and frequency.
  • refresh.hold: Used to delay refresh operations until the specified day and time (in RFC 3339 format).
  • refresh.metered: Pauses refresh operations when network access is via a metered connection (such as a mobile data connection).
  • refresh.retain: Used to configure the number of revisions of each snap installation that are to be retained.

For example, to schedule the refresh to occur on weekdays between 1:00am and 2:00am:

# snap set system refresh.timer=mon-fri,1:00-2:00

Similarly, the following command will configure refreshes twice every day to occur between the hours of 6:00am and 7:00am, and 10:00pm and 11:00pm:

# snap set system refresh.timer=6:00-7:00,22:00-23:00

A full explanation of the timer format and syntax can be found online at the following URL:

https://snapcraft.io/docs/timer-string-format
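By way of illustration, and based on the format described at that URL, the following are examples of timer strings (treat the specifics as assumptions to be verified against the documentation):

```
0:00~24:00/4           # four refreshes at random times through each day (the default)
mon,10:00,,fri,15:00   # Mondays at 10:00 and Fridays at 15:00
fri5,23:00-01:00       # the last Friday of each month, between 23:00 and 01:00
```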

After making a change to the timer, be sure to check the settings as follows:

# snap refresh --time
timer: mon-fri,1:00-2:00
last: today at 07:23 EDT
next: tomorrow at 01:00 EDT

To pause refreshes, the date and time at which refreshing is to resume must be specified using the RFC 3339 format, details of which can be found at the following URL:

https://tools.ietf.org/html/rfc3339

In summary, the date and time should use the following format:

YYYY-MM-DDTHH:MM:SS<UTC offset>

For example, to specify a hold until October 12, 2020 at 3:20am for a system located in New York, the date and time would be formatted as follows:

2020-10-12T03:20:50.0-05:00

Note that since New York uses Eastern Standard Time (EST), it has a five hour offset from Coordinated Universal Time (UTC-05:00). Having formatted the date and time, the following command would be used to set the hold:

# snap set system refresh.hold="2020-10-12T03:20:50.0-05:00"
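Rather than composing the timestamp by hand, the GNU date command can generate one. The following is a minimal sketch (the 60-day interval is an arbitrary example, and the UTC offset produced reflects the system's local time zone):

```shell
# Compute an RFC 3339 style timestamp 60 days from now,
# including the local UTC offset:
HOLD=$(date --date="+60 days" +%Y-%m-%dT%H:%M:%S%:z)
echo "$HOLD"

# The value can then be applied (requires snapd):
# snap set system refresh.hold="$HOLD"
```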

To check the current hold setting, use snap with the system get option:

# snap get system refresh.hold
2020-10-12T03:20:50.0-05:00

To remove the hold, simply assign a null value to the property:

# snap set system refresh.hold=null

The refresh.retain property can be set to any value between 0 and 20, for example:

# snap set system refresh.retain=10

Finally, to pause refresh updates while the system is on a metered connection, set the refresh.metered property to hold as follows:

# snap set system refresh.metered=hold

As with the hold property, disable this setting by assigning a null value to the property:

# snap set system refresh.metered=null

1.5  Snap Services

It is worth noting that some snap packages include their own services which run in the background when the package is installed (much like the systemd services described in the chapter entitled “Managing Ubuntu systemd Units”). To obtain a list of snap services that are currently running on a system, execute the snap command with the services option:

# snap services
Service                                   Startup  Current  Notes
canonical-livepatch.canonical-livepatchd  enabled  active   -

The above output indicates that the LivePatch snap service is currently enabled and active. To start or stop a service, the following snap commands can be used:

# snap start canonical-livepatch.canonical-livepatchd
# snap stop canonical-livepatch.canonical-livepatchd

Similarly, the snap enable and disable options may be used to control whether or not a service starts automatically at system startup:

# snap enable canonical-livepatch.canonical-livepatchd
# snap disable canonical-livepatch.canonical-livepatchd

If the snap service generates a log file, that file can be viewed as follows:

# snap logs canonical-livepatch
2020-05-06T13:21:58Z canonical-livepatch[763]: No payload available.
2020-05-06T13:21:58Z canonical-livepatch[763]: during refresh: cannot check: No machine-token. Please run 'canonical-livepatch enable'!
.
.

It is also still possible to manage snap services using the systemctl command. This usually involves prefixing the service name with “snap.”. For example:

# systemctl status snap.canonical-livepatch.canonical-livepatchd

1.6  Summary

Until recently, all Ubuntu software packages were stored in .deb files and installed using the Advanced Packaging Tool (APT). An increasing number of packages are now available for installation using Snap, a package management system developed by Canonical, Ltd. Unlike apt packages, snap bundles all of the dependencies for a package into a single .snap file. This ensures that the software package is self-contained with its own copy of all of the libraries and assets needed to run, avoiding the potential conflicts of packages relying on different versions of the same shared assets and libraries. The Snap system also allows different versions of the same package to be installed in parallel. All of the software listed in the Ubuntu Software tool is supplied as snap packages. In addition, snap can be used to install, remove and manage packages from the command-line.