Friday, December 23, 2016

Configuring OpenSSH




SSH server (sshd)




/etc/ssh/sshd_config 


Directives:

- AllowUsers

- DenyUsers

- HostKey

- ListenAddress

- PermitRootLogin

- Port

- Protocol
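
As a sketch only (the address, usernames, and key path below are illustrative assumptions, not required values), these directives might appear in the sshd_config file like this:

    Port 22
    Protocol 2
    ListenAddress 192.168.1.10
    HostKey /etc/ssh/ssh_host_rsa_key
    PermitRootLogin no
    AllowUsers rtracy student
    DenyUsers guest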







SSH client 


/etc/ssh/ssh_config


The /etc/ssh/ssh_config file is used to specify default parameters for all users running ssh on the system. A user can override these defaults using the ~/.ssh/ssh_config file in his or her home directory.


The precedence for ssh client configuration settings is as follows:


1. Any command-line options included with the ssh command at the shell prompt

2. Settings in the ~/.ssh/ssh_config file

3. Settings in the /etc/ssh/ssh_config file










Connect




ssh -l user_name ip_address




Don't forget the -l parameter. If you don't include it, the SSH client will
attempt to authenticate you to the remote system using the same
credentials you used to authenticate to the local system.







Encryption III

Tunnel your X server traffic to remote X clients using an SSH connection



To configure a remote X client without encryption, you can use the
following procedure:





1. On the remote X client, enter



   xhost +X_server_hostname



   This tells the client to accept connections from the X server.





2. On the X server, enter



   DISPLAY=X_client_hostname:0.0



   and then enter



   export DISPLAY



   This tells the X server to display its output on the remote X client.






3. From the X client, use the ssh client to access the shell prompt on
   the X server and then run the graphical application you want displayed
   on the X client. For example, you could enter gedit at the shell
   prompt to remotely display the gedit text editor. You could also enter
   office at the shell prompt to remotely display the OpenOffice.org
   suite.







Encrypted

This procedure works, but all the X traffic is transmitted
unencrypted. This isn’t good. Instead, you should use SSH to tunnel
the X server traffic between the X server and the X client. You can do
this using one of the following options:




On the X client system:


• Use the -X option with the ssh client program.

• Set the ForwardX11 option to a value of yes in the /etc/ssh/ssh_config file.


On the X server system:


Once this is done, you then need to set the X11Forwarding option to yes in the /etc/ssh/sshd_config file.






Encryption IV

SSH to tunnel POP3 traffic






Let’s walk through an example of how you can use SSH to tunnel POP3 traffic:





1. Make sure the ssh client is installed on the local system where the

   e-mail client will run.



2. Make sure the sshd daemon is installed and running on the POP3 server.



3. Ensure IP port 22 is open on the server where sshd is running.



4. On the system where sshd is running, switch to root and edit the





  /etc/ssh/sshd_config 





   file.




5. Locate the AllowTcpForwarding parameter, uncomment it if necessary,
   and then set it to a value of yes. An example is shown here:




    AllowTcpForwarding  yes




6. Save your changes to the file and exit the editor.



7. Restart the sshd daemon by entering systemctl restart sshd at the
   shell prompt (as root).




8. Switch to the client system.




9. Create a local ssh tunnel from a local high IP port (in this
   example, port 2345) to port 110 on the POP3 server using the following
   command (enter the remote user’s password when prompted):




    ssh -f -N -L 2345:pop3_host_address:110 user_name@pop3_host_address





   The options specified in this command do the following:




   • -N and -f 


      Tell ssh not to execute a command remotely on the server
      and to run in the background after prompting for the remote user’s
      password

 


   • -L

      Specifies three things:

      • The local port to be used for the client end of the tunnel (in
        this case, 2345)


      • The hostname or IP address of the remote POP3 server


      • The port on the remote server that will be used for the server
        end of the tunnel (in this case, 110)



   You don’t have to use port 2345. You can use the same port on both
   ends if desired. However, be aware that you will need to switch to the
   root user if you want to use a port number less than 1024 on the
   client side of the tunnel. These are called privileged ports.




10. With the tunnel established, configure the local e-mail client
    program to retrieve mail from the local system on the port you
    configured for the client end of the SSH tunnel. In this example, you
    would configure it to get mail from the local system’s IP address on
    port 2345. An example of how to do this with the Evolution e-mail
    client is shown in Figure 18-6.



    Note that I used the hostname of the local host, not the POP3 server, in the Server field.
    I also added the port number of the workstation end of the tunnel to the end of the
    hostname.







At this point, when the client uses the POP3 protocol to download new
messages, the SSH client on the local system will encrypt the request
and forward it to the SSH server through the SSH tunnel you
established. The SSH server will receive the request, decrypt it, and
then pass the data on to the local port 110, where the POP3 daemon is
listening. The cool thing about this process is that it is completely
transparent to the e-mail client software. As far as it’s concerned,
it’s retrieving e-mail from a local POP3 server.



You can test the tunnel you created using the telnet command from the
client end of the tunnel. The syntax is




telnet localhost client_tunnel_port



Here’s an example:





telnet localhost 2345






Encryption IV

SSH to Use Public Key Authentication





1. At the shell prompt of the client system, enter



ssh-keygen -t rsa




or



ssh-keygen -t dsa





2. When prompted for the file in which the private key will be saved,
press enter to use the default filename of



~/.ssh/id_rsa 



or



~/.ssh/id_dsa




The associated public key will be saved as




~/.ssh/id_rsa.pub 



or



~/.ssh/id_dsa.pub






The next thing you need to do is to copy the public key you just
created to the SSH server. 




scp ~/.ssh/key_name.pub   user_name@address_of_SSH_server:filename






At this point, the contents of the key file you just copied need to be
appended to the end of the 





~/.ssh/authorized_keys 




file in the home directory of the user you will connect to the SSH server as.







If desired, you can use the



ssh-agent 



command to eliminate the need to enter the passphrase every time you establish 
an SSH connection.





1. At the shell prompt of your client system, enter



ssh-agent bash


2. At the shell prompt, enter




   ssh-add ~/.ssh/id_rsa 




   or




   ssh-add ~/.ssh/id_dsa






   depending on which key file you have created.




3. When prompted, enter the key file’s passphrase. When you do, you
   should be prompted that the identity has been added. An example
   follows:



      rtracy@ws1:~> ssh-agent bash
      rtracy@ws1:~> ssh-add ~/.ssh/id_rsa
      Enter passphrase for /home/rtracy/.ssh/id_rsa:
      Identity added: /home/rtracy/.ssh/id_rsa (/home/rtracy/.ssh/id_rsa)
      rtracy@ws1:~>






    Once this is done, the ssh-agent process stores the passphrase in
    memory. It then listens for SSH requests and automatically provides
    the key passphrase for you when requested.





Encryption V



Using GPG to encrypt files : Revoke



1) To create (not issue) the key revocation certificate, enter





gpg --output revoke.asc --gen-revoke key_ID



gpg --output revoke.asc --gen-revoke 899AB9E6









Remember, you can use the --fingerprint option with the gpg command to view the key ID number:


gpg --fingerprint student@fedora







2) Issue Revocation





gpg --import revocation_certificate_filename




gpg --import revoke.asc









Encryption VI

Using GPG to encrypt files : Symmetric



With the public keys imported, we could exchange encrypted files and be able to decrypt them.



The syntax for doing this is





gpg --output output_filename --decrypt encrypted_filename





For example, if I sent the mytestfile.txt.gpg encrypted document
from my openSUSE system to my fedora system, I would enter the
following command to decrypt it:






gpg --output mytestfile.txt.decrypted --decrypt  mytestfile.txt.gpg











Encryption VI

Using GPG to encrypt files : View Keys




You can view the keys in your GPG keyring using the


gpg --list-keys


command.


The keyring file itself is located in the ~/.gnupg/ directory within your home directory and is named pubring.gpg.







Encryption VI

Using GPG to encrypt files : exchange







But what do you do if you want to be able to exchange
encrypted files with someone else and both of you be able to decrypt them?





1)  Copy your public keys to a public key server on 
the Internet.



gpg --keyserver hkp://subkeys.pgp.net --send-key key_ID 



gpg --keyserver hkp://subkeys.pgp.net --send-key 9DF54AB2 






To find the key ID, enter:

gpg --fingerprint key_owner_email 













2) You can also just directly exchange keys between systems. 




a)

gpg --export --armor key_owner_email > public_key_filename 



gpg --export rtracy@openSUSE > gpg.pub 





b)

Each user can then copy their key file to the other users. 


scp gpg.pub student@fedora: 





c)

Once this is done, each user should import the other users’ public 
keys into their GPG keyring using the 



gpg --import public_key_filename 



gpg --import gpg.pub 









Encryption VI

Using GPG to encrypt files : decrypt




gpg 




gpg --output output_filename --decrypt encrypted_filename




gpg --output mytestfile.txt.decrypted --decrypt mytestfile.txt.gpg









Encryption VI

Using GPG to encrypt files : encrypt




gpg 




use your key pair to encrypt files and messages





gpg -e -r key_user_name filename





gpg -e -r rtracy mytestfile.txt










Encryption VI


Using GPG to encrypt files: backup


gpg 



To create a backup of your gpg key pair




gpg --export-secret-keys --armor key_owner_email_address > filename.asc





gpg --export-secret-keys --armor rtracy@openSUSE > rtracy-privatekey.asc










Encryption VI

Using GPG to encrypt files



gpg 



1. Use GPG to generate your keys.


gpg --gen-key



At this point, your key pair has been generated! The key files are
stored in the


~/.gnupg 



directory. The following files are created in this directory:

secring.gpg   This file is the GPG secret keyring.

pubring.gpg   This file is the GPG public keyring.

trustdb.gpg   This file is the GPG trust database.







Encryption VI

Thursday, December 22, 2016

Encryption VI

Encrypting Linux Files

Just as you can encrypt network transmissions between Linux systems
using OpenSSH, you can also use encryption to protect files in the
Linux file system. You can use a wide variety of tools to do this.
Some are open source; others are proprietary. For your Linux+/LPIC-1
exam, you need to know how to use the open source GNU Privacy Guard
(GPG) utility to encrypt files. Therefore, that’s the tool we will use
here. We’ll discuss the following:


• How GPG works
• Using GPG to encrypt files


How GPG Works


GNU Privacy Guard (GPG) is an open source implementation of the
OpenPGP standard (RFC 4880). It allows you to encrypt and digitally
sign your data and communications. For example, you can encrypt files
in your Linux file system. You can also encrypt and digitally sign
e-mail messages.


GPG provides a cryptographic engine that can be used directly from the
shell prompt using the


gpg 


command-line utility. It can also be called from within shell scripts or other programs running on the system.


For example, GPG support has been integrated into several popular Linux
e-mail clients such as Evolution and KMail. It has also been
integrated into instant messaging applications such as Psi.


A variety of graphical front ends are available for GPG as well. Some
of the more popular front ends include KGPG and Seahorse. However, for
your Linux+/LPIC-1 exam, you need to know how to use GPG from the
shell prompt, so that’s what we’ll focus on in this chapter.


GPG functions in a manner similar to OpenSSH in that it uses both
asymmetric and symmetric cryptography. 


a) GPG first generates a random symmetric key and uses it to encrypt the message to be transferred.


b) The symmetric key itself is then encrypted using the recipient’s
    public key and sent along with the message that was encrypted using
    the symmetric key.


c) When the recipient receives a message, GPG first decrypts the symmetric key 
    using the user’s private key. GPG then uses the decrypted symmetric key to decrypt 
    the rest of the message.



GPG supports many encryption algorithms, including the following:


Symmetric encryption:
• AES
• 3DES
• Blowfish


Asymmetric encryption:
• Elgamal
• RSA

Hashes:
• MD5
• SHA-1 and -2 
• RIPEMD-160

Digital signatures:
• DSA
• RSA


Now that you understand how GPG works, let’s review how you can use
GPG to encrypt files.


Using GPG to Encrypt Files

To encrypt a file using GPG, do the following:

1. Use GPG to generate your keys. To do this, enter

gpg --gen-key

at the shell prompt. An example is shown here:


      rtracy@openSUSE:~> gpg --gen-key
      gpg (GnuPG) 2.0.22; Copyright (C) 2013 Free Software Foundation, Inc.
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
      Please select what kind of key you want:
         (1) RSA and RSA (default)
         (2) DSA and Elgamal
         (3) DSA (sign only)
         (4) RSA (sign only)
      Your selection?


2. Select the type of key you want to create. Usually you will use the
default option (1), which uses RSA and RSA. You are prompted to
specify the size of the key, as shown here:


      RSA keys may be between 1024 and 4096 bits long.
      What keysize do you want? (2048)


3. Specify the size of key you want to create. Using the default size
of 2048 bits is usually sufficient. You are prompted to configure the
key lifetime, as shown here:


      Please specify how long the key should be valid.
               0 = key does not expire
            <n>  = key expires in n days
            <n>w = key expires in n weeks
            <n>m = key expires in n months
            <n>y = key expires in n years
      Key is valid for? (0)


4. Specify when the key will expire. As shown in step 3, you can
specify that the key expire in a certain number of days, weeks,
months, or years.


5. Construct your user ID for the key. The first parameter you need to
specify is your real name. The name you specify is very important
because it will be used later during the encryption process. In the
next example, I entered rtracy for my real name:


      GnuPG needs to construct a user ID to identify your key.
      Real name: rtracy


6. When prompted, enter your e-mail address.


7. When prompted, enter a comment of your choosing. You are prompted
to confirm the user ID you have created for the key. An example is
shown here:


       You selected this USER-ID:
           "rtracy <rtracy@openSUSE>"
       Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?


8. If the information is correct, enter O to confirm the ID. You are
prompted to enter a passphrase for the key, as shown in Figure 18-10.


9. Enter a unique passphrase for the key. After doing so, you are
prompted to perform various actions on the system while the key is
generated. An example is shown here:


       We need to generate a lot of random bytes. It is a good idea to perform
       some other action (type on the keyboard, move the mouse, utilize the
       disks) during the prime generation; this gives the random number
       generator a better chance to gain enough entropy.
       .+++++.++++++++++++++++++++++++++++++.+++++.++++++++++++++++++++++++++++++++
       +++.++++++++++.++++++++++.++++++++++.++++++++++..+++++.++++++++++>++++++++++
       ...................................>+++++............................<+++++.
       ................................>+++++...........<+++++.......+++++


10. Move the mouse, type characters on your keyboard, or open and
close your optical disc drive door. GPG uses these actions to generate
random numbers to create your key. Be aware that if you’re not doing
enough, you’ll be prompted to increase your activity to generate
enough entropy to create the key. An example is shown here:


       Not enough random bytes available.  Please do some other work to give
       the OS a chance to collect more entropy! (Need 137 more bytes)



At this point, your key pair has been generated! The key files are
stored in the


~/.gnupg 


directory in your user’s home directory. The following files are created in this directory:

• secring.gpg   This file is the GPG secret keyring.

• pubring.gpg   This file is the GPG public keyring.

• trustdb.gpg   This file is the GPG trust database.




Before going any further, you should seriously consider creating a
backup of your private key in case it gets corrupted. This is very
important because if you encrypt files with your key pair and then
lose your key, you will never be able to decrypt them. They are toast!
Even if you re-create  your key pair, you will not be able to decrypt
the files because they were encrypted with a different key pair.



To create a backup of your gpg key pair, enter



gpg --export-secret-keys --armor key_owner_email_address > filename.asc



at the shell prompt. This is shown in the following example:


rtracy@openSUSE:~> gpg --export-secret-keys --armor rtracy@openSUSE > rtracy-privatekey.asc
rtracy@openSUSE:~> ls
addnum  firstnames  mytestfile.txt  rtracy-privatekey.asc


For security reasons, you probably shouldn’t leave this file on your
hard disk. Instead, consider burning it to an optical disc or copying
it to a USB flash drive and locking it away in a file cabinet
somewhere. This will allow you to restore your private key should the
original copy on the hard drive get mangled.


You can now use your key pair to encrypt files and messages. For
example, if you wanted to encrypt a file in your Linux file system,
you would do the following:


1. At the shell prompt, enter

gpg -e -r key_user_name filename

In the example shown here, I’m encrypting the mytestfile.txt file
using the key I generated previously. The –e option tells gpg to
encrypt the specified file. Remember that I specified a key username
of rtracy when I created the key user ID, so that’s what I entered
here.

      rtracy@openSUSE:~> gpg -e -r rtracy mytestfile.txt

2. At the shell prompt, use the ls command to view the new encrypted
version of the file gpg created. The original file is left intact.


The new file will have the same filename as the original file with a “.gpg” extension added. 


In the example here, the name of the new file is mytestfile.txt.gpg. 


In Figure 18-11, the differences between the
original file and the encrypted file are shown.
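
A quick way to confirm this from the shell, as a hedged sketch reusing the filename from this example:

      rtracy@openSUSE:~> ls mytestfile.txt*
      mytestfile.txt  mytestfile.txt.gpg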


Once the file has been encrypted, it can then be decrypted using the
gpg command. The syntax is



gpg --output output_filename --decrypt encrypted_filename



For example, to decrypt  the mytestfile.txt.gpg file I created
earlier, I would enter



gpg --output mytestfile.txt.decrypted --decrypt mytestfile.txt.gpg



This is shown in the example here:



rtracy@openSUSE:~> gpg --output mytestfile.txt.decrypted --decrypt mytestfile.txt.gpg


You need a passphrase to unlock the secret key for

user: "rtracy <rtracy@openSUSE>"
2048-bit RSA key, ID FB8BF16C, created 2015-01-24 (main key ID 9DF54AB2)
gpg: encrypted with 2048-bit RSA key, ID FB8BF16C, created 2015-01-24
      "rtracy <rtracy@openSUSE>"
rtracy@openSUSE:~> cat mytestfile.txt.decrypted
This is a text file that I wrote.
rtracy@openSUSE:~>


At this point, you are able to encrypt and decrypt files on your local
system.


But what do you do if you want to be able to exchange
encrypted files with someone else and both of you be able to decrypt them? 


To do this, you must exchange and install gpg public keys on your systems. There are two ways to do this.


The first option is to copy your public keys to a public key server on
the Internet. This is done by entering


gpg --keyserver hkp://subkeys.pgp.net --send-key key_ID



at the shell prompt. Notice that this command requires you to know the
ID number associated with your gpg public key. This number is actually
displayed when you initially create your gpg key pair, but if you’re
like me, you probably didn’t take note of it. That’s not a problem
because you can generate it again from the command line. To do this,
enter


gpg --fingerprint key_owner_email

An example is shown here:

rtracy@openSUSE:~> gpg --fingerprint rtracy@openSUSE > key_ID.txt
rtracy@openSUSE:~> cat key_ID.txt
pub  2048R/9DF54AB2 2015-01-24
     Key fingerprint = AF46 4AB3 1397 B88E BC6A FBDA 465F 82C4 9DF5 4AB2
uid    rtracy       <rtracy@openSUSE>

sub  2048R/FB8BF16C 2015-01-24




In this example, I actually saved the output from the command to a
file named key_ID.txt to keep it handy, but this is optional. The ID
number of the key is contained in the first line of output from the
command (the portion after 2048R/, which is 9DF54AB2 in this example).


Once you have the ID number, you can then copy your gpg public key to
a public key server on the Internet. Using the preceding information
for my system, I would enter



gpg --keyserver hkp://subkeys.pgp.net --send-key 9DF54AB2



at the command prompt.




This option works great if you want to be able to exchange keys with a
large number of other users.





However, if you are only concerned about doing this with a limited number of people, you can also just directly exchange keys between systems.


To do this, you (and the other users) can export your public keys and
send them to each other. To do this, you enter



gpg --export --armor key_owner_email > public_key_filename



at the shell prompt.



For example, to export the public key to a file named gpg.pub from the
key pair I created earlier, I would enter the following:


rtracy@openSUSE:~> gpg --export rtracy@openSUSE > gpg.pub




Each user can then copy their key file to the other users. For
example, if I wanted to send my key to the student user account on
another Linux host named fedora, I would enter the following:

rtracy@openSUSE:~> scp gpg.pub student@fedora:

Once this is done, each user should import the other users’ public
keys into their GPG keyring using the



gpg --import public_key_filename



command at the shell prompt. In the example shown next, I first used
scp to copy the public key file from the openSUSE system to the fedora
system. I then used gpg to import the public key.



[student@fedora ~]$ gpg --import gpg.pub
gpg: key 9DF54AB2: public key "rtracy <rtracy@openSUSE>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
[student@fedora ~]



Remember, each user needs to repeat this process. Then they can use
each other’s gpg keys to encrypt and decrypt files.




You can view the keys in your GPG keyring using the


gpg --list-keys



command, as shown in the following example:

[student@fedora ~]$ gpg --list-keys
/home/student/.gnupg/pubring.gpg
--------------------------------

pub   2048R/9DF54AB2 2015-01-24
uid   rtracy <rtracy@openSUSE>
sub   2048R/FB8BF16C 2015-01-24
[student@fedora ~]$



In this example, you can see that the public key I created earlier on
openSUSE is now imported into the student user’s GPG keyring on
fedora.




The keyring file itself is located in the 


~/.gnupg/ 


directory within my home directory and is named 


pubring.gpg


NOTE

The


gpg.conf 


file is also located in the 


~/.gnupg 


directory. You can use this file to customize the way gpg works on your system.
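
For example, a hedged sketch of two commonly used gpg.conf settings (the key ID and key server shown are simply the values used elsewhere in this chapter, so treat them as assumptions):

      # Key to use by default when signing or encrypting
      default-key 9DF54AB2

      # Key server used by --send-key and --recv-keys
      keyserver hkp://subkeys.pgp.net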


I would then need to do the same thing in reverse. I should import the
public key from the student user on the fedora system to my system
(openSUSE).


With the public keys imported, we could exchange encrypted files and be able to decrypt them. 

The syntax for doing this is



gpg --output output_filename --decrypt encrypted_filename




For example, if I sent the mytestfile.txt.gpg encrypted document
from my openSUSE system to my fedora system, I would enter the
following command to decrypt it:



gpg --output mytestfile.txt.decrypted --decrypt  mytestfile.txt.gpg


When I do, I am prompted to enter the passphrase I assigned to the
private key when I initially generated it. Once this is done, the
decrypted version of the file is created and is accessible to the
local user.



Before we end this chapter, we need to discuss the topic of key
revocation. From time to time, you may need to revoke a key, which
withdraws it from public use. This should be done if the key becomes
compromised, gets lost, or if you forget the passphrase associated
with the key.


NOTE     Forgetting the passphrase associated with the key pair is a very
common problem that results in key revocation.



To revoke a key, you create a key revocation certificate. As a best
practice, you should create a key revocation certificate immediately
after initially creating your key pair. This is done in case something
gets corrupted and the revocation certificate can’t be created should
it be required for some reason later on. Here’s a key thing to
remember: creating the key revocation certificate doesn’t actually
revoke the key pair. Only when you actually issue the key revocation
certificate does the key get revoked. Basically, you create the key
revocation certificate and save it in a secure location just in case
it’s needed later.



To create (not issue) the key revocation certificate, enter



gpg --output revoke.asc --gen-revoke key_ID



at the shell prompt. Remember, you can use the


--fingerprint option


with the gpg command to view the key ID number. In the example that
follows, I create a key revocation certificate for the gpg key pair I
generated for the student user on my fedora system:



[student@fedora ~]$ gpg --fingerprint student@fedora
pub   2048R/899AB9E6 2015-01-24
      Key fingerprint = A469 942C F5C9 555A B4A4 F975 1B3A CB26 899A B9E6
uid                  student <student@fedora>
sub
[student@fedora ~]$ gpg --output revoke.asc --gen-revoke 899AB9E6
sec  2048R/899AB9E6 2015-01-24 student <student@fedora>
Create a revocation certificate for this key? (y/N) y
Please select the reason for the revocation:
  0 = No reason specified
  1 = Key has been compromised
  2 = Key is superseded
  3 = Key is no longer used
  Q = Cancel
(Probably you want to select 1 here)
Your decision? 1
Enter an optional description; end it with an empty line:
> This key has been compromised
>
Reason for revocation: Key has been compromised
This key has been compromised
Is this okay? (y/N) y
You need a passphrase to unlock the secret key for
user: "student <student@fedora>"
2048-bit RSA key, ID 899AB9E6, created 2015-01-24
ASCII armored output forced.
Revocation certificate created.
Please move it to a medium which you can hide away; if Mallory gets
access to this certificate he can use it to make your key unusable.



It is smart to print this certificate and store it away, just in case
your media become unreadable.  But have some caution:  The print system of
your machine might store the data and make it available to others!

Notice in this example that I had to specify a reason why the key is
to be revoked along with a more detailed description. I also had to
provide the passphrase that was used when the key pair was originally
created. Also notice the warning message at the end of the command
output. You probably should avoid keeping the key revocation
certificate on your system’s hard disk. Instead, burn it to the same
optical disc or copy it to the same flash drive as your key pair backup and lock it away! If someone were to get a hold of this file,
they could revoke your certificate without your knowledge or consent,
which would be a bad thing.


TIP     The output of the command says to print out the key revocation
certificate. I think that is a cumbersome way to archive it. If you
ever needed to issue it, you would have to type the revocation
certificate in manually.


Yuck! I much prefer copying it to a flash drive that I keep locked in a cabinet. Don’t forget to delete the file off the hard disk, no matter what archival mechanism you choose to use.


So what should you do if your certificate actually does get
compromised and you end up needing to revoke it? The process is
actually pretty easy. All you have to do is import the revocation
certificate in the same manner we talked about for standard
certificates. You enter gpg --import revocation_certificate_filename
at the shell prompt. An example is shown here:


[student@fedora ~]$ gpg --import revoke.asc
gpg: key 899AB9E6: "student <student@fedora>" revocation certificate imported
gpg: Total number processed: 1
gpg:    new key revocations: 1
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u


Once this is done, you can verify that the key was revoked by entering



gpg --list-keys key_ID 


at the shell prompt. You should see that the key is now marked as revoked in your keyring.
If you used the manual method discussed earlier in this chapter to distribute your public
key, you must import the key revocation certificate on any other
systems where your public key was imported.


If you are using a public key server on the Internet to distribute
your keys to other users, you would need to issue the key revocation
certificate there as well. Enter 



gpg --keyserver public_key_server_URL --send-key key_ID 



at the shell prompt. This lets everyone who is using your public key know that the key has been 
compromised and should no longer be used.




















LX0-104 Exam Objectives (X)

Encryption V

Configuring SSH to Use Public Key Authentication



In addition to authenticating to the SSH server with a username and
password combination, you can also configure your sshd daemon to allow
authentication using an RSA or DSA public key.


For this to work, the public key of the user on the client system must be stored in the



~/.ssh/authorized_keys 



file in the home directory of the user on the server system that you will 
authenticate as. To do this, you need to securely copy the public key from 
the client system to the server system. The private key, of course, remains
on the client system.




If you configure the SSH server to use public key authentication:


a) The SSH client tells the SSH server which public key should be used for
    authentication when the SSH session is initially established.

b) The SSH server then checks to see if it has that client’s public key; if
    it does, it will generate a random number and encrypt it with that
    public key.

c) It then sends the encrypted number to the client, which
   decrypts it using the private key associated with the public key.

d) The client then calculates an MD5 checksum of the number it received from
     the server.

e) It sends the checksum back to the SSH server system,
    which then calculates its own MD5 checksum of the number it originally
    sent. If the two checksums match, the user is automatically logged in.



To configure public key authentication, the first thing you need to do
is create the public/private key pair on the client system so that
you can send the public key to the SSH server. This can be done using
the



ssh-keygen 



command. Complete the following:





1. At the shell prompt of the client system, enter



ssh-keygen -t rsa




or



ssh-keygen -t dsa



depending on which encryption method your SSH server supports. To be safe,
you can simply use both commands to make two key pairs—one for RSA encryption
and the other for DSA encryption.





2. When prompted for the file in which the private key will be saved,
press enter to use the default filename of 



~/.ssh/id_rsa 



or



~/.ssh/id_dsa




The associated public key will be saved as




~/.ssh/id_rsa.pub 



or



~/.ssh/id_dsa.pub




respectively.


3. When prompted, enter a passphrase for the key. It is important that
    you use a passphrase. If you don’t, then anyone who manages to get a
    copy of your key files could authenticate to the SSH server without
    being required to enter a passphrase. Assigning a passphrase to the
    key renders the key useless if someone doesn’t know it.


    At this point, your key pair is created. An example of creating an RSA
    key pair is shown here:


   rtracy@ws1:~> ssh-keygen -t rsa
   Generating public/private rsa key pair.
   Enter file in which to save the key (/home/rtracy/.ssh/id_rsa):
   Enter passphrase (empty for no passphrase):
   Enter same passphrase again:
   Your identification has been saved in /home/rtracy/.ssh/id_rsa.
   Your public key has been saved in /home/rtracy/.ssh/id_rsa.pub.
   The key fingerprint is:
   ba:14:48:14:de:fd:42:40:f2:4b:c8:8b:03:a4:6d:fc rtracy@ws1
   The key's randomart image is:


+--[ RSA 2048]----+
| . +oo |
|oo + = o |
|o + = + o |
|o++o . |
| oEoS. |
| . o. |
|o|
| .. |
|.|
+-----------------+



rtracy@ws1:~>



The next thing you need to do is to copy the public key you just
created to the SSH server. 



An easy (and secure) way to do this is to use the 



scp 


command you learned about earlier in this chapter. The
syntax is




scp ~/.ssh/key_name.pub   user_name@address_of_SSH_server:filename





In the example shown here, the RSA public key for the local rtracy
user on WS1 is copied to the home directory of the rtracy user on WS3
and saved in a file named keyfile:



rtracy@ws1:~> scp ~/.ssh/id_rsa.pub ws3:keyfile
Password:



id_rsa.pub                                    100%  392     0.4KB/s   00:00
rtracy@ws1:~>


At this point, the contents of the key file you just copied need to be
appended to the end of the 



~/.ssh/authorized_keys 


file in the home directory of the user you will connect to the SSH server as.



An easy way to do this is to connect to the SSH server system using a standard
(password- authenticated) SSH session and then use the cat command to
append the contents of the key file to the end of the



~/.ssh/authorized_keys 



file in the user’s home directory. An example of how to do this is shown here:


rtracy@ws1:~> ssh -l rtracy ws3
Password:
Last login: Thu Jun  2 15:05:34 2011 from 192.168.1.84
rtracy@WS3:~> mkdir ~/.ssh
rtracy@WS3:~> cat keyfile >> ~/.ssh/authorized_keys
rtracy@WS3:~>


In this example, I logged in to the WS3 system via an SSH connection
as the remote rtracy user and then created the hidden .ssh directory
in that user’s home directory. I had to create the directory because
it didn’t exist yet. If the .ssh directory already exists, you can
skip this step and just append the contents of the key file to the end
of the authorized_keys file. 



Notice in the example that I used the cat command with the >> redirection 
characters to add the contents of the file named keyfile to the end of the authorized_keys file.





In this example, the authorized_keys file didn’t exist yet, so the
redirection process automatically created it for me. Because of this,
I could have actually just used a single > redirection character
because the file didn’t exist.




If, on the other hand, the authorized_keys file does already exist,
it’s very important that you remember to use the >> redirection
characters instead of >. Remember, using >> will append the output of
the command to the end of the specified file. Using a single
redirection > character will overwrite the entire file with the output
of the command. That wouldn’t be a good thing if the authorized_keys
file already had several keys in it that you wanted to keep.
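
As an aside, many OpenSSH installations also ship the ssh-copy-id helper script, which performs the copy and the append in a single step; a hedged sketch using the hosts from this example:

      rtracy@ws1:~> ssh-copy-id -i ~/.ssh/id_rsa.pub rtracy@ws3

It appends the key to ~/.ssh/authorized_keys on the server and creates the .ssh directory there if it doesn't exist yet.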




You can now test the configuration to see if public key authentication
works. If you’re still logged in to an SSH session with the SSH
server, exit out of it. Then establish a new SSH session with the
server. You should be prompted for the key file’s passphrase instead
of a username and password, as shown in Figure 18-9.





Once you enter the passphrase, you will be authenticated to the SSH
server. Notice in the next example that no password was requested to
establish the SSH session:




rtracy@ws1:~> ssh -l rtracy ws3
Last login: Thu Jun  2 16:13:39 2011 from 192.168.1.84
rtracy@WS3:~>



If desired, you can use the



ssh-agent 



command to eliminate the need to enter the passphrase every time you establish 
an SSH connection.



Complete the following:




1. At the shell prompt of your client system, enter



ssh-agent bash


2. At the shell prompt, enter


   ssh-add ~/.ssh/id_rsa 


   or


   ssh-add ~/.ssh/id_dsa



   depending on which key file you have created.




3. When prompted, enter the key file’s passphrase. When you do, you
   should be prompted that the identity has been added. An example
   follows:

      rtracy@ws1:~> ssh-agent bash
      rtracy@ws1:~> ssh-add ~/.ssh/id_rsa
      Enter passphrase for /home/rtracy/.ssh/id_rsa:
      Identity added: /home/rtracy/.ssh/id_rsa (/home/rtracy/.ssh/id_rsa)
      rtracy@ws1:~>



    Once this is done, the ssh-agent process stores the passphrase in
    memory. It then listens for SSH requests and automatically provides
    the key passphrase for you when requested.














LX0-104 Exam Objectives (X)

Encryption IV

Tunneling Traffic Through SSH

One of the key security issues you must deal with as a system
administrator is the fact that many commonly used network protocols
transfer information as clear text. Good examples of this are the POP3
and IMAP daemons we discussed in the preceding chapter. We noted that
for your Linux MTA to download e-mail messages to client systems, you
must first enable either your POP3 or IMAP daemon via xinetd. Once
done, end users can use an e-mail client to connect to the MTA and
download their mail using the appropriate protocol. The problem,
however, is the fact that both of these daemons transfer data as clear
text by default. That means the usernames and passwords users send to
authenticate to the MTA are sent as clear text along with all the
contents of their e-mail messages. This allows anyone with a sniffer to
capture packets and view the contents of the transmissions.



The good news is SSH can be used to encrypt clear-text traffic by
tunneling it through an SSH connection. When client software for the
tunneled protocol (such as an e-mail client using POP3) establishes a
connection with the local SSH client, the traffic is encrypted using
SSH and tunneled through to the SSH server. On the SSH server end, the
traffic is decrypted and then forwarded to the appropriate target
service (in this case, the POP3 daemon). This is great, because the
information is encrypted before being transmitted, even though the
original protocol (in this case, POP3) doesn’t support encryption.




Let’s walk through an example of how you can use SSH to tunnel POP3 traffic:



1. Make sure the ssh client is installed on the local system where the
   e-mail client will run.


2. Make sure the sshd daemon is installed and running on the POP3 server.


3. Ensure IP port 22 is open on the server where sshd is running.


4. On the system where sshd is running, switch to root and edit the
 


  /etc/ssh/sshd_config 



   file.


5. Locate the AllowTcpForwarding parameter, uncomment it if necessary,
   and then set it to a value of yes. An example is shown here:



    AllowTcpForwarding  yes



6. Save your changes to the file and exit the editor.


7. Restart the sshd daemon by entering systemctl restart sshd at the
   shell prompt (as root).



8. Switch to the client system.



9. Create a local ssh tunnel from a local high IP port (in this
   example, port 2345) to port 110 on the POP3 server using the following
   command (enter the remote user’s password when prompted):



    ssh -f -N -L 2345:pop3_host_address:110 user_name@pop3_host_address




   The options specified in this command do the following:



   • -N and -f 

     Tell ssh not to execute a command remotely on the server
      and to run in the background after prompting for the remote user’s
      password


   • -L 

      Specifies three things:

      • The local port to be used for the client end of the tunnel (in
        this case, 2345)

      • The hostname or IP address of the remote POP3 server

      • The port on the remote server that will be used for the server
        end of the tunnel (in this case, 110)



   You don’t have to use port 2345. You can use the same port on both
   ends if desired. However, be aware that you will need to switch to the
   root user if you want to use a port number less than 1024 on the
   client side of the tunnel. These are called privileged ports.



10. With the tunnel established, configure the local e-mail client
    program to retrieve mail from the local system on the port you
    configured for the client end of the SSH tunnel. In this example, you
    would configure it to get mail from the local system’s IP address on
    port 2345. An example of how to do this with the Evolution e-mail
    client is shown in Figure 18-6.


    Note that I used the hostname of the local host, not the POP3 server, in the Server field.
    I also added the port number of the workstation end of the tunnel to the end of the
    hostname.




At this point, when the client uses the POP3 protocol to download new
messages, the SSH client on the local system will encrypt the request
and forward it to the SSH server through the SSH tunnel you
established. The SSH server will receive the request, decrypt it, and
then pass the data on to the local port 110, where the POP3 daemon is
listening. The cool thing about this process is that it is completely
transparent to the e-mail client software. As far as it’s concerned,
it’s retrieving e-mail from a local POP3 server.



You can test the tunnel you created using the telnet command from the
client end of the tunnel. The syntax is


telnet localhost client_tunnel_port


Here’s an example:



telnet localhost 2345



When you do this, you should see a connection established with the remote system where the POP3 daemon is running. An example is shown in Figure 18-7.
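
The output looks roughly like the following; the greeting line varies from one POP3 daemon to another, so treat the banner (and the prompt) as an assumption:

      rtracy@ws1:~> telnet localhost 2345
      Trying 127.0.0.1...
      Connected to localhost.
      Escape character is '^]'.
      +OK POP3 server ready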




You can also tunnel your X server traffic to remote X clients using an SSH connection. This is important because unencrypted X traffic provides an attacker with a gold mine of information that he or she can use to compromise your systems.






To configure a remote X client without encryption, you can use the
following procedure:


1. On the remote X client, enter



   xhost +X_server_hostname



   This tells the client to accept connections from the X server.



2. On the X server, enter



   DISPLAY=X_client_hostname:0.0



   and then enter



   export DISPLAY



   This tells the X server to display its output on the remote X client.




3. From the X client, use the ssh client to access the shell prompt on
   the X server and then run the graphical application you want displayed
   on the X client. For example, you could enter gedit at the shell
   prompt to remotely display the gedit text editor. You could also enter
   office at the shell prompt to remotely display the OpenOffice.org
   suite.








This procedure works, but all the X traffic is transmitted
unencrypted. This isn’t good. Instead, you should use SSH to tunnel
the X server traffic between the X server and the X client. You can do
this using one of the following options:



• Use the -X option with the ssh client program.


• Set the 


ForwardX11 


option to a value of 


yes


in the


 /etc/ssh/ssh_config 


  file on the X client system.




Once this is done, you then need to set the


X11Forwarding 


option to



yes 



in the



/etc/ssh/sshd_config 



file on the X server system.
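
With both sides configured, here is a hedged sketch of the encrypted alternative to the xhost/DISPLAY procedure above, reusing the ws1 and ws3 hosts from other examples in this chapter:

      rtracy@ws1:~> ssh -X -l rtracy ws3
      Password:
      rtracy@WS3:~> gedit &

The gedit window appears on the local display, and all of the X traffic travels inside the encrypted SSH session.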












LX0-104 Exam Objectives (X)

Wednesday, December 21, 2016

Encryption III

Configuring OpenSSH


To use ssh, you must first install the openssh package on your system
from your distribution media. This package includes both the sshd
daemon and the ssh client. OpenSSH is usually installed by default on
most Linux distributions. You can use the package management utility
of your choice to verify that it has been installed on your system.



The process of configuring OpenSSH involves configuring both the SSH
server and the SSH client.


You configure the sshd daemon using the



/etc/ssh/sshd_config 



file.



The ssh client, on the other hand, is configured using the



/etc/ssh/ssh_config file 



or the



~/.ssh/ssh_config




file.





Let’s look at configuring the SSH server (sshd) first. There are many
directives within the



/etc/ssh/sshd_config 



file. The good news is that after you install the openssh package, the default parameters work
very well in most circumstances. To get sshd up and running, you shouldn’t have to make many changes to the sshd_config file. Some of the more useful parameters in this file include those shown in Table 18-1.









The ssh client on a Linux system is configured using the



/etc/ssh/ssh_config



file.





The



/etc/ssh/ssh_config 




file is used to specify default parameters for all users running ssh on the system.



A user can override these defaults using the



~/.ssh/ssh_config 




file in his or her home directory. The precedence for ssh client configuration
settings are as follows:





1. Any command-line options included with the ssh command at the shell prompt


2. Settings in the ~/.ssh/ssh_config file


3. Settings in the /etc/ssh/ssh_config file






As with the sshd daemon, the default parameters used in the ssh_config
file usually work without a lot of customization. However, some of the
more useful parameters that you can use to customize the way the ssh
client works are listed in Table 18-2.
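
As an illustration of the file's format only (the host block and values here are assumptions, not defaults from the text), a per-host entry in /etc/ssh/ssh_config or ~/.ssh/ssh_config might look like this:

      Host fedora
          User student
          Port 22
          ForwardX11 yes

With this in place, entering ssh fedora would connect to the host fedora as the user student with X11 forwarding requested.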









Of course, before you can connect to an SSH server, you must open up



port 22 


in the host-based firewall of the system where sshd is
running. For example, in Figure 18-4, the YaST Firewall module has
been loaded on a SUSE Linux Enterprise Server 10 system and configured
to allow SSH traffic through.






After configuring your firewall, you can load the ssh client on your
local computer and connect to the sshd daemon on the remote Linux
system by entering




ssh -l user_name ip_address




TIP


Don't forget the -l parameter. If you don't include it, the SSH client will
attempt to authenticate you to the remote system using the same
credentials you used to authenticate to the local system. If the
credentials are the same on both the client and server systems, you'll
still be able to authenticate. But if they aren't, you won't be able
to authenticate.






For example, if I wanted to connect to a remote Linux system with a
hostname of fedora (which has an IP address of 10.0.0.85) as the user
student using the ssh client on a local computer system, I would enter



ssh -l student fedora 



at the shell prompt. This is shown in Figure 18-5.






Notice in Figure 18-5 that I was prompted to accept the public key
from the fedora host because this was the first time I connected to
this particular SSH server. Once done, I was authenticated to the
remote system as the student user (notice the change in the shell
prompt). At this point, I have full access to the shell prompt on
fedora and I can complete any task that I could if I were sitting
right at the console of the remote system. To close the connection, I
just enter exit at the shell prompt.









LX0-104 Exam Objectives (X)

Encryption II

How OpenSSH Works


OpenSSH provides the functionality of Telnet, rlogin, rsh, rcp, and
FTP, but it does so using encryption. To do this, OpenSSH provides the
following encryption-enabled components:



sshd   This is the ssh daemon that allows remote access to the shell prompt.

ssh  This is the ssh client used to connect to the sshd daemon on
another system.

scp  This utility can be used to securely copy files between systems.

sftp  This utility can be used to securely FTP files between systems.

slogin  This utility can also be used to access the shell prompt remotely.
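
A few hedged usage sketches of these utilities; the host address and user are borrowed from examples later in the chapter, and the file name is hypothetical:

      ssh -l student 10.0.0.85                    # open a shell prompt on the remote host
      scp ./notes.txt student@10.0.0.85:/tmp/     # securely copy a file to the remote host
      sftp student@10.0.0.85                      # start an interactive secure FTP session
      slogin -l student 10.0.0.85                 # slogin is simply another name for ssh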





To establish a secure connection, OpenSSH actually uses both
private/public key encryption along with secret key encryption. First,
the SSH client creates a connection with the system where the SSH
server is running on


IP port 22


The SSH server then sends its public key to the SSH client. The SSH server uses the
host key pair to store its private and public keys, which identify the host where the SSH
server is running. The keys are stored in the following files:




Private key /etc/ssh/ssh_host_key

Public key /etc/ssh/ssh_host_key.pub




The client system receives the public key from the SSH server and
checks to see if it already has a copy of that key. The SSH client
stores keys from other systems in the following files:


• /etc/ssh/ssh_known_hosts

• ~/.ssh/known_hosts




By default, if it doesn’t have the server’s public key in either of
these files, it will ask the user to add it. 
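
That prompt looks roughly like the following; the hostname, address, and fingerprint here are placeholders borrowed from other examples in this chapter:

      The authenticity of host 'fedora (10.0.0.85)' can't be established.
      RSA key fingerprint is ba:14:48:14:de:fd:42:40:f2:4b:c8:8b:03:a4:6d:fc.
      Are you sure you want to continue connecting (yes/no)? yes
      Warning: Permanently added 'fedora,10.0.0.85' (RSA) to the list of known hosts.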


Having done this, the client now trusts the server system and generates a 256-bit secret
key.


It then uses the server’s public key to encrypt the new secret
key and sends it to the server. 


Because the secret key was encrypted with the public key, the server can decrypt it using 
its private key.

Once this is done, both systems have the same secret key and can now
use symmetric encryption during the duration of the SSH session.


The user is presented with a login prompt and can now authenticate
securely because everything she types is sent in encrypted format.




NOTE  

In SSH version 2, several things are a little different. First of
all, the host key files used on the server are different. The


/etc/ssh/ssh_host_rsa_key 


and



/etc/ssh/ssh_host_dsa_key 



files are used (along with their associated public keys) instead of
/etc/ssh/ssh_host_key. The key pair used depends on
which encryption mechanism (RSA or DSA) the client and server have
been configured to use. In addition, the secret key is not actually
transmitted from the client to the server system. A Diffie-Hellman key
agreement is used instead to negotiate a secret key to be used for the
session without actually sending it over the network medium.




After this secure channel has been negotiated and the user has been
authenticated through the SSH server, data can be securely transferred
between both systems.







LX0-104 Exam Objectives (X)

Tuesday, December 20, 2016

Encryption I

Encrypting Remote Access with OpenSSH

In the early days of UNIX/Linux, we used a variety of tools to
establish network connections between systems. You could access the
shell prompt of a remote system using Telnet, rlogin, or rshell. You
could copy files back and forth between systems using rcp or FTP.
However, these utilities had one glaring weakness. Network services
such as Telnet, rlogin, rcp, rshell, and FTP transmit data as clear
text. Anyone running a sniffer could easily capture usernames and
passwords along with the contents of the transmissions.



For example, suppose I remotely accessed my Linux system via Telnet.
After authenticating to the remote system, I decided that I needed to
switch to root using the su command to complete several tasks. If
someone were sniffing the network wire while I was doing this, they
would be able to easily grab the following information:


My username and password

The root user password



This is not a good thing! The attacker would have everything he needs
to gain unfettered access to my Linux system.



To prevent this from happening, you can use the OpenSSH package to
accomplish these same management tasks using encryption. In this part
of the chapter, you will learn how to use OpenSSH. The following
topics are addressed:



How OpenSSH works

Configuring OpenSSH

Tunneling traffic through SSH

• Configuring SSH to use public key authentication














LX0-104 Exam Objectives (X)

Securing I

Configuring xinetd and inetd


In this part of this chapter, you learn how to configure Linux
“super-daemons.” Most Linux distributions install a wide variety of
network services during the system installation process. Most of these
services, such as Telnet, are very handy and provide a valuable
service. However, they aren’t needed most of the time. We need a way
to provide these services when requested but then unload them when
they aren’t needed, saving memory, reducing CPU utilization, and
increasing the overall security of the system.



Depending on your distribution, there are two ways to do this. The
following options are discussed here:



Configuring xinetd
Configuring inetd





Configuring xinetd

Many Linux distributions include a special daemon called xinetd that
can be used to manage a number of different network services. In this
part of this chapter, you learn how to configure and use xinetd. We’ll
discuss the following topics:



How xinetd works
Configuring xinetd network services
Using TCP Wrappers



Let’s begin by discussing how the xinetd daemon works.



How xinetd Works

The xinetd daemon is a super-daemon. It’s called a super-daemon
because it acts as an intermediary between the user requesting network
services and the daemons on the system that provide the actual
service. This is shown in Figure 17-26.




When a request for one of the network services managed by xinetd
arrives at the system, it is received and processed by xinetd, not the
network daemon being requested. The xinetd daemon then starts the
daemon for the requested service and forwards the request to it. When
the request has been fulfilled and the network service is no longer
needed, xinetd unloads the daemon from memory.




Some of the network services managed by xinetd include the following:



• chargen
• daytime
• echo
• ftp
• pop3
• rsync
• smtp




Configuring xinetd Network Services


As with all the network services I’ve discussed, the xinetd
configuration files are stored in /etc. The xinetd daemon itself is
configured using the



/etc/xinetd.conf 


file. Generally speaking, you won’t need to make many changes to this file. The default
configuration usually works very well.



At the end of this file you will notice a directive that reads



includedir /etc/xinetd.d



This line tells the xinetd daemon to use the configuration files in
/etc/xinetd.d. These files tell xinetd how to start each service when
requested. Each of these files is used to configure the startup of a
particular service managed by xinetd.


For example, the 


vsftpd file 


in


/etc/xinetd.d 


is used to configure the vsftpd FTP server daemon. The xinetd configuration
settings for vsftpd in this file are shown here:



service ftp
{

socket_type     = stream
protocol        = tcp
wait            = no
user            = root
server          = /usr/sbin/vsftpd

# server_args        =
# log_on_success     += DURATION USERID
# log_on_failure     += USERID
# nice               = 10
   disable            = yes


}




This file doesn’t configure the daemon itself. It only tells xinetd
how to start up the daemon. The actual configuration file for the
vsftpd daemon itself is in /etc/vsftpd.conf.


One of the most important parameters in the 


/etc/xinetd.d/vsftpd



file is the disable directive.


This directive specifies whether or not xinetd is allowed to start the
daemon when requested.



In the preceding example, this directive is set to yes, which means the daemon will not 
be started when requested. 



The daemon to actually start is specified by the 


server = directive


In the example, xinetd will start the /usr/sbin/vsftpd daemon. 


server          = /usr/sbin/vsftpd



To enable this daemon, you need to edit this file and change the disable
parameter to a value of no.



disable            = no




After changing a value in any of the files in /etc/xinetd.d, you need to 
restart the xinetd daemon using its init script in /etc/rc.d/init.d or /etc/init.d.
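
For example, depending on the distribution, the restart might look like one of the following (the exact script name and location vary, so treat these as sketches):

      /etc/init.d/xinetd restart
      systemctl restart xinetd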



TIP  If you enable a service provided by xinetd, you’ll need to create
an exception in your Linux system’s host firewall to allow traffic for
the IP port used by the daemon.
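

For example, if you enabled the FTP service managed by xinetd, a minimal iptables
sketch for opening the FTP control port might look like the following (this assumes
iptables is your host firewall and that the service listens on its default port, 21):

   iptables -A INPUT -p tcp --dport 21 -j ACCEPT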






Using TCP Wrappers


If you enable a particular service using its configuration file in the
/etc/xinetd.d/ directory, any host can connect to it through xinetd.



However, depending on how your system is deployed, you may need to
control access to these network services. You may want to limit access
to only a specific set of hosts and deny access to everyone else.



If this is the case, you need to configure these services to use TCP
Wrappers, which are used by xinetd to start and run the network
services using a set of configuration files that specify who can and
who can’t access the service.




To use TCP Wrappers, you first need to enable the functionality in
each service’s configuration file in /etc/xinetd.d. Do the following:



1. Verify that the tcpd package has been installed on your Linux system.


2. Open the appropriate configuration file in a text editor.


3. Comment out the existing server = line from the file.


4. Add the following line:


     server      = /usr/sbin/tcpd


This will cause xinetd to start the tcpd daemon instead of the service
daemon itself.



5. Add the following line:


     server_args       = path_to_daemon


This tells the tcpd daemon to then run the requested network daemon.
In the example shown here, the /etc/xinetd.d/vsftpd file has been
configured to run the vsftpd daemon within a TCP Wrapper:



service ftp
{
        socket_type         = stream
        protocol            = tcp
        wait                = no
        user                = root
#       server              = /usr/sbin/vsftpd
        server              = /usr/sbin/tcpd
        server_args         = /usr/sbin/vsftpd
#       log_on_success      += DURATION USERID
#       log_on_failure      += USERID
#       nice                = 10
        disable             = no
}



6. Save the file and restart the xinetd daemon.





Next, you need to create your access controls. The tcpd daemon uses
the 



/etc/hosts.allow 


and


/etc/hosts.deny 



files to specify who can access the services it manages.



Entries in /etc/hosts.allow are allowed access.


Hosts in /etc/hosts.deny are not allowed access. 



The syntax for both of these files is



service: host_addresses



As these files are processed, the search stops as soon as a matching
condition is found in a file. Files are no longer processed after this
occurs. The following steps occur in the order shown:


1. Access will be granted if a matching entry is found in the
   /etc/hosts.allow file.

2. If not, access will be denied if a matching entry is found in the
   /etc/hosts.deny file.

3. If no matching entry is found in either file, access will be granted.




For example, suppose you needed to configure the /etc/hosts.allow file
to allow access to the vsftpd daemon for just a few specific hosts.
The following entry grants access to the vsftpd service to hosts with
the IP addresses of 192.168.1.10 and 192.168.1.102.



vsftpd:     192.168.1.10, 192.168.1.102
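

To make an allow list like this effective, you would typically pair it with a
catch-all entry in /etc/hosts.deny so that every other host is refused. A minimal
sketch using the built-in ALL wildcard is shown here:

   vsftpd:     ALL

With both entries in place, only 192.168.1.10 and 192.168.1.102 can reach the
vsftpd service; every other host is denied.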




Some distributions use the inetd daemon instead of xinetd. This daemon
works in much the same manner as xinetd. Let’s learn how it works
next.







Configuring inetd


The inetd daemon is a super-daemon like xinetd, but it is typically
used on older Linux distributions. Like xinetd, the inetd daemon acts
as a mediator for connection requests to network services running on
the Linux host. It accepts connection requests from client systems,
starts the requested service, and then forwards the requests from
clients to the newly started daemon. When the transaction is complete
and the connection from the client is terminated, the daemon is
stopped on the Linux host.



As we discussed with xinetd, managing the network services on your
Linux host in this way has advantages and disadvantages. Key among
these is the fact that it conserves system memory and CPU resources.
The network daemon is started only when it is needed. When it isn’t
needed, it’s removed from memory until it is requested again. However,
this benefit comes at a cost in terms of latency. When a service is
requested by a client, the client must wait for a short period of time
while the necessary daemon is loaded and the connection established.
Therefore, inetd (and xinetd) should only be used to manage network
services that are needed only occasionally on the system.




The inetd daemon is configured using the 



/etc/inetd.conf 



file. Unlike the xinetd daemon, all the services managed by inetd are configured in
this single configuration file. 



Each line in this file configures a single service to be managed by inetd. The syntax used in 
inetd.conf is shown here:



service_name  socket_type  protocol  flags  user  executable  arguments




Each of the parameters in this line is described in Table 17-6. Here
is a sample entry in inetd.conf for the vsftpd daemon:








ftp    stream    tcp    nowait    ftp      /usr/sbin/tcpd     vsftpd




Notice in this example that you can use TCP Wrappers with inetd just
as you did with xinetd. In this example, when a client tries to
establish an FTP connection with this Linux host, the inetd daemon
will start the tcpd daemon and pass to it the name of the actual
daemon to be started (vsftpd) as a server argument. As with xinetd,
using TCP Wrappers with inetd allows you to control access to the
network services running on the host using




the



/etc/hosts.allow 



and



/etc/hosts.deny 



files.
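

As with xinetd, changes to /etc/inetd.conf don't take effect until the daemon
rereads its configuration. A common approach (a sketch; the exact method depends
on your distribution) is to restart inetd using its init script or signal it to
reload its configuration:

   /etc/init.d/inetd restart

   kill -HUP $(pidof inetd)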










LX0-104 Exam Objectives (V and U, 323, 647 - 689)

Network Access I

Defending Against Network Attacks


It would be nice if we lived in a world where we could connect
networks together and be able to trust others to respect our systems.
Unfortunately, such a world doesn’t exist. If your Linux systems are
connected to a network, you need to be very concerned about network
attacks. If your network is connected to a public network, such as the
Internet, you need to be extremely concerned about network attacks.



As with most of the topics discussed in this book, network security is
a huge topic that can fill many volumes. We really don’t have the time
or space here to do the topic justice. Instead, I’m going to discuss
some basic things you can do to defend against network attacks. I’ll
discuss the following:



Mitigating network vulnerabilities

Implementing a firewall with iptables




Let’s begin by discussing some things you can do to mitigate network
vulnerabilities.





Mitigating Network Vulnerabilities


The good news is that there are some simple things you can do to
mitigate the threat to your Linux systems from network attacks. These
include the following:


Staying abreast of current threats

Unloading unneeded services

Installing updates





Let’s first discuss staying abreast of current network threats.


Staying Abreast of Current Threats


One of the biggest problems with network security threats is the fact
that we’re always one step behind the guys wearing black hats. No
sooner do we implement a fix to protect our systems from the latest
exploit than they hit us with a new one. Therefore, it’s critical that
you stay up to date with the latest network threats. You’ll soon see
that they change week to week, and sometimes even day to day! The only
way you can keep your systems safe is to be aware of what the current
threats are.



The best way to do this is to visit security-related websites on a
regular basis. These sites inform you of the latest exploits and how
to defend yourself against them. One of the best sites to visit is
www.cert.org, which is maintained by the Computer Emergency Response
Team (CERT) at the Carnegie Mellon Software Engineering Institute. The
CERT website contains links to the latest security advisories.



Another excellent resource is www.us-cert.gov. Maintained by the
United States government’s Computer Emergency Readiness Team, the
US-CERT website provides tons of information about current
cyber-attacks.



Of course, there are hundreds of other security-related websites out
there. However, those I’ve listed here are among the most
authoritative sites around. Most of the other security-related
websites derive their content from these sites. If you visit these
sites religiously, you can stay abreast of what’s happening in the
security world and hopefully prevent an attack on your systems.




In addition to staying current with these sites, you should also
review your systems to see if all the services they provide are really
necessary. Let’s talk about how to do that next.




Unloading Unneeded Services


One of the easiest things you can do to mitigate the threat from a
network attack is to simply unload network services running on your
system that aren’t needed. Depending on your distribution and how you
installed it, you probably have a number of services running on your
system that you didn’t know were there and that you don’t need. You
can view a list of installed services and whether or not they are
running by entering chkconfig at the shell prompt. This command will
list each service and its status, as shown in Figure 17-9.
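

For example, the following commands list services and whether they are configured
to start automatically (the first applies to distributions using SysV init scripts,
the second to systemd-based distributions):

   chkconfig --list

   systemctl list-unit-files --type=service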




As a word of caution, however, don’t disable a service unless you know
what it actually does. Some daemons are required for the system to run
properly. If you don’t know what a particular service is, use the man
utility, the info utility, or the Internet to research it and
determine whether or not it is necessary.


In addition to chkconfig, you can also use the


nmap 

command to view open IP ports on your Linux system. This information is really useful.
Each port that is open on your Linux system represents a potential vulnerability. Some open
ports are necessary. Others, however, may not be necessary. You can close the port by unloading the service that is using it.




The syntax for using nmap is



nmap –sT host_IP_address 



for a TCP port scan



and




nmap –sU host_IP_address 




for a UDP port scan. 
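

For example, to scan a host at the hypothetical address 192.168.1.10 for open TCP
ports, you would enter:

   nmap -sT 192.168.1.10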




In Figure 17-10, the nmap utility has been used to scan for open TCP ports.




As you can see in this figure, a number of services are running on the
host that was scanned. You can use this output to determine what
should and shouldn’t be left running on the system. To disable a
service, you can use its init script in your init directory to shut it
down. You should also use the chkconfig or systemctl command to
configure the service to not automatically start.
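

For example, to stop the cups printing service and keep it from starting
automatically (cups is used here only as an illustration), you might enter one of
the following pairs of commands, depending on whether your distribution uses SysV
init scripts or systemd:

   /etc/init.d/cups stop
   chkconfig cups off

   systemctl stop cups
   systemctl disable cups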




TIP You should run nmap both locally and from a different system
against the same host. This will tell you what ports are open on your
system and which services are allowed through your host’s firewall.




In addition to the nmap utility, you can also use the


netstat utility to scan for open ports. 


The netstat utility is another powerful tool in your virtual toolbox. The syntax for
using netstat is to enter




netstat option 



at the shell prompt of the system you want to scan. You can use the options listed in Table 17-2.








An example of using netstat with the –l option to view a list of
listening sockets on a Linux host is shown in Figure 17-11.




netstat -l 
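

You can combine netstat options to narrow the output. For example, the following
commonly used combination lists listening TCP and UDP sockets numerically, along
with the owning process:

   netstat -tulpn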





Installing Updates

One of the most important things you can do to defend against network
attacks is to regularly install operating system updates. A simple
fact of life that we have to deal with in the IT world is that
software isn’t written perfectly. Most programs and services have some
defects. Even your Linux kernel has defects in it. Some of these
defects are inconsequential, some are just annoying, and others
represent serious security risks.




As software is released and used, these defects are discovered by
system administrators, users, and (unfortunately) hackers. As they are
discovered, updates are written and released that fix the defects.
With most distributions, you can configure the operating system to
automatically go out on the Internet and periodically check for the
availability of updates. For example, with SUSE Linux, you can use the
YaST Online Update module, shown in Figure 17-12, to do this. You can
configure the system to either automatically install them for you or
prompt you to install them. The tool you use to update your system
will vary depending on which Linux distribution you are using.
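

The actual update commands differ between distributions. As a rough sketch, the
common package managers are invoked as follows:

   zypper update                          (SUSE)

   yum update                             (Red Hat/CentOS)

   apt-get update && apt-get upgrade      (Debian/Ubuntu)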




Implementing a Firewall with iptables


Today, most organizations connect their corporate networks to the
Internet. Doing so enhances communications and provides access to a
wealth of information. Unfortunately, it also exposes their network to
a serious security threat. If users can go out on the Internet, an
uninvited person from the Internet can also get into the network,
unless measures are taken to keep this from happening. To do this, the
organization needs to implement a network firewall as well as host-
based firewalls on each system.




A network firewall is very different from a host-based firewall. A
host-based firewall controls traffic in and out of a single computer
system. A network firewall, on the other hand, is used to control
traffic in and out of a network segment or an entire network.



In this part of the chapter, we’re going to spend some time learning
how to use Linux in both capacities. We’ll discuss the following
topics:



How firewalls work

Implementing a packet-filtering firewall

Let’s begin by discussing how firewalls work.



How Firewalls Work

So what exactly is a firewall? A firewall is a combination of hardware
and software that acts like a gatekeeper between your network and
another network. Usually, a firewall has two or more network
interfaces installed: one is connected to the internal network and the
other to the public network. In this respect, a firewall acts much like a router.
However, a firewall is not a router (although it may be implemented in
conjunction with one).


The job of a firewall is to monitor the traffic that flows between the
networks, both inbound and outbound. You configure the firewall with
rules that define the type of traffic that is allowed through. Any
traffic that violates the rules is not allowed, as shown in Figure
17-13.




Firewalls can be implemented in a variety of ways. One of the most
common types is a packet-filtering firewall, where all traffic moving
between the private and public networks must go through the firewall.
As it does, the firewall captures all incoming and outgoing packets
and compares them against the rules you’ve configured.


The firewall can filter traffic based on the origin address, the
destination address, the origin port, the destination port, the
protocol used, or the type of packet. If a packet abides by the rules,
it is forwarded on to the next network. If it doesn’t, it is dropped,
as shown in Figure 17-14.




Packet-filtering firewalls don’t necessarily have to be implemented
between your network and the Internet. They can also be implemented
between a network segment and a backbone segment to increase your
internal network security.




To use a packet-filtering firewall, you must be familiar with which
port numbers are used by default by specific services. IP ports 0
through 1023 are assigned by the IANA organization to network services
and are called well-known ports. Some of the more common port
assignments that you need to be familiar with are shown in Table 17-3.
Packet-filtering firewalls are widely used. They cost less than other
types of firewalls. They also require relatively little processing.
Data moves through very quickly, making them much faster than other
firewalls.






Implementing a Packet-Filtering Firewall


Just as Linux can act as a router, it can also be configured to
function as a firewall. In fact, it can be used to configure a very
robust, very powerful firewall. Currently, there are many firewall
appliances on the market based on the Linux operating system. There
are also many downloadable Linux ISOs, such as from Untangle, that you
can install on standard PC hardware to turn it into a firewall. For our
purposes here, we’re going to focus on creating a basic
packet-filtering firewall using iptables.



The first step in setting up a packet-filtering firewall on a Linux
system is to design your implementation. You should answer the
following questions when designing your firewall:



• Will you allow all incoming traffic by default, establishing rules
for specific types of traffic that you don’t want to allow in?


• Will your firewall deny all incoming traffic except for specific
types of traffic that you want to allow?


• Will you allow all outgoing traffic by default, blocking only
specific types or destinations?


• Will you block all outgoing traffic except for specific types or destinations?


• What ports must be opened on the firewall to allow traffic through
from the outside? For example, are you going to implement a web server
that needs to be publicly accessible behind the firewall? If so, you
will need to open up ports 80 and probably 443 on your boundary
firewall.




How you decide to configure your firewall depends on your
organization’s security policy. However, I recommend that you err on
the side of caution. Given a choice, I’d rather deal with a user who’s
upset because the firewall won’t let him share bootlegged music files
over the Internet than deal with a major attack that has worked its
way deep into my network.



Once your firewall has been designed, you’re ready to implement it.
After installing and configuring the required network boards, you can
configure a firewall on your Linux system using the iptables utility.
Many Linux distributions include graphical front ends for iptables
that you can use to build your firewall. These front ends are usually
not as flexible as the command-line utility, but they make the setup
process much faster and easier!




The heart of the Linux firewall is the iptables package. Most
distributions include it. If yours didn’t, it can be downloaded from
www.netfilter.org. Versions of the Linux kernel prior to 2.4 used
ipfwadm or ipchains instead of iptables. If you visit The Linux
Documentation Project at www.tldp.org, you’ll see that many of the
firewall HOWTOs are still written to help with these older packages.



NOTE The iptables package will be replaced in the future by a new
package called nftables.




Packet filtering is actually performed by the Linux kernel itself. In
order to use iptables, your kernel must include the netfilter
infrastructure, which is compiled in by default on most distributions.



The netfilter infrastructure uses the concept of “tables and chains”
to create firewall rules. A chain is simply an ordered list of rules
that determines what the firewall will do with a packet. The
netfilter infrastructure uses the filter table to create
packet-filtering rules. 


Within the filter table are three default chains:


FORWARD 

The FORWARD chain contains rules for packets being
transferred between networks through the Linux system.


INPUT 

The INPUT chain contains rules for packets that are being sent
to the local Linux system.


OUTPUT 

The OUTPUT chain contains rules for packets that are being
sent from the local Linux system.


If you don’t explicitly specify a table name when using the iptables
utility, it will default to the filter table.


Rules in these chains can specify one of four possible actions (targets):


ACCEPT

DROP

QUEUE

REJECT



You can use iptables to create rules within a chain. A chain can
contain multiple rules. Each rule in a chain is assigned a number. The
first rule you add is assigned the number 1. The iptables utility can
add rules, delete rules, insert rules, and append rules. The syntax
for using


iptables 


is




iptables –t table command chain options




You can use the following commands with iptables:



–L 

Lists all rules in the chain

–N 

Creates a new chain



You can work with either the default chains listed previously or
create your own chain. You create your own chain by entering iptables
–N chain_name. You can add rules to a chain by simply using the –A
option. 
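

For example, the following sketch creates a custom chain (the name MYCHAIN is
arbitrary), adds a rule to it that accepts inbound web traffic, and then sends
packets arriving on the INPUT chain through it:

   iptables -N MYCHAIN

   iptables -A MYCHAIN -p tcp --dport 80 -j ACCEPT

   iptables -A INPUT -j MYCHAIN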




You can also use one of the other options listed here:


• –I 

Inserts a rule into the chain

• –R 

Replaces a rule in the chain

• –D 

Deletes a rule from the chain

• –F 

Deletes all the rules from the chain (called flushing)

• –P 

Sets the default policy for the chain




You can also use the following options with iptables:

–p 

Specifies the protocol to be checked by the rule. You can specify
all, tcp, udp, or icmp. If you specify tcp or udp, you can also use
the following extensions for matching:


    • --sport  Specifies a single source port to match on

    • --dport  Specifies a single destination port to match on

    • --sports Specifies multiple source ports to match on (requires the multiport match, -m multiport)

    • --dports Specifies multiple destination ports to match on (requires the multiport match, -m multiport)




–s ip_address/mask 

Specifies the source address to be checked. If you want to check all IP addresses, use 0/0.


–d ip_address/mask

Specifies the destination address to be checked. If you want to check all IP addresses, use 0/0.


–j target 

Specifies what to do if the packet matches the rule. You can specify ACCEPT, REJECT, DROP, or LOG actions.


–i interface 

Specifies the interface where a packet is received. This only applies to INPUT and FORWARD chains.


–o interface 

Specifies the interface where a packet is to be sent. This applies only to OUTPUT and FORWARD chains.






The best way to learn how to use iptables is to look at some examples.
Table 17-4 has some sample iptables commands that you can start with.
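

A few generic examples along the same lines (not a reproduction of Table 17-4) are
shown here:

   iptables -L                                      (list the rules in the filter table's chains)

   iptables -P INPUT DROP                           (set the INPUT chain's default policy to DROP)

   iptables -A INPUT -p tcp --dport 22 -j ACCEPT    (allow inbound SSH traffic)

   iptables -A INPUT -s 192.168.1.0/24 -j ACCEPT    (allow traffic from the 192.168.1.0 subnet)

   iptables -D INPUT 2                              (delete rule number 2 from the INPUT chain)

   iptables -F INPUT                                (flush all rules from the INPUT chain)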







You can use iptables to create a sophisticated array of rules that
control how data flows through the firewall. Most administrators use
the

–P option with iptables to set up the firewall’s default filtering
rules. 


Once the default is in place, you use iptables to configure
exceptions to the default behavior needed by your particular network.




Remember that any rules you create with iptables are not persistent.
If you reboot the system, they will be lost by default.


To save your rules, you use the


iptables-save 


command to write your tables out to a file. You can then use the


iptables-restore


command to restore the tables from the file you created.
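

For example (the file name used here is arbitrary):

   iptables-save > /etc/iptables.rules

   iptables-restore < /etc/iptables.rules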

















LX0-104 Exam Objectives (V and U, 323, 647 - 689)